id | url | text | source | categories | token_count | subcategories |
|---|---|---|---|---|---|---|
9,471,618 | https://en.wikipedia.org/wiki/Wildlife%20of%20Myanmar | The wildlife of Myanmar includes its flora and fauna and their natural habitats.
Flora
Like all Southeast Asian forests, the forests of Myanmar can be divided into two categories: monsoon forest and rainforest. Monsoon forest is dry for at least three months a year and is dominated by deciduous trees. Rainforest has a rainy season of at least nine months and is dominated by broadleaf evergreens.
In the Himalayan region north of the Tropic of Cancer, subtropical broadleaf evergreen forest dominates up to an elevation of 2000 m; from 2000 m to 3000 m, semi-deciduous broadleaf forest dominates; and above 3000 m, evergreen conifers and subalpine forest are the primary flora up to the alpine scrubland.
The area from Yangon to Myitkyina is mostly monsoon forest, while the Malay Peninsula south of Mawlamyine is primarily rainforest, with some overlap between the two. Along the coasts of Rakhine State and Tanintharyi Division, tidal forests occur in estuaries, lagoons, tidal creeks, and low islands. These tidal forests host the much-depleted Myanmar Coast mangroves, a habitat of mangroves and other trees that grow in mud and tolerate sea water. Forests along the beaches consist of palm trees, hibiscus, casuarinas, and other trees resistant to storms.
Fauna
Myanmar is home to nearly 300 known mammal species, 300 reptile species, and about 1000 bird species. There are also many non-marine molluscs in Myanmar.
See also
Deforestation in Myanmar
References
Sources
Myanmar
Biota of Myanmar | Wildlife of Myanmar | [
"Biology"
] | 314 | [
"Biota by country",
"Wildlife by country",
"Biota of Myanmar"
] |
9,471,628 | https://en.wikipedia.org/wiki/Wildlife%20of%20the%20Maldives | The wildlife of the Maldives includes the flora and fauna of the islands, reefs, and the surrounding ocean.
Recent scientific studies suggest that the fauna varies greatly between atolls following a north–south gradient, but important differences between neighbouring atolls were also found (especially in terms of sea animals), which may be linked to differences in fishing pressure – including poaching.
Ecology
The land-based biotopes of the Maldives are highly endangered. The little land available in the country is being swiftly developed. Formerly uninhabited islands were only occasionally visited, but now almost no untouched uninhabited islands remain. Many of the natural habitats of local species have been severely threatened or destroyed during the past decades of development.
Coral reef habitats have also been damaged, as the pressure for land has brought about the creation of artificial islands. Some reefs have been filled with rubble with little regard for the changes in the currents on the reef shelf and how the new pattern would affect coral growth and its related life forms on the reef edges.
Plants
The Maldives have a rich variety of plant life despite the lack of fertile soils. There are three plant communities. The first is the foreshore, which is closest to the ocean and mostly bare except for hardy creeping vines such as Ipomoea species. The next is the beach crest, which is slightly more protected from the tides. Scaevola taccada, Pemphis acidula, Tournefortia argentea, and Guettarda speciosa are very common and often dominant there. Finally, the inner island habitats are the most protected. Sometimes dense coconut plantations and moist soils allow the growth of understory trees, like Morinda citrifolia or Guettarda speciosa. On northern islands, Hibiscus tiliaceus or Premna serratifolia form pure stands. Mixed forests are also common. Out of the vascular plants of the Maldives, 260 grow in the wild and are either native or naturalized, while an additional 323 are cultivated.
Mangrove forests
Mangroves are found in brackish or muddy areas of the Maldives. Fourteen species across ten genera are native to the Maldives, including one fern, Acrostichum aureum.
Flora gallery
Vertebrates
Fish
There is a wide diversity of sea life in the Maldives, with corals and over 2,000 species of fish, ranging from colourful reef fish to the blacktip reef shark, moray eels, and a wide variety of rays: manta ray, stingray, and eagle ray. The Maldivian waters are also home to the whale shark. The waters around the Maldives are abundant in rare species of biological and commercial value, with tuna fisheries being one of the traditional commercial resources. In the few ponds and marshes there are freshwater fish, like milkfish (Chanos chanos) and smaller species. The tilapia or mouth-breeder was introduced by a UN agency in the 1970s.
Sealife gallery
Reptiles and amphibians
Since the islands are very small, land-based reptiles are rare. There is a species of gecko and one species of agamid lizard, the oriental garden lizard (Calotes versicolor), as well as the white-spotted supple skink (Riopa albopunctata), the Indian wolf snake (Lycodon aulicus), and the brahminy blind snake (Ramphotyphlops braminus).
In the sea there are the green sea turtle (Chelonia mydas), hawksbill sea turtle (Eretmochelys imbricata), and leatherback sea turtle (Dermochelys coriacea), which lay eggs on Maldivian beaches. Sea snakes such as the yellow-bellied sea snake (Hydrophis platurus) that live in the Indian Ocean are occasionally cast up on the shore after storms, where they are rendered helpless and unable to return to the sea. Saltwater crocodiles (Crocodylus porosus) have also been known to reach the islands and dwell in marshy areas.
The southern burrowing frog (Sphaerotheca rolandae) is found in a few islands, while the Asian common toad (Duttaphrynus melanostictus) has a more widespread presence.
Birds
The location of this Indian Ocean archipelago means that its avifauna is mainly restricted to pelagic birds. Most of the species are Eurasian migratory birds, only a few being typically associated with the Indian subcontinent. Some, like the frigatebird, are seasonal. There are also birds that dwell in marshes and island bush, like the grey heron and the moorhen. White terns are found occasionally on the southern islands due to their rich habitats.
Mammals
There are very few land mammals in the Maldives. Only a flying fox and a shrew are endemic.
Cats, rats, and mice have been introduced by humans, often invading the uninhabited areas of islands and becoming pests.
Bringing dogs to the Maldives is strictly forbidden.
In the ocean surrounding the islands there are several species of whales and dolphins. Occasionally stray seals from sub-Antarctic waters have been recorded on the islands.
Invertebrates
Corals
The islands of the Maldives themselves have been built by the massive growth of coral, colonies of tiny living polyps.
Coelenterata
There are many kinds of anemones and jellyfish in Maldivian waters.
Arthropods
There are four species of lobsters and many species of crabs in the Maldives. Some crabs live in the water, but many, like the ghost crab, live on the beach, digging holes in the sand near the waterline. Fiddler crabs are common on muddy reef shelves.
Land crabs, like the hermit crab, live under the leaves of shore bushes. Some are domestic pests, living in holes in houses.
Some prawns and shrimp live near the islands but are not fished commercially.
There is a kind of centipede, as well as millipedes, and a small scorpion.
Several species of spiders are found in the Maldives. They exhibit a remarkable affinity with those found on the southwestern coast of the Indian mainland and in Sri Lanka. A pioneering work on the spiders of the Maldives was conducted by Reginald Innes Pocock in 1904 in the work Fauna and Geography of Maldives. Common spiders include the brown huntsman spider (Heteropoda venatoria), Plexippus paykulli, Argiope anasuja, and lynx spiders; black widows are very occasionally seen on Hulhumalé island and at Malé International Airport.
Mollusks
Octopuses, squids, and clams are common on Maldivian reefs. The giant clam (Tridacna gigas) is common on the reef shelf.
Echinoderms
The Maldive reefs teem with starfish, brittle stars, and sea urchins. Sea cucumbers are now a source of income, being exported to East Asian markets, although they were not traditionally fished locally. Recently, sea cucumbers have been overharvested in the Maldives, most probably through illegal poaching.
See also
List of mammals of the Maldives
List of birds of the Maldives
Marine wildlife of Baa Atoll
References
Sources
Biota of the Maldives
Maldives | Wildlife of the Maldives | [
"Biology"
] | 1,467 | [
"Biota by country",
"Wildlife by country",
"Biota of the Maldives"
] |
9,472,177 | https://en.wikipedia.org/wiki/Homer%20Burton%20Adkins | Homer Burton Adkins (16 January 1892 – 10 August 1949) was an American chemist who studied the hydrogenation of organic compounds. Adkins was regarded as top in his field and a world authority on the hydrogenation of organic compounds. Adkins is known for his wartime work, where he experimented with chemical agents and poisonous gasses. Renowned for his work, Adkins eventually suffered a series of heart attacks and died in 1949.
Early life and work
Adkins was born on January 16, 1892, in Newport, Ohio, the son of Emily (née Middleswart) and Alvin Adkins. He grew up on a farm with his brother and sister. After finishing high school in Newport, he entered Denison University. A tall, shy boy, Adkins graduated in three and a half years. He then spent three years at Ohio State University, taking his master's degree in 1916 and his Ph.D. in 1918, under the direction of William Lloyd Evans. After receiving his degree, he began work as a research chemist for the United States Department of War. In the following academic year, Adkins served as an instructor in organic chemistry at Ohio State University, and in the summer of 1919 he was a research chemist with E. I. Du Pont De Nemours and Company.
In 1919, Adkins came to the University of Wisconsin–Madison. He remained there until his death in 1949, except for two summers spent working in industry at the Bakelite Corporation, in 1924 and 1926, and for his service from 1942 to 1945 as administrator and research director in the war program of the National Defense Research Committee and the Office of Scientific Research and Development. Adkins lectured to graduate students in a course entitled "Survey of Organic Chemistry," but he also kept contact with students in elementary courses and continued, for most of his career, to give lectures in the first course in organic chemistry.
In 1919, Adkins began his thesis on the rates of oxidation of acetaldehyde and oxalic acid by potassium permanganate, and on how reaction rates vary with temperature and molarity. His overall interest was the nature of the intermediates of these reactions. Soon after, he wrote a second paper concerning reaction rates and a third on the catalytic action of oxides on esters. His research came to center on how the nature of a reaction product depends on the catalyst used. The study of catalysts led to his most important contribution, the hydrogenation of an ester to an alcohol using a copper chromite catalyst. After the work on copper chromite, Adkins delved further into hydrogenation reactions and the use of catalysts. A new reaction came out of his research, in which hydrogen is added to a bond at a catalytic surface and the molecule then splits into two separate molecules.
Adkins published many books in addition to his work as a lecturer and researcher. His best-known book, "Reactions of Hydrogen", drew on his extensive, pioneering studies of hydrogenation. He also co-authored several of the textbooks used in organic chemistry classes, including the widely known "Elementary Organic Chemistry", published by the McGraw-Hill Book Company.
World War II
Throughout World War II, Adkins focused his research on wartime necessities. Many feared that poisonous gases would be used extensively in World War II as they had been during World War I. Adkins' laboratory at Wisconsin engaged in chemical warfare research. Classified documents at the time revealed Adkins and his colleagues describing their research on agents to produce blistering, vomiting, tearing and sneezing. Adkins also studied the removal of the effects of poison agents by using multiple different kinds of chemicals and ointments, combined with protective clothing for soldiers. Due to the magnitude and effect of his work, Adkins was a recipient of the Medal for Merit in 1948 for his wartime studies.
Adkins was a world authority on the hydrogenation of organic compounds and he developed the Adkins catalyst partly based on interrogation of German chemists after World War II in relation to the Fischer–Tropsch process. Adkins also coined the word hydrogenolysis to describe the chemical reaction in which a molecule is broken into smaller molecules by the reaction of hydrogen. He developed the Adkins–Peterson reaction with Wesley J. Peterson.
Later life and death
While he was a graduate student at the Ohio State University, Adkins married Louise Spivey, who was a classmate of his at Denison and who was teaching high school mathematics. The pair had three children: Susanne, Nance, and Roger. He had three grandchildren from Susanne.
Adkins enjoyed leisure activities, such as golfing, as he found it was relaxing and he needed the exercise.
Teaching, maintaining a large research program, and wartime pressure took a heavy toll on Adkins' strength. While playing a game of golf in the late spring of 1949, Adkins suffered a minor heart attack. He resolved to seek treatment as soon as he had the chance. After a meeting with other chemists interested in his research, Adkins suffered a larger heart attack and was hospitalized for roughly a month. His condition seemed to improve, so he was sent home, but his health soon began failing rapidly again. Weakened and bed-ridden, Adkins died in Madison, Wisconsin, on August 12, 1949.
Awards and honours
During his lifetime, Adkins received many honors. He received an honorary Doctor of Science degree from Denison University, his alma mater, in 1938. President Harry S. Truman awarded Adkins the Medal for Merit. In 1942, Adkins was elected to the National Academy of Sciences.
Legacy
After his death, Adkins' many former students and friends founded the Homer Adkins Fellowship, which supported a graduate student in chemistry at the University of Wisconsin.
President Edwin B. Fred of the University of Wisconsin said, "He was recognized as one of the leading chemists that America has produced. He was the kind of man who makes a University distinguished." President James B. Conant of Harvard said, "the academic world has suffered an irreparable loss."
Notes
Bibliography
External links
Genealogy database entry
National Academy of Sciences Biographical Memoir
1892 births
1949 deaths
American organic chemists
Denison University alumni
Medal for Merit recipients
People from Washington County, Ohio
Members of the United States National Academy of Sciences
Ohio State University College of Arts and Sciences alumni
Chemists from Ohio | Homer Burton Adkins | [
"Chemistry"
] | 1,340 | [
"Organic chemists",
"American organic chemists"
] |
9,472,356 | https://en.wikipedia.org/wiki/Wildlife%20of%20Iran | The wildlife of Iran include the fauna and flora of Iran.
One of the most famous animals of Iran is the critically endangered Asiatic cheetah (Acinonyx jubatus venaticus), which today survives only in Iran.
History
The animals of Iran were described by Hamdallah Mustawfi in the 14th century. In the 18th and 19th centuries, Samuel Gottlieb Gmelin and Édouard Ménétries explored the Caspian Sea area and the Talysh Mountains to document Caspian fauna. Several naturalists followed in the 19th century, including Filippo de Filippi, William Thomas Blanford, and Nikolai Zarudny who documented mammal, bird, reptile, amphibian and fish species.
Flora
More than one-tenth of the country is forested. The most extensive forest is found on the mountain slopes rising from the Caspian Sea, with stands of oak, ash, elm, cypress, and other valuable trees. On the plateau proper, areas of scrub oak appear on the best-watered mountain slopes, and villagers cultivate orchards and grow the plane tree, poplar, willow, walnut, beech, maple, and mulberry. Wild plants and shrubs spring from the barren land in the spring and afford pasturage, but the summer sun burns them away. According to FAO reports, the major types of forests that exist in Iran and their respective areas are:
Caspian forests of the northern districts (33,000 km2)
Limestone mountainous forests in the northeastern districts (Juniperus forests, 13,000 km2)
Pistachio forests in the eastern, southern and southeastern districts (26,000 km2)
Oak forests in the central and western districts (100,000 km2)
Shrubs of the Dasht-e Kavir districts in the central and northeastern part of the country (10,000 km2)
Sub-tropical forests of the southern coast (5,000 km2) like the Hara forests.
More than 8,200 plant species grow in Iran. The area covered by Iran's natural flora is four times that of Europe.
Fauna
Iran's living fauna includes 34 bat species, Indian grey mongoose, small Indian mongoose, golden jackal, Indian wolf, foxes, striped hyena, leopard, Eurasian lynx, brown bear and Asian black bear. Ungulate species include wild boar, urial, Armenian mouflon, red deer, and goitered gazelle.
Domestic ungulates are represented by sheep, goat, cattle, horse, water buffalo, donkey and camel. Bird species like pheasant, partridge, stork, eagles and falcons are also native to Iran.
Endangered
As of 2001, 20 of Iran's mammal species and 14 bird species were endangered. Endangered species in Iran include the Baluchistan bear, Asiatic cheetah, Caspian seal, Persian fallow deer, Siberian crane, hawksbill turtle, green turtle, Oxus cobra, Latifi's viper, dugong, Panthera pardus tulliana, Caspian Sea wolf, and dolphins. At least 74 species of Iranian wildlife are listed on the IUCN Red List, a sign of serious threats to the country's biodiversity. The Majlis has shown disregard for wildlife by passing laws and regulations such as the act that lets the Ministry of Industries and Mines exploit mines without the involvement of the Department of Environment, and by approving large national development projects without demanding comprehensive study of their impact on wildlife habitats.
The leopard's main range overlaps with that of bezoar ibex, which occurs throughout Alborz and Zagros mountain ranges, as well as smaller ranges within the Iranian Plateau. The leopard population is very sparse, due to loss of habitat, loss of natural prey, and population fragmentation. Apart from bezoar ibex, wild sheep, boar, deer, and domestic livestock constitute leopard prey in Iran.
Extinct
Aurochs (unknown date post-dating the Neolithic).
Cave hyena, native to Iran during the Last Glacial Period (as evidenced by remains in sites such as Wezmeh Cave), became extinct at an unknown date.
Narrow-nosed rhinoceros, native to Iran during the Last Glacial Period, became extinct at an unknown date.
Hydruntine, an extinct species of wild ass; its youngest records in Iran date to the 2nd millennium BC.
The Syrian elephant roamed southern Iran, before vanishing there in ancient times.
The Asiatic lion was recorded only in Iran's Khuzestan and Fars provinces. The last sighting occurred in 1957 in the Dez River valley. In the 1970s, Arzhan National Park was considered as a site for its reintroduction.
The Caspian tiger used to occur in the northern region around the Caspian Sea, and in the Trans-Caucasian and Turkestani regions of the Union of Soviet Socialist Republics, before 1960. The last tiger in Iran was reportedly sighted in Golestan National Park in 1958.
See also
List of birds of Iran
List of mammals of Iran
List of non-marine molluscs of Iran
World Network of Biosphere Reserves in Asia and the Pacific
List of national parks and protected areas of Iran
Geography of Iran
Environmental issues in Iran
International rankings of Iran
Wildlife of Afghanistan
Wildlife of South Asia
Wildlife of Iraq
References
External links
Fauna of Persia, Encyclopædia Iranica
Department of Environment of Iran
Asian Leopard Specialist Society, Iran: Research, Conservation and Management
Flora of Iran by Pr Ahmad Ghahreman
Flora of Iran
Iranian Cheetah Society (ICS)
Status of the Persian Leopard in Iran
Iran
Biota of Iran | Wildlife of Iran | [
"Biology"
] | 1,162 | [
"Biota by country",
"Wildlife by country",
"Biota of Iran"
] |
9,472,420 | https://en.wikipedia.org/wiki/Wildlife%20of%20Afghanistan | Afghanistan has long been known for diverse wildlife. Many of the larger mammals in the country are categorized by the International Union for Conservation of Nature as globally threatened. These include the snow leopard, Marco Polo sheep, Siberian musk deer, markhor, urial, and the Asiatic black bear. Other species of interest are the ibex, the gray wolf, and the brown bear, striped hyenas, and numerous bird of prey species.
Marco Polo sheep and ibex are poached mostly for food, whereas wolves, snow leopards, and bears are killed mostly to prevent damage to livestock.
A leopard was recorded by a camera-trap in Bamyan Province in 2011. The long-lasting conflict in the country badly affected both predator and prey species, so that the national population is considered to be small and severely threatened. Between 2004 and 2007, a total of 85 leopard skins were seen being offered in markets of Kabul. Contemporary records do not exist for any of the smaller cat species known to have been present in the country, all of which were threatened already in the 1970s by indiscriminate hunting, prey depletion and habitat destruction.
Sampling Afghanistan's wildlife
Altai weasel (Mustela altaica)
Asiatic black bear (Ursus thibetanus)
Asiatic brown bear (Ursus arctos)
Eurasian otter (Lutra lutra)
Geoffroy's bat (Myotis emarginatus)
Gray wolf (Canis lupus)
Hare (Lepus tolai)
Ibex (Capra sibirica)
Kashmir cave bat (Myotis longipes)
Leopard (Panthera pardus)
Lesser horseshoe bat (Rhinolophus hipposideros)
Long-tailed marmot (Marmota caudata)
Lynx (Lynx lynx)
Marco Polo sheep (Ovis ammon polii)
Markhor (Capra falconeri)
Mehely's horseshoe bat (Rhinolophus mehelyi)
Mouflon (or urial) (Ovis orientalis)
Pallas' cat (Otocolobus manul)
Pikas (Ochotona spp)
Red fox (Vulpes vulpes)
Sind bat (Eptesicus nasutus)
Snow leopard (Panthera uncia)
Stoat (Mustela erminea)
Stone marten (Martes foina)
Wild goat (Capra aegagrus)
Zarudny's jird (Meriones zarudnyi)
Extinct wildlife
The Asiatic cheetah is considered to have been extirpated in Afghanistan since the 1950s. Two cheetah skins were seen in markets in the country, one in 1971 and another in 2006. The latter was reportedly from Samangan Province.
The Caspian tiger used to occur along the upper reaches of Hari-Rud near Herat to the jungles in the lower reaches of the river until the early 1970s.
The historical presence of the Asiatic lion in the country is uncertain, as locality records are not known; it is thought to have been present in southwestern and southern Afghanistan. In March 2017, the Afghan Border Police (ABP) seized six white lions at the Wesh–Chaman border crossing in Spin Boldak before they could be smuggled into neighboring Pakistan. The origin of the lions was unclear at first, but the ABP said that they were from Africa. In April 2017, four of the lions were taken to Kabul Zoo; the other two were still somewhere in Kandahar Province.
See also
Geography of Afghanistan
List of protected areas of Afghanistan
Notes
References
External links
Textbook Travel, May 28, 2023
Arab News, April 15, 2019
Biota of Afghanistan
Afghanistan | Wildlife of Afghanistan | [
"Biology"
] | 743 | [
"Biota by country",
"Wildlife by country",
"Biota of Afghanistan"
] |
9,472,437 | https://en.wikipedia.org/wiki/Connected%20Mathematics | Connected Mathematics is a comprehensive mathematics program intended for U.S. students in grades 6–8. The curriculum design, text materials for students, and supporting resources for teachers were created and have been progressively refined by the Connected Mathematics Project (CMP) at Michigan State University with advice and contributions from many mathematics teachers, curriculum developers, mathematicians, and mathematics education researchers.
The current third edition of Connected Mathematics is a major revision of the program to reflect new expectations of the Common Core State Standards for Mathematics and what the authors have learned from over twenty years of field experience by thousands of teachers working with millions of middle grades students. This CMP3 program is now published in paper and electronic form by Pearson Education.
Core principles
The first edition of Connected Mathematics, developed with financial support from the National Science Foundation, was designed to provide instructional materials for middle grades mathematics. It was based on the 1989 Curriculum and Evaluation Standards and the 1991 Professional Standards for Teaching Mathematics from the National Council of Teachers of Mathematics. These standards highlighted four core features of the curriculum:
Comprehensive coverage of mathematical concepts and skills across four content strands—number, algebra, geometry and measurement, and probability and statistics.
Connections between the concepts and methods of the four major content strands, and between the abstractions of mathematics and their applications in real-world problem contexts.
Instructional materials that transform classrooms into dynamic environments where students learn by solving problems and sharing their thinking with others, while teachers encourage and support students to be curious, to ask questions, and to enjoy learning and using mathematics.
Developing students' understanding of mathematical concepts, principles, procedures, and habits of mind, and fostering the disposition to use mathematical reasoning in making sense of new situations and solving problems.
These principles have guided the development and refinement of the Connected Mathematics program for over twenty years. The first edition was published in 1995; a major revision, also supported by National Science Foundation funding, was published in 2006; and the current third edition was published in 2014. In the third edition, the collection of units was expanded to cover Common Core Standards for both grade eight and Algebra I.
Each CMP grade level course aims to advance student understanding, skills, and problem-solving in every content strand, with increasing sophistication and challenge over the middle school grades. The problem tasks for students are designed to make connections within mathematics, between mathematics and other subject areas, and/or to real-world settings that appeal to students.
Curriculum units consist of 3–5 investigations, each focused on a key mathematical idea; each investigation consists of several major problems that the teacher and students explore in class. Applications/Connections/Extensions problem sets are included for each investigation to help students practice, apply, connect, and extend essential understandings.
While engaged in collaborative problem-solving and classroom discourse about mathematics, students are explicitly encouraged to reflect on their use of what the NCTM standards once called mathematical processes and now refer to as mathematical practices—making sense of problems and solving them, reasoning abstractly and quantitatively, constructing arguments and critiquing the reasoning of others, modeling with mathematics, using mathematical tools strategically, seeking and using structure, expressing regularity in repeated reasoning, and communicating ideas and results with precision.
Implementation challenges
The introduction of new curriculum content, instructional materials, and teaching methods is challenging in K–12 education. When the proposed changes contrast with long-standing traditional practice, it is common to hear concerns from parents, teachers, and other professionals, as well as from students who have been successful and comfortable in traditional classrooms. In recognition of this innovation challenge, the National Science Foundation complemented its investment in new curriculum materials with substantial investments in professional development for teachers. By funding state and urban systemic initiatives, local systemic change projects, and math-science partnership programs, as well as national centers for standards-based school mathematics curriculum dissemination and implementation, the NSF provided powerful support for the adoption and implementation of the various reform mathematics curricula developed during the standards era.
In addition to those programs, for nearly twenty years, CMP has sponsored summer Getting to Know CMP institutes, workshops for leaders of CMP implementation, and an annual User's Conference for the sharing of implementation experiences and insights, all on the campus of Michigan State University. The whole reform curriculum effort has greatly enhanced the field's understanding of what works in that important and challenging process—the clearest message being that significant lasting change takes time, persistent effort, and coordination of work by teachers at all levels in a system.
Research findings
Connected Mathematics has become the most widely used of the middle school curriculum materials developed to implement the NCTM Standards. The effects of its use have been described in expository journal articles and evaluated in mathematics education research projects. Many of the research studies are master's or doctoral dissertation research projects focused on specific aspects of the CMP classroom experience and student learning. But there have also been a number of large-scale independent evaluations of the results of the program.
In the large-scale controlled research studies, the most common (but by no means universal) pattern of results has been better performance by CMP students on measures of conceptual understanding and problem solving, and no significant difference between students using CMP and those using traditional curriculum materials on measures of routine skills and factual knowledge. For example, this pattern is what the LieCal project found from a longitudinal study comparing learning by students in CMP and traditional middle grades curricula:
(1) Students did not sacrifice basic mathematical skills if they were taught using a standards-based or reform mathematics curriculum like CMP; (2) African American students experienced greater gains in symbol manipulation when they used a traditional curriculum; (3) the use of either the CMP or a non-CMP curriculum improved the mathematics achievement of all students, including students of color; (4) the use of CMP contributed to significantly higher problem-solving growth for all ethnic groups; and (5) a high level of conceptual emphasis in a classroom improved the students’ ability to represent problem situations.
Perhaps the most telling result of all is reported in the 2008 study by James Tarr and colleagues at the University of Missouri. While finding no overall significant effects from use of reform or traditional curriculum materials, the study did discover effects favoring the NSF-funded curricula when those programs were implemented with high or even moderate levels of fidelity to Standards-based learning environments. That is, when the innovative programs are used as designed, they produce positive effects.
Historical controversy
Like other curricula designed and developed during the 1990s to implement the NCTM Standards, Connected Math was criticized by supporters of more traditional curricula. Critics made the following claims:
Reform curricula like CMP pay too little attention to the development of basic computational skills in number and algebra;
Student investigation and discovery of key mathematical concepts and skills might lead to critical gaps and misconceptions in their knowledge.
Emphasis on mathematics in real-world contexts might cause students to miss abstractions and generalizations that are the powerful heart of the subject.
The lack of explanatory prose in textbooks makes it hard for parents to help their children with homework and puts students with weak note-taking abilities, poor or slow handwriting, or attention deficits at a distinct disadvantage. Additionally, with limited explanatory written material, students who miss one or more days of school will struggle to catch up on the missed content.
Small-group learning is less efficient than teacher-led direct instructional methods, and the most able and interested students might be held back by having to collaborate with less able and motivated students.
The CMP program does not take into account the needs of students with minor learning disabilities or other disabilities who might be integrated into general education classrooms but still need extra help and need associated or modified learning materials.
The publishers and creators of CMP have stated that reassuring results from a variety of research projects blunted concerns about basic skill mastery, missing knowledge, and student misconceptions resulting from use of CMP and other reform curricula. However, many teachers and parents remain wary.
References
External links
Connected Mathematics Project http://connectedmath.msu.edu/
Pearson http://www.connectedmathematics3.com
Common Core State Standards http://www.corestandards.org/Math
Education reform in the United States
Mathematics education
Mathematics education reform
Standards-based education
Algebra education | Connected Mathematics | [
"Mathematics"
] | 1,699 | [
"Algebra education",
"Algebra"
] |
9,473,033 | https://en.wikipedia.org/wiki/Ministry%20of%20Foreign%20Affairs%20%28Netherlands%29 | The Ministry of Foreign Affairs (; BZ) is the Netherlands' ministry responsible for foreign relations, foreign policy, international development, international trade, diaspora and matters dealing with the European Union, NATO and the Benelux Union. The ministry was created in 1798, as the Department of Foreign Affairs of the Batavian Republic. In 1876, it became the Ministry of Foreign Affairs.
The Minister of Foreign Affairs is the head of the ministry and a member of the Cabinet of the Netherlands; the incumbent is Caspar Veldkamp. The Minister for Foreign Trade and Development Aid is a minister without portfolio within the Ministry of Foreign Affairs; the incumbent is Reinette Klever.
History
The Ministry was formed in 1798 as the Department of Foreign Affairs. Since 1965 a special Minister for International Development has been appointed in each government, with the exception of the First Balkenende cabinet and the First Rutte cabinet.
Responsibilities
The Ministry is responsible for the foreign relations of the Netherlands and its responsibilities are as follows:
to maintain relations with other countries and international organisations.
to promote cooperation with other countries.
to help developing countries accelerate their social and economic development through international cooperation.
to promote the interests of Dutch nationals and the Netherlands abroad.
to collect information on other countries and international developments for the Government and other interested parties.
to provide information on Dutch policy and the Netherlands' position on international issues and developments.
to present the Netherlands to the world.
to deal with applications from and the problems of foreigners living in the Netherlands or seeking to enter or leave the country.
Organisation
The Minister of Foreign Affairs and the Minister for Foreign Trade and Development Cooperation provide political leadership to the Ministry. The ministry consists of four directorates-general, which deal with a particular policy area:
The Directorate-General for Political Affairs is concerned with peace, security and human rights. This includes the EU's Common Foreign and Security Policy, the political role of NATO, the United Nations and the guidance for embassies and other diplomatic missions.
The Directorate-General for European Cooperation concerns itself with the European Union. It is responsible for Dutch relations with EU members and candidate countries. It also coordinates policy in other regional organisations like the Council of Europe, the OECD and the Benelux.
The Directorate-General for International Cooperation is responsible for international development, in line with the four Dutch priorities of water, security and the rule of law, food security and sexual and reproductive health and rights.
The Directorate-General for Foreign Economic Relations promotes the interests of Dutch businesses abroad and helps shape the Dutch contribution to the global economic order.
The Netherlands has about 140 diplomatic missions abroad, see list of diplomatic missions of the Netherlands.
International Institute for Communication and Development
The International Institute for Communication and Development (IICD) was a non-profit foundation established by the Ministry in 1996. IICD's aim was to support sustainable development through the use of information and communication technologies (ICTs), notably computers and the Internet.
The institute, which was based in The Hague, was active in nine developing countries: Bolivia, Burkina Faso, Ecuador, Ghana, Jamaica, Mali, Tanzania, Uganda and Zambia. IICD supported policy processes and projects involving the use of ICTs in the following sectors: health, education, "livelihoods" (mainly agriculture), and governance. IICD received funding from the Directorate-General for International Cooperation (DGIS) of the Netherlands, the UK Department for International Development (DFID) and the Swiss Agency for Development and Cooperation (SDC), amongst others.
IICD ceased operations on 31 December 2015.
See also
Minister of Foreign Affairs of the Netherlands
References
External links
IICD Legacy website - with information on IICD's approach and programmes, and an extensive digital archive with all key IICD resources
1798 establishments in the Batavian Republic
Netherlands
Netherlands
Foreign relations of the Netherlands
Foreign Affairs
Ministries established in 1798
Development organizations
Information and communication technologies for development
Non-profit organisations based in the Netherlands
Organisations based in The Hague | Ministry of Foreign Affairs (Netherlands) | [
"Technology"
] | 805 | [
"Information and communications technology",
"Information and communication technologies for development"
] |
9,476,628 | https://en.wikipedia.org/wiki/List%20of%20Y-DNA%20single-nucleotide%20polymorphisms |
See also
Single-nucleotide polymorphism
Unique-event polymorphism
Human Y-chromosome DNA haplogroups
List of Y-STR markers
External links
Sequence information for 218 M series markers published by 2001
ISOGG Y-DNA SNP Index - 2007
Karafet et al. (2008) Supplemental Research Data
DNA
Y DNA
Human evolution
Human population genetics
Genetic genealogy
Phylogenetics
Bioinformatics
Evolutionary biology
Molecular genetics | List of Y-DNA single-nucleotide polymorphisms | [
"Chemistry",
"Engineering",
"Biology"
] | 87 | [
"Evolutionary biology",
"Biological engineering",
"Taxonomy (biology)",
"Bioinformatics",
"Molecular genetics",
"Molecular biology",
"Phylogenetics"
] |
9,477,975 | https://en.wikipedia.org/wiki/Completion%20of%20a%20ring | In abstract algebra, a completion is any of several related functors on rings and modules that result in complete topological rings and modules. Completion is similar to localization, and together they are among the most basic tools in analysing commutative rings. Complete commutative rings have a simpler structure than general ones, and Hensel's lemma applies to them. In algebraic geometry, a completion of a ring of functions R on a space X concentrates on a formal neighborhood of a point of X: heuristically, this is a neighborhood so small that all Taylor series centered at the point are convergent. An algebraic completion is constructed in a manner analogous to completion of a metric space with Cauchy sequences, and agrees with it in the case when R has a metric given by a non-Archimedean absolute value.
General construction
Suppose that E is an abelian group with a descending filtration

$E = F_0 E \supseteq F_1 E \supseteq F_2 E \supseteq \cdots$

of subgroups. One then defines the completion (with respect to the filtration) as the inverse limit:

$\hat{E} = \varprojlim_n \left( E / F_n E \right).$

This is again an abelian group. Usually E is an additive abelian group. If E has additional algebraic structure compatible with the filtration, for instance E is a filtered ring, a filtered module, or a filtered vector space, then its completion is again an object with the same structure that is complete in the topology determined by the filtration. This construction may be applied both to commutative and noncommutative rings. As may be expected, when the intersection $\bigcap_n F_n E$ equals zero, this produces a complete topological ring.
Krull topology
In commutative algebra, the filtration on a commutative ring R by the powers of a proper ideal I determines the Krull (after Wolfgang Krull) or I-adic topology on R. The case of a maximal ideal is especially important, for example the distinguished maximal ideal of a valuation ring. The basis of open neighbourhoods of 0 in R is given by the powers $I^n$, which are nested and form a descending filtration on R:

$R \supseteq I \supseteq I^2 \supseteq \cdots \supseteq I^n \supseteq \cdots$

(Open neighborhoods of any r ∈ R are given by cosets $r + I^n$.) The (I-adic) completion is the inverse limit of the factor rings,

$\hat{R}_I = \varprojlim_n R/I^n,$

pronounced "R I hat". The kernel of the canonical map $R \to \hat{R}_I$ from the ring to its completion is the intersection of the powers of I. Thus the map is injective if and only if this intersection reduces to the zero element of the ring; by the Krull intersection theorem, this is the case for any commutative Noetherian ring which is an integral domain or a local ring.
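The inverse-limit description can be made concrete for R = Z and I = (p). The sketch below (plain Python; the function names `to_completion` and `compatible` are illustrative, not from any library) represents an element of the completion by its compatible system of residues:

```python
# Illustrative sketch (not from the source): an element of the I-adic
# completion for R = Z, I = (p) is a compatible sequence (r_n) with
# r_n in Z/p^n and r_{n+1} = r_n (mod p^n) -- the inverse-limit condition.

def to_completion(r, p, depth):
    """Image of the integer r under the canonical map Z -> lim Z/p^n,
    truncated at `depth` levels."""
    return [r % p**n for n in range(1, depth + 1)]

def compatible(seq, p):
    """Check the inverse-limit compatibility condition on a residue sequence."""
    return all(seq[n + 1] % p**(n + 1) == seq[n] for n in range(len(seq) - 1))

seq = to_completion(123456, 7, 6)
assert compatible(seq, 7)

# The kernel of Z -> its completion is the intersection of the (p^n),
# which is zero: a nonzero integer has a nonzero residue once p^n exceeds it.
assert any(r != 0 for r in to_completion(42, 7, 6))
```

The compatibility check is exactly the defining condition of the inverse limit: forgetting the last residue digit of a deeper truncation recovers the shallower one.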
There is a related topology on R-modules, also called Krull or I-adic topology. A basis of open neighborhoods of an element x of a module M is given by the sets of the form

$x + I^n M.$

The I-adic completion of an R-module M is the inverse limit of the quotients

$\hat{M}_I = \varprojlim_n M/I^n M.$

This procedure converts any module over R into a topological module over $\hat{R}_I$; the resulting module is complete when the ideal I is finitely generated (in particular, whenever R is Noetherian), but not in general.
Examples
The ring of p-adic integers is obtained by completing the ring of integers at the ideal (p).
Let R = K[x1,...,xn] be the polynomial ring in n variables over a field K and $\mathfrak{m} = (x_1, \ldots, x_n)$ be the maximal ideal generated by the variables. Then the completion $\hat{R}_{\mathfrak{m}}$ is the ring K[[x1,...,xn]] of formal power series in n variables over K.
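Hensel's lemma, mentioned in the introduction as a feature of complete rings, can be demonstrated inside the p-adic completion. The sketch below (illustrative names, not from the source; requires Python 3.8+ for `pow(x, -1, m)` modular inverses) lifts a square root of 2 from Z/7 to arbitrarily high 7-adic precision by Newton iteration:

```python
# Illustrative sketch (not from the source): Hensel/Newton lifting in the
# 7-adic completion Z_7.  Starting from 3^2 = 2 (mod 7), lift to a square
# root of 2 modulo 7^k; each Newton step roughly doubles the precision.

def hensel_sqrt(a, p, x0, k):
    """Lift x0 with x0^2 = a (mod p) to x with x^2 = a (mod p^k)."""
    x, mod = x0, p
    while mod < p**k:
        mod = min(mod * mod, p**k)        # Newton doubles the working modulus
        # f(x) = x^2 - a, f'(x) = 2x; pow(2*x, -1, mod) is the inverse mod `mod`
        x = (x - (x * x - a) * pow(2 * x, -1, mod)) % mod
    return x

x = hensel_sqrt(2, 7, 3, 8)
assert (x * x - 2) % 7**8 == 0            # a square root of 2 modulo 7^8
```

The lifted value is one coordinate of a genuine element of $\mathbb{Z}_7$ whose square is 2, something that has no counterpart in Z itself.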
Given a noetherian ring $R$ and an ideal $I = (f_1, \ldots, f_n),$ the $I$-adic completion of $R$ is an image of a formal power series ring, specifically, the image of the surjection

$R[[x_1, \ldots, x_n]] \to \hat{R}_I, \qquad x_i \mapsto f_i.$

The kernel is the ideal

$(x_1 - f_1, \ldots, x_n - f_n).$
Completions can also be used to analyze the local structure of singularities of a scheme. For example, the affine schemes associated to $\mathbb{C}[x,y]/(xy)$ and the nodal cubic plane curve $\mathbb{C}[x,y]/(y^2 - x^2(1+x))$ have similar looking singularities at the origin when viewing their graphs (both look like a plus sign). Notice that in the second case, any Zariski neighborhood of the origin is still an irreducible curve. If we use completions, then we are looking at a "small enough" neighborhood where the node has two components. Taking the localizations of these rings along the ideal $(x,y)$ and completing gives $\mathbb{C}[[x,y]]/(xy)$ and $\mathbb{C}[[x,y]]/\bigl((y - x\sqrt{1+x})(y + x\sqrt{1+x})\bigr)$ respectively, where $\sqrt{1+x}$ is the formal square root of $1+x$ in $\mathbb{C}[[x]].$ More explicitly, the power series:

$\sqrt{1+x} = 1 + \frac{x}{2} - \frac{x^2}{8} + \frac{x^3}{16} - \cdots$

Since both rings are given by the intersection of two ideals generated by a homogeneous degree 1 polynomial, we can see algebraically that the singularities "look" the same. This is because such a scheme is the union of two non-equal linear subspaces of the affine plane.
Properties
The completion of a Noetherian ring with respect to some ideal is a Noetherian ring.
The completion of a Noetherian local ring with respect to the unique maximal ideal is a Noetherian local ring.
The completion is a functorial operation: a continuous map f: R → S of topological rings gives rise to a map of their completions,

$\hat{f} : \hat{R} \to \hat{S}.$

Moreover, if M and N are two modules over the same topological ring R and f: M → N is a continuous module map then f uniquely extends to the map of the completions:

$\hat{f} : \hat{M} \to \hat{N},$

where $\hat{M}, \hat{N}$ are modules over $\hat{R}.$
The completion of a Noetherian ring R is a flat module over R.
The completion of a finitely generated module M over a Noetherian ring R can be obtained by extension of scalars:

$\hat{M} = M \otimes_R \hat{R}.$

Together with the previous property, this implies that the functor of completion on finitely generated R-modules is exact: it preserves short exact sequences. In particular, taking quotients of rings commutes with completion, meaning that for any quotient R-algebra $R/I$, there is an isomorphism

$\widehat{R/I} \cong \hat{R} / I\hat{R}.$
Cohen structure theorem (equicharacteristic case). Let R be a complete local Noetherian commutative ring with maximal ideal $\mathfrak{m}$ and residue field K. If R contains a field, then

$R \cong K[[x_1, \ldots, x_n]] / I$

for some n and some ideal I (Eisenbud, Theorem 7.7).
See also
Formal scheme
Profinite integer
Locally compact field
Zariski ring
Linear topology
Quasi-unmixed ring
Citations
References
David Eisenbud, Commutative algebra. With a view toward algebraic geometry. Graduate Texts in Mathematics, 150. Springer-Verlag, New York, 1995. xvi+785 pp. ;
Commutative algebra
Topological algebra | Completion of a ring | [
"Mathematics"
] | 1,302 | [
"Topological algebra",
"Fields of abstract algebra",
"Commutative algebra",
"Topology"
] |
9,478,497 | https://en.wikipedia.org/wiki/Micro%20Center | Micro Center is an American computer retail store, headquartered in Hilliard, Ohio. It was founded in 1979, and has 28 stores in 19 states. The chain is a highly electronic and mechanical center for building personal computers and gaming computers.
History
Micro Center was founded in Columbus, Ohio in 1979 by John Baker and Bill Bayne, two former Radio Shack employees, with a $35,000 investment. Rick Mershad is the current CEO and President of Micro Center. Mershad was one of the first 10 employees of the company, starting as a Sales Associate two years after the company's founding. The first Micro Center store was established in a storefront located in the Lane Avenue Shopping Center in Upper Arlington, Ohio. The store benefited from its proximity to Ohio State University and the scientific think-tank Battelle Memorial Institute, which provided a large customer base and a source of computer-literate salespeople. Their goal for the first year was $30 million in sales, and they achieved $29.9 million.
In the fall of 1997, Micro Center expanded into Silicon Valley by opening a store in Santa Clara, California. To compete against what was then the dominant computer retailer in California, Fry's Electronics, Micro Center stressed its better employee pay and superior customer service.
In 2009, Micro Center developed an "18-minute pickup" service where customers who order merchandise on their website can pick it up from the store in 18 minutes.
On July 23, 2012, Micro Center suddenly closed its Santa Clara store—its only one in Silicon Valley—after it was unable to negotiate a further extension of its store lease. By then, the store's front facade had already become a dated relic of the late 1990s, with long-obsolete logos from Hayes, USRobotics, Practical Peripherals, Lotus Software, and Fujifilm.
In January 2014, the company planned to open two new New York City stores in Brooklyn and Queens.
As of 2024, there are 28 Micro Center stores nationwide in 19 states, including California, Colorado, Florida, Georgia, Illinois, Indiana, Kansas, Maryland, Massachusetts, Michigan, Minnesota, Missouri, New Jersey, New York, North Carolina, Ohio, Pennsylvania, Texas, and Virginia. A new store in Santa Clara (in a different location than the previous one) is planned to open in 2025, delayed from 2024.
Corporate structure
Micro Center is a subsidiary of Micro Electronics, Inc., a privately held corporation headquartered in Hilliard, Ohio.
Stores are sized up to , stocking about 36,000 products across 700 categories, including major name brands and Micro Center's own brands. Micro Center is an approved seller of all Apple products. The company has had Apple departments in all stores since 1982, and has included "Build Your Own PC" departments, and "Knowledge Bars" for service and support since 2007.
Public profile
Micro Center was the first retailer in the United States to sell the DJI Mavic Pro drone, launching it by hosting a three-day demonstration in their Columbus store's parking lot which was open to the press and the public.
In a 2015 interview, Micro Center CEO Rick Mershad described how their product line is changing: the STEM movement is driving students and adults to make their own creations, and Micro Center is focusing on Arduino projects and Raspberry Pi, which require more consultative selling.
Media reception
Joan Verdon of The Record noted that meeting customers' needs with a high level of service and skilled salespeople is Micro Center's "claim to fame". She also quoted Doug Olenick, editor at TWICE, a major consumer electronics trade publication, who said that the store's salespeople, compared to others in the industry, are extremely well trained.
In 2021, the store started to offer a free solid-state drive to new customers, but Storage Review was not impressed, concluding "it's free, but it's still not worth it". More generally, they noted that: "Micro Center's Inland brand is to tech what Amazon's dozens of brands are to toilet paper, shampoo, and such."
Awards and rankings
In 2014, Micro Center was ranked 93rd on a list of the 100 hottest retailers in the United States compiled by the National Retail Federation.
In 2015, the industry trade journal Dealerscope ranked it as the 18th largest consumer electronics retailer in the United States and Canada.
In 2016, Forbes magazine ranked it 195th among America's largest private companies.
In October 2016, Micro Center stores won first and second prizes in Intel's annual "Score with Intel Core" competition, and donated their prize money to local schools.
In 2019, Micro Center stores won first and third prizes, making two more prize money donations to local schools.
See also
According to the American business research company Hoover's, the major competitors to Micro Center's parent company Micro Electronics are:
Best Buy
Fry's Electronics (defunct)
PC Connection
Amazon.com
References
External links
Consumer electronics retailers of the United States
Consumer electronics retailers
Consumer electronics
Online retailers of the United States
Computer companies of the United States
Home computer hardware companies
American companies established in 1979
Computer companies established in 1979
Computer hardware companies
Electronics companies established in 1979
Retail companies established in 1979
1979 establishments in Ohio
Retail companies based in Ohio
Privately held companies based in Ohio
Companies based in Franklin County, Ohio
Companies based in the Columbus, Ohio metropolitan area | Micro Center | [
"Technology"
] | 1,099 | [
"Computer hardware companies",
"Computers"
] |
9,478,630 | https://en.wikipedia.org/wiki/Integral%20element | In commutative algebra, an element b of a commutative ring B is said to be integral over a subring A of B if b is a root of some monic polynomial over A.
If A, B are fields, then the notions of "integral over" and of an "integral extension" are precisely "algebraic over" and "algebraic extensions" in field theory (since the root of any polynomial is the root of a monic polynomial).
The case of greatest interest in number theory is that of complex numbers integral over Z (e.g., $\sqrt{2}$ or $1 + i$); in this context, the integral elements are usually called algebraic integers. The algebraic integers in a finite extension field k of the rationals Q form a subring of k, called the ring of integers of k, a central object of study in algebraic number theory.
In this article, the term ring will be understood to mean commutative ring with a multiplicative identity.
Definition
Let $B$ be a ring and let $A$ be a subring of $B.$

An element $b$ of $B$ is said to be integral over $A$ if for some $n \geq 1$ there exist $a_0, a_1, \ldots, a_{n-1}$ in $A$ such that

$b^n + a_{n-1} b^{n-1} + \cdots + a_1 b + a_0 = 0.$

The set of elements of $B$ that are integral over $A$ is called the integral closure of $A$ in $B.$ The integral closure of any subring in $B$ is, itself, a subring of $B$ and contains $A.$ If every element of $B$ is integral over $A,$ then we say that $B$ is integral over $A$, or equivalently $B$ is an integral extension of $A.$
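A small worked instance of the definition (added for illustration, not from the source):

```latex
\sqrt{2}\ \text{is integral over } \mathbb{Z}:\qquad
\bigl(\sqrt{2}\bigr)^{2} - 2 = 0,
\quad x^{2} - 2 \in \mathbb{Z}[x]\ \text{monic}.
```

By contrast, $\tfrac{1}{2}$ is not integral over $\mathbb{Z}$: multiplying a supposed relation $(\tfrac{1}{2})^n + a_{n-1}(\tfrac{1}{2})^{n-1} + \cdots + a_0 = 0$ with $a_i \in \mathbb{Z}$ by $2^n$ gives $1 + 2(a_{n-1} + 2a_{n-2} + \cdots + 2^{n-1}a_0) = 0$, which is impossible modulo 2.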
Examples
Integral closure in algebraic number theory
There are many examples of integral closure which can be found in algebraic number theory since it is fundamental for defining the ring of integers $\mathcal{O}_K$ for an algebraic field extension $K/\mathbb{Q}$ (or $L/\mathbb{Q}_p$).
Integral closure of integers in rationals
Integers are the only elements of Q that are integral over Z. In other words, Z is the integral closure of Z in Q.
Quadratic extensions
The Gaussian integers are the complex numbers of the form $a + b\sqrt{-1},\ a, b \in \mathbb{Z},$ and are integral over Z. $\mathbb{Z}[\sqrt{-1}]$ is then the integral closure of Z in $\mathbb{Q}(\sqrt{-1})$. Typically this ring is denoted $\mathbb{Z}[i].$

The integral closure of Z in $\mathbb{Q}(\sqrt{d})$, for a squarefree integer $d$, is the ring $\mathbb{Z}[\sqrt{d}]$ if $d \equiv 2, 3 \pmod 4$ and the ring $\mathbb{Z}\left[\tfrac{1+\sqrt{d}}{2}\right]$ if $d \equiv 1 \pmod 4.$

This example and the previous one are examples of quadratic integers. The integral closure of a quadratic extension can be found by constructing the minimal polynomial of an arbitrary element and finding a number-theoretic criterion for the polynomial to have integral coefficients. This analysis can be found in the quadratic extensions article.
Roots of unity
Let ζ be a root of unity. Then the integral closure of Z in the cyclotomic field Q(ζ) is Z[ζ]. This can be found by using the minimal polynomial and using Eisenstein's criterion.
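As a concrete check of the Eisenstein argument (an illustrative sketch, not from the source): the coefficient of $x^j$ in $\Phi_p(x+1) = ((x+1)^p - 1)/x$ is the binomial coefficient $\binom{p}{j+1}$, so the criterion at p can be verified directly:

```python
# Illustrative sketch (not from the source): Eisenstein's criterion at p for
# the shifted cyclotomic polynomial Phi_p(x + 1) = ((x + 1)^p - 1)/x, whose
# coefficient of x^j is the binomial coefficient C(p, j + 1).
from math import comb

def eisenstein_at_p(p):
    coeffs = [comb(p, j + 1) for j in range(p)]        # constant ... leading
    lower_div = all(c % p == 0 for c in coeffs[:-1])   # p divides all but leading
    monic = coeffs[-1] == 1                            # leading coefficient 1
    const_ok = coeffs[0] % (p * p) != 0                # constant term is p, not p^2
    return lower_div and monic and const_ok

assert all(eisenstein_at_p(p) for p in (2, 3, 5, 7, 11, 13))
```

Irreducibility of $\Phi_p(x+1)$ gives irreducibility of $\Phi_p(x)$, the key step in showing the integral closure is exactly Z[ζ] for prime p.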
Ring of algebraic integers
The integral closure of Z in the field of complex numbers C, or the algebraic closure $\overline{\mathbb{Q}},$ is called the ring of algebraic integers.
Other
The roots of unity, nilpotent elements and idempotent elements in any ring are integral over Z.
Integral closure in algebraic geometry
In geometry, integral closure is closely related with normalization and normal schemes. It is the first step in resolution of singularities since it gives a process for resolving singularities of codimension 1.
For example, the integral closure of $\mathbb{C}[x,y,z]/(xy)$ is the ring $\mathbb{C}[x,z] \times \mathbb{C}[y,z]$ since geometrically, the first ring corresponds to the $xz$-plane unioned with the $yz$-plane. They have a codimension 1 singularity along the $z$-axis where they intersect.
Let a finite group G act on a ring A. Then A is integral over AG, the set of elements fixed by G; see Ring of invariants.
Let R be a ring and u a unit in a ring containing R. Then

$u^{-1}$ is integral over R if and only if $u^{-1} \in R[u].$

$R[u] \cap R[u^{-1}]$ is integral over R.
The integral closure of the homogeneous coordinate ring of a normal projective variety X is the ring of sections

$\bigoplus_{n \geq 0} \operatorname{H}^0(X, \mathcal{O}_X(n)).$
Integrality in algebra
If $\overline{k}$ is an algebraic closure of a field k, then $\overline{k}[x_1, \ldots, x_n]$ is integral over $k[x_1, \ldots, x_n].$

The integral closure of C[[x]] in a finite extension of C((x)) is of the form $\mathbb{C}[[x^{1/n}]]$ (cf. Puiseux series)
Equivalent definitions
Let B be a ring, and let A be a subring of B. Given an element b in B, the following conditions are equivalent:
(i) b is integral over A;
(ii) the subring A[b] of B generated by A and b is a finitely generated A-module;
(iii) there exists a subring C of B containing A[b] and which is a finitely generated A-module;
(iv) there exists a faithful A[b]-module M such that M is finitely generated as an A-module.
The usual proof of this uses the following variant of the Cayley–Hamilton theorem on determinants:
Theorem Let u be an endomorphism of an A-module M generated by n elements and I an ideal of A such that $u(M) \subseteq IM$. Then there is a relation:

$u^n + a_1 u^{n-1} + \cdots + a_{n-1} u + a_n = 0, \qquad a_i \in I^i.$
This theorem (with I = A and u multiplication by b) gives (iv) ⇒ (i) and the rest is easy. Coincidentally, Nakayama's lemma is also an immediate consequence of this theorem.
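A minimal numerical illustration of the determinant trick (the module and matrix below are chosen for illustration, not taken from the source): multiplication by $b = \sqrt{2}$ on the finitely generated Z-module $M = \mathbb{Z} + \mathbb{Z}\sqrt{2}$ has an integer matrix whose characteristic polynomial is monic and annihilates b, which is the content of the (iv) ⇒ (i) direction:

```python
# Illustrative sketch (not from the source) of the determinant trick:
# b = sqrt(2) acts Z-linearly on the finitely generated Z-module
# M = Z*1 + Z*sqrt(2).  In the basis (1, sqrt(2)):
#   b * 1       = 0*1 + 1*sqrt(2)
#   b * sqrt(2) = 2*1 + 0*sqrt(2)
# The characteristic polynomial of this integer matrix is monic over Z and
# annihilates b -- exactly the (iv) => (i) implication.

mat = [[0, 2],
       [1, 0]]                                   # columns = images of the basis
trace = mat[0][0] + mat[1][1]
det = mat[0][0] * mat[1][1] - mat[0][1] * mat[1][0]
# characteristic polynomial: x^2 - trace*x + det = x^2 - 2
assert (trace, det) == (0, -2)

b = 2 ** 0.5
assert abs(b * b - trace * b + det) < 1e-12      # b satisfies x^2 - 2 = 0
```

The same computation with any element of M in place of b produces a monic integer polynomial it satisfies, which is why the integral elements form a ring.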
Elementary properties
Integral closure forms a ring
It follows from the above four equivalent statements that the set of elements of $B$ that are integral over $A$ forms a subring of $B$ containing $A$. (Proof: If x, y are elements of $B$ that are integral over $A$, then $x + y,\ x - y,\ xy$ are integral over $A$ since they stabilize $A[x][y]$, which is a finitely generated module over $A$ and is annihilated only by zero.) This ring is called the integral closure of $A$ in $B$.
Transitivity of integrality
Another consequence of the above equivalence is that "integrality" is transitive, in the following sense. Let $C$ be a ring containing $B$ and $c \in C$. If $c$ is integral over $B$ and $B$ integral over $A$, then $c$ is integral over $A$. In particular, if $C$ is itself integral over $B$ and $B$ is integral over $A$, then $C$ is also integral over $A$.
Integrally closed in fraction field
If $A$ happens to be the integral closure of $A$ in $B$, then A is said to be integrally closed in $B$. If $B$ is the total ring of fractions of $A$ (e.g., the field of fractions when $A$ is an integral domain), then one sometimes drops the qualification "in $B$" and simply says "integral closure of $A$" and "$A$ is integrally closed." For example, the ring of integers $\mathcal{O}_K$ is integrally closed in the field $K$.
Transitivity of integral closure with integrally closed domains
Let A be an integral domain with the field of fractions K and A' the integral closure of A in an algebraic field extension L of K. Then the field of fractions of A' is L. In particular, A' is an integrally closed domain.
Transitivity in algebraic number theory
This situation is applicable in algebraic number theory when relating the ring of integers and a field extension. In particular, given a field extension $L/K,$ the integral closure of $\mathcal{O}_K$ in $L$ is the ring of integers $\mathcal{O}_L.$
Remarks
Note that transitivity of integrality above implies that if $B$ is integral over $A$, then $B$ is a union (equivalently an inductive limit) of subrings that are finitely generated $A$-modules.

If $A$ is noetherian, transitivity of integrality can be weakened to the statement:

There exists a finitely generated $A$-submodule of $B$ that contains $A[b]$.
Relation with finiteness conditions
Finally, the assumption that $A$ be a subring of $B$ can be modified a bit. If $f: A \to B$ is a ring homomorphism, then one says $f$ is integral if $B$ is integral over $f(A)$. In the same way one says $f$ is finite ($B$ a finitely generated $A$-module) or of finite type ($B$ a finitely generated $A$-algebra). In this viewpoint, one has that

$f$ is finite if and only if $f$ is integral and of finite type.

Or more explicitly,

$B$ is a finitely generated $A$-module if and only if $B$ is generated as an $A$-algebra by a finite number of elements integral over $A$.
Integral extensions
Cohen-Seidenberg theorems
An integral extension A ⊆ B has the going-up property, the lying over property, and the incomparability property (Cohen–Seidenberg theorems). Explicitly, given a chain of prime ideals $\mathfrak{p}_1 \subseteq \cdots \subseteq \mathfrak{p}_n$ in A there exists a chain $\mathfrak{p}'_1 \subseteq \cdots \subseteq \mathfrak{p}'_n$ in B with $\mathfrak{p}'_i \cap A = \mathfrak{p}_i$ (going-up and lying over), and two distinct prime ideals with inclusion relation cannot contract to the same prime ideal (incomparability). In particular, the Krull dimensions of A and B are the same. Furthermore, if A is an integrally closed domain, then the going-down holds (see below).
In general, the going-up implies the lying-over. Thus, in the below, we simply say the "going-up" to mean "going-up" and "lying-over".
When A, B are domains such that B is integral over A, A is a field if and only if B is a field. As a corollary, one has: given a prime ideal $\mathfrak{q}$ of B, $\mathfrak{q}$ is a maximal ideal of B if and only if $\mathfrak{q} \cap A$ is a maximal ideal of A. Another corollary: if L/K is an algebraic extension, then any subring of L containing K is a field.
Applications
Let B be a ring that is integral over a subring A and k an algebraically closed field. If $f : A \to k$ is a homomorphism, then f extends to a homomorphism B → k. This follows from the going-up.
Geometric interpretation of going-up
Let $A \subseteq B$ be an integral extension of rings. Then the induced map

$f^{\#} : \operatorname{Spec} B \to \operatorname{Spec} A, \qquad \mathfrak{q} \mapsto \mathfrak{q} \cap A,$

is a closed map; in fact, $f^{\#}(V(I)) = V(I \cap A)$ for any ideal I, and $f^{\#}$ is surjective. This is a geometric interpretation of the going-up.
Geometric interpretation of integral extensions
Let B be a ring and A a subring that is a noetherian integrally closed domain (i.e., $\operatorname{Spec} A$ is a normal scheme). If B is integral over A, then $\operatorname{Spec} B \to \operatorname{Spec} A$ is submersive; i.e., the topology of $\operatorname{Spec} A$ is the quotient topology. The proof uses the notion of constructible sets. (See also: Torsor (algebraic geometry).)
Integrality, base-change, universally-closed, and geometry
If $B$ is integral over $A$, then $B \otimes_A R$ is integral over R for any A-algebra R. In particular, $\operatorname{Spec}(B \otimes_A R) \to \operatorname{Spec} R$ is closed; i.e., the integral extension induces a "universally closed" map. This leads to a geometric characterization of integral extension. Namely, let B be a ring with only finitely many minimal prime ideals (e.g., integral domain or noetherian ring). Then B is integral over a (subring) A if and only if $\operatorname{Spec}(B \otimes_A R) \to \operatorname{Spec} R$ is closed for any A-algebra R. In particular, every proper map is universally closed.
Galois actions on integral extensions of integrally closed domains
Proposition. Let A be an integrally closed domain with the field of fractions K, L a finite normal extension of K, B the integral closure of A in L. Then the group $G = \operatorname{Gal}(L/K)$ acts transitively on each fiber of $\operatorname{Spec} B \to \operatorname{Spec} A$.

Proof. Suppose $\mathfrak{p}_2 \neq \sigma(\mathfrak{p}_1)$ for any $\sigma$ in G. Then, by prime avoidance, there is an element x in $\mathfrak{p}_2$ such that $\sigma(x) \notin \mathfrak{p}_1$ for any $\sigma$. G fixes the element $y = \prod_{\sigma} \sigma(x)$ and thus y is purely inseparable over K. Then some power $y^e$ belongs to K; since A is integrally closed we have: $y^e \in A.$ Thus, we found $y^e$ is in $\mathfrak{p}_2 \cap A$ but not in $\mathfrak{p}_1 \cap A$; i.e., $\mathfrak{p}_1 \cap A \neq \mathfrak{p}_2 \cap A$.
Application to algebraic number theory
The Galois group $\operatorname{Gal}(L/K)$ then acts on all of the prime ideals $\mathfrak{q}_1, \ldots, \mathfrak{q}_k$ of $\mathcal{O}_L$ lying over a fixed prime ideal $\mathfrak{p}$ of $\mathcal{O}_K$. That is, if

$\mathfrak{p} \cdot \mathcal{O}_L = \mathfrak{q}_1^{e_1} \cdots \mathfrak{q}_k^{e_k}$

then there is a Galois action on the set $S_{\mathfrak{p}} = \{\mathfrak{q}_1, \ldots, \mathfrak{q}_k\}$. This is called the Splitting of prime ideals in Galois extensions.
Remarks
The same idea in the proof shows that if $L/K$ is a purely inseparable extension (need not be normal), then $\operatorname{Spec} B \to \operatorname{Spec} A$ is bijective.
Let A, K, etc. as before but assume L is only a finite field extension of K. Then
(i) $\operatorname{Spec} B \to \operatorname{Spec} A$ has finite fibers.

(ii) the going-down holds between A and B: given a chain of prime ideals $\mathfrak{p}_1 \subseteq \cdots \subseteq \mathfrak{p}_n$ in A and a prime $\mathfrak{q}_n$ of B with $\mathfrak{q}_n \cap A = \mathfrak{p}_n$, there exists a chain $\mathfrak{q}_1 \subseteq \cdots \subseteq \mathfrak{q}_n$ in B that contracts to it.

Indeed, in both statements, by enlarging L, we can assume L is a normal extension. Then (i) is immediate. As for (ii), by the going-up, we can find a chain $\mathfrak{q}'_1 \subseteq \cdots \subseteq \mathfrak{q}'_n$ that contracts to $\mathfrak{p}_1 \subseteq \cdots \subseteq \mathfrak{p}_n$. By transitivity, there is $\sigma \in \operatorname{Gal}(L/K)$ such that $\sigma(\mathfrak{q}'_n) = \mathfrak{q}_n$ and then $\sigma(\mathfrak{q}'_1) \subseteq \cdots \subseteq \sigma(\mathfrak{q}'_n) = \mathfrak{q}_n$ are the desired chain.
Integral closure
Let A ⊂ B be rings and A' the integral closure of A in B. (See above for the definition.)
Integral closures behave nicely under various constructions. Specifically, for a multiplicatively closed subset S of A, the localization S−1A' is the integral closure of S−1A in S−1B, and $A'[t]$ is the integral closure of $A[t]$ in $B[t]$. If $A_i$ are subrings of rings $B_i,\ 1 \leq i \leq n,$ then the integral closure of $\prod A_i$ in $\prod B_i$ is $\prod A'_i,$ where $A'_i$ are the integral closures of $A_i$ in $B_i$.
The integral closure of a local ring A in, say, B, need not be local. (If this is the case, the ring is called unibranch.) This is the case for example when A is Henselian and B is a field extension of the field of fractions of A.
If A is a subring of a field K, then the integral closure of A in K is the intersection of all valuation rings of K containing A.
Let A be an -graded subring of an -graded ring B. Then the integral closure of A in B is an -graded subring of B.
There is also a concept of the integral closure of an ideal. The integral closure of an ideal $I \subseteq R$, usually denoted by $\overline{I}$, is the set of all elements $r \in R$ such that there exists a monic polynomial

$x^n + a_1 x^{n-1} + \cdots + a_{n-1} x + a_n$

with $a_i \in I^i$ with $r$ as a root. The radical of an ideal is integrally closed.
For noetherian rings, there are alternate definitions as well.
$r \in \overline{I}$ if there exists a $c \in R$ not contained in any minimal prime, such that $c r^n \in I^n$ for all $n \geq 1$.

$r \in \overline{I}$ if in the normalized blow-up of I, the pull back of r is contained in the inverse image of I. The blow-up of an ideal is an operation of schemes which replaces the given ideal with a principal ideal. The normalization of a scheme is simply the scheme corresponding to the integral closure of all of its rings.
The notion of integral closure of an ideal is used in some proofs of the going-down theorem.
Conductor
Let B be a ring and A a subring of B such that B is integral over A. Then the annihilator of the A-module B/A is called the conductor of A in B. Because the notion has its origin in algebraic number theory, the conductor is denoted by f = f(B/A). Explicitly, f consists of elements a in A such that aB ⊆ A. (cf. idealizer in abstract algebra.) It is the largest ideal of A that is also an ideal of B. If S is a multiplicatively closed subset of A, then
S−1f(B/A) ⊆ f(S−1B/S−1A).
If B is a subring of the total ring of fractions of A, then we may identify
f(B/A) = (A : B), the set of all elements x of the total ring of fractions such that xB ⊆ A.
Example: Let k be a field and let A = k[t², t³] ⊂ B = k[t] (i.e., A is the coordinate ring of the affine curve y² = x³, with x = t², y = t³). B is the integral closure of A in k(t). The conductor of A in B is the ideal (t², t³). More generally, the conductor of A = k[[tᵃ, tᵇ]], a, b relatively prime, is (tᶜ, tᶜ⁺¹, …) with c = (a − 1)(b − 1).
Suppose B is the integral closure of an integral domain A in the field of fractions of A, such that the A-module B/A is finitely generated. Then the conductor f of A is an ideal defining the support of B/A; thus, A coincides with B in the complement of V(f) in Spec A. In particular, the set {p ∈ Spec A : Aₚ is integrally closed}, the complement of V(f), is an open set.
Finiteness of integral closure
An important but difficult question is on the finiteness of the integral closure of a finitely generated algebra. There are several known results.
The integral closure of a Dedekind domain in a finite extension of the field of fractions is a Dedekind domain; in particular, a noetherian ring. This is a consequence of the Krull–Akizuki theorem. In general, the integral closure of a noetherian domain of dimension at most 2 is noetherian; Nagata gave an example of a noetherian domain of dimension 3 whose integral closure is not noetherian. A nicer statement is this: the integral closure of a noetherian domain is a Krull domain (Mori–Nagata theorem). Nagata also gave an example of a noetherian local domain of dimension 1 such that the integral closure is not finite over that domain.
Let A be a noetherian integrally closed domain with field of fractions K. If L/K is a finite separable extension, then the integral closure of A in L is a finitely generated A-module. This is easy and standard (uses the fact that the trace defines a non-degenerate bilinear form).
Let A be a finitely generated algebra over a field k that is an integral domain with field of fractions K. If L is a finite extension of K, then the integral closure A' of A in L is a finitely generated A-module and is also a finitely generated k-algebra. The result is due to Noether and can be shown using the Noether normalization lemma as follows. It is clear that it is enough to show the assertion when L/K is either separable or purely inseparable. The separable case is noted above, so assume L/K is purely inseparable. By the normalization lemma, A is integral over the polynomial ring S = k[x1, ..., xd]. Since L/K is a finite purely inseparable extension, there is a power q of a prime number such that every element of L is a q-th root of an element in K. Let k' be a finite extension of k containing all q-th roots of coefficients of finitely many rational functions that generate L. Then we have: L ⊆ k'(x1^(1/q), ..., xd^(1/q)). The ring on the right is the field of fractions of k'[x1^(1/q), ..., xd^(1/q)], which is the integral closure of S in it; thus, it contains A'. Hence, A' is finite over S; a fortiori, over A. The result remains true if we replace k by Z.
The integral closure of a complete local noetherian domain A in a finite extension of the field of fractions of A is finite over A. More precisely, for a local noetherian ring A, we have the following chains of implications:
(i) A complete ⇒ A is a Nagata ring
(ii) A is a Nagata domain ⇒ A analytically unramified ⇒ the integral closure of the completion Â is finite over Â ⇒ the integral closure of A is finite over A.
Noether's normalization lemma
Noether's normalisation lemma is a theorem in commutative algebra. Given a field K and a finitely generated K-algebra A, the theorem says it is possible to find elements y1, y2, ..., ym in A that are algebraically independent over K such that A is finite (and hence integral) over B = K[y1,..., ym]. Thus the extension K ⊂ A can be written as a composite K ⊂ B ⊂ A where K ⊂ B is a purely transcendental extension and B ⊂ A is finite.
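A standard worked example (ours, not from the text) of the lemma in action:

```latex
% Noether normalization for A = k[x,y]/(xy - 1) = k[x, x^{-1}]:
% A is not finite over k[x] (x^{-1} is not integral over k[x]),
% but setting y_1 = x + y gives a finite extension k[y_1] \subset A, since
t^2 - y_1 t + 1 = (t - x)(t - y) \quad \text{in } A[t],
% so x and y = y_1 - x are integral over B = k[y_1], and A = B[x]
% is a finitely generated B-module. Here m = 1 = \dim A.
```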
Integral morphisms
In algebraic geometry, a morphism f : X → Y of schemes is integral if it is affine and if, for some (equivalently, every) affine open cover of Y, every induced map f⁻¹(U) → U is of the form Spec(A) → Spec(B), where A is an integral B-algebra. The class of integral morphisms is more general than the class of finite morphisms, because there are integral extensions that are not finite, such as, in many cases, the algebraic closure of a field over the field.
Absolute integral closure
Let A be an integral domain and L (some) algebraic closure of the field of fractions of A. Then the integral closure of A in L is called the absolute integral closure of A. It is unique up to a non-canonical isomorphism. The ring of all algebraic integers is an example (and thus is typically not noetherian).
See also
Normal scheme
Noether normalization lemma
Algebraic integer
Splitting of prime ideals in Galois extensions
Torsor (algebraic geometry)
Notes
References
H. Matsumura Commutative ring theory. Translated from the Japanese by M. Reid. Second edition. Cambridge Studies in Advanced Mathematics, 8.
M. Reid, Undergraduate Commutative Algebra, London Mathematical Society, 29, Cambridge University Press, 1995.
Further reading
Irena Swanson, Integral closures of ideals and rings
Do DG-algebras have any sensible notion of integral closure?
Is always an integral extension of for a regular sequence ?
Commutative algebra
Ring theory
Algebraic structures | Integral element | [
"Mathematics"
] | 4,168 | [
"Mathematical structures",
"Mathematical objects",
"Ring theory",
"Fields of abstract algebra",
"Algebraic structures",
"Commutative algebra"
] |
9,478,903 | https://en.wikipedia.org/wiki/Wild%20Fermentation | Wild Fermentation: The Flavor, Nutrition, and Craft of Live-Culture Foods is a 2003 book by Sandor Katz that discusses the ancient practice of fermentation. While most of the conventional literature assumes the use of modern technology, Wild Fermentation focuses more on the practice and culture of fermenting food.
The term "wild fermentation" refers to the reliance on naturally occurring bacteria and yeast to ferment food. For example, conventional bread making requires the use of a commercial, highly specialized yeast, while wild-fermented bread relies on naturally occurring cultures that are found on the flour, in the air, and so on. Similarly, the book's instructions on sauerkraut require only cabbage and salt, relying on the cultures that naturally exist on the vegetable to perform the fermentation.
The book also discusses some foods that are not, strictly speaking, wild ferments, such as miso, yogurt, kefir, and nattō.
Beyond food, the book includes some discussion of social, personal, and political issues, such as the legality of raw milk cheeses in the United States.
Newsweek has referred to Wild Fermentation as the "fermentation bible".
References
External links
Wild Fermentation updated and revised edition on author website
2003 non-fiction books
Fermentation
Chelsea Green Publishing books | Wild Fermentation | [
"Chemistry",
"Biology"
] | 279 | [
"Biochemistry",
"Cellular respiration",
"Fermentation"
] |
9,479,849 | https://en.wikipedia.org/wiki/History%20of%20manifolds%20and%20varieties | The study of manifolds combines many important areas of mathematics: it generalizes concepts such as curves and surfaces as well as ideas from linear algebra and topology. Certain special classes of manifolds also have additional algebraic structure; they may behave like groups, for instance. In that case, they are called Lie Groups. Alternatively, they may be described by polynomial equations, in which case they are called algebraic varieties, and if they additionally carry a group structure, they are called algebraic groups.
Nomenclature
The term "manifold" comes from the German Mannigfaltigkeit, a term introduced by Bernhard Riemann.
In English, "manifold" refers to spaces with a differentiable or topological structure, while "variety" refers to spaces with an algebraic structure, as in algebraic varieties.
In Romance languages, manifold is translated as "variety" – such spaces with a differentiable structure are literally translated as "analytic varieties", while spaces with an algebraic structure are called "algebraic varieties". Thus for example, the French word "variété topologique" means topological manifold. In the same vein, the Japanese word "多様体" (tayōtai) also encompasses both manifold and variety. ("多様" (tayō) means various.)
Background
Ancestral to the modern concept of a manifold were several important results of 18th and 19th century mathematics. The oldest of these was Non-Euclidean geometry, which considers spaces where Euclid's parallel postulate fails. Saccheri first studied this geometry in 1733. Lobachevsky, Bolyai, and Riemann developed the subject further 100 years later. Their research uncovered two types of spaces whose geometric structures differ from that of classical Euclidean space; these are called hyperbolic geometry and elliptic geometry. In the modern theory of manifolds, these notions correspond to manifolds with constant, negative and positive curvature, respectively.
Carl Friedrich Gauss may have been the first to consider abstract spaces as mathematical objects in their own right. His theorema egregium gives a method for computing the curvature of a surface without considering the ambient space in which the surface lies. In modern terms, the theorem proved that the curvature of the surface is an intrinsic property. Manifold theory has come to focus exclusively on these intrinsic properties (or invariants), while largely ignoring the extrinsic properties of the ambient space.
Another, more topological example of an intrinsic property of a manifold is the Euler characteristic. For a non-intersecting graph in the Euclidean plane, with V vertices (or corners), E edges and F faces (counting the exterior), Euler showed that V − E + F = 2. Thus 2 is called the Euler characteristic of the plane. By contrast, in 1813 Antoine-Jean Lhuilier showed that the Euler characteristic of the torus is 0, since the complete graph on seven points can be embedded into the torus. The Euler characteristic of other surfaces is a useful topological invariant, which has been extended to higher dimensions using Betti numbers. In the mid nineteenth century, the Gauss–Bonnet theorem linked the Euler characteristic to the Gaussian curvature.
Lagrangian mechanics and Hamiltonian mechanics, when considered geometrically, are naturally manifold theories. All these use the notion of several characteristic axes or dimensions (known as generalized coordinates in the latter two cases), but these dimensions do not lie along the physical dimensions of width, height, and breadth.
In the early 19th century the theory of elliptic functions succeeded in giving a basis for the theory of elliptic integrals, and this left open an obvious avenue of research. The standard forms for elliptic integrals involved the square roots of cubic and quartic polynomials. When those were replaced by polynomials of higher degree, say quintics, what would happen?
In the work of Niels Henrik Abel and Carl Jacobi, the answer was formulated: the resulting integral would involve functions of two complex variables, having four independent periods (i.e. period vectors). This gave the first glimpse of an abelian variety of dimension 2 (an abelian surface): what would now be called the Jacobian of a hyperelliptic curve of genus 2.
Riemann
Bernhard Riemann was the first to do extensive work generalizing the idea of a surface to higher dimensions. The name manifold comes from Riemann's original German term, Mannigfaltigkeit, which William Kingdon Clifford translated as "manifoldness". In his Göttingen inaugural lecture, Riemann described the set of all possible values of a variable with certain constraints as a Mannigfaltigkeit, because the variable can have many values. He distinguishes between stetige Mannigfaltigkeit and diskrete Mannigfaltigkeit (continuous manifoldness and discontinuous manifoldness), depending on whether the value changes continuously or not. As continuous examples, Riemann refers to not only colors and the locations of objects in space, but also the possible shapes of a spatial figure. Using induction, Riemann constructs an n-fach ausgedehnte Mannigfaltigkeit (n times extended manifoldness or n-dimensional manifoldness) as a continuous stack of (n−1) dimensional manifoldnesses. Riemann's intuitive notion of a Mannigfaltigkeit evolved into what is today formalized as a manifold. Riemannian manifolds and Riemann surfaces are named after Bernhard Riemann.
In 1857, Riemann introduced the concept of Riemann surfaces as part of a study of the process of analytic continuation; Riemann surfaces are now recognized as one-dimensional complex manifolds. He also furthered the study of abelian and other multi-variable complex functions.
Contemporaries of Riemann
Johann Benedict Listing, inventor of the word "topology", wrote an 1847 paper "Vorstudien zur Topologie" in which he defined a "complex". He first defined the Möbius strip in 1861 (rediscovered four years later by Möbius), as an example of a non-orientable surface.
After Abel, Jacobi, and Riemann, some of the most important contributors to the theory of abelian functions were Weierstrass, Frobenius, Poincaré and Picard. The subject was very popular at the time, already having a large literature. By the end of the 19th century, mathematicians had begun to use geometric methods in the study of abelian functions.
Poincaré
Henri Poincaré's 1895 paper Analysis Situs studied three-and-higher-dimensional manifolds (which he called "varieties"), giving rigorous definitions of homology, homotopy, and Betti numbers, and raised a question, today known as the Poincaré conjecture, based on his new concept of the fundamental group. In 2003, Grigori Perelman proved the conjecture using Richard S. Hamilton's Ricci flow, after nearly a century of effort by many mathematicians.
Later developments
Hermann Weyl gave an intrinsic definition for differentiable manifolds in 1912. During the 1930s Hassler Whitney and others clarified the foundational aspects of the subject, and thus intuitions dating back to the latter half of the 19th century became precise, and developed through differential geometry and Lie group theory.
The Whitney embedding theorem showed that manifolds intrinsically defined by charts could always be embedded in Euclidean space, as in the extrinsic definition, showing that the two concepts of manifold were equivalent. Because of this unification, Whitney's work is said to be the first complete exposition of the modern concept of the manifold.
Eventually, in the 1920s, Lefschetz laid the basis for the study of abelian functions in terms of complex tori. He also appears to have been the first to use the name "abelian variety"; in Romance languages, "variety" was used to translate Riemann's term "Mannigfaltigkeit". It was Weil in the 1940s who gave this subject its modern foundations in the language of algebraic geometry.
Sources
Riemann, Bernhard, Grundlagen für eine allgemeine Theorie der Functionen einer veränderlichen complexen Grösse.
The 1851 doctoral thesis in which "manifold" (Mannigfaltigkeit) first appears.
Riemann, Bernhard, On the Hypotheses which lie at the Bases of Geometry.
The famous Göttingen inaugural lecture (Habilitationsschrift) of 1854.
Early history of knot theory at St-Andrews history of mathematics website
Early history of topology at St. Andrews
H. Lange and Ch. Birkenhake, Complex Abelian Varieties, 1992,
A comprehensive treatment of the theory of abelian varieties, with an overview of the history of the subject.
André Weil: Courbes algébriques et variétés abéliennes, 1948
The first modern text on abelian varieties. In French.
Henri Poincaré, Analysis Situs, Journal de l'École Polytechnique ser 2, 1 (1895) pages 1–123.
Henri Poincaré, Complément à l'Analysis Situs, Rendiconti del Circolo Matematico di Palermo, 13 (1899) pages 285–343.
Henri Poincaré, Second complément à l'Analysis Situs, Proceedings of the London Mathematical Society, 32 (1900), pages 277–308.
Henri Poincaré, Sur certaines surfaces algébriques; troisième complément à l'Analysis Situs, Bulletin de la Société mathématique de France, 30 (1902), pages 49–70.
Henri Poincaré, Sur les cycles des surfaces algébriques; quatrième complément à l'Analysis Situs, Journal de mathématiques pures et appliquées, 5° série, 8 (1902), pages 169–214.
Henri Poincaré, Cinquième complément à l'analysis situs, Rendiconti del Circolo matematico di Palermo 18 (1904) pages 45–110.
Erhard Scholz, Geschichte des Mannigfaltigkeitsbegriffs von Riemann bis Poincaré, Birkhäuser, 1980.
A study of the genesis of the manifold concept. Based on the author's dissertation, directed by Egbert Brieskorn.
Manifolds and varieties
Manifolds | History of manifolds and varieties | [
"Mathematics"
] | 2,109 | [
"Topological spaces",
"Manifolds",
"Topology",
"Space (mathematics)"
] |
9,480,763 | https://en.wikipedia.org/wiki/Argon%20flash | Argon flash, also known as argon bomb, argon flash bomb, argon candle, and argon light source, is a single-use source of very short and extremely bright flashes of light. The light is generated by a shock wave in argon or, less commonly, another noble gas. The shock wave is usually produced by an explosion. Argon flash devices are almost exclusively used for photographing explosions and shock waves.
Although krypton and xenon can be also used, argon is favored because of its low cost.
Process
The light generated by an explosion is produced primarily by compression heating of the surrounding air. Replacement of the air with a noble gas considerably increases the light output; with molecular gases, the energy is consumed partially by dissociation and other processes, while noble gases are monatomic and can only undergo ionization; the ionized gas then produces the light. The low specific heat capacity of noble gases allows heating to higher temperatures, yielding brighter emission. Flashtubes are filled with noble gases for the same reason.
Engineering
Typical argon flash devices consist of an argon-filled cardboard or plastic tube with a transparent window on one end and an explosive charge on the other end. Many explosives can be used; Composition B, PETN, RDX, and plastic bonded explosives are just a few examples.
The device consists of a vessel filled with argon and a solid explosive charge. The explosion generates a shock wave, which heats the gas to very high temperature (over 10⁴ K; published values vary between 15,000 K and 30,000 K, with the best values around 25,000 K). The gas becomes incandescent and emits a flash of intense visible and ultraviolet black-body radiation. The emission peak for this temperature range lies between 97 and 193 nm, but usually only the visible and near-ultraviolet ranges are exploited.
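The quoted 97–193 nm emission peak follows from Wien's displacement law applied to the 15,000–30,000 K temperature range; a quick check (the helper name is ours, not from the text):

```python
# Wien's displacement law: wavelength of maximum black-body emission
# for the shock-heated argon temperatures quoted above.

WIEN_B = 2.898e-3  # Wien displacement constant, m*K

def peak_wavelength_nm(temp_k):
    """Wavelength of maximum black-body emission, in nanometres."""
    return WIEN_B / temp_k * 1e9

print(peak_wavelength_nm(15_000))  # ~ 193 nm
print(peak_wavelength_nm(30_000))  # ~ 97 nm
print(peak_wavelength_nm(25_000))  # ~ 116 nm, deep ultraviolet
```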
To achieve emission, a layer of gas at least one or two optical depths thick has to be compressed to a sufficient temperature. The light intensity rises to full magnitude in about 0.1 microsecond. For about 0.5 microsecond, shock wave front instabilities are sufficient to create significant striations in the produced light; this effect diminishes as the thickness of the compressed layer increases. Only a layer of gas about 75 micrometers thick is responsible for the light emission. The shock wave reflects after reaching the window at the end of the tube; this yields a brief increase in light intensity. The intensity then fades.
The amount of explosive controls the intensity of the shock wave and therefore of the flash. The intensity of the flash can be increased, and its duration decreased, by reflecting the shock wave with a suitable obstacle; a foil or a curved glass can be used. The duration of the flash is about as long as the explosion itself, depending on the construction of the lamp, between 0.1 and 100 microseconds. The duration depends on the length of the shockwave path through the gas, which is proportional to the length of the tube; it was shown that each centimeter of the shock wave's path through the argon medium is equivalent to 2 microseconds.
Uses
Argon flash is a standard procedure for high-speed photography, especially for photographing explosions, or less commonly for use in high altitude test vehicles. The photography of explosions and shock waves is made easy by the fact that the detonation of the argon flash lamp charge can be accurately timed relative to the test specimen explosion and the light intensity can overpower the light generated by the explosion itself. The formation of shock waves during explosions of shaped charges can be imaged this way.
As the amount of released radiant energy is fairly high, significant heating of the illuminated object can occur. Especially in the case of high explosives, this has to be taken into account.
Superradiant Light (SRL) sources are an alternative to argon flash. An electron beam source delivers a brief and intense pulse of electrons to suitable crystals (e.g. doped cadmium sulfide). Flash times in the nanosecond to picosecond range are achievable. Pulsed lasers are another alternative.
See also
Sonoluminescence
References
Argon
Explosives
Flash photography
Photographic lighting
Types of lamp | Argon flash | [
"Chemistry"
] | 875 | [
"Explosives",
"Explosions"
] |
9,480,771 | https://en.wikipedia.org/wiki/Elastolefin | Elastolefin is a fiber composed of at least 95% (by weight) of macromolecules partially cross-linked, made of ethylene and at least one other olefin. When stretched to one and a half times its original length, it recovers rapidly to its original length. It therefore will stretch up to 50% and recover. Recent updates to EU fabric labelling directive to include elastolefin in Anex I and II. Low crystallinity polyolefin elastomers that have a cross-linked structure have been developed by the DOW Chemical Company in 2002. The trade name of the elastolefin fibers is DOW XLA, the fibers when under lower stress have the ability to expand when larger strains are applied. The DOW XLA fibers were designed to have high thermal and chemical resistance, stretch performance, and durability.
References
Organic polymers
Polyolefins
Elastomers
Synthetic fibers | Elastolefin | [
"Chemistry"
] | 193 | [
"Organic polymers",
"Synthetic fibers",
"Polymer stubs",
"Synthetic materials",
"Elastomers",
"Organic compounds",
"Organic chemistry stubs"
] |
9,480,887 | https://en.wikipedia.org/wiki/Zolt%C3%A1n%20Szab%C3%B3%20%28mathematician%29 | Zoltán Szabó (born November 24, 1965) is a professor of mathematics at Princeton University known for his work on Heegaard Floer homology.
Education and career
Szabó received his BA from Eötvös Loránd University in Budapest, Hungary in 1990, and he received his PhD from Rutgers University in 1994.
Together with Peter Ozsváth, Szabó created Heegaard Floer homology, a homology theory for 3-manifolds. For this contribution to the field of topology, Ozsváth and Szabó were awarded the 2007 Oswald Veblen Prize in Geometry. In 2010, he was elected honorary member of the Hungarian Academy of Sciences.
Selected publications
.
.
Grid Homology for Knots and Links, American Mathematical Society, (2015)
References
External links
Personal homepage
1965 births
20th-century Hungarian mathematicians
21st-century Hungarian mathematicians
Members of the Hungarian Academy of Sciences
Living people
International Mathematical Olympiad participants
Topologists
Eötvös Loránd University alumni
Rutgers University alumni
Princeton University faculty | Zoltán Szabó (mathematician) | [
"Mathematics"
] | 209 | [
"Topologists",
"Topology"
] |
9,480,989 | https://en.wikipedia.org/wiki/Hebeloma%20crustuliniforme | Hebeloma crustuliniforme, commonly known as poison pie or fairy cakes, is a gilled mushroom of the genus Hebeloma found in both Old and New World countries. It is moderately poisonous.
Description
The buff-to-beige cap is in diameter, convex then umbonate with an inrolled cap margin until old. The gills are pale grey-brown, with orange to brown spores and exude droplets in moist conditions. The stipe is 4–9 cm high and thick, with a wider base. It bears no ring, while the thick flesh is white. The fungus has a radish-like smell and bitter taste.
The spores are brown, elliptical, and somewhat rough.
Similar species
Similar species include Hebeloma sinapizans and H. insigne.
Taxonomy
The species' specific name derives from the Latin crustulum ('little biscuit').
Distribution and habitat
H. crustuliniforme has been found in 18 countries, including most parts of Europe, both coasts of North America, and less frequently in Victoria, Australia.
A common mushroom, H. crustuliniforme can be found in open woodland and heathland in summer and autumn, though it may also be found in winter in places with milder climates, such as California. It is "by far the most common" Hebeloma found in California.
Toxicity
This fungus is poisonous, the symptoms being those of a severe gastrointestinal nature, namely vomiting, diarrhea and colicky abdominal pain several hours after consumption.
References
crustuliniforme
Fungi of Europe
Fungi of North America
Poisonous fungi
Fungus species | Hebeloma crustuliniforme | [
"Biology",
"Environmental_science"
] | 329 | [
"Poisonous fungi",
"Fungi",
"Toxicology",
"Fungus species"
] |
9,480,994 | https://en.wikipedia.org/wiki/Oxygen%20balance | Oxygen balance (OB, OB%, or Ω) is an expression that is used to indicate the degree to which an explosive can be oxidized, to determine if an explosive molecule contains enough oxygen to fully oxidize the other atoms in the explosive. For example, fully oxidized carbon forms carbon dioxide, hydrogen forms water, sulfur forms sulfur dioxide, and metals form metal oxides. A molecule is said to have a positive oxygen balance if it contains more oxygen than is needed and a negative oxygen balance if it contains less oxygen than is needed.
An explosive with a negative oxygen balance will undergo incomplete combustion, commonly producing carbon monoxide, a toxic gas. Explosives with negative or positive oxygen balance are commonly mixed with other energetic materials that are oxygen-positive or oxygen-negative, respectively, to increase the explosive's power. For example, TNT is an oxygen-negative explosive and is commonly mixed with oxygen-positive energetic materials or fuels to increase its power.
Calculating oxygen balance
The procedure for calculating oxygen balance in terms of 100 grams of the explosive material is to determine the number of moles of oxygen that are excess or deficient for 100 grams of the compound.
X = number of atoms of carbon, Y = number of atoms of hydrogen, Z = number of atoms of oxygen, and M = number of atoms of metal (metallic oxide produced). Then

OB% = (−1600 / molecular weight of compound) × (2X + Y/2 + M − Z)
In the case of TNT (C6H2(NO2)3CH3),
Molecular weight = 227.1
X = 7 (number of carbon atoms)
Y = 5 (number of hydrogen atoms)
Z = 6 (number of oxygen atoms)
Therefore,

OB% = (−1600 / 227.1) × (2×7 + 5/2 + 0 − 6) = −73.97% for TNT
Examples of materials with negative oxygen balance are nitromethane (−39%), trinitrotoluene (−74%), aluminium powder (−89%), sulfur (−100%), or carbon (−266.7%). Examples of materials with positive oxygen balance are ammonium nitrate (+20%), ammonium perchlorate (+34%), potassium chlorate (+39.2%), sodium chlorate (+45%), potassium nitrate (+47.5%), tetranitromethane (+49%), lithium perchlorate (+60%), or nitroglycerine (+3.5%). Ethylene glycol dinitrate has an oxygen balance of zero, as does the theoretical compound trinitrotriazine.
Oxygen balance and power
Because sensitivity, brisance, and strength are properties resulting from a complex explosive chemical reaction, a simple relationship such as oxygen balance cannot be depended upon to yield universally consistent results. When using oxygen balance to predict properties of one explosive relative to another, it is to be expected that one with an oxygen balance closer to zero will be the more brisant, powerful, and sensitive; however, many exceptions to this rule do exist.
One area in which oxygen balance can be applied is in the processing of mixtures of explosives. The family of explosives called amatols are mixtures of ammonium nitrate and TNT. Ammonium nitrate has an oxygen balance of +20% and TNT has an oxygen balance of −74%, so it would appear that the mixture yielding an oxygen balance of zero would also result in the best explosive properties. In actual practice a mixture of 80% ammonium nitrate and 20% TNT by weight yields an oxygen balance of +1%, the best properties of all mixtures, and an increase in strength of 30% over TNT.
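The arithmetic above is simple enough to script. The sketch below (function and variable names are ours, not from the text) computes OB% for TNT and ammonium nitrate from the atom counts and molecular weights given above, then the mass-weighted balance of 80/20 amatol:

```python
# Oxygen balance (OB%) from atom counts, as described above.
# Nitrogen does not enter the formula; only C, H, O, and metal atoms do.

def oxygen_balance(mw, c, h, o, metal=0):
    """OB% = (-1600 / molecular weight) * (2C + H/2 + M - O)."""
    return -1600.0 / mw * (2 * c + h / 2.0 + metal - o)

# TNT, C7H5N3O6 (molecular weight 227.1)
ob_tnt = oxygen_balance(227.1, c=7, h=5, o=6)      # ~ -74 %

# Ammonium nitrate, NH4NO3 (molecular weight ~80.04)
ob_an = oxygen_balance(80.04, c=0, h=4, o=3)       # ~ +20 %

# Amatol 80/20: mass-weighted average of the two balances
ob_amatol = 0.80 * ob_an + 0.20 * ob_tnt           # ~ +1 %
```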
References
Explosives engineering | Oxygen balance | [
"Engineering"
] | 743 | [
"Explosives engineering"
] |
9,481,141 | https://en.wikipedia.org/wiki/Polyvinyl%20nitrate | Polyvinyl nitrate (abbreviated: PVN) is a high-energy polymer with the idealized formula of [CH2CH(ONO2)]. Polyvinyl nitrate is a long carbon chain (polymer) with nitrate groups (-O-NO2) bonded randomly along the chain. PVN is a white, fibrous solid, and is soluble in polar organic solvents such as acetone. PVN can be prepared by nitrating polyvinyl alcohol with an excess of nitric acid. Because PVN is also a nitrate ester such as nitroglycerin (a common explosive), it exhibits energetic properties and is commonly used in explosives and propellants.
Preparation
Polyvinyl nitrate was first synthesized by submerging polyvinyl alcohol (PVA) in a solution of concentrated sulfuric and nitric acids. The PVA loses a hydrogen atom from its hydroxy group (deprotonation), while the nitric acid (HNO3) forms the nitronium ion (NO2+) in sulfuric acid. The NO2+ attaches to the oxygen in the PVA and creates a nitrate group, producing polyvinyl nitrate. This method results in a low nitrogen content of 10% and an overall yield of 80%. The method is inferior, as PVA has low solubility in sulfuric acid and a slow rate of nitration; a large amount of sulfuric acid was therefore needed relative to PVA, and it did not produce a high-nitrogen PVN, which is desirable for its energetic properties.
An improved method nitrates PVA without sulfuric acid; however, when this solution is exposed to air, the PVA combusts. In this method, either the nitration is carried out under an inert gas (carbon dioxide or nitrogen), or the PVA powder is clumped into larger particles and submerged beneath the nitric acid to limit air exposure.
Currently, the most common method is when PVA powder is dissolved in acetic anhydride at -10°C. Then cooled nitric acid is slowly added. This produces a high nitrogen content PVN within about 5-7 hours. Because acetic anhydride was used as the solvent instead of sulfuric acid, the PVA will not combust when exposed to air.
Physical properties
PVN is a white thermoplastic with a softening point of 40–50°C. The theoretical maximum nitrogen content of PVN is 15.73%. PVN is a polymer with an atactic configuration, meaning the nitrate groups are randomly distributed along the main chain. Fibrous PVN increases in crystallinity as the nitrogen content increases, showing that the PVN molecules organize themselves more regularly at higher nitrogen percentages. Intramolecularly, the geometry of the polymer is planar zigzag. Porous PVN can be gelatinized by adding it to acetone at room temperature. This creates a viscous slurry that loses its fibrous and porous nature; however, it retains most of its energetic properties.
Chemical properties
Combustion
Polyvinyl nitrate is a high-energy polymer due to the significant presence of O–NO2 groups, similar to nitrocellulose and nitroglycerin. These nitrate groups, which have an activation energy of 53 kcal/mol, are the primary source of PVN's high chemical potential energy. The complete combustion reaction of PVN, assuming full nitration, is:
2CH2CH(ONO2) + 5/2O2 -> 4CO2 + N2 + 3H2O
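As a quick sanity check, the stoichiometry above balances atom-by-atom; a small sketch (the repeat-unit formula C2H3NO3 is read off the structure):

```python
# atom balance for 2 CH2CH(ONO2) + 5/2 O2 -> 4 CO2 + N2 + 3 H2O
unit = {"C": 2, "H": 3, "N": 1, "O": 3}             # CH2CH(ONO2) repeat unit
lhs = {e: 2 * n for e, n in unit.items()}           # two repeat units
lhs["O"] += 5                                       # 5/2 O2 supplies 5 O atoms
rhs = {"C": 4, "H": 2 * 3, "N": 2, "O": 4 * 2 + 3}  # 4 CO2 + N2 + 3 H2O
print(lhs == rhs)
```

Both sides come out to C4 H6 N2 O11, confirming the equation as written.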
When burned in air, PVN samples with less nitrogen had a significantly higher heat of combustion, because the extra hydrogen atoms release additional heat when external oxygen is present: about 3,700 cal/g at 11.76% N versus about 3,000 cal/g at 15.71% N. Conversely, PVN samples with a higher nitrogen content had a significantly higher heat of explosion: the additional O - NO2 groups supply more internal oxygen, giving a more complete combustion and more heat when burned in inert or oxygen-poor environments.
Stability
Nitrate esters, in general, are unstable because of the weak N - O bond and tend to decompose at higher temperatures. Fibrous PVN is relatively stable at 80°C and is less stable as the nitrogen content increases. Gelatinized PVN is less stable than fibrous PVN.
Activation energy
Ignition temperature is the temperature at which a substance combusts spontaneously, requiring no additional energy input beyond the heat itself. This temperature can be used to determine the activation energy. For samples of varying nitrogen content, the ignition temperature decreases as the nitrogen percentage increases, showing that PVN becomes more ignitable at higher degrees of nitration. Using the Semenov equation:

D = C·e^(E/(RT))
where D is the ignition delay (the time it takes for a substance to ignite), E is the activation energy, R is the universal gas constant, T is absolute temperature, and C is a constant, dependent on the material.
The activation energy is greater than 13 kcal/mol, reaching 16 kcal/mol at 15.71% nitrogen (near the theoretical maximum); it varies greatly between nitrogen concentrations, with no linear relationship between activation energy and the degree of nitration.
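The Semenov relation can be run in reverse to extract an activation energy from ignition-delay measurements at two temperatures; a sketch with illustrative (not measured) numbers:

```python
import math

R = 1.987e-3                         # gas constant, kcal/(mol K)

def activation_energy(T1, D1, T2, D2):
    """Solve D = C * exp(E / (R T)) for E from two (T, D) points."""
    return R * math.log(D1 / D2) / (1.0 / T1 - 1.0 / T2)

# synthesize ignition delays from an assumed E of 16 kcal/mol, then recover it
E_true, C_mat = 16.0, 6.1e-6         # activation energy, material constant
delays = {T: C_mat * math.exp(E_true / (R * T)) for T in (450.0, 500.0)}
E_fit = activation_energy(450.0, delays[450.0], 500.0, delays[500.0])
print(abs(E_fit - E_true) < 1e-9)
```

Taking logs of the Semenov equation at two temperatures and subtracting eliminates the material constant C, leaving E in terms of the delay ratio alone.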
Impact sensitivity
The height at which a mass is dropped on PVN and causes an explosion shows the sensitivity of PVN to impacts. As nitrogen content increases, fibrous PVN is more sensitive to impacts. Gelatinous PVN is similar to fibrous PVN in impact sensitivity.
Applications
Because of the nitrate groups of PVN, polyvinyl nitrate is mainly used for its explosive and energetic capabilities. Structurally, PVN is similar to nitrocellulose in that it is a polymer with several nitrate groups off the main branch, differing only in the main chain (carbon and cellulose respectively). Because of this similarity, PVN is typically used in explosives and propellants as a binder. In explosives, a binder is used to form an explosive where the explosive materials are difficult to mold (see Polymer-bonded explosive (PBX)). A common binder polymer is hydroxyl-terminated polybutadiene (HTPB) or glycidyl azide polymer (GAP). Moreover, the binder needs a plasticizer such as dioctyl adipate (DOP) or 2-nitrodiphenylamine (2-NDPA) to make the explosive more flexible. Polyvinyl nitrate combines the traits of both a binder and a plasticizer, as this polymer binds the explosive ingredients together and is flexible at its softening point (40-50°C). Moreover, PVN adds to the explosive's overall energetic potential due to its nitrate groups.
An example composition including polyvinyl nitrate is PVN, nitrocellulose and/or polyvinyl acetate, and 2-nitrodiphenylamine. This creates a moldable thermoplastic that can be combined with a powder containing nitrocellulose to create a cartridge case where the PVN composition acts as a propellant and assists as an explosive material.
See also
Nitrate ester
Polyvinyl ester
Vinyl polymer
References
Explosive chemicals
Explosive polymers
Nitrate esters
Plastics
Vinyl polymers | Polyvinyl nitrate | [
"Physics",
"Chemistry"
] | 1,549 | [
"Explosive chemicals",
"Amorphous solids",
"Unsolved problems in physics",
"Plastics"
] |
9,481,277 | https://en.wikipedia.org/wiki/Control%20of%20chaos | In lab experiments that study chaos theory, approaches designed to control chaos are based on certain observed system behaviors. Any chaotic attractor contains an infinite number of unstable, periodic orbits. Chaotic dynamics, then, consists of a motion where the system state moves in the neighborhood of one of these orbits for a while, then falls close to a different unstable, periodic orbit where it remains for a limited time and so forth. This results in a complicated and unpredictable wandering over longer periods of time.
Control of chaos is the stabilization, by means of small system perturbations, of one of these unstable periodic orbits. The result is to render an otherwise chaotic motion more stable and predictable, which is often an advantage. The perturbation must be tiny compared to the overall size of the attractor of the system to avoid significant modification of the system's natural dynamics.
Several techniques have been devised for chaos control, but most are developments of two basic approaches: the Ott–Grebogi–Yorke (OGY) method and Pyragas continuous control. Both methods require a previous determination of the unstable periodic orbits of the chaotic system before the controlling algorithm can be designed.
OGY method
Edward Ott, Celso Grebogi and James A. Yorke were the first to make the key observation that the infinite number of unstable periodic orbits typically embedded in a chaotic attractor could be taken advantage of for the purpose of achieving control by means of applying only very small perturbations. After making this general point, they illustrated it with a specific method, since called the Ott–Grebogi–Yorke (OGY) method of achieving stabilization of a chosen unstable periodic orbit. In the OGY method, small, wisely chosen, kicks are applied to the system once per cycle, to maintain it near the desired unstable periodic orbit.
To start, one obtains information about the chaotic system by analyzing a slice of the chaotic attractor. This slice is a Poincaré section. After the information about the section has been gathered, one allows the system to run and waits until it comes near a desired periodic orbit in the section. Next, the system is encouraged to remain on that orbit by perturbing the appropriate parameter. When the control parameter is actually changed, the chaotic attractor is shifted and distorted somewhat. If all goes according to plan, the new attractor encourages the system to continue on the desired trajectory. One strength of this method is that it does not require a detailed model of the chaotic system but only some information about the Poincaré section. It is for this reason that the method has been so successful in controlling a wide variety of chaotic systems.
The weaknesses of this method are in isolating the Poincaré section and in calculating the precise perturbations necessary to attain stability.
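As a toy illustration of the OGY recipe (not one of the experimental systems discussed here), the sketch below stabilizes the unstable fixed point of the chaotic logistic map; the map, parameter value, and control window are illustrative assumptions:

```python
# OGY-style control of the logistic map x_{n+1} = r x (1 - x): let the chaotic
# orbit wander until it comes near the unstable fixed point, then apply tiny
# parameter kicks chosen to cancel the deviation to first order.
r0 = 3.9                            # nominal parameter (chaotic regime)
x_star = 1.0 - 1.0 / r0             # unstable fixed point of the map
slope = 2.0 - r0                    # f'(x*); |slope| > 1, so x* is unstable
dfdr = x_star * (1.0 - x_star)      # sensitivity of the map to the parameter r
window = 0.01                       # apply control only this close to x*

x, captured = 0.4, None
for n in range(5000):               # free-running chaotic transient
    if abs(x - x_star) < window:
        captured = n
        break
    x = r0 * x * (1.0 - x)

if captured is not None:
    for _ in range(50):             # small kicks pin the orbit onto x*
        dr = -slope * (x - x_star) / dfdr   # linearized deviation -> 0
        x = (r0 + dr) * x * (1.0 - x)

print(captured is not None, abs(x - x_star) < 1e-9)
```

Note the two hallmarks of the method: control waits for the orbit's natural wandering to bring it near the target, and the applied perturbation dr stays tiny because it is only ever used inside the small window.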
Pyragas method
In the Pyragas method of stabilizing a periodic orbit, an appropriate continuous controlling signal is injected into the system, whose intensity is practically zero as the system evolves close to the desired periodic orbit but increases when it drifts away from the desired orbit. Both the Pyragas and OGY methods are part of a general class of methods called "closed loop" or "feedback" methods which can be applied based on knowledge of the system obtained through solely observing the behavior of the system as a whole over a suitable period of time. The method was proposed by the Lithuanian physicist Kęstutis Pyragas.
Applications
Experimental control of chaos by one or both of these methods has been achieved in a variety of systems, including turbulent fluids, oscillating chemical reactions, magneto-mechanical oscillators and cardiac tissues. Control of chaotic bubbling has also been attempted with the OGY method, using electrostatic potential as the primary control variable.
Forcing two systems into the same state is not the only way to achieve synchronization of chaos. Both control of chaos and synchronization constitute parts of cybernetical physics, a research area on the border between physics and control theory.
References
External links
Chaos control bibliography (1997–2000)
Chaos theory
Nonlinear systems | Control of chaos | [
"Mathematics"
] | 826 | [
"Nonlinear systems",
"Dynamical systems"
] |
9,481,422 | https://en.wikipedia.org/wiki/24-cell%20honeycomb | In four-dimensional Euclidean geometry, the 24-cell honeycomb, or icositetrachoric honeycomb is a regular space-filling tessellation (or honeycomb) of 4-dimensional Euclidean space by regular 24-cells. It can be represented by Schläfli symbol {3,4,3,3}.
The dual tessellation by regular 16-cell honeycomb has Schläfli symbol {3,3,4,3}. Together with the tesseractic honeycomb (or 4-cubic honeycomb) these are the only regular tessellations of Euclidean 4-space.
Coordinates
The 24-cell honeycomb can be constructed as the Voronoi tessellation of the D4 or F4 root lattice. Each 24-cell is then centered at a D4 lattice point, i.e. one of the points (a, b, c, d) with integer coordinates whose sum a + b + c + d is even.
These points can also be described as Hurwitz quaternions with even square norm.
The vertices of the honeycomb lie at the deep holes of the D4 lattice. These are the Hurwitz quaternions with odd square norm.
It can be constructed as a birectified tesseractic honeycomb, by taking a tesseractic honeycomb and placing vertices at the centers of all the square faces. The 24-cell facets exist between these vertices as rectified 16-cells. If the coordinates of the tesseractic honeycomb are integers (i,j,k,l), the birectified tesseractic honeycomb vertices can be placed at all permutations of half-unit shifts in two of the four dimensions, thus: (i+½,j+½,k,l), (i+½,j,k+½,l), (i+½,j,k,l+½), (i,j+½,k+½,l), (i,j+½,k,l+½), (i,j,k+½,l+½).
Configuration
Each 24-cell in the 24-cell honeycomb has 24 neighboring 24-cells. With each neighbor it shares exactly one octahedral cell.
It has 24 more neighbors such that with each of these it shares a single vertex.
It has no neighbors with which it shares only an edge or only a face.
The vertex figure of the 24-cell honeycomb is a tesseract (4-dimensional cube). So there are 16 edges, 32 triangles, 24 octahedra, and 8 24-cells meeting at every vertex. The edge figure is a tetrahedron, so there are 4 triangles, 6 octahedra, and 4 24-cells surrounding every edge. Finally, the face figure is a triangle, so there are 3 octahedra and 3 24-cells meeting at every face.
Cross-sections
One way to visualize a 4-dimensional figure is to consider various 3-dimensional cross-sections. That is, the intersection of various hyperplanes with the figure in question. Applying this technique to the 24-cell honeycomb gives rise to various 3-dimensional honeycombs with varying degrees of regularity.
A vertex-first cross-section uses some hyperplane orthogonal to a line joining opposite vertices of one of the 24-cells. For instance, one could take any of the coordinate hyperplanes in the coordinate system given above (i.e. the planes determined by xi = 0). The cross-section of {3,4,3,3} by one of these hyperplanes gives a rhombic dodecahedral honeycomb. Each of the rhombic dodecahedra corresponds to a maximal cross-section of one of the 24-cells intersecting the hyperplane (the center of each such (4-dimensional) 24-cell lies in the hyperplane). Accordingly, the rhombic dodecahedral honeycomb is the Voronoi tessellation of the D3 root lattice (a face-centered cubic lattice). Shifting this hyperplane halfway to one of the vertices (e.g. xi = ½) gives rise to a regular cubic honeycomb. In this case the center of each 24-cell lies off the hyperplane. Shifting again, so the hyperplane intersects the vertex, gives another rhombic dodecahedral honeycomb but with new 24-cells (the former ones having shrunk to points). In general, for any integer n, the cross-section through xi = n is a rhombic dodecahedral honeycomb, and the cross-section through xi = n + ½ is a cubic honeycomb. As the hyperplane moves through 4-space, the cross-section morphs between the two periodically.
A cell-first cross-section uses some hyperplane parallel to one of the octahedral cells of a 24-cell. Consider, for instance, some hyperplane orthogonal to the vector (1,1,0,0). The cross-section of {3,4,3,3} by this hyperplane is a rectified cubic honeycomb. Each cuboctahedron in this honeycomb is a maximal cross-section of a 24-cell whose center lies in the plane. Meanwhile, each octahedron is a boundary cell of a (4-dimensional) 24-cell whose center lies off the plane. Shifting this hyperplane till it lies halfway between the center of a 24-cell and the boundary, one obtains a bitruncated cubic honeycomb. The cuboctahedra have shrunk, and the octahedra have grown until they are both truncated octahedra. Shifting again, so the hyperplane intersects the boundary of the central 24-cell gives a rectified cubic honeycomb again, the cuboctahedra and octahedra having swapped positions. As the hyperplane sweeps through 4-space, the cross-section morphs between these two honeycombs periodically.
Kissing number
If a 3-sphere is inscribed in each hypercell of this tessellation, the resulting arrangement is the densest known regular sphere packing in four dimensions, with the kissing number 24. The packing density of this arrangement is π²/16 ≈ 0.6169.
Each inscribed 3-sphere kisses 24 others at the centers of the octahedral facets of its 24-cell, since each such octahedral cell is shared with an adjacent 24-cell. In a unit-edge-length tessellation, the diameter of the spheres (the distance between the centers of kissing spheres) is √2.
Just outside this surrounding shell of 24 kissing 3-spheres is another less dense shell of 24 3-spheres which do not kiss each other or the central 3-sphere; they are inscribed in 24-cells with which the central 24-cell shares only a single vertex (rather than an octahedral cell). The center-to-center distance between one of these spheres and any of its shell neighbors or the central sphere is 2.
Alternatively, the same sphere packing arrangement with kissing number 24 can be carried out with smaller 3-spheres of edge-length-diameter, by locating them at the centers and the vertices of the 24-cells. (This is equivalent to locating them at the vertices of a 16-cell honeycomb of unit-edge-length.) In this case the central 3-sphere kisses 24 others at the centers of the cubical facets of the three tesseracts inscribed in the 24-cell. (This is the unique body-centered cubic packing of edge-length spheres of the tesseractic honeycomb.)
Just outside this shell of kissing 3-spheres of diameter 1 is another less dense shell of 24 non-kissing 3-spheres of diameter 1; they are centered in the adjacent 24-cells with which the central 24-cell shares an octahedral facet. The center-to-center distance between one of these spheres and any of its shell neighbors or the central sphere is √2.
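The kissing number and the packing density can be checked directly from the D4 lattice description given under Coordinates; a small sketch (the squared-norm-2 minimal vectors are exactly the 24 facet directions):

```python
from itertools import product
import math

# minimal vectors of the D4 lattice: integer 4-vectors with even coordinate
# sum and squared norm 2 -- the 24 permutations of (+-1, +-1, 0, 0)
minimal = [v for v in product((-1, 0, 1), repeat=4)
           if sum(v) % 2 == 0 and sum(c * c for c in v) == 2]
print(len(minimal))                         # kissing number

# packing density: one 4-ball of radius sqrt(2)/2 per lattice cell of volume 2
r = math.sqrt(2) / 2
density = (math.pi ** 2 / 2) * r ** 4 / 2   # 4-ball volume = (pi^2 / 2) r^4
print(abs(density - math.pi ** 2 / 16) < 1e-12)
```

The covolume 2 used here reflects that D4 is an index-2 sublattice of the integer lattice Z⁴.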
Symmetry constructions
There are five different Wythoff constructions of this tessellation as a uniform polytope. They are geometrically identical to the regular form, but the symmetry differences can be represented by colored 24-cell facets. In all cases, eight 24-cells meet at each vertex, but the vertex figures have different symmetry generators.
See also
Other uniform honeycombs in 4-space:
Truncated 5-cell honeycomb
Omnitruncated 5-cell honeycomb
Truncated 24-cell honeycomb
Rectified 24-cell honeycomb
Snub 24-cell honeycomb
Notes
References
Coxeter, H.S.M. Regular Polytopes, (3rd edition, 1973), Dover edition, p. 296, Table II: Regular honeycombs
Kaleidoscopes: Selected Writings of H.S.M. Coxeter, edited by F. Arthur Sherk, Peter McMullen, Anthony C. Thompson, Asia Ivic Weiss, Wiley-Interscience Publication, 1995,
(Paper 24) H.S.M. Coxeter, Regular and Semi-Regular Polytopes III, [Math. Zeit. 200 (1988) 3-45]
George Olshevsky, Uniform Panoploid Tetracombs, Manuscript (2006) (Complete list of 11 convex uniform tilings, 28 convex uniform honeycombs, and 143 convex uniform tetracombs) - Model 88
o4o3x3o4o, o3x3o *b3o4o, o3x3o *b3o4o, o3x3o4o3o, o3o3o4o3x - icot - O88
5-polytopes
Honeycombs (geometry)
Regular tessellations | 24-cell honeycomb | [
"Physics",
"Chemistry",
"Materials_science"
] | 1,992 | [
"Regular tessellations",
"Honeycombs (geometry)",
"Tessellation",
"Crystallography",
"Symmetry"
] |
9,481,589 | https://en.wikipedia.org/wiki/Photo%2051 | Photo 51 is an X-ray based fiber diffraction image of a paracrystalline gel composed of DNA fiber taken by Raymond Gosling, a postgraduate student working under the supervision of Maurice Wilkins and Rosalind Franklin at King's College London, while working in Sir John Randall's group. The image was tagged "photo 51" because it was the 51st diffraction photograph that Franklin had taken. It was critical evidence in identifying the structure of DNA.
Use in discovering structure of DNA
According to a later account by Raymond Gosling, although Photo 51 was an exceptionally clear diffraction pattern of the "B" form of DNA, Franklin was more interested in solving the diffraction pattern of the "A" form of DNA, so she put Gosling's Photo 51 to the side. When it had been decided that Franklin would leave King's College, Gosling showed the photograph to Maurice Wilkins (who would become Gosling's advisor after Franklin left).
A few days later, Wilkins showed the photo to James Watson after Gosling had returned to working under Wilkins' supervision. Franklin did not know this at the time because she was leaving King's College London. Randall, the head of the group, had asked Gosling to share all his data with Wilkins. Watson recognized the pattern as a helix because his co-worker Francis Crick had previously published a paper of what the diffraction pattern of a helix would be. Watson and Crick used characteristics and features of Photo 51, together with evidence from multiple other sources, to develop the chemical model of the DNA molecule. Their model, along with papers by Wilkins and colleagues, and by Gosling and Franklin, were first published, together, in 1953, in the same issue of Nature.
In 1962, the Nobel Prize in Physiology or Medicine was awarded to Watson, Crick and Wilkins. The prize was not awarded to Franklin; she had died four years earlier, and although there was not yet a rule against posthumous awards, the Nobel Committee generally does not make posthumous nominations. Gosling's work also was not cited by the prize committee.
The photograph provided key information that was essential for developing a model of DNA. The diffraction pattern determined the helical nature of the double helix strands (antiparallel). The outside of the DNA chain has a backbone of alternating deoxyribose and phosphate moieties, and the base pairs, the order of which provides codes for protein building and thereby inheritance, are inside the helix. Watson and Crick's calculations from Gosling and Franklin's photography gave crucial parameters for the size and structure of the helix.
Photo 51 became a crucial data source that led to the development of the DNA model and confirmed the prior postulated double helical structure of DNA, which were presented in the series of three articles in the journal Nature in 1953.
As historians of science have re-examined the period during which this image was obtained, considerable controversy has arisen over both the significance of the contribution of this image to the work of Watson and Crick, as well as the methods by which they obtained the image. Franklin had been hired independently of Maurice Wilkins, who, taking over as Gosling's new supervisor, showed Photo 51 to Watson and Crick without Franklin's knowledge. Whether Franklin would have deduced the structure of DNA on her own, from her own data, had Watson and Crick not obtained Gosling's image, is a hotly debated topic, made more controversial by the negative caricature of Franklin presented in the early chapters of Watson's history of the research on DNA structure, The Double Helix. Watson admitted his distortion of Franklin in his book, noting in the epilogue: "Since my initial impressions about [Franklin], both scientific and personal (as recorded in the early pages of this book) were often wrong, I want to say something here about her achievements."
Cultural references
A 56-minute documentary, DNA – Secret of Photo 51, was broadcast in 2003 on PBS NOVA. Narrated by Sigourney Weaver, the program features interviews with Wilkins, Gosling, Aaron Klug, Brenda Maddox, including Franklin's friends Vittorio Luzzati, Donald Caspar, Anne Piper, and Sue Richley. The UK version produced by the BBC is titled Rosalind Franklin: DNA's Dark Lady.
The first episode of a PBS documentary serial, DNA, which aired on 4 January 2004 as "The Secret of Life", centres on and features the contributions of Franklin. Narrated by Jeff Goldblum, it features Watson, Wilkins, Gosling and Peter Pauling (son of Linus Pauling).
A play entitled Photograph 51 by Anna Ziegler focuses on the role of X-ray crystallographer Rosalind Franklin in the discovery of the structure of DNA. This play won the third STAGE International Script Competition in 2008. In 2015, the play was put on at London West End, with Nicole Kidman playing Franklin.
Life Story, a 107-minute dramatization in the BBC Horizon science series (1987), starring Juliet Stevenson as Rosalind Franklin and Nicholas Fry as Raymond Gosling
See also
List of photographs considered the most important
References
X-ray crystallography
DNA
Black-and-white photographs
Genetics in the United Kingdom
History of genetics
Works originally published in Nature (journal)
1952 works
1952 in art
1950s photographs | Photo 51 | [
"Chemistry",
"Materials_science"
] | 1,095 | [
"X-ray crystallography",
"Crystallography"
] |
9,482,345 | https://en.wikipedia.org/wiki/Dose%20profile | In external beam Radiotherapy, transverse and longitudinal dose measurements are taken by a radiation detector in order to characterise the radiation beams from medical linear accelerators. Typically, an ionisation chamber and water phantom are used to create these radiation dose profiles. Water is used due to its tissue equivalence.
Transverse dose measurements are performed in the x (crossplane) or y (inplane) directions perpendicular to the radiation beam, and at a given depth (z) in the phantom. These are known as dose profiles.
Dose measurements taken along the z direction create a radiation dose distribution known as a depth-dose curve.
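As an illustration of how such transverse profiles are analysed, the sketch below computes the field width (distance between the 50% dose points) and the penumbra (80%-20% edge width) from a synthetic crossplane profile; the profile shape and all numbers are illustrative assumptions, not measured data:

```python
from math import erf

# synthetic crossplane profile with error-function-shaped beam edges (mm)
half, sigma, step = 50.03, 3.0, 0.1          # half field size, edge blur, grid
xs = [-80.0 + i * step for i in range(1601)]
dose = [0.5 * (erf((half - x) / sigma) + erf((half + x) / sigma)) for x in xs]
peak = max(dose)
dose = [d / peak for d in dose]              # normalise to central-axis dose

def crossing(level, side):
    """Off-axis position where the profile crosses a given dose level."""
    idx = [i for i, d in enumerate(dose) if d >= level]
    return xs[idx[0]] if side == "left" else xs[idx[-1]]

field_width = crossing(0.5, "right") - crossing(0.5, "left")  # 50%-50% width
penumbra = crossing(0.2, "right") - crossing(0.8, "right")    # 80%-20% edge
print(round(field_width, 1), round(penumbra, 1))
```

For this synthetic beam the field width comes out near 100 mm and the penumbra near 3.6 mm, both limited by the 0.1 mm scan resolution.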
See also
Dosimetry
Percentage depth dose curve
References
Cancer treatments
Radiation
Radiation therapy
Medical physics | Dose profile | [
"Physics",
"Chemistry"
] | 140 | [
"Transport phenomena",
"Physical phenomena",
"Applied and interdisciplinary physics",
"Waves",
"Radiation",
"Medical physics"
] |
9,482,601 | https://en.wikipedia.org/wiki/Liouville%27s%20theorem%20%28differential%20algebra%29 | In mathematics, Liouville's theorem, originally formulated by French mathematician Joseph Liouville in 1833 to 1841, places an important restriction on antiderivatives that can be expressed as elementary functions.
The antiderivatives of certain elementary functions cannot themselves be expressed as elementary functions. These are called nonelementary antiderivatives. A standard example of such a function is e^(−x²), whose antiderivative is (up to a constant multiple) the error function, familiar from statistics. Other examples include the functions sin(x)/x and x^x.
Liouville's theorem states that elementary antiderivatives, if they exist, are in the same differential field as the function, plus possibly a finite number of applications of the logarithm function.
Definitions
For any differential field F, the field of constants of F is the subfield Con(F) = {f ∈ F : f′ = 0}.
Given two differential fields F and G, G is called a logarithmic extension of F if G is a simple transcendental extension of F (that is, G = F(t) for some transcendental t) such that t′ = s′/s for some s ∈ F.
This has the form of a logarithmic derivative. Intuitively, one may think of t as the logarithm of the element s of F, in which case this condition is analogous to the ordinary chain rule. However, F is not necessarily equipped with a unique logarithm; one might adjoin many "logarithm-like" extensions to F. Similarly, an exponential extension of F is a simple transcendental extension F(t) that satisfies t′ = t·s′ for some s ∈ F.
With the above caveat in mind, this element t may be thought of as an exponential of the element s of F. Finally, G is called an elementary differential extension of F if there is a finite chain of subfields from F to G where each extension in the chain is either algebraic, logarithmic, or exponential.
Basic theorem
Suppose F and G are differential fields with Con(F) = Con(G), and that G is an elementary differential extension of F. Suppose f ∈ F and g ∈ G satisfy g′ = f (in words, suppose that G contains an antiderivative of f).
Then there exist constants c₁, …, cₙ ∈ Con(F) and elements u₁, …, uₙ, v ∈ F such that

f = c₁·u₁′/u₁ + ⋯ + cₙ·uₙ′/uₙ + v′.
In other words, the only functions that have "elementary antiderivatives" (that is, antiderivatives living in, at worst, an elementary differential extension of ) are those with this form. Thus, on an intuitive level, the theorem states that the only elementary antiderivatives are the "simple" functions plus a finite number of logarithms of "simple" functions.
A proof of Liouville's theorem can be found in section 12.4 of Geddes, et al. See Lützen's scientific bibliography for a sketch of Liouville's original proof (Chapter IX. Integration in Finite Terms), its modern exposition and algebraic treatment (ibid. §61).
Examples
As an example, the field C(x) of rational functions in a single variable has a derivation given by the standard derivative with respect to that variable. The constants of this field are just the complex numbers; that is, Con(C(x)) = C.
The function 1/x, which exists in C(x), does not have an antiderivative in C(x). Its antiderivatives ln(x) + C do, however, exist in the logarithmic extension C(x, ln x).
Likewise, the function 1/(x² + 1) has antiderivatives arctan(x) + C which do not seem to satisfy the requirements of the theorem, since they are not (apparently) sums of rational functions and logarithms of rational functions. However, a calculation with Euler's formula e^(iθ) = cos θ + i sin θ shows that in fact the antiderivatives can be written in the required manner (as logarithms of rational functions).
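The Euler-formula calculation mentioned above can be made explicit; as a sketch (a standard complex-logarithm identity, valid up to a constant of integration on the real line):

```latex
\arctan x \;=\; \frac{1}{2i}\,\ln\frac{1+ix}{1-ix},
\qquad
\frac{d}{dx}\left[\frac{1}{2i}\,\ln\frac{1+ix}{1-ix}\right]
 \;=\; \frac{1}{2i}\left(\frac{i}{1+ix}+\frac{i}{1-ix}\right)
 \;=\; \frac{1}{1+x^2}.
```

So the antiderivative of 1/(x² + 1) is a constant multiple of the logarithm of a rational function, exactly the form Liouville's theorem requires (with the constant c₁ = 1/(2i) lying in the constant field C).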
Relationship with differential Galois theory
Liouville's theorem is sometimes presented as a theorem in differential Galois theory, but this is not strictly true. The theorem can be proved without any use of Galois theory. Furthermore, the Galois group of a simple antiderivative is either trivial (if no field extension is required to express it), or is simply the additive group of the constants (corresponding to the constant of integration). Thus, an antiderivative's differential Galois group does not encode enough information to determine if it can be expressed using elementary functions, the major condition of Liouville's theorem.
See also
Notes
References
External links
Differential algebra
Differential equations
Field (mathematics)
Theorems in algebra | Liouville's theorem (differential algebra) | [
"Mathematics"
] | 845 | [
"Differential algebra",
"Mathematical theorems",
"Theorems in algebra",
"Mathematical objects",
"Differential equations",
"Equations",
"Fields of abstract algebra",
"Mathematical problems",
"Algebra"
] |
9,483,282 | https://en.wikipedia.org/wiki/Organometallics | Organometallics is a biweekly journal published by the American Chemical Society. Its area of focus is organometallic and organometalloid chemistry. This peer-reviewed journal has an impact factor of 3.837 as reported by the 2021 Journal Citation Reports by Thomson Reuters.
Since 2015, Paul Chirik has been the editor-in-chief of Organometallics. He is an American chemist, the Edwards S. Sanford Professor of Chemistry at Princeton University, and associate director for external partnerships of the Andlinger Center for Energy and the Environment. His research concerns the catalysis of hydrocarbons.
Past editors-in-chief are Dietmar Seyferth and John Gladysz. This journal is indexed in Chemical Abstracts Service (CAS), British Library, CAB International, EBSCOhost, ProQuest, PubMed, SCOPUS, SwetsWise, and Web of Science.
See also
Organic Letters
Inorganic Chemistry
The Journal of Organic Chemistry
References
External links
American Chemical Society academic journals
Organometallic chemistry
Academic journals established in 1982
English-language journals
Biweekly journals | Organometallics | [
"Chemistry"
] | 225 | [
"Organometallic chemistry"
] |
9,483,405 | https://en.wikipedia.org/wiki/Special-use%20permit | A special-use permit authorizes land uses that are allowed and encouraged by the ordinance and declared harmonious with the applicable zoning district.
Purpose
Land use is governed by a set of regulations generally known as ordinances or municipal codes, which are authorized by the state's zoning enabling law. Within an ordinance is a list of land use designations commonly known as zoning. Each different type of zone has its own set of allowed uses. These are known as by-right uses. Then there is an extra set of uses known as special uses. To build a use that is listed as a special use, a special-use permit (or conditional-use permit) must be obtained.
An example of a special-use permit may be found in a church applying for one to construct a church building in a residential neighborhood. Although the church building is not a residential building, the zoning law may allow for churches in the residential neighborhood if the local zoning authority may review the impact on the neighborhood. This process grants discretion to the local zoning authority to ensure that an acceptable land use does not disrupt the zoning scheme because of its particular location.
US Zoning Model Act
The Standard State Zoning Enabling Act allows special-use permits based upon a finding of compatibility with surrounding areas and with developments already permitted under the general provisions of the ordinance.
Abuse
If the local zoning authority grants a special-use permit that exceeds the discretion allowed to it, then an incidence of spot zoning may arise. Such discretion then may be attacked as ultra vires, and the special-use permit overturned as an unconstitutional violation of equal protection.
Special-use permits are also required when a property has been deemed a "nonconforming use." A permit for a nonconforming use will allow the owner of a previously-compliant property to continue the existing use. This often arises when a property has been rezoned or amortized. Amortization is unconstitutional in many states, and is a controversial tool according to many property rights advocacy groups.
An example of special-use-permit abuse may be found when a business or other organization is using U.S. Forest Service land for commercial use. Special-use permits may be revoked after the initial period if it is deemed to have not met the proposed public need. This then allows for another operator to apply for the special-use permit which will be evaluated on whether they are likely to succeed and meet the public need as well as follow all other criteria such as proper resource management and respectful public use. If deemed successful after the initial period the permit can be renewed for a longer period.
Other uses
Special land-use permits are also issued by the U.S. Forest Service for the operation of ski areas and other recreational facilities in national forests. These facilities are operated by commercial providers who help the public in a way that would not otherwise be helped by the U.S. Forest Service. The U.S. Forest Service does not run the facility itself and is not the one providing the benefits that the commercial business provides. Special-use permits have also been issued for other purposes, such as in Alaska during the summer of 2015, when special fishing permits were issued to feed firefighters who had difficulty receiving supplies via land routes due to the forest fires that they were fighting in remote areas.
In broadcasting, a restricted service licence (UK) or special temporary authority (US) may be issued by a broadcasting authority for temporary changes or set-ups for a radio station or television station. This may be for a temporary LPFM station for a special event (an RSL), or for an unexpected situation such as having to operate at low power from an emergency radio antenna or radio tower after a disaster or major equipment failure (STA).
See also
Zoning
Zoning in the United States (land use)
Variance (land use)
Spot zoning
References
Zoning
Legal documents
Building engineering | Special-use permit | [
"Engineering"
] | 778 | [
"Building engineering",
"Zoning",
"Construction",
"Civil engineering",
"Architecture"
] |
2,222,206 | https://en.wikipedia.org/wiki/K%E1%B9%9Bttik%C4%81 | The star cluster Sanskrit: कृत्तिका, pronounced , popularly transliterated Krittika), sometimes known as Kārtikā, corresponds to the open star cluster called Pleiades in western astronomy; it is one of the clusters which makes up the constellation Taurus. In Indian astronomy and (Hindu astrology) the name literally translates to "the cutters". It is also the name of its goddess-personification, who is a daughter of Daksha and Panchajani, and thus a half-sister to Khyati. Spouse of Kṛttikā is Chandra ("moon"). The six Krittikas who raised the Hindu God Kartikeya are Śiva, Sambhūti, Prīti, Sannati, Anasūya and Kṣamā.
In Hindu astrology, Kṛttikā is the third of the 27 nakshatras. It is ruled by the Sun.
Under the traditional Hindu principle of naming individuals according to their Ascendant/Lagna, the following Sanskrit syllables correspond with this nakshatra and would belong at the beginning of the first name of an individual born under it: A (अ), I (ई), U (उ) and E (ए).
See also
List of Nakshatras
Pleione
References
Taurus (constellation)
Nakshatra
Daughters of Daksha | Kṛttikā | [
"Astronomy"
] | 273 | [
"Nakshatra",
"Taurus (constellation)",
"Constellations"
] |
2,222,213 | https://en.wikipedia.org/wiki/Society%20for%20Cryobiology | The Society for Cryobiology is an international scientific society that was founded in 1964. Its objectives are to promote research in low temperature biology, to improve scientific understanding in this field, and to disseminate and aid in the application of this knowledge. The Society also publishes a journal called Cryobiology.
The society has hosted 60 annual meetings to date, with the 2024 annual meeting held in Washington; the three-day event was expected to host over 350 delegates from more than 35 countries.
Presidents of the society
Past presidents of the society are listed below:
1964 – 1965 Basile J. Luyet
1966 – 1967 Ronald I. N. Greaves
1967 – 1968 Donald Greiff
1968 – 1969 Charles E. Huggins
1969 – 1970 Arthur P. Rinfret
1970 – 1971 George W. Hyatt
1971 – 1973 Jacob Levitt
1973 – 1974 Peter Mazur
1975 – 1976 David E. Pegg
1977 – 1978 Alan P. MacKenzie
1979 – 1980 Michael J. Ashwood-Smith
1981 – 1982 Harold T. Meryman
1983 – 1985 Arthur W. Rowe
1985 – 1987 Stanley P. Leibo
1987 – 1989 John G. Baust
1989 – 1991 S. Randolph May
1992 – 1993 James H. Southard
1994 – 1995 Kenneth R. Diller
1996 – 1997 Peter L. Steponkus
1998 – 1999 Locksley E. McGann
2000 – 2001 John J. McGrath
2002 – 2003 Mehmet Toner
2004 – 2005 John C. Bischof
2006 – 2007 Andreas Sputtek
2008 – 2009 John K. Critser
2010 – 2011 Barry J. Fuller
2012 – 2013 John H. Crowe
2014 – 2015 Erik J. Woods
2016 – 2017 Jason P. Acker
2018 – 2019 Dayong Gao
2020 – 2021 Adam Higgins
2022 – 2023 Gregory M. Fahy
2024 – 2025 Allison Hubel
2026 – 2027 John M. Baust
References
External links
Society for Cryobiology official site
Cryobiology
1964 establishments in the United States | Society for Cryobiology | [
"Physics",
"Chemistry",
"Biology"
] | 407 | [
"Biochemistry",
"Physical phenomena",
"Phase transitions",
"Cryobiology"
] |
2,222,261 | https://en.wikipedia.org/wiki/OTR-21%20Tochka | OTR-21 Tochka () is a Soviet tactical ballistic missile. Its GRAU designation is 9K79. Its NATO reporting name is the SS-21 Scarab. One missile is transported per 9P129 vehicle and raised prior to launch. It uses an inertial guidance system.
The OTR-21 forward deployment to East Germany began in 1981, replacing the earlier Luna-M series of unguided artillery rockets. The system was scheduled to be decommissioned by the Russian Armed Forces in 2020 in favour of the 9K720 Iskander, but they have been observed in use against Ukrainian targets during the 2022 Russian invasion of Ukraine.
Description
The OTR-21 is a mobile missile launch system, designed to be deployed along with other land combat units on the battlefield. While the 9K52 Luna-M is large and relatively inaccurate, the OTR-21 is much smaller. The missile itself can be used for precise strikes on enemy tactical targets, such as control posts, bridges, storage facilities, troop concentrations and airfields. The fragmentation warhead can be replaced with a nuclear, biological or chemical warhead. The solid propellant makes the missile easy to maintain and deploy.
OTR-21 units are usually managed in a brigade structure. There are 18 launchers in a brigade. Each launcher is provided with two or three missiles.
The vehicle is amphibious, with a maximum road speed of and in water. The vehicle is NBC-protected. The system began development in 1968. Three variants were developed.
Tochka
The initial version, Tochka, NATO reporting name Scarab A, entered service with the Soviet Army in 1975. It carried one of four types of warhead:
9M123F unitary High explosive warhead. Weight .
9M123K submunitions warhead. Anti-personnel, anti-armour and anti-runway submunitions available.
9M79B nuclear. Selectable yield of 10 or 100 kT.
9N123R EMP warhead.
The minimum range was about , maximum range was . Its circular error probable (CEP) is estimated to be about .
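Circular error probable is a statistical accuracy measure: the radius of a circle, centred on the aim point, expected to contain half of the impacts. A minimal sketch, estimating it as the median radial miss distance of sampled impact points (the coordinates below are invented for illustration, not sourced data):

```python
import math

def cep(impacts, aim=(0.0, 0.0)):
    """Circular error probable: the median radial miss distance, i.e. the
    radius of the aim-centred circle expected to contain half the impacts."""
    radii = sorted(math.dist(p, aim) for p in impacts)
    n = len(radii)
    mid = n // 2
    return radii[mid] if n % 2 else 0.5 * (radii[mid - 1] + radii[mid])

# Four invented impact points, in metres from the aim point:
print(cep([(30, 40), (0, 120), (-60, 80), (150, 0)]))  # -> 110.0
```

A real CEP estimate would be fitted from a statistical model of many test firings; the median of a small sample is only the simplest stand-in for that.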
Tochka-U
The improved Tochka-U, NATO reporting name Scarab B, passed state tests from 1986 to 1988, and was introduced in 1989.
A new motor propellant increased the range to . The CEP significantly improved, to . Six warhead options have been reported: a unitary high-explosive warhead, an anti-personnel submunition dispenser, an anti-radar warhead, an EMP warhead and two nuclear warheads.
Scarab C
An unconfirmed third variant, designated Scarab C by NATO, may have been developed in the 1990s, but was likely never operational. Range increased to , and the CEP decreased to less than 70 m (229 ft). Scarab C weighed .
Configuration
9M79 missiles with various types of warheads (-9M79-1 for Tochka U Complex).
Launcher 9P129 or 9P129-1M (SPU);
Transport and loading machine 9T218 or 9T128-1 (TZM);
Transport vehicle 9T222 or 9T238 (TM);
Automatic testing machine 9V819 or 9V819-1 (AKIM);
Technical service vehicle 9V844 or 9V844M (MTO).
Set of weapon equipment 9F370-1 (KAO);
Educational means:
Simulator 9F625M;
Missile overall weight model (such as 9M79K-GVM).
9M79-UT training missile and 9N123F (K) -UT, 9N39-UT warhead. 9H123F-R UT;
9M79-RM missile and 9N123K-RM missile split training model.
Operational history
During the 1994 Yemen civil war, the Yemeni government used Tochka missiles against southern forces.
In 1999, Russia used the missiles in the Second Chechen War.
In August 2008, at least 15 Tochka missiles were deployed by Russian forces during the 2008 South Ossetia war.
In 2014, CNN reported that at least one was used near Donetsk during the War in Donbass, by either the Ukrainian Army or the Russian-backed separatist forces. The Ukrainian army issued a statement denying the use of the ballistic missile.
In the early stages of the Russian invasion of Ukraine, the Tochka was the Ukrainian Army's primary means of striking Russian air bases.
Syrian civil war (2011–present)
In early December 2014, the Syrian Army fired at least one Tochka against Syrian rebels during the Siege of Wadi al-Deif (near Maarat al-Numan, in Idlib province).
On 26 April 2016, the Syrian Army fired a Tochka at Syrian rebels in the Syrian Civil Defense Center in west Aleppo.
On 14 June 2016, the Syrian Army fired a Tochka at Syrian rebel groups Al-Rahman Legion and Jaysh Al-Fustat in Eastern Ghouta, killing several fighters.
On 20 March 2018, the Syrian Army fired a Tochka towards the Turkish Hatay province, which fell in the border district of Yayladağı without causing any casualties or damage.
On 23 July 2018, the Syrian Army fired two Tochka missiles near the Israeli border. Because the missiles initially appeared to be inbound toward Israel near the Sea of Galilee, Israel fired two David's Sling interceptors. A few moments later it became clear the missiles would strike within Syria, so one interceptor was detonated over Israel while the other fell inside Syria. One Tochka missile landed 1 kilometer inside Syria.
On 5 March 2021, the Syrian Army reportedly fired a KN-02 Toksa, a North Korean-made solid-fuelled short-range copy of the Tochka, at a major oil facility in the country's Idlib governorate, which is under the control of Turkish-backed insurgents. The strike near the oil facilities ignited major blazes, killed one person and wounded 11.
Yemeni civil war (2014–present)
On 4 September 2015, Houthi forces fired a Tochka missile at Safir base in Marib killing over 100 Saudi-led coalition personnel.
On 14 December 2015, Houthi forces fired another Tochka missile at Bab Al Mandab base killing over 150 Saudi-led coalition personnel stationed there.
On 16 January 2016, Houthi forces fired a Tochka at Al Bairaq base in Marib, killing dozens of Saudi-led coalition personnel.
On 31 January 2016, Houthi forces fired a Tochka at Al Anad base in Lahj, killing and wounding over 200 Saudi-led coalition personnel.
2020 Nagorno-Karabakh war
Azerbaijan claimed Armenia fired Tochka-U rockets at its territory during the 2020 Nagorno-Karabakh conflict. Armenia denied this, stating that Azerbaijan is making "disinformation to justify the use of a similar system or a system of a higher caliber."
Russo-Ukrainian War
On 24 February 2022, Ukrainian forces launched a missile attack on Russian Millerovo Airbase in Rostov Oblast, using two Tochka-U ballistic missiles, in response to the Russian invasion of Ukraine and to prevent further air strikes by the Russian air force against Ukraine. The attack left one Su-30SM destroyed on the ground.
On 24 February 2022, a 9M79 Tochka missile fired by Russian forces struck near a hospital building in Vuhledar, Donetsk Oblast, Ukraine, killing 4 civilians and wounding 10. An Amnesty International investigation confirmed that the hospital was not a military target.
On 14 March 2022, the Russian Federation and the government of the separatist Donetsk People's Republic accused Ukrainian forces of launching a Tochka-U missile that killed 23 civilians and wounded 28 in Donetsk. The housing facility was supposedly used as a barracks for separatist forces.
On 19 March 2022, Russian forces claimed that they shot down a Ukrainian-fired missile near the Port of Berdiansk.
On 24 March 2022, the Russian Navy landing ship Saratov, docked in Berdyansk port in Ukraine, caught fire and sank. On 3 July, a Russian official confirmed the sinking of the Saratov, a Soviet-era Tapir-class landing ship. The ship was hit by a Tochka-U missile. Russia claims that the ship was scuttled by its crew to prevent its munitions from exploding and that the ship has since been salvaged.
On 8 April 2022, the railway station in Kramatorsk, under Ukrainian control, was hit by two Russian Tochka-U ballistic missiles. The attack killed at least 52 civilians and injured at least 87 more. Russia later falsely blamed Ukraine for the strike. The Russian message "Za detei" ("for the children") had been daubed on the missile in white.
On 16 June 2022, a Russian ammunition warehouse in the occupied Ukrainian city of Khrustalnyi was reported to have been hit by a Ukrainian Tochka-U missile.
On 13 January 2023, Ukraine claimed to have killed over 100 Russian soldiers in the Soledar area using special forces, artillery and a Tochka-U missile.
On 12 May 2024, according to Russian government and state media reports, a Ukrainian missile attack said to have included Tochka-U missiles damaged a 10-story residential building in Belgorod, with a reported death toll of 15 people.
Operators
Current operators
3+ launchers as of 2024
4 Tochka-U launchers as of 2024
Unknown number of launchers as of 2024
12 launchers as of 2024
Unknown numbers of KN-02 Toksa variant as of 2024
In 2022, it was estimated that Russia had 200 missiles in service, despite the Tochka having been largely replaced by the Iskander. 50 launchers as of 2024
In 2022, it was estimated that Ukraine had 90 launchers and 500 missiles. Unknown number of launchers as of 2024, possibly no longer operational
Unknown number of launchers as of 2024
Former operators
36 Tochka-U launchers in 2022, split in three brigades with 12 launchers each according to the International Institute of Strategic Studies, while the Belarusian order of battle only lists the 465th Missile Brigade. None in service in 2024
8 launchers in 1989. Passed on to successor states.
Inherited from Czechoslovakia, remained in service as late as 2004
8 launchers in 1989, scrapped after the German reunification
Ordered 12 launchers and around 100 missiles; declared operational in 1988. They were used during the 1994 civil war and were afterwards passed on to unified Yemen.
4 launchers and 40 missiles delivered in 1987, remained in service as late as 2008
Inherited a small number from Czechoslovakia, remained in service as late as 2001
300 launchers in 1991, passed on to successor states
Inherited from North Yemen. Used during the 1994 civil war and the ongoing civil war. None in service in 2024
See also
References
Bibliography
External links
CSIS Missile Threat SS-21
CSIS Missile Defense Project - The Missile War in Yemen
SS-21 Scarab article on Warfare.ru
Tochka-U Video
SS-21 Scarab (9K79 Tochka)
OTR Tochka
Cold War missiles of the Soviet Union
Nuclear missiles of the Soviet Union
Tactical ballistic missiles
Theatre ballistic missiles
Chemical weapon delivery systems
KB Mashinostroyeniya products
Tactical ballistic missiles of North Korea
Tactical ballistic missiles of the Soviet Union
Military equipment introduced in the 1970s | OTR-21 Tochka | [
"Chemistry"
] | 2,399 | [
"Chemical weapon delivery systems",
"Chemical weapons"
] |
2,222,518 | https://en.wikipedia.org/wiki/One-name%20study | A one-name study is a project researching a specific surname, as opposed to a particular pedigree (ancestors of one person) or descendancy (descendants of one person or couple). Some people who research a specific surname may restrict their research geographically and chronologically, perhaps to one country and time period, while others may collect all occurrences world-wide for all time.
A one-name study is not limited to persons who are related biologically. Studies may have a number of family trees which have no link with each other.
Findings from a one-name study are useful to genealogists. Onomasticians, who study the etymology, meaning and geographic origin of names, also draw on the macro perspective provided by a one-name study.
Scope
Many people conducting family history, genealogical or onomastic research may conduct a one-name study of a surname in a given period or locality quite informally.
A full one-name study can be daunting, particularly if the surname is very common. Conversely, a rare surname can be difficult to trace. Since such studies are usually conducted by individuals as a pastime, they are generally feasible only when a surname is not used by more than a couple of thousand contemporary people, so that the total historical data-set is numbered in the low tens of thousands. Where a surname is used by hundreds of thousands, or millions of people, it would be practically impossible to differentiate these persons using national-index data alone.
In some cultures, one-name studies are impossible, since hereditary surnames are not used at all or in the case of names such as Singh may represent religious practice rather than an ancestry. Since a majority of human societies use patronymic surnames, one-name studies generally focus on male succession and ignore family relationships through marriage.
Some researchers are satisfied to collect all information and group it geographically, approximately representing the different family groups. Others attempt to reconstruct lineages.
In most one-name studies, a united lineage will not be discovered, but broad perspectives can be achieved, giving clues to name origins and migrations. Many researchers are motivated to go beyond the one-name-study stage and to compile fully researched, single-family histories of some of the families they discover.
Methods
Accessibility of the data required for a one-name study varies from country to country. Where civil registration indexes are open to public search, they may not be online or gathered in the national capital, but are scattered through the states, as in Australia, or towns, as in France and the United States. In many countries, such as Germany, civil registration and census data are regarded as a state prerogative: vital data are only available to the persons concerned and 19th-century census returns are not available at all.
One-name studies in the United States have become more feasible than they were, thanks to the increased availability of online indexes to 19th-century and early-20th-century censuses.
More limited one-name studies can be conducted using other national indexes including:
telephone and address directories
registers of wills or deceased estates
electoral rolls
land possession records
military service indexes
One-name studies are generally rounded out with a miscellany of information drawn from national bibliographies, archival catalogues, patent databases, reports of law cases, tax lists, newspaper indexes and web searches. A one-name researcher may also report on the linguistic origins of the surname and its use in place names and corporate names.
UK surnames
Civil registration indexes of births, marriages and deaths in England and Wales (for the period from 1837), Scotland (from 1855) and Northern Ireland (from 1865 and Protestant marriages from 1845) are in the public domain, and anyone may apply to see the details of any birth, marriage or death. For the period before civil registration, in principle back to 1538 in England and Wales and 1533 in Scotland, parish registers have recorded birth and/or baptisms, marriages and deaths and/or burials. These are also freely available, although the survival of such registers is less likely as we reach back to the earliest dates of this period.
The civil registration index books for England and Wales were scanned and made available online in 2004 by the subscription web site Findmypast (formerly 1837online) and an index has also been created by volunteers for the free web site FreeBMD. Records for Scotland can be searched at the pay-per-view web site ScotlandsPeople, and this means that a one-name study with a British focus can be conducted from anywhere in the world. Civil registration indexes for Northern Ireland can be viewed at the General Register Office (Northern Ireland) (GRONI) on payment of an entrance fee.
Censuses have taken place in England, Wales, Scotland and Ireland since the 1800s. The Irish census returns for the years 1841 to 1891 are not available, having been destroyed. Otherwise, information from the ten-yearly censuses from 1841 until 1911 is available and facilitates the linking of surname data into family groups.
Since it is possible to extract a complete data-set of a given surname from these public records, ancestries of most 20th-century persons with a particular surname in England and Wales can be compiled without needing any contact to the persons concerned.
Tools
While most one-name studies are conducted as a pastime, rather than as an economic activity, the sheer volume of information to be organised may require semi-professional data-processing and publishing skills. To avoid retyping large volumes of data by hand, one-name researchers are often skilled at data scraping and automated reformatting. The data must be carefully structured. An accurate copy of the original indexes must be drawn up, and updated when they are amended. Errors and conflicts in the indexes are noted. Links to those tables appear in the roll of individual persons.
Many one-name researchers keep data tables in computer spreadsheets, because hundreds of items can be viewed on a single screen and scanned by eye for patterns. Genealogy software is used by many researchers to collate and define family trees. Others employ relational database software.
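The collation described above can be sketched in a few lines. A minimal illustration, grouping index rows under a canonical surname by decade (the variant map and records are invented for illustration; the surname Palgrave is borrowed from this article's bibliography, not real index data):

```python
from collections import defaultdict

# Invented spelling-variant map: each observed spelling maps to one
# canonical surname for the study.
VARIANTS = {
    "PALGRAVE": "PALGRAVE",
    "PALSGRAVE": "PALGRAVE",
    "PAGRAVE": "PALGRAVE",
}

# Made-up civil-registration index rows: (surname, forename, year, district).
records = [
    ("PALSGRAVE", "John", 1851, "Norfolk"),
    ("PALGRAVE", "Mary", 1863, "Suffolk"),
    ("PAGRAVE", "Ann", 1851, "Norfolk"),
]

# Group under the canonical surname, by decade — roughly mirroring how a
# spreadsheet view exposes geographic and chronological clusters.
by_decade = defaultdict(list)
for surname, forename, year, district in records:
    canonical = VARIANTS.get(surname.upper(), surname.upper())
    by_decade[(canonical, year // 10 * 10)].append((forename, district))

for key in sorted(by_decade):
    print(key, by_decade[key])
```

A real study would of course hold far more fields per record (registration district codes, volume and page references, source citations) and track index errors separately, but the variant-normalisation and grouping step is the same.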
Increasingly one-namers are becoming involved in Surname DNA projects, using Y-DNA testing to analyse relationships among different lineages bearing the same surname (or suspected spelling variants).
Motivation and support
One-name researchers often begin a study in the hope that obtaining a massive data set will give them sufficient perspective to break through a barrier in their own family history research. Some are motivated by the belief, only rarely borne out, that kinship can be documented among all persons sharing a surname.
The Guild of One-Name Studies was established in the United Kingdom in September 1979, and maintains a register of surnames being researched by members. It is a channel for anyone wishing to contact the person researching a particular registered name. In 2014 the Guild had over 2,000 world-wide members conducting studies of individual surnames and their variants.
Publication
Traditionally, publication of definitive research is undertaken by printing a book or by publishing a one-name periodical. Such publications are often sponsored by formally established one-name groups. The UK-based Federation of Family History Societies includes several One-Name Societies, whilst the Guild of One-Name Studies has many members who are associated with such organisations. Advice on setting up a one-name group appears in a short booklet, "One-Name Family History Groups" by Derek Palgrave published by the Halsted Trust in 2008.
Today many studies are presented online, since the data can be continually updated and made available worldwide.
A number of Guild of One-Name Studies members have taken advantage of the member benefit called the Members Website Project (MWP), which enables members to share and publish their studies as websites while continuing to work on them.
See also
Surname DNA project
Extinction of surnames
References
External links
Guild of One-Name Studies
The Surname Society
Genealogy
Surname
Onomastics | One-name study | [
"Biology"
] | 1,615 | [
"Phylogenetics",
"Genealogy"
] |
2,222,635 | https://en.wikipedia.org/wiki/Atmospheric%20electricity | Atmospheric electricity describes the electrical charges in the Earth's atmosphere (or that of another planet). The movement of charge between the Earth's surface, the atmosphere, and the ionosphere is known as the global atmospheric electrical circuit. Atmospheric electricity is an interdisciplinary topic with a long history, involving concepts from electrostatics, atmospheric physics, meteorology and Earth science.
Thunderstorms act as a giant battery in the atmosphere, charging up the electrosphere to about 400,000 volts with respect to the surface. This sets up an electric field throughout the atmosphere, which decreases with increase in altitude. Atmospheric ions created by cosmic rays and natural radioactivity move in the electric field, so a very small current flows through the atmosphere, even away from thunderstorms. Near the surface of the Earth, the magnitude of the field is on average around 100 V/m, oriented such that it drives positive charges down.
Atmospheric electricity involves both thunderstorms, which create lightning bolts to rapidly discharge huge amounts of atmospheric charge stored in storm clouds, and the continual electrification of the air due to ionization from cosmic rays and natural radioactivity, which ensure that the atmosphere is never quite neutral.
History
Sparks drawn from electrical machines and from Leyden jars suggested to early experimenters Hauksbee, Newton, Wall, Nollet, and Gray that lightning was caused by electric discharges. In 1708, Dr. William Wall was one of the first to observe that spark discharges resembled miniature lightning, after observing the sparks from a charged piece of amber.
Benjamin Franklin's experiments showed that electrical phenomena of the atmosphere were not fundamentally different from those produced in the laboratory, by listing many similarities between electricity and lightning. By 1749, Franklin observed lightning to possess almost all the properties observable in electrical machines.
In July 1750, Franklin hypothesized that electricity could be taken from clouds via a tall metal aerial with a sharp point. Before Franklin could carry out his experiment, in 1752 Thomas-François Dalibard erected an iron rod at Marly-la-Ville, near Paris, drawing sparks from a passing cloud. With ground-insulated aerials, an experimenter could bring a grounded lead with an insulated wax handle close to the aerial and observe a spark discharge from the aerial to the grounding wire. In May 1752, Dalibard affirmed that Franklin's theory was correct.
Around June 1752, Franklin reportedly performed his famous kite experiment. The kite experiment was repeated by Romas, who drew long sparks from a metallic string, and by Cavallo, who made many important observations on atmospheric electricity. Lemonnier (1752) also reproduced Franklin's experiment with an aerial, but substituted the ground wire with some dust particles (testing attraction). He went on to document the fair weather condition, the clear-day electrification of the atmosphere, and its diurnal variation. Beccaria (1775) confirmed Lemonnier's diurnal variation data and determined that the atmosphere's charge polarity was positive in fair weather. Saussure (1779) recorded data relating to a conductor's induced charge in the atmosphere. Saussure's instrument (which contained two small spheres suspended in parallel with two thin wires) was a precursor to the electrometer. Saussure found that the atmospheric electrification under clear weather conditions had an annual variation, and that it also varied with height. In 1785, Coulomb discovered the electrical conductivity of air. His discovery was contrary to the prevailing thought at the time, that the atmospheric gases were insulators (which they are to some extent, or at least not very good conductors when not ionized). Erman (1804) theorized that the Earth was negatively charged, and Peltier (1842) tested and confirmed Erman's idea.
Several researchers contributed to the growing body of knowledge about atmospheric electrical phenomena. Francis Ronalds began observing the potential gradient and air-earth currents around 1810, including making continuous automated recordings. He resumed his research in the 1840s as the inaugural Honorary Director of the Kew Observatory, where the first extended and comprehensive dataset of electrical and associated meteorological parameters was created. He also supplied his equipment to other facilities around the world with the goal of delineating atmospheric electricity on a global scale. Kelvin's new water dropper collector and divided-ring electrometer were introduced at Kew Observatory in the 1860s, and atmospheric electricity remained a speciality of the observatory until its closure. For high-altitude measurements, kites were once used, and weather balloons or aerostats are still used, to lift experimental equipment into the air. Early experimenters even went aloft themselves in hot-air balloons.
Hoffert (1888) identified individual lightning downward strokes using early cameras. Elster and Geitel, who also worked on thermionic emission, proposed a theory to explain thunderstorms' electrical structure (1885) and, later, discovered atmospheric radioactivity (1899) from the existence of positive and negative ions in the atmosphere. Pockels (1897) estimated lightning current intensity by analyzing lightning flashes in basalt (c. 1900) and studying the left-over magnetic fields caused by lightning. Discoveries about the electrification of the atmosphere via sensitive electrical instruments and ideas on how the Earth's negative charge is maintained were developed mainly in the 20th century, with CTR Wilson playing an important part. Current research on atmospheric electricity focuses mainly on lightning, particularly high-energy particles and transient luminous events, and the role of non-thunderstorm electrical processes in weather and climate.
Description
Atmospheric electricity is always present, and during fine weather away from thunderstorms, the air above the surface of Earth is positively charged, while the Earth's surface charge is negative. This can be understood in terms of a difference of potential between a point of the Earth's surface, and a point somewhere in the air above it. Because the atmospheric electric field is negatively directed in fair weather, the convention is to refer to the potential gradient, which has the opposite sign and is about 100 V/m at the surface, away from thunderstorms. There is a weak conduction current of atmospheric ions moving in the atmospheric electric field, about 2 picoamperes per square meter, and the air is weakly conductive due to the presence of these atmospheric ions.
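Taken together, these figures fix the fair-weather parameters of the global circuit. An order-of-magnitude check (the only added inputs are the Earth's surface area, about 5.1×10¹⁴ m², and the ~400 kV electrosphere potential quoted earlier):

```latex
\begin{aligned}
\sigma &= \frac{J}{E} \approx \frac{2\times10^{-12}\ \mathrm{A\,m^{-2}}}{100\ \mathrm{V\,m^{-1}}}
        = 2\times10^{-14}\ \mathrm{S\,m^{-1}},\\[4pt]
I_{\text{global}} &= J\,A_{\text{Earth}} \approx \left(2\times10^{-12}\ \mathrm{A\,m^{-2}}\right)\left(5.1\times10^{14}\ \mathrm{m^{2}}\right) \approx 1\ \mathrm{kA},\\[4pt]
R_{\text{global}} &= \frac{V}{I_{\text{global}}} \approx \frac{4\times10^{5}\ \mathrm{V}}{10^{3}\ \mathrm{A}} \approx 400\ \Omega.
\end{aligned}
```

These are rough figures, but they are consistent with the few-hundred-ohm global circuit resistance commonly quoted in the literature.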
Variations
Global daily cycles in the atmospheric electric field, with a minimum around 03 UT and peaking roughly 16 hours later, were researched by the Carnegie Institution of Washington in the 20th century. This Carnegie curve variation has been described as "the fundamental electrical heartbeat of the planet".
Even away from thunderstorms, atmospheric electricity can be highly variable, but, generally, the electric field is enhanced in fogs and dust whereas the atmospheric electrical conductivity is diminished.
Links with biology
The atmospheric potential gradient leads to an ion flow from the positively charged atmosphere to the negatively charged earth surface. Over a flat field on a day with clear skies, the atmospheric potential gradient is approximately 120 V/m. Objects protruding into this field, e.g. flowers and trees, can increase the local electric field strength to several kilovolts per meter. These near-surface electrostatic forces are detected by organisms such as bumblebees, which use them to navigate to flowers, and spiders, which use them to initiate dispersal by ballooning. The atmospheric potential gradient is also thought to affect sub-surface electrochemistry and microbial processes.
On the other hand, swarming insects and birds can be a source of biogenic charge in the atmosphere, likely contributing to a source of electrical variability in the atmosphere.
Near space
The electrosphere layer (from tens of kilometers above the surface of the Earth to the ionosphere) has a high electrical conductivity and is essentially at a constant electric potential. The ionosphere is the inner edge of the magnetosphere and is the part of the atmosphere that is ionized by solar radiation. (Photoionization is a physical process in which a photon is incident on an atom, ion or molecule, resulting in the ejection of one or more electrons.)
Cosmic radiation
The Earth, and almost all living things on it, are constantly bombarded by radiation from outer space. This radiation primarily consists of positively charged nuclei, ranging from protons to iron and heavier nuclei, derived from sources outside the Solar System. This radiation interacts with atoms in the atmosphere to create an air shower of secondary ionising radiation, including X-rays, muons, protons, alpha particles, pions, and electrons. Ionization from this secondary radiation ensures that the atmosphere is weakly conductive, and the slight current flow from these ions over the Earth's surface balances the current flow from thunderstorms. Ions have characteristic parameters such as mobility, lifetime, and generation rate that vary with altitude.
Thunderstorms and lightning
The potential difference between the ionosphere and the Earth is maintained by thunderstorms, with lightning strikes delivering negative charges from the atmosphere to the ground.
Collisions between ice and soft hail (graupel) inside cumulonimbus clouds causes separation of positive and negative charges within the cloud, essential for the generation of lightning. How lightning initially forms is still a matter of debate: Scientists have studied root causes ranging from atmospheric perturbations (wind, humidity, and atmospheric pressure) to the impact of solar wind and energetic particles.
An average bolt of lightning carries a negative electric current of 40 kiloamperes (kA) (although some bolts can be up to 120 kA), and transfers a charge of five coulombs and energy of 500 MJ, or enough energy to power a 100-watt lightbulb for just under two months. The voltage depends on the length of the bolt, with the dielectric breakdown of air being three million volts per meter, and lightning bolts often being several hundred meters long. However, lightning leader development is not a simple matter of dielectric breakdown, and the ambient electric fields required for lightning leader propagation can be a few orders of magnitude less than dielectric breakdown strength. Further, the potential gradient inside a well-developed return-stroke channel is on the order of hundreds of volts per meter or less due to intense channel ionization, resulting in a true power output on the order of megawatts per meter for a vigorous return-stroke current of 100 kA.
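The arithmetic behind these figures is easy to verify. A minimal sketch (not from the article; it just recomputes the quoted numbers):

```python
# Sanity check of the lightning figures quoted above (values taken from the text).
charge_c = 5.0      # coulombs transferred by an average bolt
energy_j = 500e6    # 500 MJ
bulb_w = 100.0      # 100-watt lightbulb

seconds = energy_j / bulb_w   # how long one bolt's energy runs the bulb
days = seconds / 86400
print(f"{days:.1f} days")     # ~57.9 days, i.e. "just under two months"

# Implied mean potential difference if 5 C carries 500 MJ: V = E / Q
volts = energy_j / charge_c
print(f"{volts:.2e} V")       # 1.00e+08 V
```

The implied ~10⁸ V across a channel hundreds of meters long corresponds to an average field well below the 3 MV/m breakdown value, consistent with the text's point that leader propagation does not require breakdown-strength ambient fields.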
If the quantity of water that is condensed in and subsequently precipitated from a cloud is known, then the total energy of a thunderstorm can be calculated. In an average thunderstorm, the energy released amounts to about 10,000,000 kilowatt-hours (3.6×10¹³ joules), which is comparable to a 20-kiloton nuclear warhead. A large, severe thunderstorm might be 10 to 100 times more energetic.
Corona discharges
St. Elmo's Fire is an electrical phenomenon in which luminous plasma is created by a coronal discharge originating from a grounded object. Ball lightning is often erroneously identified as St. Elmo's Fire, whereas they are separate and distinct phenomena. Although referred to as "fire", St. Elmo's Fire is, in fact, plasma, and is observed, usually during a thunderstorm, at the tops of trees, spires or other tall objects, or on the heads of animals, as a brush or star of light.
Corona is caused by the electric field around the object in question ionizing the air molecules, producing a faint glow easily visible in low-light conditions. Approximately 1,000 – 30,000 volts per centimeter is required to induce St. Elmo's Fire; however, this is dependent on the geometry of the object in question. Sharp points tend to require lower voltage levels to produce the same result because electric fields are more concentrated in areas of high curvature, thus discharges are more intense at the end of pointed objects. St. Elmo's Fire and normal sparks both can appear when high electrical voltage affects a gas. St. Elmo's fire is seen during thunderstorms when the ground below the storm is electrically charged, and there is high voltage in the air between the cloud and the ground. The voltage tears apart the air molecules and the gas begins to glow. The nitrogen and oxygen in the Earth's atmosphere causes St. Elmo's Fire to fluoresce with blue or violet light; this is similar to the mechanism that causes neon signs to glow.
Earth-Ionosphere cavity
The Schumann resonances are a set of spectrum peaks in the extremely low frequency (ELF) portion of the Earth's electromagnetic field spectrum. Schumann resonance is due to the space between the surface of the Earth and the conductive ionosphere acting as a waveguide. The limited dimensions of the earth cause this waveguide to act as a resonant cavity for electromagnetic waves. The cavity is naturally excited by energy from lightning strikes.
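For an idealized lossless Earth-ionosphere cavity, the resonance frequencies follow a standard textbook formula, f_n = (c / 2πa)·√(n(n+1)), which is an assumption here (it is not stated in the article). A short sketch:

```python
import math

# Idealized Schumann resonance frequencies for a lossless spherical cavity:
#   f_n = (c / (2*pi*a)) * sqrt(n*(n+1))
# The real cavity is lossy, so the observed fundamental (~7.83 Hz) sits below
# the ideal-model value computed here.
c = 299_792_458.0   # speed of light, m/s
a = 6.371e6         # mean Earth radius, m

for n in range(1, 5):
    f = c / (2 * math.pi * a) * math.sqrt(n * (n + 1))
    print(f"mode {n}: {f:.1f} Hz")
# the ideal fundamental comes out near 10.6 Hz, vs ~7.83 Hz observed
```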
Electrical system grounding
Atmospheric charges can cause undesirable, dangerous, and potentially lethal charge potential buildup in suspended electric wire power distribution systems. Bare wires suspended in the air spanning many kilometers and isolated from the ground can collect very large stored charges at high voltage, even when there is no thunderstorm or lightning occurring. This charge will seek to discharge itself through the path of least insulation, which can occur when a person reaches out to activate a power switch or to use an electric device.
To dissipate atmospheric charge buildup, one side of the electrical distribution system is connected to the earth at many points throughout the distribution system, as often as on every support pole. The one earth-connected wire is commonly referred to as the "protective earth", and provides path for the charge potential to dissipate without causing damage, and provides redundancy in case any one of the ground paths is poor due to corrosion or poor ground conductivity. The additional electric grounding wire that carries no power serves a secondary role, providing a high-current short-circuit path to rapidly blow fuses and render a damaged device safe, rather than have an ungrounded device with damaged insulation become "electrically live" via the grid power supply, and hazardous to touch.
Each transformer in an alternating current distribution grid segments the grounding system into a new separate circuit loop. These separate grids must also be grounded on one side to prevent charge buildup within them relative to the rest of the system, and which could cause damage from charge potentials discharging across the transformer coils to the other grounded side of the distribution network.
See also
General
Atmospheric physics
Ionosphere
Air quality
Lightning rocket
Electromagnetism
Earth's magnetic field
Sprites and lightning
Whistler (radio)
Telluric current
Other
Electrodynamic tether
Solar radiation
References and external articles
Citations and notes
Other reading
Richard E. Orville (ed.), "Atmospheric and Space Electricity". ("Editor's Choice" virtual journal) – "American Geophysical Union". (AGU) Washington, DC 20009-1277 USA
Schonland, B. F. J., "Atmospheric Electricity". Methuen and Co., Ltd., London, 1932.
MacGorman, Donald R., W. David Rust, D. R. Macgorman, and W. D. Rust, "The Electrical Nature of Storms". Oxford University Press, March 1998.
Volland, H., "Atmospheric Electrodynamics", Springer, Berlin, 1984.
Further reading
Electricity in the Atmosphere - The Feynman Lectures on Physics
James R. Wait, Some basic electromagnetic aspects of ULF field variations in the atmosphere. Journal Pure and Applied Geophysics, Volume 114, Number 1 / January, 1976 Pages 15–28 Birkhäuser Basel ISSN 0033-4553 (Print) 1420-9136 (Online) DOI 10.1007/BF00875488
National Research Council (U.S.), & American Geophysical Union. (1986). The Earth's electrical environment. Washington, D.C: National Academy Press
Solar variability, weather, and climate By National Research Council (U.S.). Geophysics Study Committee
This gives a detailed summary of the phenomena as understood in the early 20th century.
External links
Electric Current through the Atmosphere
The Global Circuit , phys.uh.edu
Soaking in atmospheric electricity 'Fair weather' measurements important to understanding thunderstorms. science.nasa.gov
Atmospheric Electricity HomePage, uah.edu
Tjt, Fair-weather atmospheric electricity. ava.fmi.fi
International Commission on Atmospheric Electricity (ICAE) Homepage
Electrical phenomena | Atmospheric electricity | [
"Physics"
] | 3,372 | [
"Physical phenomena",
"Electrical phenomena",
"Atmospheric electricity"
] |
2,223,114 | https://en.wikipedia.org/wiki/Baryon%20asymmetry | In physical cosmology, the baryon asymmetry problem, also known as the matter asymmetry problem or the matter–antimatter asymmetry problem, is the observed imbalance in baryonic matter (the type of matter experienced in everyday life) and antibaryonic matter in the observable universe. Neither the standard model of particle physics nor the theory of general relativity provides a known explanation for why this should be so, and it is a natural assumption that the universe is neutral with all conserved charges. The Big Bang should have produced equal amounts of matter and antimatter. Since this does not seem to have been the case, it is likely some physical laws must have acted differently or did not exist for matter and/or antimatter. Several competing hypotheses exist to explain the imbalance of matter and antimatter that resulted in baryogenesis. However, there is as yet no consensus theory to explain the phenomenon, which has been described as "one of the great mysteries in physics".
Sakharov conditions
In 1967, Andrei Sakharov proposed a set of three necessary conditions that a baryon-generating interaction must satisfy to produce matter and antimatter at different rates. These conditions were inspired by the recent discoveries of the Cosmic microwave background and CP violation in the neutral kaon system. The three necessary "Sakharov conditions" are:
Baryon number violation.
C-symmetry and CP-symmetry violation.
Interactions out of thermal equilibrium.
Baryon number violation
Baryon number violation is a necessary condition to produce an excess of baryons over anti-baryons. But C-symmetry violation is also needed so that the interactions which produce more baryons than anti-baryons will not be counterbalanced by interactions which produce more anti-baryons than baryons. CP-symmetry violation is similarly required because otherwise equal numbers of left-handed baryons and right-handed anti-baryons would be produced, as well as equal numbers of left-handed anti-baryons and right-handed baryons. Finally, the interactions must be out of thermal equilibrium, since otherwise CPT symmetry would assure compensation between processes increasing and decreasing the baryon number.
Currently, there is no experimental evidence of particle interactions where the conservation of baryon number is broken perturbatively: this would appear to suggest that all observed particle reactions have equal baryon number before and after. Mathematically, the commutator of the baryon number quantum operator with the (perturbative) Standard Model hamiltonian is zero: [B, H] = 0. However, the Standard Model is known to violate the conservation of baryon number only non-perturbatively: a global U(1) anomaly. To account for baryon violation in baryogenesis, such events (including proton decay) can occur in Grand Unification Theories (GUTs) and supersymmetric (SUSY) models via hypothetical massive bosons such as the X boson.
CP-symmetry violation
The second condition for generating baryon asymmetry—violation of charge-parity symmetry—is that a process is able to happen at a different rate to its antimatter counterpart. In the Standard Model, CP violation appears as a complex phase in the quark mixing matrix of the weak interaction. There may also be a non-zero CP-violating phase in the neutrino mixing matrix, but this is currently unmeasured. The first in a series of basic physics principles to be violated was parity through Chien-Shiung Wu's experiment. This led to CP violation being verified in the 1964 Fitch–Cronin experiment with neutral kaons, which resulted in the 1980 Nobel Prize in Physics (direct CP violation, that is violation of CP symmetry in a decay process, was discovered later, in 1999). Due to CPT symmetry, violation of CP symmetry demands violation of time inversion symmetry, or T-symmetry. Despite the allowance for CP violation in the Standard Model, it is insufficient to account for the observed baryon asymmetry of the universe (BAU) given the limits on baryon number violation, meaning that beyond-Standard Model sources are needed.
A possible new source of CP violation was found at the Large Hadron Collider (LHC) by the LHCb collaboration during the first three years of LHC operations (beginning March 2010). The experiment analyzed the decays of two particles, the bottom Lambda (Λb0) and its antiparticle, and compared the distributions of decay products. The data showed an asymmetry of up to 20% of CP-violation sensitive quantities, implying a breaking of CP-symmetry. This analysis will need to be confirmed by more data from subsequent runs of the LHC.
One method to search for additional CP-violation is the search for electric dipole moments of fundamental or composite particles. The existence of electric dipole moments in equilibrium states requires violation of T-symmetry. Thus, finding a non-zero electric dipole moment would imply the existence of T-violating interactions in the vacuum corrections to the measured particle. So far, all measurements are consistent with zero, putting strong bounds on the properties of the yet unknown new CP-violating interactions.
Interactions out of thermal equilibrium
In the out-of-equilibrium decay scenario, the last condition states that the rate of a reaction which generates baryon-asymmetry must be less than the rate of expansion of the universe. In this situation the particles and their corresponding antiparticles do not achieve thermal equilibrium due to rapid expansion decreasing the occurrence of pair-annihilation.
Other explanations
Regions of the universe where antimatter dominates
Another possible explanation of the apparent baryon asymmetry is that matter and antimatter are essentially separated into different, widely distant regions of the universe. The formation of antimatter galaxies was originally thought to explain the baryon asymmetry, as from a distance, antimatter atoms are indistinguishable from matter atoms; both produce light (photons) in the same way. Along the boundary between matter and antimatter regions, however, annihilation (and the subsequent production of gamma radiation) would be detectable, depending on its distance and the density of matter and antimatter. Such boundaries, if they exist, would likely lie in deep intergalactic space. The density of matter in intergalactic space is reasonably well established at about one atom per cubic meter. Assuming this is a typical density near a boundary, the gamma ray luminosity of the boundary interaction zone can be calculated. No such zones have been detected, but 30 years of research have placed bounds on how far they might be. On the basis of such analyses, it is now deemed unlikely that any region within the observable universe is dominated by antimatter.
Mirror anti-universe
The state of the universe, as it is, does not violate the CPT symmetry, because the Big Bang could be considered as a double sided event, both classically and quantum mechanically, consisting of a universe-antiuniverse pair. This means that this universe is the charge (C), parity (P) and time (T) image of the anti-universe. This pair emerged from the Big Bang epochs not directly into a hot, radiation-dominated era. The antiuniverse would flow back in time from the Big Bang, becoming bigger as it does so, and would be also dominated by antimatter. Its spatial properties are inverted if compared to those in our universe, a situation analogous to creating electron–positron pairs in a vacuum. This model, devised by physicists from the Perimeter Institute for Theoretical Physics in Canada, proposes that temperature fluctuations in the cosmic microwave background (CMB) are due to the quantum-mechanical nature of space-time near the Big Bang singularity. This means that a point in the future of our universe and a point in the distant past of the anti-universe would provide fixed classical points, while all possible quantum-based permutations would exist in between. Quantum uncertainty causes the universe and antiuniverse to not be exact mirror images of each other.
This model has not shown if it can reproduce certain observations regarding the inflation scenario, such as explaining the uniformity of the cosmos on large scales. However, it provides a natural and straightforward explanation for dark matter. Such a universe-antiuniverse pair would produce large numbers of superheavy neutrinos, also known as sterile neutrinos. These neutrinos might also be the source of recently observed bursts of high-energy cosmic rays.
Baryon asymmetry parameter
The challenges to the physics theories are then to explain how to produce the predominance of matter over antimatter, and also the magnitude of this asymmetry. An important quantifier is the asymmetry parameter,

η = (nB − n̄B) / nγ.

This quantity relates the overall number density difference between baryons and antibaryons (nB and n̄B, respectively) and the number density of cosmic background radiation photons nγ.
According to the Big Bang model, matter decoupled from the cosmic background radiation (CBR) at a temperature of roughly 3000 kelvin, corresponding to an average kinetic energy of about 0.3 eV. After the decoupling, the total number of CBR photons remains constant. Therefore, due to space-time expansion, the photon density decreases. The photon density at equilibrium temperature T, per cubic centimeter, is given by

nγ = (2ζ(3)/π²) · (kBT/ħc)³,

with kB as the Boltzmann constant, ħ as the Planck constant divided by 2π, c as the speed of light in vacuum, and ζ(3) as Apéry's constant. At the current CBR photon temperature of 2.725 K, this corresponds to a photon density nγ of around 411 CBR photons per cubic centimeter.
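The ~411 photons per cubic centimeter figure can be cross-checked directly from the blackbody photon-density formula. A minimal sketch, assuming the standard CODATA constant values and the present CMB temperature T = 2.725 K (standard values, not all stated in the text):

```python
import math

# Cross-check: n_gamma = (2*zeta(3)/pi^2) * (kB*T / (hbar*c))^3 at T = 2.725 K
kB = 1.380649e-23           # Boltzmann constant, J/K
hbar = 1.054571817e-34      # reduced Planck constant, J*s
c = 2.99792458e8            # speed of light, m/s
zeta3 = 1.2020569031595943  # Apery's constant, zeta(3)
T = 2.725                   # current CBR photon temperature, K

n_gamma_m3 = (2 * zeta3 / math.pi**2) * (kB * T / (hbar * c))**3
n_gamma_cm3 = n_gamma_m3 * 1e-6
print(f"{n_gamma_cm3:.1f} photons per cm^3")  # ~410.5, matching the ~411 quoted
```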
Therefore, the asymmetry parameter η, as defined above, is not the "good" parameter. Instead, the preferred asymmetry parameter uses the entropy density s,

ηs = (nB − n̄B) / s,

because the entropy density of the universe remained reasonably constant throughout most of its evolution. The entropy density is

s = S/V = (p + ρ) / T,

with p and ρ as the pressure and density from the energy density tensor Tμν, and g* as the effective number of degrees of freedom for "massless" particles (inasmuch as mc² ≪ kBT holds) at temperature T,

g*(T) = Σbosons gi (Ti/T)³ + (7/8) Σfermions gj (Tj/T)³,

for bosons and fermions with gi and gj degrees of freedom at temperatures Ti and Tj respectively. Presently, s = 7.04 nγ.
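The present-day entropy-to-photon ratio can be recovered numerically. A hedged sketch, assuming the standard post-neutrino-decoupling entropy degrees of freedom g*S = 2 + (7/8)·6·(4/11) = 43/11 (photons plus three neutrino species at Tν = (4/11)^(1/3) Tγ); these inputs are standard cosmology values, not all stated in the article:

```python
import math

# In natural units:  s = (2*pi^2/45) * g*S * T^3,   n_gamma = (2*zeta(3)/pi^2) * T^3
# so the ratio s/n_gamma is independent of T.
zeta3 = 1.2020569031595943
g_star_s = 2 + (7 / 8) * 6 * (4 / 11)   # = 43/11, assumed standard value

ratio = (2 * math.pi**2 / 45) * g_star_s / (2 * zeta3 / math.pi**2)
print(f"s / n_gamma = {ratio:.2f}")     # ~7.04
```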
See also
Baryogenesis
CP violation
List of unsolved problems in physics
References
Concepts in astrophysics
Antimatter
Asymmetry
Unsolved problems in physics | Baryon asymmetry | [
"Physics"
] | 2,172 | [
"Symmetry",
"Antimatter",
"Concepts in astrophysics",
"Unsolved problems in physics",
"Astrophysics",
"Asymmetry",
"Matter"
] |
2,223,301 | https://en.wikipedia.org/wiki/Bill%20Odenkirk | William Leonard Odenkirk (born October 13, 1965) is an American comedy writer.
Biography
Odenkirk was born in Naperville, Illinois. He is the younger brother of American actor and comedian Bob Odenkirk, and worked as a writer, producer and actor on the HBO sketch comedy TV show Mr. Show with Bob and David, which featured his brother as co-star. Odenkirk went on to write for Tenacious D, Futurama, and Disenchantment. He has written and executive produced episodes of The Simpsons. He holds a PhD in inorganic chemistry from the University of Chicago.
Writing credits
Tenacious D episodes
He is credited with writing the following episodes, along with Jack Black, David Cross, Kyle Gass, Tom Gianas, and Bob Odenkirk:
"Death of a Dream"
"The Greatest Song in the World"
"The Fan"
"Road Gig"
Futurama episodes
He is credited with writing the following episodes:
"How Hermes Requisitioned His Groove Back" (2000)
"A Tale of Two Santas" (2001)
"Insane in the Mainframe" (2001)
"Kif Gets Knocked Up a Notch" (2003)
"The Farnsworth Parabox" (2003)
"Planet Espresso" (2024)
The Simpsons episodes
He is credited with writing the following episodes:
"Treehouse of Horror XV" (all three segments) (2004)
"The Seven-Beer Snitch" (2005)
"The Mook, the Chef, the Wife and Her Homer" (2006)
"Crook and Ladder" (2007)
"Double, Double, Boy in Trouble" (2008)
"Million Dollar Maybe" (2010)
"Love Is a Many Strangled Thing" (2011)
"Adventures in Baby-Getting" (2012)
"Pulpit Friction" (2013)
"Super Franchise Me" (2014)
"To Courier with Love" (2016)
"The Last Traction Hero" (2016)
"Grampy Can Ya Hear Me" (2017)
"Forgive and Regret" (2018)
"The Fat Blue Line” (2019)
Disenchantment episodes
Odenkirk is credited with writing the following episodes:
"Steamland Confidential" (2021)
"Love Is Hell" (2022)
Guru Nation project
The untitled series is in development for Paramount+ and is being developed by Bob Odenkirk and David Cross.
References
External links
1965 births
Television producers from Illinois
American television writers
American male television writers
Living people
Writers from Naperville, Illinois
University of Chicago alumni
Primetime Emmy Award winners
Comedians from Illinois
Inorganic chemists
Screenwriters from Illinois
21st-century American comedians
Scientists from Illinois
20th-century American comedians
20th-century American male writers
21st-century American male writers
20th-century American writers | Bill Odenkirk | [
"Chemistry"
] | 571 | [
"Inorganic chemists"
] |
2,223,302 | https://en.wikipedia.org/wiki/Zorica%20Panti%C4%87 | Zorica Pantić, also known as Zorica Pantić-Tanner, born 1951 in Yugoslavia, is a professor of electrical engineering and past president of Wentworth Institute of Technology in Boston.
Pantić was previously the founding dean of the College of Engineering at the University of Texas at San Antonio, and director of the School of Engineering at San Francisco State University. She served as president of Wentworth Institute of Technology from 2005 to 2019.
Early life and education
Zorica Pantić received her B.S., M.S., and Ph.D. degrees in electrical engineering from the University of Niš, Yugoslavia (Serbia), in 1975, 1978, and 1982, respectively. She has 30 years of academic and teaching experience. She served on the engineering faculty of the University of Niš (1975–1984), San Francisco State University (1989–2001), and the University of Texas at San Antonio (2001–2004). She was a Fulbright Fellow and a visiting scientist at the University of Illinois at Urbana-Champaign from 1984 to 1989.
Career
Affiliations
She is a senior member of IEEE and served on various committees of the IEEE Electromagnetic Compatibility Society (EMC-S) until 2004. She served on the EMC-S Board of Directors and as chair, vice-chair, treasurer, and secretary of the Santa Clara Valley EMC-S Chapter. She is also a member of the American Society for Engineering Education and serves on the ASEE Projects Board, President's Award Committee, and Contact Committee.
She is a member of the IEEE Women in Engineering, Society of Women Engineers, and American Society for Higher Education, as well as a member of the Engineering Deans Council and the EDC Public Policy Committee. She served on various National Academy of Engineering panels and committees.
Pantić received the Woman Entrepreneur of the Year award from the San Antonio Women Chamber of Commerce and is a graduate of the Leadership America program.
She is included in the Encyclopedia of the National Diaspora, edited by chronicler Ivan Kalauzović Ivanus.
Research publications
Pantić has published more than 80 journal and conference papers. Her research areas have included uniform antennas, microwave transmission lines, the finite element method, and electromagnetic emissions.
At University of Texas at San Antonio (UTSA)
As the engineering dean at UTSA, she spearheaded the College of Engineering's and UTSA's efforts to become a flagship university in the state of Texas and a top-tier research university in the U.S.
During her tenure, the college started three new Ph.D. engineering programs (biomedical, electrical, and environmental) and one M.S. program (computer engineering). The college created a new Department of Biomedical Engineering and a Center for Response and Security Engineering and Technology, doubled the number of faculty, and increased its research funding tenfold to $7 million in active grants.
Pantić also secured $2.5 million in federal funding to establish a Material Science and Engineering Laboratory at the former Kelly Air Force Base. Through strategic partnerships with various state and national agencies, national companies and small businesses, she raised more than $5 million in various donations and equipment grants.
She revived relationships with engineering alumni and was instrumental in securing a $250,000 endowment donation, the single largest alumni gift to the college and UTSA. In her last four years, the college increased its enrollment by 75%, being especially effective in attracting female students (83% enrollment increase) and minorities (50% of students are Hispanic).
At San Francisco State University
While at SFSU, Pantić improved the engineering programs in quality, size and visibility, and, as a result, for the first time in SFSU history, the programs were ranked among the top 50 undergraduate programs by U.S. News & World Report.
She established a Partnership for Engineering Education that resulted in a 30 percent enrollment increase and played a crucial role in shaping and bringing to life a partnership with a neighboring community college to offer upper-division engineering courses there.
This project serves as a blueprint for cooperation between the 23-campus California State University (CSU) system, of which San Francisco State is a member, and California's community colleges. Through a partnership with the local chapter of the Institute of Electrical and Electronics Engineers (IEEE) and a major grant from the National Science Foundation, she established a Center for Applied Electromagnetics that supports undergraduate and graduate research.
Pantić was active in state-level fundraising for engineering programs. She worked with fellow CSU Engineering deans on the successful $10-million California Workforce Initiative to support strategic disciplines such as agriculture, biotechnology, computer science, engineering, and nursing. She also served on the Executive Committee of the Texas Engineering and Technology Consortium, a private-public partnership that raised $8 million to increase the number of engineering and computer science graduates in the state of Texas.
President of Wentworth Institute
Pantić's appointment as Wentworth Institute of Technology's fourth president was announced on June 8, 2005. She took office August 1, 2005 and was the first female engineer to head an institute of technology in the United States. She was formally installed as president of Wentworth on April 5, 2006.
Pantić retired as President of Wentworth on May 31, 2019, after 14 years at its helm.
References
Literature
External links
Wentworth announcements: 1 and 2
UTSA Today
San Antonio Express-News
Heads of universities and colleges in the United States
Living people
American people of Serbian descent
1951 births
Senior members of the IEEE
Electrical engineers
American women engineers
Serbian women engineers
Wentworth Institute of Technology
University of Niš alumni
21st-century women engineers
Women heads of universities and colleges
21st-century American women
Members of the Society of Women Engineers | Zorica Pantić | [
"Engineering"
] | 1,148 | [
"Electrical engineering",
"Electrical engineers"
] |
2,223,535 | https://en.wikipedia.org/wiki/Mass%20flow%20rate | In physics and engineering, mass flow rate is the rate at which mass of a substance changes over time. Its unit is kilogram per second (kg/s) in SI units, and slug per second or pound per second in US customary units. The common symbol is ṁ (pronounced "m-dot"), although sometimes μ (Greek lowercase mu) is used.
Sometimes, mass flow rate as defined here is termed "mass flux" or "mass current".
Confusingly, "mass flow" is also a term for mass flux, the rate of mass flow per unit of area.
Formulation
Mass flow rate is defined by the limit

ṁ = lim Δt→0 Δm/Δt = dm/dt,

i.e., the flow of mass m through a surface per unit time t.

The overdot in ṁ is Newton's notation for a time derivative. Since mass is a scalar quantity, the mass flow rate (the time derivative of mass) is also a scalar quantity. The change in mass is the amount that flows after crossing the boundary for some time duration, not the initial amount of mass at the boundary minus the final amount at the boundary, since the change in mass flowing through the area would be zero for steady flow.
Alternative equations
Mass flow rate can also be calculated by

ṁ = ρ · V̇ = ρ · v · A = jm · A,

where

ρ = mass density of the fluid,
V̇ = volume flow rate,
v = flow velocity of the mass elements,
A = cross-sectional vector area/surface,
jm = mass flux.
The above equation is only true for a flat, plane area. In general, including cases where the area is curved, the equation becomes a surface integral:

ṁ = ∬A ρ v · dA.
The area required to calculate the mass flow rate is real or imaginary, flat or curved, either as a cross-sectional area or a surface, e.g. for substances passing through a filter or a membrane, the real surface is the (generally curved) surface area of the filter, macroscopically - ignoring the area spanned by the holes in the filter/membrane. The spaces would be cross-sectional areas. For liquids passing through a pipe, the area is the cross-section of the pipe, at the section considered. The vector area is a combination of the magnitude of the area through which the mass passes, A, and a unit vector normal to the area, n̂. The relation is A = A n̂, with A the vector area and A its magnitude.
The reason for the dot product is as follows. The only mass flowing through the cross-section is the amount normal to the area, i.e. parallel to the unit normal. This amount is

ṁ = ρ v A cos θ,

where θ is the angle between the unit normal n̂ and the velocity of the mass elements. The amount passing through the cross-section is reduced by the factor cos θ; as θ increases, less mass passes through. All mass which passes in tangential directions to the area, that is perpendicular to the unit normal, doesn't actually pass through the area, so the mass passing through the area is zero. This occurs when θ = π/2:

ṁ = ρ v A cos(π/2) = 0.
These results are equivalent to the equation containing the dot product. Sometimes these equations are used to define the mass flow rate.
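The role of the cos θ factor is easy to see numerically. A small sketch using made-up example values (water-like density through an arbitrary 0.01 m² section):

```python
import math

# Mass flow rate through a flat cross-section: m_dot = rho * v * A * cos(theta).
# All numbers below are illustrative example values, not from the article.
rho = 1000.0   # kg/m^3, roughly water
v = 2.0        # m/s, flow speed
A = 0.01       # m^2, section area

for theta_deg in (0, 60, 90):
    theta = math.radians(theta_deg)
    m_dot = rho * v * A * math.cos(theta)
    print(f"theta = {theta_deg:3d} deg -> m_dot = {m_dot:5.1f} kg/s")
# theta = 0 gives the full 20 kg/s; theta = 90 deg (flow tangential to the
# surface) gives zero, as stated in the text
```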
Considering flow through porous media, a special quantity, superficial mass flow rate, can be introduced. It is related with superficial velocity, vs, with the following relationship:

ṁs = vs · ρ.
The quantity can be used in particle Reynolds number or mass transfer coefficient calculation for fixed and fluidized bed systems.
Usage
In the elementary form of the continuity equation for mass, in hydrodynamics:

ρ1 v1 A1 = ρ2 v2 A2.
In elementary classical mechanics, mass flow rate is encountered when dealing with objects of variable mass, such as a rocket ejecting spent fuel. Often, descriptions of such objects erroneously invoke Newton's second law by treating both the mass and the velocity as time-dependent and then applying the derivative product rule. A correct description of such an object requires the application of Newton's second law to the entire, constant-mass system consisting of both the object and its ejected mass.
Mass flow rate can be used to calculate the energy flow rate of a fluid:

Ė = ṁ u,

where u is the unit mass energy of a system.
Energy flow rate has SI units of kilojoule per second or kilowatt.
See also
Continuity equation
Fluid dynamics
Mass flow controller
Mass flow meter
Mass flux
Orifice plate
Standard cubic centimetres per minute
Thermal mass flow meter
Volumetric flow rate
Notes
References
Fluid dynamics
Temporal rates
Mass
Mechanical quantities | Mass flow rate | [
"Physics",
"Chemistry",
"Mathematics",
"Engineering"
] | 815 | [
"Temporal quantities",
"Scalar physical quantities",
"Mechanical quantities",
"Physical quantities",
"Chemical engineering",
"Quantity",
"Mass",
"Temporal rates",
"Size",
"Mechanics",
"Piping",
"Wikipedia categories named after physical quantities",
"Matter",
"Fluid dynamics"
] |
2,223,645 | https://en.wikipedia.org/wiki/Somos%27%20quadratic%20recurrence%20constant | In mathematical analysis and number theory, Somos' quadratic recurrence constant or simply Somos' constant is a constant defined as an expression of infinitely many nested square roots. It arises when studying the asymptotic behaviour of a certain sequence and also in connection to the binary representations of real numbers between zero and one. The constant is named after Michael Somos. It is defined by:

σ = √(1·√(2·√(3·√(4⋯)))),

which gives a numerical value of approximately:

σ ≈ 1.661687949633594… .
Sums and products
Somos' constant can be alternatively defined via the following infinite product:

σ = ∏_{k=1}^∞ k^(1/2^k) = 1^(1/2) · 2^(1/4) · 3^(1/8) · 4^(1/16) ⋯

This can be easily rewritten into the far more quickly converging product representation

σ = (2/1)^(1/2) · (3/2)^(1/4) · (4/3)^(1/8) · (5/4)^(1/16) ⋯

which can then be compactly represented in infinite product form by:

σ = ∏_{k=1}^∞ ((k+1)/k)^(1/2^k)
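Both product forms can be evaluated numerically; a short sketch comparing the slowly and quickly converging products (truncation depths are arbitrary choices):

```python
def somos_slow(n):
    # sigma via prod k**(1/2**k): converges slowly
    s = 1.0
    for k in range(1, n + 1):
        s *= k ** (0.5 ** k)
    return s

def somos_fast(n):
    # sigma via prod ((k+1)/k)**(1/2**k): converges much faster
    s = 1.0
    for k in range(1, n + 1):
        s *= ((k + 1) / k) ** (0.5 ** k)
    return s

print(somos_fast(60))  # ≈ 1.6616879496...
# after only 10 factors, the fast form is already much closer to the limit
print(abs(somos_slow(10) - somos_fast(60)), abs(somos_fast(10) - somos_fast(60)))
```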
Another product representation is given by:
Expressions for include:
Integrals
Integrals for are given by:
Other formulas
The constant σ arises when studying the asymptotic behaviour of the sequence

g_0 = 1,  g_n = n·g_{n−1}²,  n ≥ 1,

with first few terms 1, 1, 2, 12, 576, 1658880, ... . This sequence can be shown to have asymptotic behaviour as follows:

g_n ∼ σ^(2^n)/n  as n → ∞.
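The recurrence can be generated directly with exact integer arithmetic; a minimal sketch reproducing the first terms listed above:

```python
def somos_sequence(count):
    """First `count` terms of the recurrence g_0 = 1, g_n = n * g_{n-1}**2."""
    g = [1]
    for n in range(1, count):
        g.append(n * g[-1] ** 2)
    return g

print(somos_sequence(6))  # [1, 1, 2, 12, 576, 1658880]
```

The terms grow doubly exponentially, so only small indices are practical to print in full.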
Guillera and Sondow give a representation in terms of the derivative of the Lerch transcendent :
If one defines the Euler-constant function (which gives Euler's constant for ) as:
one has:
Universality
One may define a "continued binary expansion" for all real numbers in the set (0, 1], similarly to the decimal expansion or simple continued fraction expansion. This is done by considering the unique base-2 representation for a number which does not contain an infinite tail of 0's (for example write one half as 0.0111… instead of 0.1). Then define a sequence (a_n) which gives the difference in positions of the 1's in this base-2 representation.

For example, for the fractional part of π we have:

π − 3 = 0.00100100001111110110101010001… (in base 2)

The first 1 occurs on position 3 after the radix point. The next 1 appears three places after the first one, the third 1 appears five places after the second one, etc. By continuing in this manner, we obtain the sequence (3, 3, 5, 1, 1, 1, 1, 1, 2, …).
This gives a bijective map, such that every real number x in (0, 1] is uniquely represented by a sequence of positive integers (a_1, a_2, a_3, …).

It can now be proven that for almost all numbers x in (0, 1] the limit of the geometric mean of the terms a_n converges to Somos' constant. That is, for almost all numbers in that interval we have:

lim_{n→∞} (a_1 a_2 ⋯ a_n)^(1/n) = σ
Somos' constant is universal for the "continued binary expansion" of numbers in the same sense that Khinchin's constant is universal for the simple continued fraction expansions of numbers .
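The gap sequence can be read off a double-precision value directly, since floats are stored in binary: repeatedly doubling and truncating extracts the exact stored bits (only the first ~50 bits of the true number are available, so the geometric mean here is a rough finite sample, not the limit). A sketch:

```python
import math

def continued_binary_gaps(x, n_bits=50):
    """Gaps between consecutive 1-bits of x in (0, 1), read from its binary expansion."""
    gaps, last = [], 0
    f = x
    for pos in range(1, n_bits + 1):
        f *= 2
        bit = int(f)
        f -= bit
        if bit:
            gaps.append(pos - last)
            last = pos
    return gaps

gaps = continued_binary_gaps(math.pi - 3)
print(gaps[:9])  # [3, 3, 5, 1, 1, 1, 1, 1, 2]
# for almost all x, the geometric mean of the gaps tends toward Somos' constant
print(math.exp(math.fsum(map(math.log, gaps)) / len(gaps)))
```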
Generalizations
The generalized Somos' constants may be given by:

σ_t = ∏_{k=1}^∞ k^(1/t^k) = 1^(1/t) · 2^(1/t²) · 3^(1/t³) ⋯

for t > 1.
The following series holds:
We also have a connection to the Euler-constant function:
and the following limit, where is Euler's constant:
See also
Euler's constant
Khinchin's constant
Binary number
Ergodic theory
List of mathematical constants
References
Mathematical constants
Infinite products | Somos' quadratic recurrence constant | [
"Mathematics"
] | 586 | [
"Mathematical analysis",
"Mathematical objects",
"Infinite products",
"nan",
"Mathematical constants",
"Numbers"
] |
2,223,912 | https://en.wikipedia.org/wiki/Digi%20International | Digi International, Inc. is an American Industrial Internet of Things (IIoT) technology company based in Hopkins, Minnesota.
History
The company was founded in 1985, and went public as Digi International in 1989.
The company initially offered intelligent ISA/PCI boards (the 'DigiBoard') with multiple asynchronous serial interfaces for PCs.
Today, multiport serial boards are still sold, but the company focuses on embedded and external network (wired and wireless) communications, scalable USB products, radio modems and embedded modules based on LTE (4G) communications platforms.
Patents
Digi International settled a patent infringement lawsuit with U.S. Ethernet Innovations LLC for $1.525 million in April 2013.
In 2022, NimbeLink, a subsidiary of Airgain, filed a lawsuit alleging certain Digi products infringed on NimbeLink's Skywire cellular modems. The U.S. District Court for the District of Minnesota found all patent claims by Nimbelink to be invalid in May 2024.
Acquisitions
1991: Arnet Corp
1993: Stargate and Milan Technologies
1995: Lan Access
1997: Aetherworks
1998: Illinois-based Central Data Corp and German-based open systems provider ITK International Inc.
2000: Inside Out Networks, a USB connectivity manufacturer in Austin, Texas. The Inside Out brand was phased out in 2006.
2001: French company Decision Europe, including the Xcell Technology brand.
2002: Net Silicon, fabless manufacturer of ARM-based microprocessors.
2003: Embrace Networks
2005: Rabbit Semiconductor, a manufacturer of Z180 based silicon, embedded modules and single-board computers. In the same year, the company acquired processor module manufacturers FS Forth-Systeme GmbH of Breisach, Germany, and Logroño, Spain-based Sistemas Embebidos S.A.
2006: Wireless technology company MaxStream, for $38.8M.
2008: Cellular router manufacturer Sarian Systems Ltd. for $30.5 million. In the same year, the company also acquired wireless technology design services company Spectrum Design Solutions Inc.
2009: Mobiapps, fabless manufacturer of satellite modems on the Orbcomm satellite network.
2012: Cloud computing services provider Etherios.
2015: Bluenica, a Toronto-based company focused on temperature monitoring of perishable goods in the food industry.
2016: FreshTemps, temperature monitoring and task management for the food industry.
2017: SMART Temps, LLC, a provider of real-time food service temperature management for restaurant, grocery, education and hospital settings as well as real-time temperature management for healthcare.
2017: TempAlert, a provider of temperature and task management for retail pharmacy, food service, and industrial applications.
2018: Accelerated Concepts, a provider of secure, enterprise-grade, cellular (LTE) networking equipment for primary and backup connectivity.
2019: Opengear.
2021: Haxiot, a provider of wireless connection services.
2021: Ctek, a company specializing in remote monitoring and industrial controls.
2021: Ventus Holdings.
References
Computer companies of the United States
Computer hardware companies
Wireless sensor network
Sensors
Computer peripheral companies
Networking companies of the United States
American companies established in 1985
Companies based in Minnetonka, Minnesota
Companies listed on the Nasdaq | Digi International | [
"Technology",
"Engineering"
] | 700 | [
"Computer hardware companies",
"Wireless networking",
"Wireless sensor network",
"Measuring instruments",
"Sensors",
"Computers"
] |
2,223,940 | https://en.wikipedia.org/wiki/Strong%20cryptography | Strong cryptography or cryptographically strong are general terms used to designate the cryptographic algorithms that, when used correctly, provide a very high (usually insurmountable) level of protection against any eavesdropper, including the government agencies. There is no precise definition of the boundary line between the strong cryptography and (breakable) weak cryptography, as this border constantly shifts due to improvements in hardware and cryptanalysis techniques. These improvements eventually place the capabilities once available only to the NSA within the reach of a skilled individual, so in practice there are only two levels of cryptographic security, "cryptography that will stop your kid sister from reading your files, and cryptography that will stop major governments from reading your files" (Bruce Schneier).
Strong cryptography algorithms have high security strength, for practical purposes usually defined as the number of bits in the key. For example, the United States government, when dealing with export control of encryption, considered any implementation of a symmetric encryption algorithm with a key length above 56 bits, or its public key equivalent, to be strong and thus potentially subject to export licensing. To be strong, an algorithm needs to have a sufficiently long key and be free of known mathematical weaknesses, as exploitation of these effectively reduces the key size. At the beginning of the 21st century, the typical security strength of strong symmetric encryption algorithms is 128 bits (slightly lower values can still be strong, but usually there is little technical gain in using smaller key sizes).
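The gap between 56-bit and 128-bit keys can be made concrete with a toy brute-force accounting (the 10⁹ keys-per-second rate is an arbitrary illustrative assumption, not a claim about real hardware):

```python
def keyspace(bits):
    # number of keys an exhaustive search must cover in the worst case
    return 2 ** bits

RATE = 10 ** 9  # hypothetical keys tried per second
print(keyspace(56))                   # 72057594037927936
print(keyspace(56) / RATE / 86400)    # ≈ 834 days to sweep a 56-bit keyspace at that rate
print(keyspace(128) // keyspace(56))  # 2**72 times more work for 128-bit keys
```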
Demonstrating the resistance of any cryptographic scheme to attack is a complex matter, requiring extensive testing and reviews, preferably in a public forum. Good algorithms and protocols are required (similarly, good materials are required to construct a strong building), but good system design and implementation is needed as well: "it is possible to build a cryptographically weak system using strong algorithms and protocols" (just like the use of good materials in construction does not guarantee a solid structure). Many real-life systems turn out to be weak when strong cryptography is not used properly, for example, when random nonces are reused. A successful attack might not even involve the algorithm at all: if the key is generated from a password, guessing a weak password is easy and does not depend on the strength of the cryptographic primitives. A user can become the weakest link in the overall picture, for example, by sharing passwords and hardware tokens with colleagues.
Background
The level of expense required for strong cryptography originally restricted its use to government and military agencies. Until the middle of the 20th century, the process of encryption required a lot of human labor, and errors (which prevented decryption) were very common, so only a small share of written information could be encrypted. The US government, in particular, was able to keep a monopoly on the development and use of cryptography in the US into the 1960s. In the 1970s, the increased availability of powerful computers and unclassified research breakthroughs (the Data Encryption Standard, the Diffie-Hellman and RSA algorithms) made strong cryptography available for civilian use. The mid-1990s saw the worldwide proliferation of knowledge and tools for strong cryptography. By the 21st century the technical limitations were gone, although the majority of communications were still unencrypted. At the same time, the cost of building and running systems with strong cryptography became roughly the same as that for weak cryptography.
The use of computers changed the process of cryptanalysis, famously with Bletchley Park's Colossus. But just as the development of digital computers and electronics helped in cryptanalysis, it also made possible much more complex ciphers. It is typically the case that use of a quality cipher is very efficient, while breaking it requires an effort many orders of magnitude larger - making cryptanalysis so inefficient and impractical as to be effectively impossible.
Cryptographically strong algorithms
This term "cryptographically strong" is often used to describe an encryption algorithm, and implies, in comparison to some other algorithm (which is thus cryptographically weak), greater resistance to attack. But it can also be used to describe hashing and unique identifier and filename creation algorithms. See for example the description of the Microsoft .NET runtime library function Path.GetRandomFileName. In this usage, the term means "difficult to guess".
An encryption algorithm is intended to be unbreakable (in which case it is as strong as it can ever be), but might be breakable (in which case it is as weak as it can ever be) so there is not, in principle, a continuum of strength as the idiom would seem to imply: Algorithm A is stronger than Algorithm B which is stronger than Algorithm C, and so on. The situation is made more complex, and less subsumable into a single strength metric, by the fact that there are many types of cryptanalytic attack and that any given algorithm is likely to force the attacker to do more work to break it when using one attack than another.
There is only one known unbreakable cryptographic system, the one-time pad, which is not generally possible to use because of the difficulties involved in exchanging one-time pads without their being compromised. So any encryption algorithm can be compared to the perfect algorithm, the one-time pad.
The usual sense in which this term is (loosely) used, is in reference to a particular attack, brute force key search — especially in explanations for newcomers to the field. Indeed, with this attack (always assuming keys to have been randomly chosen), there is a continuum of resistance depending on the length of the key used. But even so there are two major problems: many algorithms allow use of different length keys at different times, and any algorithm can forgo use of the full key length possible. Thus, Blowfish and RC5 are block cipher algorithms whose design specifically allowed for several key lengths, and who cannot therefore be said to have any particular strength with respect to brute force key search. Furthermore, US export regulations restrict key length for exportable cryptographic products and in several cases in the 1980s and 1990s (e.g., famously in the case of Lotus Notes' export approval) only partial keys were used, decreasing 'strength' against brute force attack for those (export) versions. More or less the same thing happened outside the US as well, as for example in the case of more than one of the cryptographic algorithms in the GSM cellular telephone standard.
The term is commonly used to convey that some algorithm is suitable for some task in cryptography or information security, but also resists cryptanalysis and has no, or fewer, security weaknesses. Tasks are varied, and might include:
generating randomness
encrypting data
providing a method to ensure data integrity
Cryptographically strong would seem to mean that the described method has some kind of maturity, perhaps even approved for use against different kinds of systematic attacks in theory and/or practice. Indeed, that the method may resist those attacks long enough to protect the information carried (and what stands behind the information) for a useful length of time. But due to the complexity and subtlety of the field, neither is almost ever the case. Since such assurances are not actually available in real practice, sleight of hand in language which implies that they are will generally be misleading.
There will always be uncertainty as advances (e.g., in cryptanalytic theory or merely affordable computer capacity) may reduce the effort needed to successfully use some attack method against an algorithm.
In addition, actual use of cryptographic algorithms requires their encapsulation in a cryptosystem, and doing so often introduces vulnerabilities which are not due to faults in an algorithm. For example, essentially all algorithms require random choice of keys, and any cryptosystem which does not provide such keys will be subject to attack regardless of any attack resistant qualities of the encryption algorithm(s) used.
Legal issues
Widespread use of encryption increases the costs of surveillance, so government policies aim to regulate the use of strong cryptography. In the 2000s, the effect of encryption on surveillance capabilities was limited by the ever-increasing share of communications going through global social media platforms, which did not use strong encryption and provided governments with the requested data. Murphy talks about a legislative balance that needs to be struck between government powers that are broad enough to keep up with quickly-evolving technology, yet sufficiently narrow for the public and overseeing agencies to understand the future use of the legislation.
USA
The initial response of the US government to the expanded availability of cryptography was to treat cryptographic research the same way atomic energy research is treated, i.e., "born classified", with the government exercising legal control over the dissemination of research results. This was quickly found to be impossible, and the efforts switched to control over deployment (export, as a prohibition on the deployment of cryptography within the US was not seriously considered).
The export control in the US historically uses two tracks:
military items (designated as "munitions", although in practice the items on the United States Munitions List do not match the common meaning of this word). The export of munitions is controlled by the Department of State. The restrictions for munitions are very tight, with individual export licenses specifying the product and the actual customer;
dual-use items ("commodities") need to be commercially available without excessive paperwork, so, depending on the destination, broad permissions can be granted for sales to civilian customers. The licensing for the dual-use items is provided by the Department of Commerce. The process of moving an item from the munition list to commodity status is handled by the Department of State.
Since the original applications of cryptography were almost exclusively military, it was placed on the munitions list. With the growth of civilian uses, dual-use cryptography was defined by cryptographic strength, with strong encryption remaining a munition in a similar way to guns (small arms are dual-use while artillery is of purely military value). This classification had obvious drawbacks: a major bank is arguably just as systemically important as a military installation, and restrictions on publishing strong cryptography code ran against the First Amendment, so after experimenting in 1993 with the Clipper chip (where the US government kept special decryption keys in escrow), in 1996 almost all cryptographic items were transferred to the Department of Commerce.
EU
The position of the EU, in comparison to the US, had always been tilting more towards privacy. In particular, EU had rejected the key escrow idea as early as 1997. European Union Agency for Cybersecurity (ENISA) holds the opinion that the backdoors are not efficient for the legitimate surveillance, yet pose great danger to the general digital security.
Five Eyes
The Five Eyes (post-Brexit) represent a group of states with similar views on the issues of security and privacy. The group might have enough heft to drive the global agenda on lawful interception. The efforts of this group are not entirely coordinated: for example, the 2019 demand for Facebook not to implement end-to-end encryption was not supported by either Canada or New Zealand, and did not result in a regulation.
Russia
In the 1990s, the president and government of Russia issued a few decrees formally banning uncertified cryptosystems from use by government agencies. A presidential decree of 1995 also attempted to ban individuals from producing and selling cryptography systems without an appropriate license, but it was not enforced in any way, as it was suspected to contradict the Russian Constitution of 1993 and was not a law per se. Decree No. 313, issued in 2012, further amended the previous ones, allowing the production and distribution of products with embedded cryptosystems without requiring a license as such, even though it declares some restrictions. France had quite strict regulations in this field, but has relaxed them in recent years.
Examples
Strong
PGP is generally considered an example of strong cryptography, with versions running under most popular operating systems and on various hardware platforms. The open source standard for PGP operations is OpenPGP, and GnuPG is an implementation of that standard from the FSF. However, the IDEA signature key in classical PGP is only 64 bits long, therefore no longer immune to collision attacks. OpenPGP therefore uses the SHA-2 hash function and AES cryptography.
The AES algorithm is considered strong after being selected in a lengthy selection process that was open and involved numerous tests.
Elliptic curve cryptography is another family of systems, based on the algebraic structure of elliptic curves over finite fields.
The latest version of TLS protocol (version 1.3), used to secure Internet transactions, is generally considered strong. Several vulnerabilities exist in previous versions, including demonstrated attacks such as POODLE. Worse, some cipher-suites are deliberately weakened to use a 40-bit effective key to allow export under pre-1996 U.S. regulations.
Weak
Examples that are not considered cryptographically strong include:
The DES, whose 56-bit keys allow attacks via exhaustive search.
Triple-DES (3DES / EDE3-DES) can be subject of the "SWEET32 Birthday attack"
Wired Equivalent Privacy which is subject to a number of attacks due to flaws in its design.
SSL v2 and v3. TLS 1.0 and TLS 1.1 are also deprecated now [see RFC7525] because of irreversible flaws which are still present by design, and because they do not provide elliptic-curve (EC) handshakes for ciphers, no modern cryptography, and no CCM/GCM cipher modes. TLS 1.0 and 1.1 are also disallowed by PCI DSS 3.2 for commercial business/banking implementations on web frontends. Only TLS 1.2 and TLS 1.3 are allowed and recommended; modern ciphers, handshakes and cipher modes must be used exclusively.
The MD5 and SHA-1 hash functions, no longer immune to collision attacks.
The RC4 stream cipher.
The 40-bit Content Scramble System used to encrypt most DVD-Video discs.
Almost all classical ciphers.
Most rotary ciphers, such as the Enigma machine.
DHE/EDHE is guessable/weak when using/re-using known default prime values on the server
Notes
References
Sources
See also
40-bit encryption
Cipher security summary
Export of cryptography
Comparison of cryptography libraries
FBI–Apple encryption dispute
Hash function security summary
Security level
Cryptography | Strong cryptography | [
"Mathematics",
"Engineering"
] | 2,990 | [
"Applied mathematics",
"Cryptography",
"Cybersecurity engineering"
] |
2,224,155 | https://en.wikipedia.org/wiki/List%20of%20IRC%20commands | This is a list of all Internet Relay Chat commands from RFC 1459, RFC 2812, and extensions added to major IRC daemons. Most IRC clients require commands to be preceded by a slash ("/"). Some commands are actually sent to IRC bots; these are treated by the IRC protocol as ordinary messages, not as /-commands.
Conventions used here: Angle brackets ("<" and ">") are used here to indicate a placeholder for some value, and are not a literal part of a command. Square brackets ("[" and "]") are used to indicate that a value is optional.
User commands
ADMIN
Syntax:
ADMIN [<target>]
Instructs the server to return information about the administrators of the server specified by <target>, where <target> is either a server or a user. If <target> is omitted, the server should return information about the administrators of the current server.
AWAY
Syntax:
AWAY [<message>]
Provides the server with a message to automatically send in reply to a PRIVMSG directed at the user, but not to a channel they are on.
If <message> is omitted, the away status is removed. Defined in RFC 1459.
CNOTICE
Syntax:
CNOTICE <nickname> <channel> :<message>
Sends a channel NOTICE message to <nickname> on <channel> that bypasses flood protection limits. The target nickname must be in the same channel as the client issuing the command, and the client must be a channel operator.
Normally an IRC server will limit the number of different targets a client can send messages to within a certain time frame to prevent spammers or bots from mass-messaging users on the network, however this command can be used by channel operators to bypass that limit in their channel. For example, it is often used by help operators that may be communicating with a large number of users in a help channel at one time.
This command is not formally defined in an RFC, but is in use by some IRC networks. Support is indicated in a RPL_ISUPPORT reply (numeric 005) with the CNOTICE keyword
CPRIVMSG
Syntax:
CPRIVMSG <nickname> <channel> :<message>
Sends a private message to <nickname> on <channel> that bypasses flood protection limits. The target nickname must be in the same channel as the client issuing the command, and the client must be a channel operator.
Normally an IRC server will limit the number of different targets a client can send messages to within a certain time frame to prevent spammers or bots from mass-messaging users on the network, however this command can be used by channel operators to bypass that limit in their channel. For example, it is often used by help operators that may be communicating with a large number of users in a help channel at one time.
This command is not formally defined in an RFC, but is in use by some IRC networks. Support is indicated in a RPL_ISUPPORT reply (numeric 005) with the CPRIVMSG keyword
CONNECT
Syntax:
CONNECT <target server> [<port> [<remote server>]] (RFC 1459)
CONNECT <target server> <port> [<remote server>] (RFC 2812)
Instructs the server <remote server> (or the current server, if <remote server> is omitted) to connect to <target server> on port <port>.
This command should only be available to IRC operators. Defined in RFC 1459; the <port> parameter became mandatory in RFC 2812.
DIE
Syntax:
DIE
Instructs the server to shut down. This command may only be issued by IRC server operators. Defined in RFC 2812.
ENCAP
Syntax:
:<source> ENCAP <destination> <subcommand> <parameters>
This command is for use by servers to encapsulate commands so that they will propagate across hub servers not yet updated to support them, and indicates the subcommand and its parameters should be passed unaltered to the destination, where it will be unencapsulated and parsed. This facilitates implementation of new features without a need to restart all servers before they are usable across the network.
ERROR
Syntax:
ERROR <error message>
This command is for use by servers to report errors to other servers. It is also used before terminating client connections. Defined in RFC 1459.
HELP
Syntax:
HELP
Requests the server to display the help file. This command is not formally defined in an RFC, but is in use by most major IRC daemons.
INFO
Syntax:
INFO [<target>]
Returns information about the <target> server, or the current server if <target> is omitted. Information returned includes the server's version, when it was compiled, the patch level, when it was started, and any other information which may be considered to be relevant. Defined in RFC 1459.
INVITE
Syntax:
INVITE <nickname> <channel>
Invites <nickname> to the channel <channel>. <channel> does not have to exist, but if it does, only members of the channel are allowed to invite other clients. If the channel mode i is set, only channel operators may invite other clients. Defined in RFC 1459.
ISON
Syntax:
ISON <nicknames>
Queries the server to see if the clients in the space-separated list <nicknames> are currently on the network. The server returns only the nicknames that are on the network in a space-separated list. If none of the clients are on the network the server returns an empty list. Defined in RFC 1459.
JOIN
Syntax:
JOIN <channels> [<keys>]
Makes the client join the channels in the comma-separated list <channels>, specifying the passwords, if needed, in the comma-separated list <keys>. If the channel(s) do not exist then they will be created. Defined in RFC 1459.
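A client sends JOIN as a single raw CRLF-terminated line; a minimal sketch of building one (the channel names and key are made up for illustration):

```python
def irc_join(channels, keys=()):
    """Build a raw JOIN line: comma-separated channels, optional comma-separated keys (RFC 1459)."""
    line = "JOIN " + ",".join(channels)
    if keys:
        line += " " + ",".join(keys)
    return line + "\r\n"

print(repr(irc_join(["#help", "#secret"], ["hunter2"])))
# 'JOIN #help,#secret hunter2\r\n'
```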
KICK
Syntax:
KICK <channel> <client> :[<message>]
Forcibly removes <client> from <channel>. This command may only be issued by channel operators. Defined in RFC 1459.
KILL
Syntax:
KILL <client> <comment>
Forcibly removes <client> from the network. This command may only be issued by IRC operators. Defined in RFC 1459.
KNOCK
Syntax:
KNOCK <channel> [<message>]
Sends a NOTICE to an invitation-only <channel> with an optional <message>, requesting an invite. This command is not formally defined by an RFC, but is supported by most major IRC daemons. Support is indicated in a RPL_ISUPPORT reply (numeric 005) with the KNOCK keyword.
LINKS
Syntax:
LINKS [<remote server> [<server mask>]]
Lists all server links matching <server mask>, if given, on <remote server>, or the current server if omitted. Defined in RFC 1459.
LIST
Syntax:
LIST [<channels> [<server>]]
Lists all channels on the server. If the comma-separated list <channels> is given, it will return the channel topics. If <server> is given, the command will be forwarded to <server> for evaluation. Defined in RFC 1459.
LUSERS
Syntax:
LUSERS [<mask> [<server>]]
Returns statistics about the size of the network. If called with no arguments, the statistics will reflect the entire network. If <mask> is given, it will return only statistics reflecting the masked subset of the network. If <server> is given, the command will be forwarded to <server> for evaluation. Defined in RFC 2812.
MODE
Syntax:
MODE <nickname> <flags> (user)
MODE <channel> <flags> [<args>]
The MODE command is dual-purpose. It can be used to set both user and channel modes. Defined in RFC 1459.
MOTD
Syntax:
MOTD [<server>]
Returns the message of the day on <server> or the current server if it is omitted. Defined in RFC 2812.
NAMES
Syntax:
NAMES [<channels>] (RFC 1459)
NAMES [<channels> [<server>]] (RFC 2812)
Returns a list of who is on the comma-separated list of <channels>, by channel name. If <channels> is omitted, all users are shown, grouped by channel name with all users who are not on a channel being shown as part of channel "*". If <server> is specified, the command is sent to <server> for evaluation. Defined in RFC 1459; the optional <server> parameter was added in RFC 2812.
The response contains all nicknames in the channel prefixed with the highest channel status prefix of that user, for example like this (with @ being the highest status prefix)
:irc.server.net 353 Phyre = #SomeChannel :@WiZ
If a client wants to receive all the channel status prefixes of a user and not only their current highest one, the IRCv3 multi-prefix extension can be enabled (@ is the channel operator prefix, and + the lower voice status prefix):
:irc.server.net 353 Phyre = #SomeChannel :@+WiZ
See also NAMESX below for an alternate, older approach to achieve the same effect. However, by today most clients and servers support the new IRCv3 standard.
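Splitting the trailing nick list of a 353 reply into status prefixes and nicknames can be sketched as follows (assuming only the common '@' operator and '+' voice prefixes; networks may advertise others via ISUPPORT):

```python
PREFIXES = "@+"  # assumed: '@' = channel operator, '+' = voice

def parse_names(trailing):
    """Split a RPL_NAMREPLY trailing parameter into (status_prefixes, nick) pairs."""
    pairs = []
    for token in trailing.split():
        i = 0
        while i < len(token) and token[i] in PREFIXES:
            i += 1
        pairs.append((token[:i], token[i:]))
    return pairs

print(parse_names("@+WiZ Phyre"))  # [('@+', 'WiZ'), ('', 'Phyre')]
```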
NICK
Syntax:
NICK <nickname> [<hopcount>] (RFC 1459)
NICK <nickname> (RFC 2812)
Allows a client to change their IRC nickname. Hopcount is for use between servers to specify how far away a nickname is from its home server. Defined in RFC 1459; the optional <hopcount> parameter was removed in RFC 2812.
NOTICE
Syntax:
NOTICE <msgtarget> <message>
This command works similarly to PRIVMSG, except automatic replies must never be sent in reply to NOTICE messages. Defined in RFC 1459.
OPER
Syntax:
OPER <username> <password>
Authenticates a user as an IRC operator on that server/network. Defined in RFC 1459.
PART
Syntax:
PART <channels> [<message>]
Causes a user to leave the channels in the comma-separated list <channels>. Defined in RFC 1459.
PASS
Syntax:
PASS <password>
Sets a connection password. This command must be sent before the NICK/USER registration combination. Defined in RFC 1459.
PING
Syntax:
PING <server1> [<server2>]
Tests the presence of a connection. A PING message results in a PONG reply. If <server2> is specified, the message gets passed on to it. Defined in RFC 1459.
PONG
Syntax:
PONG <server1> [<server2>]
This command is a reply to the PING command and works in much the same way. Defined in RFC 1459.
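Clients typically answer server keepalives by echoing the PING parameters back in a PONG; a minimal sketch of that reply logic:

```python
def pong_for(line):
    """Answer a server 'PING <token>' line with the matching 'PONG <token>', else None."""
    if line.startswith("PING"):
        return "PONG" + line[len("PING"):]
    return None

print(pong_for("PING :irc.server.net"))  # PONG :irc.server.net
print(pong_for("NOTICE * :hi"))          # None
```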
PRIVMSG
Syntax:
PRIVMSG <msgtarget> :<message>
Sends <message> to <msgtarget>, which is usually a user or channel. Defined in RFC 1459.
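On the wire, the message text is the trailing parameter, prefixed with ":" so it may contain spaces, and the line ends with CRLF; a small sketch of building such a line (target and text are illustrative):

```python
def irc_privmsg(target, text):
    """Build a raw PRIVMSG line; the trailing parameter is prefixed with ':' so it may contain spaces."""
    return f"PRIVMSG {target} :{text}\r\n"

print(repr(irc_privmsg("#SomeChannel", "hello, world")))
# 'PRIVMSG #SomeChannel :hello, world\r\n'
```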
QUIT
Syntax:
QUIT [<message>]
Disconnects the user from the server. Defined in RFC 1459.
QUOTE
Syntax:
QUOTE
Sends a command string to the server as-is, i.e. without parsing it in the client application.
REHASH
Syntax:
REHASH
Causes the server to re-read and re-process its configuration file(s). This command can only be sent by IRC operators. Defined in RFC 1459.
RULES
Syntax:
RULES
Requests the server rules. This command is not formally defined in an RFC, but is used by most major IRC daemons.
SERVER
Syntax:
SERVER <servername> <hopcount> <info>
The server message is used to tell a server that the other end of a new connection is a server. This message is also used to pass server data over the whole network.
<hopcount> details how many hops (server connections) away <servername> is.
<info> contains additional human-readable information about the server.
Defined in RFC 1459.
SERVLIST
Syntax:
SERVLIST [<mask> [<type>]]
Lists services currently connected to the network and visible to the user, optionally filtered by <mask> and <type>. Defined in RFC 2812.
SQUERY
Syntax:
SQUERY <servicename> <text>
Identical to PRIVMSG except the recipient must be a service. Defined in RFC 2812.
SQUIT
Syntax:
SQUIT <server> <comment>
Causes <server> to quit the network. Defined in RFC 1459.
SETNAME
Syntax:
SETNAME <new real name>
Allows a client to change the "real name" specified when registering a connection.
This command is not formally defined by an RFC, but is in use by some IRC daemons. Support is indicated in a RPL_ISUPPORT reply (numeric 005) with the SETNAME keyword
SILENCE
Syntax:
SILENCE [+/-<hostmask>]
Adds or removes a host mask to a server-side ignore list that prevents matching users from sending the client messages. More than one mask may be specified in a space-separated list, each item prefixed with a "+" or "-" to designate whether it is being added or removed. Sending the command with no parameters returns the entries in the client's ignore list.
This command is not formally defined in an RFC, but is supported by most major IRC daemons. Support is indicated in a RPL_ISUPPORT reply (numeric 005) with the SILENCE keyword and the maximum number of entries a client may have in its ignore list. For example:
:irc.server.net 005 WiZ WALLCHOPS WATCH=128 SILENCE=15 MODES=12 CHANTYPES=#
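Clients typically learn server limits such as SILENCE=15 by parsing the space-separated KEY or KEY=VALUE tokens of a RPL_ISUPPORT (005) reply like the one above. A rough sketch (parameter handling deliberately simplified):

```python
def parse_isupport(line):
    """Parse the KEY or KEY=VALUE tokens of a RPL_ISUPPORT (005) line."""
    tokens = line.split()[3:]      # skip server prefix, numeric and nick
    caps = {}
    for tok in tokens:
        if tok.startswith(":"):    # trailing human-readable text, if present
            break
        key, _, value = tok.partition("=")
        caps[key] = value or True  # bare keywords become simple flags
    return caps

caps = parse_isupport(":irc.server.net 005 WiZ WALLCHOPS WATCH=128 SILENCE=15 MODES=12 CHANTYPES=#")
print(caps["SILENCE"])  # 15
```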
STATS
Syntax:
STATS <query> [<server>]
Returns statistics about the current server, or <server> if it's specified. Defined in RFC 1459.
SUMMON
Syntax:
SUMMON <user> [<server>] (RFC 1459)
SUMMON <user> [<server> [<channel>]] (RFC 2812)
Gives users who are on the same host as <server> a message asking them to join IRC. Defined in RFC 1459; the optional <channel> parameter was added in RFC 2812.
TIME
Syntax:
TIME [<server>]
Returns the local time on the current server, or <server> if specified. Defined in RFC 1459.
TOPIC
Syntax:
TOPIC <channel> [<topic>]
Allows the client to query or set the channel topic on <channel>. If <topic> is given, it sets the channel topic to <topic>. If channel mode +t is set, only a channel operator may set the topic. Defined in RFC 1459.
TRACE
Syntax:
TRACE [<target>]
Trace a path across the IRC network to a specific server or client, in a similar method to traceroute. Defined in RFC 1459.
USER
Syntax:
USER <username> <hostname> <servername> <realname> (RFC 1459)
USER <user> <mode> <unused> <realname> (RFC 2812)
This command is used at the beginning of a connection to specify the username, hostname, real name and initial user modes of the connecting client. <realname> may contain spaces, and thus must be prefixed with a colon. Defined in RFC 1459, modified in RFC 2812.
USERHOST
Syntax:
USERHOST <nickname> [<nickname> <nickname> ...]
Returns a list of information about the nicknames specified. Defined in RFC 1459.
USERIP
Syntax:
USERIP <nickname>
Requests the direct IP address of the user with the specified nickname. This command is often used to obtain the IP of an abusive user to more effectively perform a ban. It is unclear what, if any, privileges are required to execute this command on a server.
This command is not formally defined by an RFC, but is in use by some IRC daemons. Support is indicated in a RPL_ISUPPORT reply (numeric 005) with the USERIP keyword.
USERS
Syntax:
USERS [<server>]
Returns a list of users and information about those users in a format similar to the UNIX commands who, rusers and finger. Defined in RFC 1459.
VERSION
Syntax:
VERSION [<server>]
Returns the version of <server>, or the current server if omitted. Defined in RFC 1459.
WALLOPS
Syntax:
WALLOPS <message>
Sends <message> to all operators connected to the server (RFC 1459), or all users with user mode 'w' set (RFC 2812). Defined in RFC 1459.
WATCH
Syntax:
WATCH [+/-<nicknames>]
Adds or removes a user to a client's server-side friends list. More than one nickname may be specified in a space-separated list, each item prefixed with a "+" or "-" to designate whether it is being added or removed. Sending the command with no parameters returns the entries in the client's friends list.
This command is not formally defined in an RFC, but is supported by most major IRC daemons. Support is indicated in a RPL_ISUPPORT reply (numeric 005) with the WATCH keyword and the maximum number of entries a client may have in its friends list. For example:
:irc.server.net 005 WiZ WALLCHOPS WATCH=128 SILENCE=15 MODES=12 CHANTYPES=#
WHO
Syntax:
WHO [<name> ["o"]]
Returns a list of users who match <name>. If the flag "o" is given, the server will only return information about IRC operators. Defined in RFC 1459.
WHOIS
Syntax:
WHOIS [<server>] <nicknames>
Returns information about the comma-separated list of nickname masks <nicknames>. If <server> is given, the command is forwarded to it for processing. Defined in RFC 1459.
WHOWAS
Syntax:
WHOWAS <nickname> [<count> [<server>]]
Used to return information about a nickname that is no longer in use (due to client disconnection, or nickname changes). If given, the server will return information from the last <count> times the nickname has been used. If <server> is given, the command is forwarded to it for processing. In RFC 2812, <nickname> can be a comma-separated list of nicknames.
Defined in RFC 1459.
See also
IRCd
IRCX
Server
References
Bibliography
Further reading
IRC
Computing commands | List of IRC commands | [
"Technology"
] | 3,904 | [
"Computing commands"
] |
2,224,199 | https://en.wikipedia.org/wiki/Spatial%20capacity | Spatial capacity is an indicator of "data intensity" in a transmission medium. It is usually used in conjunction with wireless transport mechanisms. This is analogous to the way that lumens per square meter determine illumination intensity.
Spatial capacity focuses not only on bit rates for data transfer but on bit rates available in confined spaces defined by short transmission ranges. It is measured in bits per second per square meter.
Among those leading research in spatial capacity is Jan Rabaey at the University of California, Berkeley. Some have suggested the term "spatial efficiency" as more descriptive. Mark Weiser, former chief technologist of Xerox PARC, was another contributor to the field who commented on the importance of spatial capacity.
The System spectral efficiency is the spatial capacity divided by the bandwidth in hertz of the available frequency band.
Relative spatial capacities
Engineers at Intel and elsewhere have reported the relative spatial capacities of various wireless technologies as follows:
IEEE 802.11b 1,000 (bit/s)/m²
Bluetooth 30,000 (bit/s)/m²
IEEE 802.11a 83,000 (bit/s)/m²
Ultra-wideband 1,000,000 (bit/s)/m²
IEEE 802.11g N/A
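Given an aggregate bit rate and a nominal coverage radius, spatial capacity is simply the rate divided by the covered area. A sketch of that calculation (the 60 m radius is an illustrative assumption, not a measured figure):

```python
import math

def spatial_capacity(bit_rate_bps, radius_m):
    """Bits per second per square metre over a circular coverage area."""
    return bit_rate_bps / (math.pi * radius_m ** 2)

# Illustrative only: an 11 Mbit/s link shared over a ~60 m radius cell
# lands near the ~1,000 (bit/s)/m^2 figure reported for 802.11b.
print(round(spatial_capacity(11e6, 60)))  # 973
```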
See also
System spectral efficiency
References
Wireless networking
Network performance
Radio resource management | Spatial capacity | [
"Technology",
"Engineering"
] | 264 | [
"Wireless networking",
"Computer networks engineering"
] |
2,224,213 | https://en.wikipedia.org/wiki/English%20units | English units were the units of measurement used in England up to 1826 (when they were replaced by Imperial units), which evolved as a combination of the Anglo-Saxon and Roman systems of units. Various standards have applied to English units at different times, in different places, and for different applications.
Use of the term "English units" can be ambiguous, as, in addition to the meaning used in this article, it is sometimes used to refer to the units of the descendant Imperial system as well to those of the descendant system of United States customary units.
The two main sets of English units were the Winchester Units, used from 1495 to 1587, as affirmed by King Henry VII, and the Exchequer Standards, in use from 1588 to 1825, as defined by Queen Elizabeth I.
In England (and the British Empire), English units were replaced by Imperial units in 1824 (effective as of 1 January 1826) by a Weights and Measures Act, which retained many though not all of the unit names and redefined (standardised) many of the definitions. In the US, being independent from the British Empire decades before the 1824 reforms, English units were standardized and adopted (as "US Customary Units") in 1832.
History
Very little is known of the units of measurement used in the British Isles prior to Roman colonisation in the 1st century AD. During the Roman period, Roman Britain relied on Ancient Roman units of measurement. During the Anglo-Saxon period, the North German foot of 13.2 inches (335 millimetres) was the nominal basis for other units of linear measurement. The foot was divided into 4 palms or 12 thumbs. A cubit was 2 feet, an elne 4 feet. The rod was 15 Anglo-Saxon feet, the furlong 40 rods. An acre was 4 rods × 40 rods, i.e. 160 square rods or 36,000 square Anglo-Saxon feet. However, Roman units continued to be used in the construction crafts, and reckoning by the Roman mile of 5,000 feet (or 8 stades) continued, in contrast to other Germanic countries which adopted the name "mile" for a longer native length closer to the league (which was 3 Roman miles). From the time of Offa, King of Mercia (8th century), until 1526 the Saxon pound, also known as the moneyers' pound (and later known as the Tower pound) was the fundamental unit of weight (by Offa's law, one pound of silver, by weight, was subdivided into 240 silver pennies, hence (in money) 240 pence – twenty shillings – was known as one pound).
Prior to the enactment of a law known as the "Composition of Yards and Perches", some time between 1266 and 1303, the English system of measurement had been based on that of the Anglo-Saxons, who were descended from tribes of northern Germany. The Compositio redefined the yard, foot, inch, and barleycorn to 10/11 of their previous value. However, it retained the Anglo-Saxon rod of 15 old feet (5.03 metres) and the acre of 4 rods × 40 rods. Thus, the rod went from 5 old yards to 5 1/2 new yards, or 15 old feet to 16 1/2 new feet. The furlong went from 600 old feet (200 old yards) to 660 new feet (220 new yards). The acre went from 36,000 old square feet to 43,560 new square feet. Scholars have speculated that the Compositio may have represented a compromise between the two earlier systems of units, the Anglo-Saxon and the Roman.
The Norman conquest of England introduced just one new unit: the bushel. William the Conqueror, in one of his first legislative acts, confirmed existing Anglo-Saxon measurement, a position which was consistent with Norman policy in dealing with occupied peoples. The Magna Carta of 1215 stipulates that there should be a standard measure of volume for wine, ale and corn (the London Quarter), and for weight, but does not define these units.
Later development of the English system was by defining the units in laws and by issuing measurement standards. Standards were renewed in 1496, 1588, and 1758. The last Imperial Standard Yard in bronze was made in 1845; it served as the standard in the United Kingdom until the yard was redefined by the international yard and pound agreement (as 0.9144 metres) in 1959 (statutory implementation was in the Weights and Measures Act 1963). Over time, the English system had spread to other parts of the British Empire.
Timeline
Selected excerpts from the bibliography of Marks and Marking of Weights and Measures of the British Isles
1215 Magna Carta — the earliest statutory declaration for uniformity of weights and measures
1335 8 & 9 Edw. 3. c. 1 — First statutory reference describing goods as avoirdupois
1414 2 Hen. 5. Stat. 2. c. 4 — First statutory mention of the Troy pound
1495 12 Hen. 7. c. 5 — New Exchequer standards were constructed, including Winchester capacity measures defined by Troy weight of their content of threshed wheat by stricken (i.e. level) measure (first statutory mention of Troy weight as standard weight for bullion, bread, spices etc.).
1527 Hen VIII — Abolished the Tower pound
1531 23 Hen. 8. c. 4 — Barrel to contain 36 gallons of beer or 32 of ale; kilderkin is half of this; firkin is half again.
1532 24 Hen. 8. c. 3 — First statutory references to use of avoirdupois weight.
1536 28 Hen. 8. c. 4 — Added the tierce (41 gallons)
1588 (Elizabeth I) — A new series of Avoirdupois standard bronze weights (bell-shaped from 56 lb to 2 lb and flat-pile from 8 lb to a dram), with new Troy standard weights in nested cups, from 256 oz to oz in a binary progression.
1601–1602 — Standard bushels and gallons were constructed based on the standards of Henry VII and a new series of capacity measures were issued.
1660 12 Cha. 2. c. 24 — Barrel of beer to be 36 gallons, taken by the gauge of the Exchequer standard of the ale quart; barrel of ale to be 32 gallons; all other liquors retailed to be sold by the wine gallon
1689 1 Will. & Mar. c. 24 — Barrels of beer and ale outside London to contain 34 gallons
1695 7 Will. 3. c. 24 (I) — Irish Act about grain measures decreed: unit of measure to be Henry VIII's gallon as confirmed by Elizabeth I; i.e. cubic inches; standard measures of the barrel (32 gallons), half-barrel (16 gallons), bushel (8), peck (2), and gallon lodged in the Irish Exchequer; and copies were provided in every county, city, town, etc.
1696 8 & 9 Will. 3. c. 22 — Size of Winchester bushel "every round bushel with a plain and even bottom being 18 1/2″ wide throughout and 8″ deep" (i.e. a dry measure of about 2150.42 in3 per bushel, or 268.8 in3 per corn gallon).
1706 6 Ann. c. 11 — Act of Union decreed the weights and measures of England to be applied in Scotland, whose burgs (towns) were to take charge of the duplicates of the English Standards sent to them.
1706 6 Ann. c. 27 — Wine gallon to be a cylindrical vessel with an even bottom 7″ diameter throughout and 6″ deep from top to bottom of the inside, or holding 231 in3 and no more.
1713 12 Ann. c. 17 — The legal coal bushel to be round with a plain and even bottom, inches from outside to outside and to hold 1 Winchester bushel and 1 quart of water.
1718 5 Geo. 1. c. 18 — Decreed Scots Pint to be exactly 103 in3.
1803 43 Geo. 3. c. 151 — Referred to wine bottles making about 5 to the wine gallon (i.e. Reputed Quarts)
1824 5 Geo. 4. c. 74 — Weights and Measures Act 1824 completely reorganized British metrology and established Imperial weights and measures; defined the yard, troy and avoirdupois pounds and the gallon (as the standard measure for liquids and dry goods not measured by heaped measure), and provided for a 'brass' standard gallon to be constructed.
1825 6 Geo. 4. c. 12 — Delayed introduction of Imperial weights and measures from 1 May 1825 to 1 January 1826.
1835 5 & 6 Will. 4. c. 63 — Weights and Measures Act 1835 abolished local and customary measures, including the Winchester bushel; made heaped measure illegal; required trade to be carried out by avoirdupois weight only, except for bullion, gems and drugs (which were to be sold by troy weight instead); decreed that all forms of coal were to be sold by weight and not measure; legalised the 'stone' as 14 lb, the 'hundredweight' as 112 lb, and the (long) ton as 20 hundredweight, or 2,240 lb.
1853 16 & 17 Vict. c. 29 — Permitted the use of decimal bullion weights.
1866 29 & 30 Vict. c. 82 — Standards of Weights, Measures, and Coinage Act 1866 transferred all duties and standards from the Exchequer to the newly created Standards Department of the Board of Trade.
1878 41 & 42 Vict. c. 49 — Weights and Measures Act 1878 defined the Imperial standard yard and pound; enumerated the secondary standards of measure and weight derived from the Imperial standards; required all trade by weight or measure to be in terms of one of the Imperial weights or measures or some multiple part thereof; abolished the Troy pound.
1963 c. 31 — Weights and Measures Act 1963 abolished the chaldron of coal, the fluid drachm and minim (effective 1 February 1971), discontinued the use of the quarter, abolished the use of the bushel and peck, and abolished the pennyweight (from 31 January 1969).
Length
Area
Administrative units
Hide four to eight bovates. A unit of yield, rather than area, it measured the amount of land able to support a single household for agricultural and taxation purposes.
Knight's fee five hides. A knight's fee was expected to produce one fully equipped soldier for a knight's retinue in times of war.
Hundred or wapentake 100 hides grouped for administrative purposes.
Volume
Many measures of capacity were understood as fractions or multiples of a gallon. For example, a quart is a quarter of a gallon, and a pint is half of a quart, or an eighth of a gallon. These ratios applied regardless of the specific size of the gallon. Not only did the definition of the gallon change over time, but there were several different kinds of gallon, which existed at the same time. For example, a wine gallon with a volume of 231 cubic inches (the basis of the U.S. gallon), and an ale gallon of 282 cubic inches, were commonly used for many decades prior to the establishment of the imperial gallon. In other words, a pint of ale and a pint of wine were not the same size. On the other hand, some measures such as the fluid ounce were not defined as a fraction of a gallon. For that reason, it is not always possible to give accurate definitions of units such as pints or quarts, in terms of ounces, prior to the establishment of the imperial gallon.
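Because a quart is always a quarter of its gallon and a pint an eighth, the same named unit came out at different sizes depending on which gallon applied. A quick check in cubic inches, using the wine (231 in³) and ale (282 in³) gallons named above:

```python
# Pre-imperial gallon definitions, in cubic inches
GALLONS_IN3 = {"wine": 231, "ale": 282}

def pint_in3(kind):
    """A pint is one eighth of whichever gallon applies."""
    return GALLONS_IN3[kind] / 8

print(pint_in3("wine"))  # 28.875
print(pint_in3("ale"))   # 35.25
```

So a pint of ale was over 20% larger than a pint of wine, as the paragraph above notes.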
General liquid measures
Liquid measures as binary submultiples of their respective gallons (ale or wine):
Wine
Wine is traditionally measured based on the wine gallon and its related units. Other liquids such as brandy, spirits, mead, cider, vinegar, oil, honey, and so on, were also measured and sold in these units.
The wine gallon was re-established by Queen Anne in 1707 after a 1688 survey found the Exchequer no longer possessed the necessary standard but had instead been depending on a copy held by the Guildhall. Defined as 231 cubic inches, it differs from the later imperial gallon, but is equal to the United States customary gallon.
Rundlet 18 wine gallons or 1/7 wine pipe
Wine barrel 31.5 wine gallons or 1/2 wine hogshead
Tierce 42 wine gallons, 1/2 puncheon or 1/3 wine pipe
Wine hogshead 2 wine barrels, 63 wine gallons or 1/4 wine tun
Puncheon or tertian 2 tierce, 84 wine gallons or 1/3 wine tun
Wine pipe or butt 2 wine hogsheads, 3 tierce, 7 rundlets or 126 wine gallons
Wine tun 2 wine pipes, 3 puncheons or 252 wine gallons
Ale and beer
Pin 4.5 gallons or 1/8 beer barrel
Firkin 2 pins, 9 gallons (ale, beer or goods) or 1/4 beer barrel
Kilderkin 2 firkins, 18 gallons or 1/2 beer barrel
Beer barrel 2 kilderkins, 36 gallons or 2/3 beer hogshead
Beer hogshead 3 kilderkins, 54 gallons or 1.5 beer barrels
Beer pipe or butt 2 beer hogsheads, 3 beer barrels or 108 gallons
Beer tun 2 beer pipes or 216 gallons
Grain and dry goods
The Winchester measure, also known as the corn measure, centered on the bushel of approximately 2,150.42 cubic inches, which had been in use with only minor modifications since at least the late 15th century. The word corn at that time referred to all types of grain. The corn measure was used to measure and sell many types of dry goods, such as grain, salt, ore, and oysters.
However, in practice, such goods were often sold by weight. For example, it might be agreed by local custom that a bushel of wheat should weigh 60 pounds, or a bushel of oats should weigh 33 pounds. The goods would be measured out by volume, and then weighed, and the buyer would pay more or less depending on the actual weight. This practice of specifying bushels in weight for each commodity continues today. This was not always the case though, and even the same market that sold wheat and oats by weight might sell barley simply by volume. In fact, the entire system was not well standardized. A sixteenth of a bushel might be called a pottle, hoop, beatment, or quartern, in towns only a short distance apart. In some places potatoes might be sold by the firkin—usually a liquid measure—with one town defining a firkin as 3 bushels, and the next town as 2 1/2 bushels.
The pint was the smallest unit in the corn measure. The corn gallon, one eighth of a bushel, was approximately 268.8 cubic inches. Most of the units associated with the corn measure were binary (sub)multiples of the bushel:
Other units included the wey (6 or sometimes 5 seams or quarters), and the last (10 seams or quarters).
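The binary ladder of the corn measure can be written out relative to the bushel; the sketch below takes the pint and corn gallon from the text above, and the wey and last at 6 and 10 quarters respectively (the commonly cited values, though the wey sometimes ran to only 5):

```python
# Size of each corn-measure unit, in bushels (bushel ~ 2150.42 cubic inches)
CORN_UNITS = {
    "pint":    1 / 64,   # smallest unit in the corn measure
    "gallon":  1 / 8,    # the corn gallon, ~268.8 cubic inches
    "bushel":  1,
    "quarter": 8,        # also called a seam
    "wey":     6 * 8,    # commonly 6 quarters (sometimes 5)
    "last":    10 * 8,   # 10 quarters
}

def in_cubic_inches(unit, bushel_in3=2150.42):
    """Convert a corn-measure unit to cubic inches."""
    return CORN_UNITS[unit] * bushel_in3

print(round(in_cubic_inches("gallon"), 1))  # 268.8
```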
Specific goods
Perch 24.75 cubic feet of dry stone, derived from the more commonly known perch, a unit of length equal to 16.5 feet.
Cord 128 cubic feet of firewood, a stack of firewood 4 ft × 4 ft × 8 ft
Chemistry
Fluid-grain The volume of 1 grain of distilled water at 62 °F, 30 inHg pressure.
At that reference, water has a density of about 0.9988 g/mL, and thus:
1 fluid-grain = 1.096 imperial minims = 0.06488 mL, or approximately one drop.
Weight
The Avoirdupois, Troy and Apothecary systems of weights all shared the same finest unit, the grain; however, they differ as to the number of grains there are in a dram, ounce and pound. This grain was legally defined as the weight of a grain seed from the middle of an ear of barley. There also was a smaller wheat grain, said to be 3/4 of a (barley) grain, or about 48.6 milligrams.
The avoirdupois pound was eventually standardised as 7,000 grains and was used for all products not subject to apothecaries' or Tower weight.
Avoirdupois
Troy and Tower
The Troy and Tower pounds and their subdivisions were used for coins and precious metals. The Tower pound, which was based upon an earlier Anglo-Saxon pound, was replaced by the Troy pound when a proclamation dated 1526 required the Troy pound to be used for mint purposes instead of the Tower pound. No standards of the Tower pound are known to have survived.
Established in the 8th century by Offa of Mercia, a pound sterling (or "pound of sterlings") was that weight of sterling silver sufficient to make 240 silver pennies.
Troy
Grain (gr) = 64.79891 mg
Pennyweight (dwt) 24 gr ≈ 1.56 g
Ounce (oz t) 20 dwt = 480 gr ≈ 31.1 g
Pound (lb t) 12 oz t = 5760 gr ≈ 373 g
Mark 8 oz t
Tower
Grain (gr) = 45/64 gr t ≈ 45.6 mg
Pennyweight (dwt) 32 gr T = 22 1/2 gr t ≈ 1.46 g
Tower ounce 20 dwt T = 640 gr T = 18 3/4 dwt t = 450 gr t ≈ 29.2 g
Tower pound 12 oz T = 240 dwt T = 7680 gr T = 225 dwt t = 5400 gr t ≈ 350 g
Mark 8 oz T ≈ 233 g
Apothecary
Grain (gr) = 64.79891 mg
Scruple (s ap) 20 gr
Dram (dr ap) 3 s ap = 60 gr
Ounce (oz ap) 8 dr ap = 480 gr
Pound (lb ap) 5760 gr = 1 lb t
Others
Merchants/Mercantile pound 15 oz tower = 6750 gr ≈ 437.4 g
London/Mercantile pound 15 oz troy = 16 oz tower = 7200 gr ≈ 466.6 g
Mercantile stone 12 lb L ≈ 5.6 kg
Butcher's stone 8 lb ≈ 3.63 kg
Sack 26 st = 364 lb ≈ 165 kg
The carat was once specified as four grains in the English-speaking world.
Some local units in the English dominion were (re-)defined in simple terms of English units, such as the Indian tola of 180 grains.
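Because the troy, apothecary and avoirdupois systems all share the grain, converting between them reduces to counting grains. A sketch using the grains-per-pound figures given above:

```python
GRAIN_MG = 64.79891  # one (troy/avoirdupois) grain in milligrams
POUND_GRAINS = {
    "avoirdupois": 7000,
    "troy": 5760,
    "apothecary": 5760,  # the apothecary pound equals the troy pound
}

def pound_grams(system):
    """Weight of one pound in the given system, in grams."""
    return POUND_GRAINS[system] * GRAIN_MG / 1000

print(round(pound_grams("avoirdupois"), 1))  # 453.6
print(round(pound_grams("troy"), 1))         # 373.2
```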
Tod
This was an English weight for wool. It has the alternative spelling forms of tode, todd, todde, toad, and tood. It was usually 28 pounds, or two stone. The tod, however, was not a national standard and could vary by English shire, ranging from 28 to 32 pounds. In addition to the traditional definition in terms of pounds, the tod has historically also been considered to be of a sack, of a sarpler, or of a wey.
See also
Long hundred, a unit of 100 or 120 items
Notes
References
External links
English Customary Weights and Measures
Jacques J. Proot's Anglo-Saxon weights & measures page. Internet Archive Wayback Machine
Alexander Justice, "A General Discourse of the Weights and Measures" (London, 1707).
Imperial units
Customary units of measurement
Economic history of England
Units of measurement by country | English units | [
"Mathematics"
] | 3,930 | [
"Units of measurement by country",
"Quantity",
"Customary units of measurement",
"Units of measurement"
] |
2,224,558 | https://en.wikipedia.org/wiki/Bitrate%20peeling | Bitrate peeling is a technique used in Ogg Vorbis audio encoded streams, wherein a stream can be encoded at one bitrate but can be served at that or any lower bitrate.
The purpose is to provide access to the clip for people with slower Internet connections, and yet still allow people with faster connections to enjoy the higher quality content. The server automatically chooses which stream to deliver to the user, depending on user's connection speed.
Ogg Vorbis bitrate peeling existed only as a concept, as there was not yet an encoder capable of producing peelable datastreams.
Difference from other technologies
The difference between SureStream and bitrate peeling is that SureStream is limited to only a handful of pre-defined bitrates, with significant difference between them, and SureStream encoded files are big because they contain all of the bitrates used, while bitrate peeling uses much smaller steps to change the available bitrate and quality, and only the highest bitrate is used to encode the file/stream, which results in smaller files on servers.
A related technique to the SureStream approach is hierarchical modulation, used in broadcast, where several different streams at different qualities (and bitrates) are all broadcast, with the higher quality stream used if possible, and the lower quality streams fallen back on if not.
Lossy and correction
A similar technology is to feature a combination of a lossy format and a lossless correction; this allows stripping the correction to easily obtain a lossy file. Such formats include MPEG-4 SLS (scalable to lossless), WavPack, DTS-HD Master Audio and OptimFROG DualStream.
SureStream example
A SureStream encoded file is encoded at bitrates of 16 kbit/s, 32 kbit/s and 96 kbit/s. The file will be about the same size as three separate files encoded at those bitrates put together, or one file encoded at the sum of those bitrates, which is about 144 kbit/s (16 + 32 + 96). When a dial-up user has only about 28 kbit/s of bandwidth available, the Real server will serve the 16 kbit/s stream. If the dial-up connection is of higher quality, with perhaps 42 kbit/s available, the server will automatically switch to the 32 kbit/s stream. A DSL or cable Internet user will be served the 96 kbit/s stream. This works, but even though the user with 28 kbit/s could handle a higher-bitrate, higher-quality stream (perhaps 22–24 kbit/s), SureStream cannot provide one unless the encoded file contains that exact bitrate. This is where bitrate peeling comes into play.
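The server-side selection just described amounts to picking the highest pre-encoded bitrate that fits within the measured bandwidth. A sketch using the example's 16/32/96 kbit/s ladder:

```python
def pick_stream(encoded_kbps, available_kbps):
    """Choose the highest pre-encoded bitrate not exceeding the bandwidth."""
    fitting = [r for r in sorted(encoded_kbps) if r <= available_kbps]
    return fitting[-1] if fitting else None

streams = [16, 32, 96]
print(pick_stream(streams, 28))  # 16
print(pick_stream(streams, 42))  # 32
```

With only three rungs on the ladder, a 28 kbit/s user is stuck at 16 kbit/s even though roughly 22–24 kbit/s of quality is going unused.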
Bitrate peeling example
Contrary to SureStream, bitrate peeling requires only the highest bitrate to be used when encoding a file/stream, which is 96 kbit/s in this case. The obvious benefit is much smaller space on a server required by such a file. An additional feature of bitrate peeling is a much finer tuning of available bitrate/quality.
If a dial-up user with 28 kbit/s available bandwidth connects to an Ogg Vorbis file/stream, the server will "peel" the original 96 kbit/s file/stream down to just below available bandwidth (in this case it would be around 20–24 kbit/s). This so-called peeling process is different from transcoding because transcoding uncompresses the file and recompresses it (a computing-intensive process), whereas the peeling process removes excess bits from the stream without more processing.
Aside from the obvious space-saving advantage bitrate peeling allows for smaller steps in the delivery bitrate (the end user will see the file in the highest quality possible for their bandwidth).
These benefits are only theoretical, as the only Vorbis peeler available is still in an experimental state and produces file qualities inferior to what transcoding the higher-bitrate file to a lower bitrate would yield.
Comparison with other progressive encodings
Bitrate peeling is theoretically possible, and is implemented in some other formats, notably JPEG 2000, JPEG progressive encoding, and Scalable Video Coding.
The reason that it is not available in Ogg Vorbis is that current encoders do not organize the code-stream to have progressive accuracy, thus peelers cannot tell which data is more or less important.
See also the Adam7 algorithm used in PNG interlacing.
See also
Ogg bitstream format
Vorbis, a free audio compression codec
Streaming media
audio file format
audio signal processing
audio storage
codec
data compression
External links
Xiph.org Foundation
Xiph.org Bitrate Peeling Bounty
Ogg Vorbis site
Ogg Vorbis Bitrate Peeling Description
An experimental Ogg vorbis Bitrate Peeler
Bitrate peeling thread on Hydrogenaudio forums
The obvious quick-and-dirty solution to making a peelable encoder?
Data compression
Audio engineering | Bitrate peeling | [
"Engineering"
] | 1,049 | [
"Electrical engineering",
"Audio engineering"
] |
2,224,692 | https://en.wikipedia.org/wiki/Gasoline%20gallon%20equivalent | Gasoline gallon equivalent (GGE) or gasoline-equivalent gallon (GEG) is the amount of an alternative fuel it takes to equal the energy content of one liquid gallon of gasoline. GGE allows consumers to compare the energy content of competing fuels against a commonly known fuel, namely gasoline.
It is difficult to compare the cost of gasoline with other fuels if they are sold in different units and physical forms. GGE attempts to solve this. One GGE of CNG and one GGE of electricity have exactly the same energy content as one gallon of gasoline. In this way, GGE provides a direct comparison of gasoline with alternative fuels, including those sold as a gas (natural gas, propane, hydrogen) and as metered electricity.
Definition
In 1994, the US National Institute of Standards and Technology (NIST) defined "gasoline gallon equivalent (GGE) [as] 5.660 pounds of natural gas." Compressed natural gas (CNG), for example, is a gas rather than a liquid. It can be measured by its volume in standard cubic feet (ft3) at atmospheric conditions, by its weight in pounds (lb), or by its energy content in joules (J), British thermal units (BTU), or kilowatt-hours (kW·h). CNG sold at filling stations in the US is priced in dollars per GGE.
Using GGE as a measure to compare the stored energy of various fuels for use in an internal combustion engine is only one input for consumers, who typically are interested in the annual cost of driving a vehicle, which requires considering the amount of useful work that can be extracted from a given fuel. This is measured by the car's overall efficiency. In the context of GGE, a real world measure of overall efficiency is the fuel economy or fuel consumption advertised by motor vehicle manufacturers.
Efficiency and consumption
To start, only a fraction of the stored energy of a given fuel (measured in BTU or kW-hr) can be converted to useful work by the vehicle's engine. The measure of this is engine efficiency, often called thermal efficiency in the case of internal combustion engines. A diesel cycle engine can be as much as 40% to 50% efficient at converting fuel into work, where a typical automotive gasoline engine's efficiency is about 25% to 30%.
In general, an engine is designed to run on a single fuel source and substituting one fuel for another may affect the thermal efficiency. Each fuel–engine combination requires adjusting the mix of air and fuel. This can be a manual adjustment using tools and test instruments or done automatically in computer-controlled fuel injected and multi-fuel vehicles. Forced induction for an internal combustion engine using supercharger or turbocharger may also affect the optimum fuel–air mix and thermal efficiency.
The overall efficiency of converting a unit of fuel to useful work (rotation of the driving wheels) includes consideration of thermal efficiency along with dynamic losses that are inherent and specific to the design of a given vehicle. Thermal efficiency is affected by both friction and heat losses; for internal combustion engines, some of the stored energy is lost as heat through the exhaust or cooling system. In addition, friction inside the engine happens along the cylinder walls, crankshaft rod bearings and main bearings, camshaft bearings, drive chains or gears, plus other miscellaneous and minor bearing surfaces. Other dynamic losses can be caused by friction outside the motor/engine, including loads from the generator / alternator, power steering pump, A/C compressor, transmission, transfer case (if four-wheel-drive), differential(s) and universal joints, plus rolling resistance of the pneumatic tires. The vehicle's external styling affects its aerodynamic drag, which is another dynamic loss that must be considered for overall efficiency.
In battery electric vehicles, calculating the vehicle's overall efficiency of useful work begins with the charge–discharge efficiency of the battery pack, generally 80% to 90%. Next is the conversion of stored energy to distance traveled under power. Generally speaking, an electric motor is far more efficient than an internal combustion engine at converting stored potential energy into useful work; in an electric vehicle, traction motor efficiency can approach 90%, as there is minimal waste heat coming off the motor parts, and no heat cast off by a coolant radiator or exhaust. An electric motor typically has internal friction only at the main axle bearings. Additional losses affect the overall efficiency, as in a conventional internal combustion car, including rolling resistance, aerodynamic drag, accessory power, climate control, and drivetrain losses. See the table below translating retail electricity costs into the cost of a GGE.
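As an illustration of how these stage efficiencies compound, this sketch multiplies the battery and motor figures quoted above by an assumed drivetrain factor; the 95% drivetrain value is a placeholder for illustration, not measured data:

```python
def overall_efficiency(*stage_efficiencies):
    """Wall-to-wheels efficiency is the product of each stage's efficiency."""
    result = 1.0
    for eff in stage_efficiencies:
        result *= eff
    return result

# Stated in the text: battery charge-discharge ~85%, traction motor ~90%.
# The drivetrain figure (95%) is an assumed placeholder for illustration.
ev_efficiency = overall_efficiency(0.85, 0.90, 0.95)
print(f"Illustrative EV wall-to-wheels efficiency: {ev_efficiency:.1%}")
```

Even with optimistic component values, the compounded result is well below any single stage's efficiency, which is why each loss term matters.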
Overall efficiency is measured and reported, typically by government testing, through operating the vehicle in a standardized driving cycle designed to replicate typical use, while providing a consistent basis for comparison between vehicles. Cars sold in the United States are advertised by their measured overall efficiency (fuel economy) in miles per gallon (mpg). The MPG of a given vehicle starts with the thermal efficiency of the fuel and engine, less all of the above elements of friction. The fuel consumption is an equivalent measure for cars sold outside the United States, typically measured in litres per 100 km traveled; in general, the fuel consumption and miles per gallon would be reciprocals with appropriate conversion factors, but because different countries use different driving cycles to measure fuel consumption, fuel economy and fuel consumption are not always directly comparable.
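The reciprocal relationship between fuel economy (mpg) and fuel consumption (L/100 km) described above can be made concrete with standard unit conversions; this sketch assumes US gallons and is not tied to any particular test cycle:

```python
KM_PER_MILE = 1.609344
LITRES_PER_US_GALLON = 3.785411784

def mpg_to_l_per_100km(mpg):
    """Convert US miles per gallon to litres per 100 km."""
    # A car doing `mpg` miles on one gallon covers mpg * KM_PER_MILE km
    # on LITRES_PER_US_GALLON litres.
    return 100 * LITRES_PER_US_GALLON / (mpg * KM_PER_MILE)

print(round(mpg_to_l_per_100km(30), 2))  # about 7.84 L/100 km
```

The conversion constant works out to roughly 235.21 divided by the mpg figure, which is why the two measures are reciprocals up to a fixed factor.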
Miles per gallon of gasoline equivalent (MPGe)
The MPGe metric was introduced in November 2010 by the EPA on the Monroney label of the Nissan Leaf electric car and the Chevrolet Volt plug-in hybrid. The ratings are based on the EPA's formula, in which 33.7 kilowatt-hours of electricity is equivalent to one gallon of gasoline (about 115,000 BTU), and on the energy consumption of each vehicle during the EPA's five standard drive-cycle tests simulating varying driving conditions. All new cars and light-duty trucks sold in the U.S. are required to carry this label showing the EPA's fuel economy estimate for the vehicle.
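The MPGe figure follows directly from the 33.7 kWh-per-gallon equivalence. This sketch applies it to an assumed consumption of 30 kWh per 100 miles, an illustrative value rather than any specific vehicle's rating:

```python
KWH_PER_GALLON_EQUIVALENT = 33.7  # EPA equivalence used on the Monroney label

def mpge(kwh_per_100_miles):
    """Miles per gallon of gasoline equivalent from electrical consumption."""
    gallons_equivalent_per_100_miles = kwh_per_100_miles / KWH_PER_GALLON_EQUIVALENT
    return 100 / gallons_equivalent_per_100_miles

print(round(mpge(30), 1))  # ~112.3 MPGe for 30 kWh / 100 mi
```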
Gasoline gallon equivalent tables
Residential electricity rates in the USA range from $0.0728 per kWh (Idaho) to $0.2783 per kWh (Hawaii); intermediate examples include $0.166 (Alaska) and $0.22 (San Diego Tier 1, with Tier 2 at $0.40).
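The quoted residential rates translate into a cost per gasoline gallon equivalent by multiplying by the 33.7 kWh EPA equivalence; this sketch ignores charging losses, which would raise the real cost somewhat:

```python
KWH_PER_GGE = 33.7

def cost_per_gge(rate_usd_per_kwh):
    """Retail electricity cost of one gasoline gallon equivalent of energy."""
    return rate_usd_per_kwh * KWH_PER_GGE

# Rates quoted in the text above.
for state, rate in [("Idaho", 0.0728), ("Alaska", 0.166), ("Hawaii", 0.2783)]:
    print(f"{state}: ${cost_per_gge(rate):.2f} per GGE")
```

On these rates, a GGE of electricity costs roughly $2.45 in Idaho and $9.38 in Hawaii before charging losses.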
Specific fuels
Compressed natural gas
One GGE of natural gas is at standard conditions. This volume of natural gas has the same energy content as one US gallon of gasoline (based on lower heating values: of natural gas and for gasoline).
One GGE of CNG pressurized at is . This volume of CNG at 2,400 psi has the same energy content as one US gallon of gasoline (based on lower heating values: of CNG and of gasoline). Using Boyle's law, the equivalent GGE at is .
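The Boyle's law step mentioned above can be sketched generically. The standard-conditions volume used below (126.67 cubic feet) is an assumed illustrative input, not a figure from this article, and the ideal-gas treatment ignores real-gas compressibility, which matters at CNG pressures:

```python
def compressed_volume(volume_at_p1, p1, p2):
    """Boyle's law at constant temperature: P1 * V1 = P2 * V2."""
    return volume_at_p1 * p1 / p2

# Illustrative inputs (assumed, not from the article): a gas volume measured
# at atmospheric pressure (14.7 psia), recompressed to 2,400 psi gauge.
v_std = 126.67  # cubic feet at standard conditions (assumed value)
v_cng = compressed_volume(v_std, 14.7, 2400 + 14.7)
print(f"{v_cng:.3f} cubic feet at 2,400 psig (ideal-gas estimate)")
```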
The National Conference on Weights and Measures (NCWM) has developed a standard unit of measurement for compressed natural gas, defined in NIST Handbook 44, Appendix D, as follows:
"1 Gasoline [US] gallon equivalent (GGE) means 2.567 kg (5.660 lb) of natural gas."
When consumers refuel their CNG vehicles in the US, the CNG is usually measured and sold in GGE units, which makes direct price comparison with gasoline straightforward.
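Since US CNG dispensers meter fuel in GGE using the NIST Handbook 44 definition of 2.567 kg per GGE, converting a dispensed mass to GGE is a single division; this helper is a sketch, not any agency's reference implementation:

```python
KG_PER_GGE = 2.567  # NIST Handbook 44, Appendix D definition

def kg_to_gge(kg_natural_gas):
    """Gasoline gallon equivalents in a given mass of natural gas."""
    return kg_natural_gas / KG_PER_GGE

print(round(kg_to_gge(10.0), 3))  # 10 kg of CNG ~ 3.896 GGE
```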
Ethanol and blended fuels (E85)
of ethanol has the same energy content as of gasoline.
The energy content of ethanol is , compared to for gasoline. (see chart above)
A flex-fuel vehicle will achieve about 76% of its gasoline fuel mileage (MPG) when running on E85 (85% ethanol). Simple calculations from the BTU values of ethanol and gasoline indicate the reduced heat value available to the internal combustion engine: pure ethanol provides about two-thirds of the heat value of pure gasoline.
In the most common comparison, pure gasoline versus gasoline with 10% ethanol, the latter has just over 96% of the BTU value of pure gasoline. Gasoline's BTU content also varies with Reid vapor pressure (winter blends containing ethanol are formulated to vaporize more easily, since cold-starting a vehicle on ethanol is difficult) and with anti-knock additives, which reduce BTU value.
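The blend arithmetic in this section can be checked directly. Using the roughly two-thirds heating-value ratio of ethanol to gasoline quoted above, an E10 blend retains just over 96% of gasoline's BTU value, while a simple energy-weighted estimate for E85 comes out near 72%, somewhat below the observed 76% mileage figure, since real-world mileage does not track heating value exactly:

```python
ETHANOL_TO_GASOLINE_ENERGY_RATIO = 2 / 3  # per-gallon heating-value ratio

def blend_energy_fraction(ethanol_fraction):
    """Heating value of an ethanol-gasoline blend relative to pure gasoline."""
    gasoline_fraction = 1 - ethanol_fraction
    return gasoline_fraction + ethanol_fraction * ETHANOL_TO_GASOLINE_ENERGY_RATIO

print(f"E10: {blend_energy_fraction(0.10):.1%}")  # ~96.7%
print(f"E85: {blend_energy_fraction(0.85):.1%}")  # ~71.7%
```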
See also
Engine efficiency
Thermal efficiency
Potential energy
Work (thermodynamics)
Work (physics)
Diesel cycle engines
Efficiency
Friction
Kilowatt hour
References
Units of energy
Equivalent units | Gasoline gallon equivalent | [
"Mathematics"
] | 1,748 | [
"Equivalent quantities",
"Quantity",
"Units of energy",
"Equivalent units",
"Units of measurement"
] |
2,224,896 | https://en.wikipedia.org/wiki/Archibald%20Howie | Archibald "Archie" Howie (born 8 March 1934) is a British physicist and Emeritus Professor at the University of Cambridge, known for his pioneering work on the interpretation of transmission electron microscope images of crystals. Born in 1934, he attended Kirkcaldy High School and the University of Edinburgh. He received his PhD from the University of Cambridge, where he subsequently took up a permanent post. He has been a fellow of Churchill College since its foundation, and was President of its Senior Combination Room (SCR) until 2010.
In 1965, with Hirsch, Whelan, Pashley and Nicholson, he published the seminal text Electron Microscopy of Thin Crystals. He was elected to the Royal Society in 1978 and awarded their Royal Medal in 1999. In 1992 he was awarded the Guthrie Medal and Prize. He was elected an Honorary Fellow of the Royal Society of Edinburgh in 1995. He was head of the Cavendish Laboratory from 1989 to 1997.
References
1934 births
Living people
British physicists
British materials scientists
Alumni of the University of Edinburgh
Fellows of the Royal Society
Commanders of the Order of the British Empire
Fellows of Churchill College, Cambridge
Microscopists
Royal Medal winners
Alumni of Trinity College, Cambridge
Fellows of the Royal Microscopical Society
Presidents of the International Federation of Societies for Microscopy
Scientists of the Cavendish Laboratory
Presidents of the Cambridge Philosophical Society | Archibald Howie | [
"Chemistry"
] | 263 | [
"Microscopists",
"Microscopy"
] |
2,225,052 | https://en.wikipedia.org/wiki/International%20Society%20for%20the%20Interdisciplinary%20Study%20of%20Symmetry | The International Symmetry Society ("International Society for the Interdisciplinary Study of Symmetry"; abbreviated name SIS) is an international non-governmental, non-profit organization registered in Hungary (Budapest, Tisza u. 7, H-1029).
Its main objectives are:
to bring together artists and scientists, educators and students devoted to, or interested in, the research and understanding of the concept and application of symmetry (asymmetry, dissymmetry);
to provide regular information to the general public about events in symmetry studies;
to ensure a regular forum (including the organization of symposia and the publication of a periodical) for all those interested in symmetry studies.
The topic was first introduced by Russian and Polish scholars. Then in 1952, Hermann Weyl published his influential book Symmetry, which was later translated into ten languages. Since then, symmetry has become an attractive subject of research in many fields. A variety of manifestations of the principle of symmetry has been revealed in sculpture, painting, architecture, ornament, and design, as well as in organic and inorganic nature; the philosophical and mathematical significance of this principle has also been studied.
During the 1980s, the discussions concerning the nature of the world, whether it was essentially probabilistic or naturally geometric, revived the interest of the researchers in the topic. The intellectual atmosphere of this period facilitated the idea of the establishment of a new institution devoted to the study of all forms of complexity and patterns of symmetry and orderly structures pervading science, nature and society, which ultimately led to the establishment of the International Society for the Interdisciplinary Study of Symmetry.
The Society's community spans several branches of science and art, and symmetry studies have gained the rank of an individual interdisciplinary field in the judgement of the scientific community. The Society has members in over 40 countries on all continents.
The Society was founded in 1989 following a successful international meeting in Budapest.
It has operated continuously since its foundation, publishing printed and web journals and hosting an International Congress and Exhibition entitled Symmetry: Art and Science every three years:
1989 in Budapest, Hungary
1992 in Hiroshima, Japan
1995 in Washington DC, US
1998 in Haifa, Israel
2001 in Sydney, Australia
2004 in Tihany, Hungary
2007 in Buenos Aires, Argentina
2010 in Gmünd, Austria
2013 in Crete, Greece
2016 in Adelaide, Australia
2019 in Kanazawa, Japan
2022 in Porto, Portugal
Interim full conferences have been held in:
Tsukuba Science City (co-organized with Katachi no kagaku kai, Japan), 1994 and 1998
Brussels (2002)
Lviv [Lemberg] (2008)
Kraków and Wrocław (2008).
A new series of conferences under the general heading Logics of Image was launched in 2013 and is planned to take place every two years. This series is co-organised with the Research Group on Universal Logic:
ISSC 2016: Logics of Image - Visualization, Iconicity, Imagination and Human Creativity, in Santorini, Greece
ISSC 2018: Logics of Image - Visual Learning, Logic and Philosophy of Form in East and West, in Crete, Greece
The President of the International Society for the Interdisciplinary Study of Symmetry is Dénes Nagy.
The Society is governed by a number of special Boards and Committees.
The International Advisory Board consists of:
Rima Ajlouni (United States of America)
Alireza Behnejad (UK),
Oleh Bodnar (Ukraine),
Beth Cardier (Australia),
Liu Dun (China),
Shozo Isihara (Japan),
Ritsuko Izuhara (Japan),
Eugene Katz (Israel),
Patricia Muñoz (Argentina, representing SEMA),
Janusz Rębielak (Poland),
Vera Viana (Portugal),
Dmitry Weise (Russia).
Among the Honorary Members of the Society are:
Carol Bier (USA)
Jürgen Bokowski (Germany)
Michael Burt (Israel)
Donald Crowe (United States of America)
Istvan Hargittai (Hungary)
William Huff (United States of America)
Peter Klein (Germany)
Koryo Miura (Japan)
Tohru Ogawa (Japan)
Werner Schulze (Austria)
Caspar Schwabe (Switzerland)
Dan Shechtman (Israel)
Ryuji Takaki (Japan)
Honorary Members of the Society (deceased)
Johann Jakob Burckhardt (Switzerland)
Harold S. M. Coxeter (Canada)
Victor A. Frank-Kamenetsky (Russia)
Heinrich Heesch (Germany)
Kodi Husimi (Japan)
Michael Longuet-Higgins (UK and United States of America)
Yuval Ne’eman (Israel)
Ilarion I. Shafranovskii (Russia)
Cyril Smith (United States of America)
Eugene P. Wigner (United States of America)
External links
Home Page
Facebook site
References
Organizations established in 1989
Symmetry | International Society for the Interdisciplinary Study of Symmetry | [
"Physics",
"Mathematics"
] | 1,004 | [
"Geometry",
"Symmetry"
] |
2,225,215 | https://en.wikipedia.org/wiki/Phytolith | Phytoliths (from Greek, "plant stone") are rigid, microscopic mineral deposits found in some plant tissues, often persisting after the decay of the plant. Although some use "phytolith" to refer to all mineral secretions by plants, it more commonly refers to siliceous plant remains. Phytoliths come in varying shapes and sizes. The plants which exhibit them take up dissolved silica from the groundwater, whereupon it is deposited within different intracellular and extracellular structures of the plant.
The silica is absorbed in the form of monosilicic acid (Si(OH)4), and is carried by the plant's vascular system to the cell walls, cell lumen, and intercellular spaces. Depending on the plant taxa and soil condition, absorbed silica can range from 0.1% to 10% of the plant's total dry weight. When deposited, the silica replicates the structure of the cells, providing structural support to the plant. Phytoliths strengthen the plant against abiotic stressors such as salt runoff, metal toxicity, and extreme temperatures. Phytoliths can also protect the plant against biotic threats such as insects and fungal diseases.
Functions
There is still debate in the scientific community as to why plants form phytoliths, and whether silica should be considered an essential nutrient for plants. Studies that have grown plants in silica-free environments have typically found that plants lacking silica in the environment do not grow well. For example, the stems of certain plants will collapse when grown in soil lacking silica. In many cases, phytoliths appear to lend structure and support to the plant, much like the spicules in sponges and leather corals. Phytoliths may also provide plants with protection. These rigid silica structures help to make plants more difficult to consume and digest, lending the plant's tissues a grainy or prickly texture. Phytoliths also appear to provide physiologic benefits. Experimental studies have shown that the silicon dioxide in phytoliths may help to alleviate the damaging effects of toxic heavy metals, such as aluminum.
Finally, calcium oxalates serve as a reserve of carbon dioxide in alarm photosynthesis. Cacti use these as a reserve for photosynthesis during the day, when they close their pores to avoid water loss; baobabs use this property to make their trunks more flame-resistant.
History of phytolith research
According to Dolores Piperno, an expert in the field of phytolith analysis, there have been four important stages of phytolith research throughout history.
Discovery and exploratory stage (1835–1895): The first report on phytoliths was published by a German botanist in 1835. During this time, the German scientist Christian Gottfried Ehrenberg was one of the leaders in the field of phytolith analysis. He developed the first classification system for phytoliths, and analyzed soil samples that were sent to him from all around the world. Most notably, Ehrenberg recorded phytoliths in samples he received from the naturalist Charles Darwin, who had collected dust from the sails of his ship, HMS Beagle, off the coast of the Cape Verde Islands.
Botanical phase of research (1895–1936): Phytolith structures in plants gained wide recognition and attention throughout Europe. Research on production, taxonomy and morphology exploded. Detailed notes and drawings on plant families that produce silica structures and morphology within families were published.
Period of ecological research (1955–1975): First applications of phytolith analysis to paleoecological work, mostly in Australia, the United States, the United Kingdom, and Russia. Classification systems for differentiation within plant families became popular.
Modern period of archaeological and paleoenvironmental research (1978–present): Archaeobotanists working in the Americas first consider and analyze phytolith assemblages in order to track prehistoric plant use and domestication. Also for the first time, phytolith data from pottery are used to track history of clay procurement and pottery manufacture. Around the same time, phytolith data are also used as a means of vegetation reconstruction among paleoecologists. A much larger reference collection on phytolith morphology within varying plant families is assembled.
Development in plants
Soluble silica, also called monosilicic or orthosilicic acid with a chemical formula of (Si(OH)4), is taken up from the soil when plant roots absorb groundwater. From there, it is carried to other plant organs by the xylem. By an unknown mechanism, which appears to be linked to genetics and metabolism, some of the silica is then laid down in the plant as silicon dioxide. This biological mechanism does not appear to be limited to specific plant structures, as some plants have been found with silica in their reproductive and sub-surface organs.
Chemical and physical characteristics
Phytoliths are composed mainly of noncrystalline silicon dioxide, and about 4% to 9% of their mass is water. Carbon, nitrogen, and other major nutrient elements comprise less than 5%, and commonly less than 1%, of phytolith material by mass. These elements are present in the living cells in which the silica concretions form, so traces are retained in the phytoliths. Such immobilised elements, in particular carbon, are valuable in that they permit radiometric dating in reconstructing past vegetation patterns.
The silica in phytoliths has a refractive index ranging from 1.41 to 1.47, and a specific gravity from 1.5 to 2.3. Phytoliths may be colorless, light brown, or opaque; most are transparent. Phytoliths exist in various three-dimensional shapes, some of which are specific to plant families, genera or species.
Single cell and conjoined phytoliths
Phytoliths may form within single cells, or multiple cells within a plant, to form 'conjoined' or multi-cell phytoliths, which are three-dimensional replicas of sections of plant tissue. Conjoined phytoliths occur when conditions are particularly favourable for phytolith formation, such as on a silica-rich substrate with high water availability.
Pathogenic stress on phytolith formation
Silica is not considered an essential nutrient for plants such as nitrogen or phosphorus. However, silica-aided phytoliths can help a plant be more resilient against biotic and abiotic stressors. Silica is bioactive, meaning it is able to change the expression of certain plant genes to jumpstart a defensive response against these stressors. In terms of fungal infections, the deposition of silica has been shown to create a physical barrier between invading fungi and the plant. Some factors however can have very damaging effects on the plant and limit or alter phytolith production.
In 2009, researchers at the Rock Springs Agricultural Experiment Station at The Pennsylvania State University investigated the effects of pathogenic viruses on phytolith production in Cucurbita pepo var. Texana. The plants that were affected by either mosaic virus (carried by aphids) or bacterial wilt disease (carried by cucumber beetles) were infected on their own to replicate natural conditions and all plants were grouped into three categories: healthy plants sprayed to prevent insect herbivory, plants infected with mosaic disease, and plants infected with bacterial wilt disease.
Analysis after harvest yielded 1,072 phytoliths from forty-five plants. Plants affected by mosaic disease experienced a decrease in phytolith size. This is because the virus constricts overall plant growth and therefore phytolith growth as well. Contrastingly, plants affected with bacterial wilt disease resulted in much larger phytoliths but they were abnormally shaped. This could be due to the bacteria causing constriction of the hypodermal cells, causing an influx of silica deposits.
Patterns of phytolith production
Because identification of phytoliths is based on morphology, it is important to note taxonomical differences in phytolith production.
Families with high phytolith production; family and genus-specific phytolith morphology is common:
Acanthaceae, Aceraceae, Annonaceae, Arecaceae, Asteraceae, Boraginaceae, Bromeliaceae, Burseraceae, Chrysobalanaceae, Commelinaceae, Costaceae, Cucurbitaceae, Cyatheaceae, Cyperaceae, Dilleniaceae, Equisetaceae, Heliconiaceae, Hymenophyllaceae, Magnoliaceae, Marantaceae, Moraceae, Musaceae, Orchidaceae, Poaceae, Podostemaceae, Selaginellaceae, Ulmaceae, Urticaceae, Zingiberaceae
Families where phytolith production may not be high; family and genus-specific phytolith morphology is common:
Capparaceae, Cupressaceae, Dipterocarpaceae, Euphorbiaceae, Fagaceae, Flacourtiaceae, Flagellariaceae, Joinvilleaceae, Pinaceae, Polypodiaceae, Restionaceae, Taxaceae, Taxodiaceae
Families where phytolith production is common; family and genus-specific phytolith morphology is uncommon:
Aristolochiaceae, Chloranthaceae, Combretaceae, Hernandiaceae, Loranthaceae, Menispermaceae, Piperaceae, Sapotaceae, Verbenaceae
Families where phytolith productions varies; family and genus-specific phytolith morphology is uncommon:
Clusiaceae, Fabaceae, Malvaceae, Sterculiaceae
Families where phytolith production is rare or not observed:
Agavaceae, Alismataceae, Amaranthaceae, Amaryllidaceae, Apiaceae, Apocynaceae, Araceae, Araliaceae, Araucariaceae, Asclepiadaceae, Bignoniaceae, Bixaceae, Bombacaceae, Burmanniaceae, Cactaceae, Campanulaceae, Caricaceae, Cartonemataceae, Chenopodiaceae, Convolvulaceae, Cycadaceae, Cyclanthaceae, Dioscoreaceae, Ericaceae, Eriocaulaceae, Gnetaceae, Guttiferae, Hydrocharitaceae, Iridaceae, Juglandaceae, Juncaceae, Labiatae, Lacistemnaceae, Lauraceae, Lecythidaceae, Lentibulariaceae, Liliaceae, Loganiaceae, Malpighiaceae, Mayacaceae, Melastomataceae, Meliaceae, Myristicaceae, Myrtaceae, Myrsinaceae, Nymphaeaceae, Olacaceae, Oxalidaceae, Pedaliaceae, Podocarpaceae, Polygonaceae, Pontederiaceae, Potamogetonaceae, Primulaceae, Proteaceae, Ranunculaceae, Rhamnaceae, Rosaceae, Rubiaceae, Rutaceae, Salicaceae, Sapindaceae, Saxifragaceae, Smilacaceae, Solanaceae, Theaceae, Tiliaceae, Trioridaceae, Typhaceae, Vitaceae, Violaceae, Winteraceae, Xyridaceae, Zygophyllaceae
Archaeology
Phytoliths are very robust, and are useful in archaeology because they can help to reconstruct the plants present at a site when the rest of the plant parts have been burned up or dissolved. Because they are made of the inorganic substances silica or calcium oxalate, phytoliths don't decay with the rest of the plant and can survive in conditions that would destroy organic residues. Phytoliths can provide evidence of both economically important plants and those that are indicative of the environment at a particular time period.
Phytoliths may be extracted from residue on many sources: dental calculus (buildup on teeth); food preparation tools like rocks, grinders, and scrapers; cooking or storage containers; ritual offerings; and garden areas.
Sampling strategies
Cultural contexts: The most important consideration when designing a sampling strategy for a cultural context is to fit the sampling design to the research objectives. For example, if the objective of the study is to identify activity areas, it may be ideal to sample using a grid system. If the objective is to identify foodstuffs, it may be more beneficial to focus on areas where food processing and consumption took place. It is beneficial to sample ubiquitously throughout the site, since a smaller portion of the samples can later be selected for analysis from a larger collection. Samples should be collected and labeled in individual plastic bags. It is not necessary to freeze the samples or treat them in any special way, because silica is not subject to decay by microorganisms.
Natural contexts: Sampling a natural context, typically for the purpose of environmental reconstruction, should be done in a context that is free of disturbances. Human activity can alter the makeup of samples of local vegetation, so sites with evidence of human occupation should be avoided. Bottom deposits of lakes are usually a good context for phytolith samples, because wind often will carry phytoliths from the topsoil and deposit them on water, where they will sink to the bottom, very similar to pollen. It is also possible and desirable to take vertical samples of phytolith data, as it can be a good indicator of changing frequencies of taxa over time.
Modern surfaces: Sampling modern surfaces for use with archeobotanical data may be used to create a reference collection, if the taxa being sampled are known. It may also serve to "detect downward movement of phytoliths into archaeological strata". Taking point samples for modern contexts is ideal.
Laboratory analysis
The first step in extracting phytoliths from the soil matrix involves removing all non-soil and non-sediment material. This can include stone or bone tools, teeth, or other prehistoric artifacts. Clay has a strong ability to hold onto phytoliths and must also be removed, using a centrifuge technique. Once the sample contains only soil and sediment components, phytoliths can be separated through a variety of techniques. Pressurized microwave extraction is a fast method but does not produce results as pure as other methods. Dry ashing tends to break up phytoliths more than wet ashing does. Ethanol can also be added to the sample and ignited, leaving only the phytoliths behind.
One of the most effective methods of phytolith isolation is heavy liquid flotation. Over time, different liquids have been utilized as technology changes, each still carrying different advantages and disadvantages to the separation process. Current liquids used include zinc bromide, hydrochloric acid, or sodium polytungstate which are added to the sample. After flotation occurs, the separated phytoliths and liquid are moved to another container where water is added. This lowers the solution's density, causing the phytoliths to sink to the bottom of the container. The phytoliths are removed and rinsed several times to ensure all of the flotation solvent has been removed and they are placed in storage. Phytoliths can either be stored in a dry setting or in ethanol to prevent abrasion.
When examining the sample, polarized light microscopy, simple light microscopy, phase contrast microscopy, or scanning electron microscopy can be used. The sample should be placed in a mounting media on the slide which can be Canada Balsam, Benzyl Benzoate, silicon oil, glycerin, or water. The target phytolith count is dependent on the objectives, research design, and conditions of the archaeological site from which they were obtained. However, a count of two hundred phytoliths are recommended as a good starting point. If the conditions warrant, more should be counted. It is still not possible to isolate plant DNA from extracted phytoliths.
Burned phytoliths
When looking at a phytolith through a microscope lens, it will usually appear clear against the microscope's light. However phytoliths dark in color are found in the archeological record; these phytoliths display evidence of fire exposure. Gradation of darkness can be used to calculate past environmental fires. Darker phytoliths are correlated with higher carbon residue and fires with higher temperatures which can be measured on the Burnt Phytolith Index (BPI). Burned phytoliths can also appear melted in addition to darkened color.
Fires which cause burned phytoliths can be ignited by anthropogenic or non-anthropogenic sources and can be determined through charcoal and burned phytolith analysis. It is believed that during prehistoric times, an increase in intensive land use such as through agriculture, caused an increase in anthropogenic fires while non-anthropogenic fires could have resulted from lightning strikes. Fire intensity depends on available biomass which usually peaks in the dry, fall season.
Contribution to archaeobotanical knowledge
Phytolith analysis is particularly useful in tropical regions, where other types of plant remains are typically not well preserved.
Phytolith analysis has been used to retrace the domestication and ancestral lineage of various plants. For example, research tracing modern lineages of maize in South America and the American Southwest using phytolith remains on ceramics and pottery has proven to be enlightening. Recent genetic data suggests that the oldest ancestor of Zea mays is teosinte, a wild grass found in southwest Mexico. The Zea mays lineage split off from this grass about six to seven thousand years ago. Phytolith analyses from Bolivia suggest that several varieties of maize were present in the Lake Titicaca region of Bolivia almost 1000 years before the Tiwanaku expansion, when it was previously thought to have been introduced in the region. This case is not isolated. Around the same time, certain varieties of maize could be found with ubiquity across part of South America, suggesting a highly frequented and established trade route existed. Phytolith data from the southeastern United States suggest that two different lineages of maize were introduced from two different sources. Research that hopes to discover more specific information about the spread of maize throughout the southeastern United States is currently under way.
To date, phytolith analyses have also been popular for studies of rice. Because the morphology of rice phytoliths has been significantly documented, studies concerning the domestication of rice, as well as crop processing models using phytolith analyses, are insightful. In one study, phytolith analysis was used to complement macro-remains sampling in order to infer concentrations of plant parts and predict crop processing stages.
Phytolith analysis has been useful in identifying early agriculture in South East Asia during the Early Holocene.
Tracing the history of plant-human interactions
Jigsaw-puzzle-shaped phytoliths observed from sites in Greece but not from Israel may reflect climatic differences, possibly related to irrigation performed for legume plant management.
Cucurbita (squash and gourd) phytolith data from early Holocene sites in Ecuador indicate that the plant food production occurred across lowland South America independent from Mesoamerica.
Problems with phytolith analysis of remains
Multiplicity: different parts of a single plant may produce different phytoliths.
Redundancy: different plants can produce the same kind of phytolith.
Some plants produce large numbers of phytoliths while others produce only few.
Taxonomic resolution issues deriving from the multiplicity and redundancy problems can be dealt with by integrating phytolith analysis with other areas, such as micromorphology and morphometric approaches used in soil analysis.
It is suggested that using phytolith data from food residues (on ceramics, usually) can decrease the bias from both of these problems, because phytolith analysis is more likely to represent crop products and identification of phytoliths can be made with more confidence. Also, food residues do not usually accumulate extraneous deposits. In other words, the samples are more likely to represent a primary context.
Palaeontology and paleoenvironmental reconstructions
Phytoliths occur abundantly in the fossil record, and have been reported from the Late Devonian onwards. The robustness of phytoliths allows them to be found in various remains, including sedimentary deposits, coprolites, and dental calculus, from diverse environmental conditions. In addition to reconstructing human-plant interactions since the Pleistocene, phytoliths can be used to identify palaeoenvironments and to track vegetational change. More and more studies are acknowledging phytolith records as a valuable tool for reconstructing pre-Quaternary vegetation changes. Occasionally, paleontologists find and identify phytoliths associated with extinct plant-eating animals (i.e. herbivores). Findings such as these reveal useful information about the diet of these extinct animals, and also shed light on the evolutionary history of many different types of plants. Paleontologists in India have recently identified grass phytoliths in dinosaur dung (coprolites), strongly suggesting that the evolution of grasses began earlier than previously thought.
Phytolith records in the context of the global silica cycle, along with CO2 concentrations and other paleoclimatological records, can help constrain estimates of certain long-term terrestrial, biogeochemical cycles and interrelated climate changes.
Light intensity (e.g., open versus closed canopies) can affect cell morphology, especially cell length and area, which can be measured from phytolith fossils. These can be useful for tracing fluctuations in the ancient light regime and canopy cover.
Freshwater oases and related landscape changes that could have affected plant-human interactions were reconstructed through synthesizing phytolith, pollen, and paleoenvironmental data in the well-known early hominin site of Olduvai Gorge in Tanzania.
Comparisons between paleorecords of phytolith remains and modern reference remains in the same region can aid reconstructing how plant composition and related environments changed over time.
Though further testing is required, the evolution and development of phytoliths in vascular plants seem to be related to certain types of plant-animal interactions, in which phytoliths function as a defensive mechanism against herbivores, or to adaptive changes to habitats.
Japanese and Korean archaeologists refer to grass and crop plant phytoliths as "plant opal" in archaeological literature.
Gallery
For extended examples of phytolith taxonomy, see the University of Sheffield's comprehensive Phytolith Interpretation page.
Carbon sequestration
Research, particularly since 2005, has shown that carbon in phytoliths can be resistant to decomposition for millennia and can accumulate in soils. While researchers had previously known that phytoliths could persist in some soils for thousands of years and that there was carbon occluded within phytoliths that could be used for radiocarbon dating, research into the capacity of phytoliths as a method of storing carbon in soils was pioneered by Parr and Sullivan, who suggested that there was a real opportunity to sequester carbon securely in soils for the long term, in the form of carbon inclusions in durable silica phytoliths.
During the mineralization process which creates the phytolith, many different nutrients are absorbed from the soil, including carbon, which forms phytolith-occluded carbon (PhytOC). Phytoliths are able to hold PhytOC in the soil for thousands of years, much longer than other organic stores. This makes phytoliths an important area of study for carbon sequestration, but not all plant species produce comparable results. For example, phytoliths derived from oats can hold 5.0% to 5.8% carbon, while sugarcane phytoliths can yield 3.88% to 19.26% carbon. Different species and subspecies hold different carbon storage potential within the silica rather than within the plant itself. Therefore, total PhytOC sequestration largely depends on the condition of the biome, such as grassland, forest, or cropland, and is influenced by climate and soil conditions. Proper upkeep of these ecosystems can boost biomass production and therefore silica and carbon uptake. Proper conservation methods could include controlled grazing or fires.
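The rate at which PhytOC accumulates is often estimated by combining biomass production with the phytolith content of that biomass and the carbon fraction occluded in the silica. A minimal sketch of that bookkeeping follows; the function name and all input values are purely illustrative assumptions, not measurements from any particular study:

```python
def phytoc_flux(biomass_t_ha_yr, phytolith_fraction, carbon_fraction):
    """Annual PhytOC sequestration in tonnes of carbon per hectare.

    biomass_t_ha_yr    -- dry biomass produced per hectare per year (t)
    phytolith_fraction -- fraction of dry biomass that is phytolith silica
    carbon_fraction    -- fraction of phytolith mass that is occluded carbon
    """
    return biomass_t_ha_yr * phytolith_fraction * carbon_fraction

# Illustrative numbers only: 10 t/ha/yr of dry biomass, 3% phytolith
# content, and a 4% occluded-carbon fraction (within the per-species
# ranges quoted above).
flux = phytoc_flux(10.0, 0.03, 0.04)  # 0.012 t C/ha/yr, i.e. 12 kg C/ha/yr
```

The point of the sketch is that the species-dependent carbon fraction enters multiplicatively, which is why the per-species differences quoted above matter so much for total sequestration estimates.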
While carbon sequestration is a potentially important way to limit atmospheric greenhouse gas concentrations in the long term, the use of phytoliths to achieve this must be balanced against other uses that might be made of the same biomass carbon (or land for producing biomass) to reduce Greenhouse gas emissions (GHG) by other means including, for example, the production of bioenergy to offset fossil fuel emissions. If enhanced phytolith production results in a reduced availability of biomass for other GHG mitigation strategies, its effectiveness for lowering net GHG emissions may be reduced or negated.
See also
Biomineralization
Druse (botany) – crystals of calcium oxalate, silicates, or carbonates present in plants
Raphide – elongate calcium oxalate crystals in plants
References
Bibliography
Thorn, V. C. 2004. An annotated bibliography of phytolith analysis and atlas of selected New Zealand subantarctic and subalpine phytoliths.
Kealhofer, L. 1998. Opal phytoliths in Southeast Asian flora.
External links
What is the phytolith?
Ecological significance of phytoliths
Background from St. Cloud laboratory
Association of Environmental Archaeology
Russian Scientific Association for Phytolith Research
Steve Archer, "About Phytoliths": https://web.archive.org/web/20070506230653/http://research.history.org/Archaeological_Research/Collections/CollArchaeoBot/PhytoFAQs.cfm .
Terry B. Ball, "Phytolith Literature Review": http://www.ou.edu/cas/botany-micro/ben/ben282.html .
Dr. Sanjay Eksambekar's 'Phytolith Research Institute': http://www.phytolithresearch.com
Deborah Pearsall's MU Phytolith Database, https://web.archive.org/web/20070422163808/http://web.missouri.edu/~umcasphyto/index.shtml
"What are Phytoliths?" Sandstone Archaeology Paleoethnobotany Laboratory https://web.archive.org/web/20080820003629/http://www.sandstonearchaeology.com/paleoethnobotany.html
Neumann, Chevalier, and Vrydaghs, "Phytoliths in archaeology: recent advances": https://link.springer.com/article/10.1007/s00334-016-0598-3
"Grass-opal phytoliths as climatic indicators of the Great Plains Pleistocene": http://www.kgs.ku.edu/Publications/Bulletins/GB5/Twiss/index.html
Huang et al., "Intensive Management Increases Phytolith-Occluded Carbon Sequestration in Moso Bamboo Plantations in Subtropical China": https://www.mdpi.com/1999-4907/10/10/883/htm
Plant morphology
Plant anatomy
Plant physiology | Phytolith | [
"Biology"
] | 5,768 | [
"Plant physiology",
"Plant morphology",
"Plants"
] |
2,225,308 | https://en.wikipedia.org/wiki/Alpha%20Crateris | Alpha Crateris (α Crateris, abbreviated Alpha Crt, α Crt), officially named Alkes , is a star in the constellation of Crater. It is a cool giant star about away.
Nomenclature
α Crateris (Latinised to Alpha Crateris) is the star's Bayer designation.
It bore the traditional name Alkes, from the Arabic الكاس alkās or الكأس alka's "the cup". In the catalogue of stars in the Calendarium of Al Achsasi al Mouakket, this star was designated Aoul al Batjna (أول ألباطیة awwal al-bāṭiya), which was translated into Latin as Prima Crateris, meaning "first [star] of the Cup". In 2016, the International Astronomical Union organized a Working Group on Star Names (WGSN) to catalogue and standardize proper names for stars. The WGSN approved the name Alkes for this star on 12 September 2016 and it is now so included in the List of IAU-approved Star Names.
In Chinese, (), meaning Wings (asterism), refers to an asterism consisting of Alpha Crateris, Gamma Crateris, Zeta Crateris, Lambda Crateris, Nu Hydrae, Eta Crateris, Delta Crateris, Iota Crateris, Kappa Crateris, Epsilon Crateris, HD 95808, HD 93833, Theta Crateris, HD 102574, HD 100219, Beta Crateris, HD 99922, HD 100307, HD 96819, Chi1 Hydrae, HD 102620 and HD 103462. Consequently, Alpha Crateris itself is known as (, ).
Namesake
was a United States Navy named after the star.
Properties
Alpha Crateris is an orange giant of spectral type K1III. It has an apparent magnitude of 4.07, and is 174 light-years from Earth. It is thought to be a horizontal branch star, meaning it is fusing helium in its core after a helium flash. Cool horizontal branch stars are often called red clump giants as they form a noticeable grouping near the hot edge of the red giant branch in the H–R diagrams of clusters with near-solar metallicity. On this basis it is calculated to have a mass of , a luminosity of , and an age around two billion years. Its surface temperature is 4691 K. Or it might be a red-giant branch star, still fusing hydrogen in a shell around an inert helium core, in which case it would be slightly less massive, older, cooler, larger, and more luminous.
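The apparent magnitude and distance quoted above fix the star's absolute magnitude through the distance modulus, m − M = 5 log10(d / 10 pc). A quick sketch of that arithmetic (extinction ignored; the helper name is ours):

```python
import math

LY_PER_PARSEC = 3.26156  # light-years per parsec

def absolute_magnitude(apparent_mag, distance_ly):
    """Absolute magnitude from the distance modulus m - M = 5*log10(d/10 pc)."""
    d_pc = distance_ly / LY_PER_PARSEC
    return apparent_mag - 5.0 * math.log10(d_pc / 10.0)

# m = 4.07 and d = 174 ly from the text above.
M = absolute_magnitude(4.07, 174.0)  # ≈ +0.43
```

The implied absolute magnitude of roughly +0.4 is several magnitudes brighter than a Sun-like dwarf, consistent with the giant classification.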
References
Crater (constellation)
Crateris, Alpha
Crateris, 07
095272
053740
K-type giants
Alkes
4287
Durchmusterung objects
Horizontal-branch stars | Alpha Crateris | [
"Astronomy"
] | 590 | [
"Crater (constellation)",
"Constellations"
] |
2,225,318 | https://en.wikipedia.org/wiki/Omicron%20Persei | Omicron Persei (ο Persei, abbreviated Omicron Per, ο Per) is a triple star system in the constellation of Perseus. From parallax measurements taken during the Hipparcos mission it is approximately 1,100 light-years (330 parsecs) from the Sun.
The system consists of a spectroscopic binary pair designated Omicron Persei A and a third companion Omicron Persei B. A's two components are themselves designated Omicron Persei Aa (officially named Atik , the traditional name of the system) and Ab.
Etymology
ο Persei (Latinised to Omicron Persei) is the system's Bayer designation. The designations of the two constituents as Omicron Persei A and B, and those of A's components - Omicron Persei Aa and Ab - derive from the convention used by the Washington Multiplicity Catalog (WMC) for multiple star systems, and adopted by the International Astronomical Union (IAU).
It bore the traditional name Atik (also Ati, Al Atik), Arabic for "the shoulder". Some sources attribute the name Atik to the nearby, brighter star Zeta Persei. In 2016, the International Astronomical Union organized a Working Group on Star Names (WGSN) to catalogue and standardize proper names for stars. The WGSN decided to attribute proper names to individual stars rather than entire multiple systems. It approved the name Atik for the component Omicron Persei A on 12 September 2016 and it is now so included in the List of IAU-approved Star Names.
In Chinese, (), meaning Rolled Tongue, refers to an asterism consisting of Omicron Persei, Nu Persei, Epsilon Persei, Xi Persei, Zeta Persei and 40 Persei. Consequently, the Chinese name for Omicron Persei itself is (), "the Fifth Star of Rolled Tongue".
Properties
Omicron Persei A is a spectroscopic binary consisting of a spectral type B1 giant and a type B2 dwarf orbiting each other every 4.4 days. The orbit is near-circular although its inclination is not precisely known. The two stars are separated by approximately , the exact value depending on the inclination. The primary is approximately one magnitude brighter than the secondary at visual wavelengths. The binary pair forms a rotating ellipsoidal variable star, which varies in brightness from visual magnitude 3.79 to 3.88 during the orbital period.
Omicron Persei lies just north of the open cluster IC 348, but is not catalogued as a member. Both IC 348 and Omicron Persei belong to the Perseus OB2 association.
Culture
In the TV series Futurama, the fictional planet Omicron Persei 8 is home to medicinal plants and reptilian extraterrestrials who often attack Earth.
The USS Atik, named for Omicron Persei Aa, was a ship of the United States Navy.
References
External links
Atik
Perseus (constellation)
Persei, Omicron
B-type giants
B-type main-sequence stars
Spectroscopic binaries
Atik
Persei, 38
1131
023180
017448
Durchmusterung objects | Omicron Persei | [
"Astronomy"
] | 673 | [
"Perseus (constellation)",
"Constellations"
] |
2,225,328 | https://en.wikipedia.org/wiki/Sigma%20Scorpii | Sigma Scorpii (or σ Scorpii, abbreviated Sigma Sco or σ Sco), is a multiple star system in the constellation of Scorpius, located near the red supergiant Antares, which outshines it. This system has a combined apparent visual magnitude of +2.88, making it one of the brighter members of the constellation. Based upon parallax measurements made during the Hipparcos mission, the distance to Sigma Scorpii is roughly 696 light-years (214 parsecs). North et al. (2007) computed a more accurate estimate of light years ( parsecs).
The system consists of a spectroscopic binary with components designated Sigma Scorpii Aa1 (officially named Alniyat , the traditional name for the entire star system, and a Beta Cephei variable) and Aa2; a third component (designated Sigma Scorpii Ab) at 0.4 arcseconds from the spectroscopic pair; and a fourth component (Sigma Scorpii B) at about 20 arcseconds.
Nomenclature
σ Scorpii (Latinised to Sigma Scorpii) is the star system's Bayer designation. The designations of the four components as Sigma Scorpii Aa1, Aa2, Ab and B derive from the convention used by the Washington Multiplicity Catalog (WMC) for multiple star systems, and adopted by the International Astronomical Union (IAU).
Sigma Scorpii and Tau Scorpii together bore the traditional name Al Niyat (or Alniyat) derived from the Arabic النياط al-niyāţ "the arteries" and referring to their position flanking the star Antares, the scorpion's heart, with Sigma Scorpii just to the north.
In 2016, the International Astronomical Union organized a Working Group on Star Names (WGSN) to catalogue and standardize proper names for stars. The WGSN decided to attribute proper names to individual stars rather than entire multiple systems. It approved the name Alniyat for the component Sigma Scorpii Aa1 on February 1, 2017 and it is now so included in the List of IAU-approved Star Names.
In Chinese, (), meaning Heart, refers to an asterism consisting of Sigma Scorpii, Antares and Tau Scorpii. Consequently, the Chinese name for Sigma Scorpii itself is (), "the First Star of Heart".
The indigenous Boorong people of northwestern Victoria in Australia saw this star and Tau Scorpii as wives of Djuit (Antares).
Properties
The brightest component of the system, Sigma Scorpii Aa, is a double-lined spectroscopic binary, which means that the pair has not been resolved using a telescope. Instead, their orbit is determined by changes in their combined spectrum caused by the Doppler shift. This indicates that the pair complete an orbit every 33.01 days and have an orbital eccentricity of 0.32.
The primary component of the spectroscopic binary, Sigma Scorpii Aa1, is an evolved giant star with a stellar classification of B1 III. It has around 18 times the mass of the Sun and 12 times the Sun's radius. This star is radiating about times the luminosity of the Sun from its outer envelope at an effective temperature of . This is a variable star of the Beta Cephei type, causing the apparent magnitude to vary between +2.86 and +2.94 with multiple periods of , , and 8.2 days. During each pulsation cycle, the temperature of the star varies by . The other member of the core pair, Sigma Scorpii Aa2, is a main sequence star with a classification of B1 V.
Orbiting this binary at a separation of half an arcsecond, or at least 120 Astronomical units (AU), four times the Sun–Neptune distance, is the magnitude +5.2 Sigma Scorpii Ab, which has an orbital period of over a hundred years. Even farther out at 20 arcseconds, or more than 4500 AU, is Sigma Scorpii B with a magnitude of +8.7. It is classified as a B9 dwarf.
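The century-scale period quoted for Sigma Scorpii Ab is consistent with Kepler's third law, which in solar units reads P² = a³ / M (P in years, a in AU, M in solar masses). The text gives a ≥ 120 AU but not a total system mass, so the ~33 M☉ used below is an assumption for illustration (the ~18 M☉ primary dominates):

```python
import math

def orbital_period_years(semimajor_axis_au, total_mass_msun):
    """Kepler's third law in solar units: P^2 = a^3 / M."""
    return math.sqrt(semimajor_axis_au ** 3 / total_mass_msun)

# a = 120 AU from the text; 33 Msun is an assumed total mass for the
# inner pair plus Ab, used only to check the order of magnitude.
P = orbital_period_years(120.0, 33.0)  # ≈ 230 yr, i.e. "over a hundred years"
```

Since 120 AU is a lower bound on the separation, the true period would only be longer, so the quoted "over a hundred years" holds for any plausible mass.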
Given its position, youth, and space velocity, the Sigma Scorpii system is a likely member of the Gould Belt, and in particular the Upper Scorpius subgroup of the Scorpius–Centaurus association (Sco OB2). Recent isochronal age estimates for the system yield ages of 8–10 million years through comparison of the HR diagram positions for the stars to modern evolutionary tracks. This agrees well with the mean age for the Upper Scorpius group which is approximately 11 million years.
References
Scorpius
Scorpii, Sigma
Scorpii, 20
Beta Cephei variables
B-type giants
B-type main-sequence stars
4
Alniyat
Upper Scorpius
080112
147165
Durchmusterung objects
6084 | Sigma Scorpii | [
"Astronomy"
] | 1,053 | [
"Scorpius",
"Constellations"
] |
2,225,351 | https://en.wikipedia.org/wiki/Epsilon%20Leonis | Epsilon Leonis (ε Leo, ε Leonis) is the fifth-brightest star in the constellation Leo, consistent with its Bayer designation Epsilon. It is known as Algenubi or Ras Elased Australis. Both names mean "the southern star of the lion's head". Australis is Latin for "southern" and Genubi is Arabic for "south".
Properties
Epsilon Leonis has a stellar classification of G1 II, with the luminosity class of II indicating that it has evolved into a bright giant. It is much larger and brighter than the Sun, with a luminosity 282 times and a radius 21 times solar. Consequently, its absolute magnitude is actually –1.49, making it one of the more luminous stars in the constellation, significantly more so than Regulus. Its apparent brightness, though, is only 2.98. Given its distance of about , the star is more than three times as far from the Sun as Regulus. At this distance, the visual magnitude of Epsilon Leonis is reduced by 0.03 as a result of extinction caused by intervening gas and dust.
Epsilon Leonis exhibits the characteristics of a Cepheid-like variable, changing by an amplitude of 0.3 magnitude every few days. It has around four times the mass of the Sun and a projected rotational velocity of . Based upon its iron abundance, the metallicity of this star's outer atmosphere is only around 52% of the Sun's. That is, the abundance of elements other than hydrogen and helium is about half that in the Sun.
See also
List of stars in Leo
Class G Stars
Variable star
References
Leo (constellation)
Leonis, Epsilon
Algenubi
Leonis, 17
Classical Cepheid variables
G-type bright giants
047908
3873
084441
Suspected variables
Durchmusterung objects | Epsilon Leonis | [
"Astronomy"
] | 382 | [
"Leo (constellation)",
"Constellations"
] |
2,225,361 | https://en.wikipedia.org/wiki/Zeta%20Leonis | Zeta Leonis (ζ Leonis, abbreviated Zeta Leo, ζ Leo), also named Adhafera , is a third-magnitude star in the constellation of Leo, the lion. It forms the second star (after Gamma Leonis) in the blade of the sickle, which is an asterism formed from the head of Leo.
Nomenclature
ζ Leonis (Latinised to Zeta Leonis) is the star's Bayer designation. It has the traditional name Adhafera (Aldhafera, Adhafara), which comes from the Arabic الضفيرة aḍ-ḍafīrah 'the braid/curl', a reference to its position in the lion's mane. In 2016, the International Astronomical Union organized a Working Group on Star Names (WGSN) to catalog and standardize proper names for stars. The WGSN's first bulletin of July 2016 included a table of the first two batches of names approved by the WGSN; which included Adhafera for this star.
Properties
Adhafera is a giant star with a stellar classification of F0 III. Since 1943, the spectrum of this star has served as one of the stable anchor points by which other stars are classified. Its apparent magnitude is +3.44, making it relatively faint for a star that is visible to the naked eye. Nevertheless, it shines with 85 times the luminosity of the Sun. Adhafera has about three times the Sun's mass and six times the radius of the Sun. Parallax measurements from the Hipparcos satellite yield an estimated distance to Adhafera of from the Sun.
Adhafera forms a double star with an optical companion that has an apparent magnitude of 5.90. Known as 35 Leonis, this star is separated from Adhafera by 325.9 arcseconds along a position angle of 340°. The two stars do not form a binary star system as 35 Leo is only 100 light years from Earth, thus separating the two stars by approximately .
References
Leo (constellation)
Leonis, Zeta
Adhafera
F-type giants
Leonis, 36
050335
Suspected variables
4031
089025
Durchmusterung objects | Zeta Leonis | [
"Astronomy"
] | 454 | [
"Leo (constellation)",
"Constellations"
] |
2,225,377 | https://en.wikipedia.org/wiki/Omicron%20Leonis | Omicron Leonis (ο Leonis, abbreviated Omicron Leo, ο Leo) is a multiple star system in the constellation of Leo, west of Regulus, some 130 light-years from the Sun, where it marks one of the lion's forepaws.
It consists of a binary pair, designated Omicron Leonis A and an optical companion, Omicron Leonis B. A's two components are themselves designated Omicron Leonis Aa (officially named Subra , the traditional name for the system) and Ab.
Nomenclature
ο Leonis (Latinised to Omicron Leonis) is the star's Bayer designation. The designations of the two constituents as Omicron Leonis A and B, and those of A's components—Omicron Leonis Aa and Ab—derive from the convention used by the Washington Multiplicity Catalog (WMC) for multiple star systems, and adopted by the International Astronomical Union (IAU).
It bore the traditional name Subra, from the Arabic زبرة zubra (upper part of the back), originally applied to Delta and Theta Leonis.
In 2016, the International Astronomical Union organized a Working Group on Star Names (WGSN) to catalogue and standardize proper names for stars. The WGSN decided to attribute proper names to individual stars rather than entire multiple systems. It approved the name Subra for the component Omicron Leonis Aa on 12 September 2016 and it is now so included in the List of IAU-approved Star Names.
Properties
The two members of the spectroscopic pair have similar brightnesses, but are very different stars: the primary is an F8–G0 III giant, while the secondary is an A7m dwarf. Their combined apparent magnitude is +3.52.
The visible companion, component B, is a much fainter star that has increased its separation from about an arc-minute to one and a half arc-minutes in the 350 years since it was first measured. It is an 11th-magnitude star a little more massive and hotter than the Sun, but much further away than the spectroscopic pair.
References
External links
Omicron Leo/Subra in Kaler Stars
Subra (HIP 47508) Relates to the A star, Subra-B
Subra - Omi Leonis brief data.
Leonis, Omicron
Binary stars
Leo (constellation)
F-type giants
A-type main-sequence stars
Subra
Leonis, 14
047508
3852
083808
Durchmusterung objects
Am stars | Omicron Leonis | [
"Astronomy"
] | 534 | [
"Leo (constellation)",
"Constellations"
] |
2,225,396 | https://en.wikipedia.org/wiki/Mu%20Leonis | Mu Leonis (μ Leonis, abbreviated Mu Leo, μ Leo), also named Rasalas , is a star in the constellation of Leo. The apparent visual magnitude of this star is 3.88, which is bright enough to be seen with the naked eye. Based upon an annual parallax shift of 0.02628 arc seconds as measured by the Hipparcos satellite, this system is from the Sun. In 2014, an exoplanet was discovered to be orbiting the star.
Nomenclature
μ Leonis (Latinised to Mu Leonis) is the star's Bayer designation.
It bore the traditional names Rasalas and Alshemali, both abbreviations of Ras al Asad al Shamaliyy. In 2016, the International Astronomical Union organized a Working Group on Star Names (WGSN) to catalogue and standardize proper names for stars. The WGSN approved the name Rasalas for this star on 12 September 2016 and it is now so included in the List of IAU-approved Star Names.
Properties
Mu Leonis is an evolved K-type red giant star with a stellar classification of . It is believed to be on the red giant branch, where it is fusing hydrogen into helium in a shell surrounding an inert helium core. The trailing notation indicates that, for a star of its type, it has stronger than normal absorption lines of cyanogen and calcium in its spectrum. It has around 1.5 times the Sun's mass and is estimated to be 5 billion years old, older than the Sun's age of 4.6 billion years. Using interferometry with the Navy Precision Optical Interferometer, its diameter was determined to be 11.8 times that of the Sun. Mu Leonis shines with 57 times the luminosity of the Sun from an outer atmosphere that has an effective temperature of 4,606 K.
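The quoted radius, temperature, and luminosity are tied together by the Stefan–Boltzmann law, which in solar units reads L/L☉ = (R/R☉)² (T/T☉)⁴. A quick consistency check with the values above, taking T☉ = 5772 K (the IAU nominal solar effective temperature):

```python
T_SUN_K = 5772.0  # IAU nominal solar effective temperature

def luminosity_solar(radius_rsun, teff_k):
    """Stefan-Boltzmann law in solar units: L = R^2 * (T / Tsun)^4."""
    return radius_rsun ** 2 * (teff_k / T_SUN_K) ** 4

# R = 11.8 Rsun and Teff = 4606 K from the text above.
L = luminosity_solar(11.8, 4606.0)  # ≈ 56, matching the quoted ~57 Lsun
```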
Planetary system
In 2014 it was announced that Mu Leonis has a planetary companion that is at least 2.4 times as massive as Jupiter and orbits with a period of 358 days. This planet was detected by measuring radial velocity variations caused by gravitational displacement from the orbiting body.
Later, in 2024, a study using astrometry from the Gaia spacecraft found a mass of , which the authors interpret as a likely upper limit, because the large RUWE value of the astrometric solution (which could be caused by a companion around the star) might be just the result of systematic calibration errors. This indicates that Mu Leonis b lies in the planetary-mass regime and is not a brown dwarf.
References
External links
K-type giants
CN stars
Planetary systems with one confirmed planet
Leo (constellation)
Leonis, Mu
Rasalas
Leonis, 24
048455
3905
085503
Durchmusterung objects | Mu Leonis | [
"Astronomy"
] | 578 | [
"Leo (constellation)",
"Constellations"
] |
2,225,410 | https://en.wikipedia.org/wiki/Lambda%20Leonis | Lambda Leonis (λ Leonis, abbreviated Lam Leo, λ Leo), formally named Alterf , is a star in the constellation of Leo. The star is bright enough to be seen with the naked eye, having an apparent visual magnitude of 4.32 Based upon an annual parallax shift of 0.00991 arcseconds, it is located about 329 light-years from the Sun. At that distance, the visual magnitude of the star is reduced by an interstellar absorption factor of 0.06 because of extinction.
Nomenclature
λ Leonis (Latinised to Lambda Leonis) is the star's Bayer designation.
It bore the traditional name Alterf, from the Arabic الطرف aṭ-ṭarf "the view (of the lion)". In 2016, the International Astronomical Union (IAU) organized a Working Group on Star Names (WGSN) to catalogue and standardize proper names for stars. The WGSN approved the name Alterf for this star on February 1, 2017 and it is now so included in the List of IAU-approved Star Names.
This star, along with Xi Cancri, formed the Persian Nahn, "the Nose", and the Coptic Piautos, "the Eye", both lunar asterisms.
Properties
This is a K-type giant star with a stellar classification of K4.5 III. It is a suspected variable star with a reported magnitude range of 4.28−4.34. Lambda Leonis is 29% more massive than the Sun and is 3.6 billion years old. The interferometry-measured angular diameter of this star, after correcting for limb darkening, is , which, at its estimated distance, equates to a physical radius of nearly 45 times the radius of the Sun. It shines with around 540 times the luminosity of the Sun, from an outer atmosphere that has an effective temperature of 4,150 K.
References
K-type giants
Leo (constellation)
Alterf
Leonis, Lambda
Leonis, 04
046750
Suspected variables
082308
3773
Durchmusterung objects | Lambda Leonis | [
"Astronomy"
] | 442 | [
"Leo (constellation)",
"Constellations"
] |
2,225,420 | https://en.wikipedia.org/wiki/Projection%20clock | A projection clock (also called ceiling clock) is an analogue or digital clock equipped with a projector that creates an enlarged image of the clock face or display on any surface usable as a projection screen, most often the ceiling.
The clock can be placed almost anywhere, provided only the projected image needs to be seen. The image generated by most projection clocks is large enough that a nearsighted person can see it from a distance without glasses or contact lenses. Clocks usually have a conventional display on their body, in addition to the projector.
Projection clocks are also used in advertising and merchandising. High-brightness analogue projection clocks can superimpose a business' logo on top of the clock face, while there are low-brightness projection clocks designed for home use that project, for example, a logo in addition to the time.
Some projection clocks are radio-controlled, synchronising with a broadcast time standard and always displaying the right time without the need to set them. They may also display other information such as temperature and humidity.
History
Projection clocks were patented at least twice: once in 1909, and another time in 1940. Both patents have expired.
Early projection clocks were universally analogue but with the widespread adoption of digital clocks, digital projection clocks became the standard.
Technology
A projection clock usually needs a backlight like an incandescent bulb or LED.
There are low-brightness and high-brightness clocks. While the projection created by low-brightness clocks can be viewed only in a darkened room, high-brightness ones can also be viewed in bright light or daylight.
Low-brightness projection clocks
Most modern projection clocks have a red LED-based projector. Additional optional features not specific to projection clocks are the inclusion of an LED or LCD display in addition to the projector, an alarm function, and synchronisation to a broadcast time standard.
High-brightness projection clocks
Modern high-brightness projection clocks are in most cases analogue and have a halogen bulb backlight. In most cases, they use a set of rotating and fixed transparent discs with hands and a face. An LCD is integrated into some clocks to combine analogue and digital information on the projected image.
Projectors used in projection clocks are similar to other projectors and use the same optical principles. They usually use lenses, although some projectors use the principle of shadow theatre, or vector or raster scanning.
References
Clocks | Projection clock | [
"Physics",
"Technology",
"Engineering"
] | 479 | [
"Physical systems",
"Machines",
"Clocks",
"Measuring instruments"
] |
2,225,423 | https://en.wikipedia.org/wiki/AMOLF | AMOLF is a research institute and part of the institutes organization of the Dutch Research Council (NWO). AMOLF carries out fundamental research on the physics and design principles of natural and man-made complex matter. AMOLF uses these insights to create novel functional materials and find new solutions to societal challenges in renewable energy, green ICT and healthcare. AMOLF is located at the Amsterdam Science Park.
AMOLF used to be part of the Dutch Foundation for Fundamental Research on Matter (FOM). On 31 December 2016 FOM integrated in NWO.
History
The institute was established in 1949 by the government as the FOM Laboratory for Mass Spectrography. In 1960, it was renamed to Laboratory for Mass Separation, and in 1966 it was reorganized into a research institute and renamed FOM Institute for Atomic and Molecular Physics (AMOLF).
The original research goal was to demonstrate the separation of uranium isotopes by electromagnetic separation methods, a topic of great strategic importance after World War II. To reach this goal, a number of novel analytical instruments were developed, starting with the development of mass-spectrometric tools. In 1953, AMOLF was the first European institute to successfully enrich uranium. Soon after, research on thermal diffusion in gases followed, as did ultracentrifuge concepts, cathode dispersion, excitation of gases by energetic ions, and research on molecular beams. The gas ultracentrifuge developed at AMOLF (under ) provided a basis for the commercial enrichment of uranium at the now well-known company URENCO in Almelo.
Structure and organization
AMOLF functions as an incubator for Dutch science, both in terms of launching new research themes and in terms of training talented scientists. AMOLF is headed by its director Huib Bakker, who succeeded on 1 February 2016. The organization has 19 research groups headed by tenured or tenure-track group leaders. AMOLF employs about 130 researchers and 70 employees for technical and administrative support.
Research
AMOLF’s research program consists of four intertwined themes.
Nanophotonics: controlling and manipulating light with structures at the nanometer scale
Nanophotovoltaics: improving solar cells with nanomaterials
Designer Matter: research and design of new smart materials
Living Matter: research of biomaterials and multi-cellular systems
AMOLF publishes each year on average 15 PhD theses and over 120 papers.
Notable researchers
Huib Bakker
Marileen Dogterom (worked at AMOLF from 1997 to 2013)
Daan Frenkel
Ad Lagendijk
Albert Polman
References
External links
AMOLF
Nanotechnology institutions
Organisations based in Amsterdam
Physics research institutes
Research institutes in the Netherlands | AMOLF | [
"Materials_science"
] | 545 | [
"Nanotechnology",
"Nanotechnology institutions"
] |
2,225,961 | https://en.wikipedia.org/wiki/Functional%20block%20diagram | A functional block diagram, in systems engineering and software engineering, is a block diagram that describes the functions and interrelationships of a system.
The functional block diagram can picture:
functions of a system pictured by blocks
input and output elements of a block pictured with lines
the relationships between the functions, and
the functional sequences and paths for matter and/or signals
The block diagram can use additional schematic symbols to show particular properties.
Since the late 1950s, functional block diagrams have been used in a wide range of applications, from systems engineering to software engineering. They became a necessity in complex systems design to "understand thoroughly from exterior design the operation of the present system and the relationship of each of the parts to the whole."
Many specific types of functional block diagrams have emerged. For example, the functional flow block diagram is a combination of the functional block diagram and the flowchart. Many software development methodologies are built with specific functional block diagram techniques. An example from the field of industrial computing is the Function Block Diagram (FBD), a graphical language for the development of software applications for programmable logic controllers.
See also
Function model
Functional flow block diagram
References
Diagrams
Systems engineering
Management cybernetics | Functional block diagram | [
"Engineering"
] | 241 | [
"Systems engineering"
] |
2,225,997 | https://en.wikipedia.org/wiki/C%20Traps%20and%20Pitfalls | C Traps and Pitfalls is a slim computer programming book by former AT&T Corporation researcher and programmer Andrew Koenig, its first edition still in print in 2017, which outlines the many ways in which beginners and even sometimes quite experienced C programmers can write poor, malfunctioning and dangerous source code.
It evolved from an earlier technical report, by the same name, published internally at Bell Labs. This, in turn, was inspired by a prior paper given by Koenig on "PL/I Traps and Pitfalls" at a SHARE conference in 1977. Koenig wrote that this title was inspired by a 1968 science fiction anthology by Robert Sheckley, "The People Trap and other Pitfalls, Snares, Devices and Delusions, as Well as Two Sniggles and a Contrivance".
References
1989 non-fiction books
Computer programming books
Software bugs
C (programming language)
Software anomalies
Addison-Wesley books | C Traps and Pitfalls | [
"Technology"
] | 193 | [
"Computer book stubs",
"Technological failures",
"Software anomalies",
"Computer errors",
"Computing stubs"
] |
2,226,053 | https://en.wikipedia.org/wiki/J%C3%BAlio%20C%C3%A9sar%20de%20Mello%20e%20Souza | Júlio César de Mello e Souza (Rio de Janeiro, May 6, 1895 – Recife, June 18, 1974), was a Brazilian writer and mathematics teacher. He was well known in Brazil and abroad for his books on recreational mathematics, most of them published under the pen names of Malba Tahan and Breno de Alencar Bianco.
He wrote 69 novels and 51 books of mathematics and other subjects, with more than two million books sold by 1995. His most famous work, The Man Who Counted, saw its 54th printing in 2001.
Júlio César's most popular books, including The Man Who Counted, are collections of mathematical problems, puzzles, and curiosities embedded in tales inspired by the Arabian Nights. He thoroughly researched his subject matter: not only the mathematics, but also the history, geography, and culture of the Islamic Empire that forms the backdrop and connecting thread of his books. Yet Júlio César's travels outside Brazil were limited to short visits to Buenos Aires, Montevideo, and Lisbon: he never set foot in the deserts and cities which he so vividly described in his books.
Júlio César was very critical of the educational methods used in Brazilian classrooms, especially for mathematics. "The mathematics teacher is a sadist," he claimed, "who loves to make everything as complicated as possible." In education, he was decades ahead of his time, and his proposals are still more praised than implemented today.
For his books, Júlio César received a prize from the prestigious Brazilian Literary Academy and was made a member of the Pernambuco Literary Academy. The Malba Tahan Institute was founded in 2004 at Queluz to preserve his legacy. The State Legislature of Rio de Janeiro designated his birthday, May 6, to be commemorated as Mathematician's Day.
Early life
Júlio César was born in Rio de Janeiro but spent most of his childhood in Queluz, a small rural town in the State of São Paulo. His father, João de Deus de Mello e Souza, was a civil servant with limited salary and eight (some reports say nine) children to support.
In 1905 he was sent with his older brother, João Batista, to Rio de Janeiro to attend preparatory classes for admission to the prestigious Colégio Militar do Rio de Janeiro, where he studied from 1906 to 1909, and later at Colégio Pedro II.
As a student, Júlio César was not academically successful. In a 1905 letter to their parents, João Batista tells that little Júlio "is bad at writing, and a failure in mathematics". His grade reports at Colégio Pedro II show that he once failed an Algebra exam, and barely passed one on Arithmetic. He later attributed these results to the teaching practices of the time, based on "the detestable method of salivation".
However, he did show signs of his originality and non-conventional approaches in other ways. As a child in Queluz, he used to keep frogs as pets, and at one point he had some 50 animals in his yard. One of them, nicknamed "Monsignor", would follow him through the town. As an adult, he kept up with this hobby by assembling a large collection of frog statuettes.
His career as a writer began while he was still in high school, when one of his classmates offered him a brand-new pen and a postage stamp from Chile in exchange for an essay on the theme of "Hope", the homework for the next day. According to his memoirs, Júlio was called late at night by other anxious students, and by the next morning he had provided four different essays on "Hope", at 400 réis apiece. He kept up this activity for the rest of the year, writing on "Hate", "Nostalgia", and whatever else the teacher demanded.
Many years later he met his teacher, Silva Ramos, and told him of those dubious activities. When Silva Ramos introduced him jokingly to Raul Pederneiras as a "merchant of Hope and Hate", he got from the man prophetic advice: "Forget Hate and go on selling Hope. Take up this poetic profession, Merchant of Hope: since that business is profitable for the buyer, and even more so for the seller."
Career
Writing
Júlio began to write tales on his own while still in his teens but did not impress the critics in his family. His brother João Batista recalls that Júlio's tales were full of superfluous characters with bizarre names like "Mardukbarian" or "Protocholóski".
In 1918, at the age of 23, Júlio César presented five of his tales to the editor of the newspaper O Imparcial, where he worked, but his boss did not even look at them. Undaunted, Júlio picked up the manuscripts and brought them back a few days later, this time pretending that they were translations of the work of a certain "R. S. Slade," supposedly the rage in New York City. The first of those tales, The Jew's Revenge, was published in the front page of the next issue of the newspaper; and the rest followed suit.
This experience convinced Júlio to assume a "foreign" pen name. He chose an Arabian identity — because, as he declared in an interview, the Arabs were unsurpassed in the art of storytelling. For the next seven years he prepared himself by studying Arabic and reading all he could on Islamic culture. In 1925, he sold the idea of a series of tales on Oriental themes to Irineu Marinho, editor of the newspaper A Noite (which would later become a huge Brazilian media conglomerate, the Organizações Globo). His stories, published in the column Contos de Malba Tahan ("Tales of Malba Tahan"), were attributed to a fictitious Arabian scholar of that name, and ostensibly translated by an equally fictitious "Professor Breno Alencar Bianco".
Whether for the catchy pseudonym, or (more likely) for the author's lively style and imagination, his books were a resounding success, and he became a national celebrity. Even though his identity soon was known to everybody, he continued to use the name of Malba Tahan in his public life. He had a rubber stamp made with that name in Arabic script, which he used when grading his student's homework; and, in 1952 — by special permission of Brazilian President Getúlio Vargas — he added "Malba Tahan" to his own legal name.
Teaching
Before becoming a teacher, he worked for a time as general assistant at the National Library.
Júlio César graduated as an elementary schoolteacher at the Escola Normal do Distrito Federal in Rio de Janeiro, and as a civil engineer at the Escola Politécnica in 1913. He started lecturing as a substitute teacher at the Colégio Pedro II, and later became a teacher at the Escola Normal.
He began teaching history, geography and physics, and only later moved to mathematics.
In time he became Chair at the Colégio Pedro II, at the Instituto de Educação, at the teacher's school of the Universidade do Brasil (which would become the Federal University of Rio de Janeiro) and at the National School of Education, where he got the title of Professor Emeritus.
Besides his classes at the teacher's school, he delivered over 2000 lectures on the teaching of mathematics and wrote many books on the subject. In all his works Júlio defended the use of games as teaching aids, and the replacement of chalk-and-blackboard lectures by "mathematics laboratories" where students could engage in creative activities, self-study, and object manipulation — a proposal that was seen as heretical at the time.
In the Brazilian 0-to-10 grading system, Júlio would never give a zero grade. "Why give a zero, when there are so many numbers to choose from?" he used to say. He would give the brightest students the task of teaching the weaker ones: "by the end of the first semester, they would all be above the pass line." he claimed.
While his methods and style charmed all his students, he had the opposition of many of his colleagues, who found his approach of connecting mathematics to everyday life as demeaning.
Julio César also spread his message through radio programmes of several stations in Rio de Janeiro, including the Rádio Nacional, Radio Clube, and Rádio Mayrink Veiga, as well as in television, at the TV Tupi of Rio and the TV Cultura of São Paulo.
Júlio César's last public lecture was delivered in Recife, at the age of 79, to an audience of future teachers. It was about the art of storytelling. Back to his hotel room he apparently suffered a heart attack and expired.
He had left instructions for his funeral. He did not want people to wear black: quoting a song by Noel Rosa, he explained that "Black clothes are vanities/of those who enjoy fancy dress;/I only wish for your memories/and memories are colorless".
Other activities
Júlio was an energetic campaigner for the cause of the Hanseniacs (lepers), who had historically been banned and confined in leper colonies. For over 10 years he edited the magazine Damião, which preached the end of the prejudice and re-incorporation of former inmates into the society. In his testament, he left a message to the Hanseniacs, to be read at his funeral.
Books
Aventuras do Rei Baribê, "Adventures of King Baribê"
A Caixa do Futuro, "The Box of the Future."
Céu de Alá, "Allah's Heaven"
A Sombra do Arco-Íris, "The Rainbow's Shadow" (the author's favorite)
O Homem que Calculava, "The Man Who Counted", 224p. (1938)
Lendas do Céu e da Terra, "Legends of Heaven and Earth"
Lendas do Deserto, "Legends of the Desert"
Lendas do Oásis, "Legends of the Oasis"
Lendas do Povo de Deus, "Legends of God's People"
Maktub!, "It is Written!"
Matemática Divertida e Curiosa, "Enjoyable and Curious Mathematics", 158p.
Os Melhores Contos, "The Best Tales"
Meu Anel de Sete Pedras, "My Ring of Seven Stones"
Mil Histórias Sem Fim, "A Thousand Unending Tales" (2 volumes)
Minha Vida Querida, "My Dear Life"
Novas Lendas Orientais, "New Oriental Legends"
Salim, o Mágico, "Salim, the Magician"
Acordaram-me de Madrugada, "They Woke Me Up In the Middle of the Night" (memoirs).
References
Luiza Villamea, article in Revista Nova Escola, September 1995.
João Batista de Mello e Souza, Os Meninos de Queluz – "The Boys from Queluz".
External links
A Biography in English by Andréa Estevão, at the Brazil-Arab News Agency.
Another biography (in Portuguese).
And another (in Portuguese).
1895 births
1974 deaths
Brazilian male writers
Brazilian mathematicians
Recreational mathematicians
Mathematics popularizers
Academic staff of the Federal University of Rio de Janeiro | Júlio César de Mello e Souza | [
"Mathematics"
] | 2,334 | [
"Recreational mathematics",
"Recreational mathematicians"
] |
2,226,057 | https://en.wikipedia.org/wiki/1458%20%28number%29 | 1458 is the integer after 1457 and before 1459.
The maximum determinant of an 11 by 11 matrix of zeroes and ones is 1458.
1458 is one of three numbers with the property that the sum of its base-10 digits, multiplied by the reversal of that sum, yields the original number:
1 + 4 + 5 + 8 = 18
18 × 81 = 1458
The only other non-trivial numbers with this property are 81 and 1729, as well as the trivial solutions 1 and 0. It was proven by Masahiko Fujiwara.
References
Integers | 1458 (number) | [
"Mathematics"
] | 122 | [
"Mathematical objects",
"Number stubs",
"Elementary mathematics",
"Integers",
"Numbers"
] |
2,226,410 | https://en.wikipedia.org/wiki/Sonication | Sonication is the act of applying sound energy to agitate particles in a sample, for various purposes such as the extraction of multiple compounds from plants, microalgae and seaweeds. Ultrasonic frequencies (> 20 kHz) are usually used, leading to the process also being known as ultrasonication or ultra-sonication.
In the laboratory, it is usually applied using an ultrasonic bath or an ultrasonic probe, colloquially known as a sonicator. In a paper machine, an ultrasonic foil can distribute cellulose fibres more uniformly and strengthen the paper.
Effects
Sonication has numerous effects, both chemical and physical. The scientific field concerned with understanding the effect of sonic waves on chemical systems is called sonochemistry. The chemical effects of ultrasound do not come from a direct interaction with molecular species. Studies have shown that no direct coupling of the acoustic field with chemical species on a molecular level can account for sonochemistry or sonoluminescence. Instead, in sonochemistry the sound waves migrate through a medium, inducing pressure variations and cavitations that grow and collapse, transforming the sound waves into mechanical energy.
Applications
Sonication can be used for the production of nanoparticles, such as nanoemulsions, nanocrystals, liposomes and wax emulsions, as well as for wastewater purification, degassing, extraction of seaweed polysaccharides and plant oil, extraction of anthocyanins and antioxidants, production of biofuels, crude oil desulphurization, cell disruption, polymer and epoxy processing, adhesive thinning, and many other processes. It is applied in pharmaceutical, cosmetic, water, food, ink, paint, coating, wood treatment, metalworking, nanocomposite, pesticide, fuel, wood product and many other industries.
Sonication can be used to speed dissolution, by breaking intermolecular interactions. It is especially useful when it is not possible to stir the sample, as with NMR tubes. It may also be used to provide the energy for certain chemical reactions to proceed. Sonication can be used to remove dissolved gases from liquids (degassing) by sonicating the liquid while it is under a vacuum. This is an alternative to the freeze-pump-thaw and sparging methods.
In biological applications, sonication may be sufficient to disrupt or deactivate a biological material. For example, sonication is often used to disrupt cell membranes and release cellular contents. This process is called sonoporation. Small unilamellar vesicles (SUVs) can be made by sonication of a dispersion of large multilamellar vesicles (LMVs). Sonication is also used to fragment molecules of DNA, in which the DNA subjected to brief periods of sonication is sheared into smaller fragments.
Sonication is commonly used in nanotechnology for evenly dispersing nanoparticles in liquids. Additionally, it is used to break up aggregates of micron-sized colloidal particles.
Sonication can also be used to initiate crystallisation processes and even control polymorphic crystallisations. It is used to intervene in anti-solvent precipitations (crystallisation) to aid mixing and isolate small crystals.
Sonication is the mechanism used in ultrasonic cleaning—loosening particles adhering to surfaces. In addition to laboratory science applications, sonicating baths have applications including cleaning objects such as spectacles and jewelry.
Sonication is used in the food industry as well. Main applications include dispersion, to save expensive emulsifiers (as in mayonnaise), and speeding up filtration processes (vegetable oil, etc.). Experiments with sonication for the artificial ageing of liquors and other alcoholic beverages have also been conducted.
Soil samples are often subjected to ultrasound in order to break up soil aggregates; this allows the study of the different constituents of soil aggregates (especially soil organic matter) without subjecting them to harsh chemical treatment.
Sonication is also used to extract microfossils from rock.
An ultrasonic bath or an ultrasonic probe system is used for extraction. For instance, this technique has been suggested for removing isoflavones from soybeans and phenolic compounds from wheat bran and coconut shell powder. The outcomes differ with the raw material, the solvent used, and the extraction technique. Acoustic or ultrasonic cavitation is the basis for the operation of ultrasound-assisted extraction.
Equipment
Substantial intensity of ultrasound and high ultrasonic vibration amplitudes are required for many processing applications, such as nano-crystallization, nano-emulsification, deagglomeration, extraction, cell disruption, as well as many others. Commonly, a process is first tested on a laboratory scale to prove feasibility and establish some of the required ultrasonic exposure parameters. After this phase is complete, the process is transferred to a pilot (bench) scale for flow-through pre-production optimization and then to an industrial scale for continuous production. During these scale-up steps, it is essential to make sure that all local exposure conditions (ultrasonic amplitude, cavitation intensity, time spent in the active cavitation zone, etc.) stay the same. If this condition is met, the quality of the final product remains at the optimized level, while the productivity is increased by a predictable "scale-up factor". The productivity increase results from the fact that laboratory, bench and industrial-scale ultrasonic processor systems incorporate progressively larger ultrasonic horns, able to generate progressively larger high-intensity cavitation zones and, therefore, to process more material per unit of time. This is called "direct scalability". It is important to point out that increasing the power capacity of the ultrasonic processor alone does not result in direct scalability, since it may be (and frequently is) accompanied by a reduction in the ultrasonic amplitude and cavitation intensity. During direct scale-up, all processing conditions must be maintained, while the power rating of the equipment is increased in order to enable the operation of a larger ultrasonic horn.
Finding the optimum operating conditions for this equipment is a challenge for process engineers and requires deep knowledge of the side effects of ultrasonic processors.
See also
Ultrasonics
Ultrasonic cleaning
Kenneth S. Suslick
References
Ultrasound
Laboratory techniques
Fluid dynamics
Medical ultrasonography | Sonication | [
"Chemistry",
"Engineering"
] | 1,303 | [
"Piping",
"Chemical engineering",
"nan",
"Fluid dynamics"
] |
2,226,665 | https://en.wikipedia.org/wiki/Outside%20broadcasting | Outside broadcasting (OB) is the electronic field production (EFP) of television or radio programmes (typically to cover television news and sports television events) from a mobile remote broadcast television studio. Professional video camera and microphone signals come into the production truck for processing, recording and possibly transmission.
Some outside broadcasts use a mobile production control room (PCR) inside a production truck.
History
Outside radio broadcasts have been taking place since the early 1920s and television ones since the late 1920s. The first outside broadcast by the British Broadcasting Company was of the British National Opera Company production of The Magic Flute from the Royal Opera House, Covent Garden, on 8 January 1923. The first large-scale outside broadcast was the televising of the Coronation of George VI and Elizabeth in May 1937, done by the BBC's first Outside Broadcast truck, MCR 1 (short for Mobile Control Room).
After the Second World War, the first notable outside broadcast was of the 1948 Summer Olympics. The Coronation of Elizabeth II followed in 1953, with 21 cameras being used to cover the event.
In December 1963 instant replays were used for the first time. Director Tony Verna used the technique on the Army-Navy game which aired on CBS Sports on December 7, 1963.
The 1968 Summer Olympics was the first with competitions televised in colour. The 1972 Olympic Games were the first where all competitions were captured by outside broadcast cameras.
During the 1970s, ITV franchise holder Southern Television was unique in having an outside broadcast boat, named Southener.
The wedding of Prince Charles and Lady Diana Spencer in July 1981 was the biggest outside broadcast at the time, with an estimated 750 million viewers.
New technology
In 2008, the first 3D outside broadcast took place with the transmission of a Calcutta Cup rugby match, but only to an audience of industry professionals who had been invited by BBC Sport.
In March 2010, the first public 3D outside broadcast took place with an NHL game between the New York Rangers and New York Islanders.
The first commercial ultra-high definition outside broadcast was a Premier League game between Stoke City v West Ham, televised by Sky Sports in August 2013.
Tests in 8K resolution outside broadcasts began to take place during the 2010s, including tests by NHK and BT Sport. The first public 8K outside broadcast in the UK took place in February 2020.
Modern applications
Modern outside broadcasts now use specially designed OB vehicles, many of which are now built based around IP technology rather than relying on coaxial cable.
There has been an increasing rise in the use of flyaway or flypack Portable Production Units, which allow for an increased level of customisation and can be rigged in a larger variety of venues.
In the past many outside broadcasting applications have relied on using satellite uplinks to broadcast live audio and video back to the studio. While this has its advantages such as the ability to set up anywhere covered by the respective geostationary satellite, satellite uplinking is relatively expensive and the round trip latency is in the range of 240 to 280 milliseconds.
As more venues install fiber optic cable, this is increasingly used. For news gathering, contribution over public internet is also now used. Modern applications such as hardware and software IP codecs have allowed the use of public 3G/4G networks to broadcast video and audio. The latency of 3G is around 100–500 ms, while 4G is less than 100 ms.
Gallery
See also
Production truck
Satellite truck
Electronic news-gathering (ENG)
References
External links
Recreation of a full 1970s BBC Outside Broadcast production
Technical planning stage of a 1970s Outside Broadcast production
Demonstration of the 'lining up' process for EMI 2001 OB camera from the 1970s
Discussion and demonstration of the microphone and communications set up for a sports OB
BBC Outside Broadcast crew reflect on their careers in OB production
TV Outside Broadcast History Website
Broadcast engineering
Television terminology
"Engineering"
] | 779 | [
"Broadcast engineering",
"Electronic engineering"
] |
2,226,687 | https://en.wikipedia.org/wiki/Organizational%20unit | In computing, an organizational unit (OU) provides a way of classifying objects located in directories, or names in a digital certificate hierarchy, typically used either to differentiate between objects with the same name (John Doe in OU "marketing" versus John Doe in OU "customer service"), or to parcel out authority to create and manage objects (for example: to give rights for user-creation to local technicians instead of having to manage all accounts from a single central group). Organizational units most commonly appear in X.500 directories, X.509 certificates, Lightweight Directory Access Protocol (LDAP) directories, Active Directory (AD), and Lotus Notes directories and certificate trees, but they may feature in almost any modern directory or digital certificate container grouping system.
In most systems, organizational units appear within a top-level organization grouping or organization certificate, called a domain. In many systems one OU can also exist within another OU. When OUs are nested, as one OU contains another OU, this creates a relationship where the contained OU is called the child and the container is called the parent. Thus, OUs are used to create a hierarchy of containers within a domain. Only OUs within the same domain can have relationships. OUs of the same name in different domains are independent.
Specific uses
The name organizational unit appears to represent a single organization with multiple units (departments) within that organization. However, OUs do not always follow this model. They might represent geographical regions, job-functions, associations with other (external) groups, or the technology used in relation to the objects.
Examples would include:
Department (e.g. human resources) within a corporation
Division (e.g. LifeScan, Inc.) that is owned by but separate from a parent corporation (Johnson & Johnson), although this would commonly be placed in a separate domain
Association (e.g. contractors) that is external to the organization.
To identify geographically distinct regions (e.g. Kansas City) the X.521 standard recommends a "locality" entry instead.
Job types or functions (e.g. managers, storage servers) that runs across all divisions of a company should be represented by an "organizational role" entry.
Sun Enterprise Directory Server and Active Directory
In Sun Java System Directory Server and Microsoft Active Directory (AD), an organizational unit (OU) can contain any other unit, including other OUs, users, groups, and computers. Organizational units in separate domains may have identical names but are independent of each other.
OUs let an administrator group computers and users so as to apply a common policy to them. Organizational Units give a hierarchical structure, and when properly designed can ease administration.
Origins with X.500, Novell, and Lotus software
Novell and Lotus supplied the two largest software directory systems. Each of these companies started with flat account and directory structures, and encountered the support and name-conflict limitations inherent in their flat structures. They adopted the X.500 OU concept into their next-generation software around 1993 – Novell with the release of Novell Directory Services (subsequently known as eDirectory), and Lotus with the release of the third version of Lotus Notes. Microsoft allegedly used Novell's directory as a blueprint for the first released versions of AD, but this claim appears suspect, given that X.500 served as the "granddaddy" of all directory systems.
References
Computer networking
Identity management | Organizational unit | [
"Technology",
"Engineering"
] | 700 | [
"Computer networking",
"Computer science",
"Computer engineering"
] |
2,226,899 | https://en.wikipedia.org/wiki/Notification%20system | In information technology, a notification system is a combination of software and hardware that provides a means of delivering a message to a set of recipients. It commonly shows activity related to an account. Such systems constitute an important aspect of modern Web applications.
The widespread adoption of notification systems was a major technological development of the 20th century. A notification is a combination of software, hardware, and psychology that provides a means of delivering a message to a group of recipients. Notifications show activity that relates to an event, account, or person. A push notification is a message that appears on a mobile device, such as a text, a sports score, a limited-time deal, or an e-mail announcing when a computer network will be down for scheduled maintenance. Notifications are sent from app publishers at any time, in an effort to get users to open their app or website. Notifications appear on a user's lock screen and also at the top of the phone screen when the phone is unlocked and in use. Push notifications can be valuable and convenient for both the app user and the developer because of their immediacy and display location. Notifications also pair with sounds to reach multiple senses of a user and get maximum attention. For app publishers, push notifications are a way to speak directly to the user without being caught by spam filters or pushed aside by the flood of emails within an inbox. Because of this, push click-through rates can be twice as high as those of email. Notifications invite users to open an app or spend time and money in ways chosen by the app publisher, even when the app isn't open. This means that for developers, publishers, and businesses, notifications are the most effective way to capture attention and ultimately make money.
Notifications utilize a concept known as variable rewards, which is a technique that slot machines use to hook gamblers. Similarly, variable reward systems keep users compulsively checking their phones due to the possibility of social approval awaiting them. Notifications have taken over our world and are now utilized by every software, website, program, and person in the world.
Ramsay Brown, co-founder of FKA Dopamine Lab, CEO of Mission Control, and leader of AI Responsibility Lab, says "The brain isn't particularly craving any one little feel-good signal as much as it does a good rhythm and pattern". Social media apps time the notifications they deliver in order to produce literal hits of dopamine at algorithmically determined moments. Oftentimes these companies will stockpile notifications before delivering them in a batch, in order to maximize the emotional impact a user experiences. Jonathan Haidt, a social psychologist at the NYU Stern School of Business, points to mental-health concerns directly related to social media and the notification system. He points to the increase in depression and suicide rates among teens and young adults since the early 2000s, and states that this trend began the year social media became available on cell phones. Tristan Harris, former design ethicist at Google and co-founder of the Center for Humane Technology, states that there is a "disinformation-for-profit business model" and that companies profit by allowing "unregulated messages to reach anyone for the best price". This becomes problematic, as companies have unlimited and often unwarranted access to you and your focus through the notification system. This access is always used to drive larger profits, whether that means companies use notifications simply to promote their newest product, or subtly try to get you back onto the app in order to take more of your time. There is overwhelming evidence that notifications are associated with decreased productivity, poorer concentration, and increased distraction at work, school, and home.
See also
Emergency notification system
Emergency communication system
Emergency broadcast system
Emergency alert system
Emergency telephone number
ePrompter, an e-mail notification system
References
Human–computer interaction
Information systems | Notification system | [
"Technology",
"Engineering"
] | 811 | [
"Information systems",
"Information technology",
"Human–machine interaction",
"Human–computer interaction"
] |
3,062,954 | https://en.wikipedia.org/wiki/Wheeler%E2%80%93Feynman%20absorber%20theory | The Wheeler–Feynman absorber theory (also called the Wheeler–Feynman time-symmetric theory), named after its originators, the physicists Richard Feynman and John Archibald Wheeler, is a theory of electrodynamics based on a relativistically correct extension of action at a distance between electron particles. The theory postulates no independent electromagnetic field. Rather, the whole theory is encapsulated by a Lorentz-invariant action defined over the particle trajectories themselves (the Fokker action).
The absorber theory is invariant under time-reversal transformation, consistent with the lack of any physical basis for microscopic time-reversal symmetry breaking. Another key principle resulting from this interpretation, somewhat reminiscent of Mach's principle and the work of Hugo Tetrode, is that elementary particles are not self-interacting. This immediately removes the problem of the infinite electron self-energy that arises from a point charge interacting with its own electromagnetic field.
Motivation
Wheeler and Feynman begin by observing that classical electromagnetic field theory was designed before the discovery of electrons: in the theory, charge is a continuous substance. An electron particle does not naturally fit into the theory: should a point charge see the effect of its own field? They reconsider the fundamental problem of a collection of point charges, taking up a field-free action-at-a-distance theory developed separately by Karl Schwarzschild, Hugo Tetrode, and Adriaan Fokker. Unlike the instantaneous action-at-a-distance theories of the early 1800s, these "direct interaction" theories are based on interaction propagating at the speed of light. They differ from the classical field theory in three ways: (1) no independent field is postulated; (2) the point charges do not act upon themselves; (3) the equations are time-symmetric. Wheeler and Feynman propose to develop these equations into a relativistically correct generalization of electromagnetism based on Newtonian mechanics.
Problems with previous direct-interaction theories
The Tetrode-Fokker work left unsolved two major problems. First, in a non-instantaneous action at a distance theory, the equal action-reaction of Newton's laws of motion conflicts with causality. If an action propagates forward in time, the reaction would necessarily propagate backwards in time. Second, existing explanations of radiation reaction force or radiation resistance depended upon accelerating electrons interacting with their own field; the direct interaction models explicitly omit self-interaction.
Absorber and radiation resistance
Wheeler and Feynman postulate the "universe" of all other electrons as an absorber of radiation to overcome these issues and extend the direct interaction theories.
Rather than considering an unphysical isolated point charge, they model all charges in the universe with a uniform absorber in a shell around a charge. As the charge moves relative to the absorber, it radiates into the absorber which "pushes back", causing the radiation resistance.
Key result
Feynman and Wheeler obtained their result in a very simple and elegant way. They considered all the charged particles (emitters) present in our universe and assumed all of them to generate time-reversal symmetric waves, i.e. half-retarded plus half-advanced fields. The resulting field is

E(x, t) = Σ_n (1/2)(E_n^ret(x, t) + E_n^adv(x, t)).

Then they observed that if the relation

E_free(x, t) = Σ_n (1/2)(E_n^ret(x, t) − E_n^adv(x, t)) = 0

holds, then E_free, being a solution of the homogeneous Maxwell equation, can be used to obtain the total field

E_tot(x, t) = Σ_n (1/2)(E_n^ret + E_n^adv) + Σ_n (1/2)(E_n^ret − E_n^adv) = Σ_n E_n^ret(x, t).

The total field is then the observed pure retarded field.
The assumption that the free field is identically zero is the core of the absorber idea. It means that the radiation emitted by each particle is completely absorbed by all other particles present in the universe. To better understand this point, it may be useful to consider how the absorption mechanism works in common materials. At the microscopic scale, it results from the sum of the incoming electromagnetic wave and the waves generated from the electrons of the material, which react to the external perturbation. If the incoming wave is absorbed, the result is a zero outgoing field. In the absorber theory the same concept is used, however, in presence of both retarded and advanced waves.
Arrow of time ambiguity
The resulting wave appears to have a preferred time direction, because it respects causality. However, this is only an illusion. Indeed, it is always possible to reverse the time direction by simply exchanging the labels emitter and absorber. Thus, the apparently preferred time direction results from the arbitrary labelling. Wheeler and Feynman claimed that thermodynamics picked the observed direction; cosmological selections have also been proposed.
The requirement of time-reversal symmetry, in general, is difficult to reconcile with the principle of causality. Maxwell's equations and the equations for electromagnetic waves have, in general, two possible solutions: a retarded (delayed) solution and an advanced one. Accordingly, any charged particle generates waves, say at time t = 0 and point x = 0, which will arrive at point x at the instant t = x/c (here c is the speed of light), after the emission (retarded solution), and other waves, which will arrive at the same place at the instant t = −x/c, before the emission (advanced solution). The latter, however, violates the causality principle: advanced waves could be detected before their emission. Thus the advanced solutions are usually discarded in the interpretation of electromagnetic waves.
In the absorber theory, instead charged particles are considered as both emitters and absorbers, and the emission process is connected with the absorption process as follows: Both the retarded waves from emitter to absorber and the advanced waves from absorber to emitter are considered. The sum of the two, however, results in causal waves, although the anti-causal (advanced) solutions are not discarded a priori.
Alternatively, Wheeler and Feynman arrived at the primary equation as follows: they assumed that their Lagrangian contains interactions only when and where the fields of the individual particles are separated by a proper time of zero. Since only massless particles propagate from emission to detection with zero proper-time separation, this Lagrangian automatically demands an electromagnetic-like interaction.
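The zero-proper-time interaction just described corresponds to the standard Fokker action of the direct-interaction literature. As a hedged sketch (the explicit notation below is supplied here, up to sign and unit conventions, and is not taken from this text):

```latex
S = -\sum_i m_i \int ds_i
    \;-\; \sum_{i<j} e_i e_j \iint
    \delta\!\left(s_{ij}^2\right)\,
    \dot{x}_i(s_i) \cdot \dot{x}_j(s_j)\, ds_i\, ds_j,
\qquad
s_{ij}^2 = \bigl(x_i(s_i) - x_j(s_j)\bigr)\cdot\bigl(x_i(s_i) - x_j(s_j)\bigr)
```

The delta function restricts the interaction to pairs of trajectory points at zero Minkowski separation, i.e. on each other's light cones, which is why the interaction propagates at the speed of light even though no independent field is postulated.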
New interpretation of radiation damping
One of the major results of the absorber theory is the elegant and clear interpretation of the electromagnetic radiation process. A charged particle that experiences acceleration is known to emit electromagnetic waves, i.e., to lose energy. Thus, the Newtonian equation for the particle must contain a dissipative force (damping term), which takes into account this energy loss. In the causal interpretation of electromagnetism, Hendrik Lorentz and Max Abraham proposed that such a force, later called the Abraham–Lorentz force, is due to the retarded self-interaction of the particle with its own field. This first interpretation, however, is not completely satisfactory, as it leads to divergences in the theory and needs some assumptions on the structure of the charge distribution of the particle. Paul Dirac generalized the formula to make it relativistically invariant. While doing so, he also suggested a different interpretation. He showed that the damping term can be expressed in terms of a free field acting on the particle at its own position:

E_damping(x_j, t) = (1/2)(E_j^ret(x_j, t) − E_j^adv(x_j, t)).
However, Dirac did not propose any physical explanation of this interpretation.
A clear and simple explanation can instead be obtained in the framework of absorber theory, starting from the simple idea that each particle does not interact with itself. This is actually the opposite of the first Abraham–Lorentz proposal. The field acting on the particle at its own position (the point x_j) is then

E(x_j, t) = Σ_{n ≠ j} (1/2)(E_n^ret(x_j, t) + E_n^adv(x_j, t)).

If we sum the free-field term of this expression, we obtain

E(x_j, t) = Σ_{n ≠ j} E_n^ret(x_j, t) + (1/2)(E_j^ret(x_j, t) − E_j^adv(x_j, t))

and, thanks to Dirac's result,

E(x_j, t) = Σ_{n ≠ j} E_n^ret(x_j, t) + E_damping(x_j, t).
Thus, the damping force is obtained without the need for self-interaction, which is known to lead to divergences, and also giving a physical justification to the expression derived by Dirac.
Developments since original formulation
Gravity theory
Inspired by the Machian nature of the Wheeler–Feynman absorber theory for electrodynamics, Fred Hoyle and Jayant Narlikar proposed their own theory of gravity in the context of general relativity. This model still exists in spite of recent astronomical observations that have challenged the theory. Stephen Hawking had criticized the original Hoyle-Narlikar theory believing that the advanced waves going off to infinity would lead to a divergence, as indeed they would, if the universe were only expanding.
Transactional interpretation of quantum mechanics
Again inspired by the Wheeler–Feynman absorber theory, the transactional interpretation of quantum mechanics (TIQM) first proposed in 1986 by John G. Cramer, describes quantum interactions in terms of a standing wave formed by retarded (forward-in-time) and advanced (backward-in-time) waves. Cramer claims it avoids the philosophical problems with the Copenhagen interpretation and the role of the observer, and resolves various quantum paradoxes, such as quantum nonlocality, quantum entanglement and retrocausality.
Attempted resolution of causality
T. C. Scott and R. A. Moore demonstrated that the apparent acausality suggested by the presence of advanced Liénard–Wiechert potentials could be removed by recasting the theory in terms of retarded potentials only, without the complications of the absorber idea.
The Lagrangian describing a particle (p_1) under the influence of the time-symmetric potential generated by another particle (p_2) is

L_1 = T_1 − (1/2)((V_R)_1^2 + (V_A)_1^2),

where T_i is the relativistic kinetic energy functional of particle i, and (V_R)_i^j and (V_A)_i^j are respectively the retarded and advanced Liénard–Wiechert potentials acting on particle i and generated by particle j. The corresponding Lagrangian for particle p_2 is

L_2 = T_2 − (1/2)((V_R)_2^1 + (V_A)_2^1).

It was originally demonstrated with computer algebra and then proven analytically that

(V_A)_1^2 − (V_R)_2^1

is a total time derivative, i.e. a divergence in the calculus of variations, and thus it gives no contribution to the Euler–Lagrange equations. Thanks to this result the advanced potentials can be eliminated; here the total derivative plays the same role as the free field. The Lagrangian for the N-body system is therefore

L_N = Σ_i T_i − (1/2) Σ_{i ≠ j} (V_R)_i^j.
The resulting Lagrangian is symmetric under the exchange of p_i with p_j. For N = 2 this Lagrangian generates exactly the same equations of motion as L_1 and L_2. Therefore, from the point of view of an outside observer, everything is causal. This formulation reflects particle-particle symmetry with the variational principle applied to the N-particle system as a whole, and thus Tetrode's Machian principle. Only if we isolate the forces acting on a particular body do the advanced potentials make their appearance. This recasting of the problem comes at a price: the N-body Lagrangian depends on all the time derivatives of the curves traced by all particles, i.e. the Lagrangian is of infinite order. However, much progress was made in examining the unresolved issue of quantizing the theory. Also, this formulation recovers the Darwin Lagrangian, from which the Breit equation was originally derived, but without the dissipative terms. This ensures agreement with theory and experiment, up to but not including the Lamb shift. Numerical solutions for the classical problem were also found. Furthermore, Moore showed that a model by Feynman and Albert Hibbs is amenable to the methods of higher than first-order Lagrangians and revealed chaotic-like solutions. Moore and Scott showed that the radiation reaction can be alternatively derived using the notion that, on average, the net dipole moment is zero for a collection of charged particles, thereby avoiding the complications of the absorber theory.
On this view the acausality is merely apparent, and the entire problem goes away. An opposing view was held by Einstein.
Alternative Lamb shift calculation
As mentioned previously, a serious criticism of the absorber theory is that its Machian assumption that point particles do not act on themselves rules out (infinite) self-energies, and with them the standard quantum electrodynamics (QED) explanation of the Lamb shift. Ed Jaynes proposed an alternative model in which the Lamb-like shift is due instead to interaction with other particles, much along the lines of the Wheeler–Feynman absorber theory itself. One simple model is to calculate the motion of an oscillator coupled directly with many other oscillators. Jaynes has shown that it is easy to get both spontaneous emission and Lamb shift behavior in classical mechanics. Furthermore, Jaynes' alternative provides a solution to the process of "addition and subtraction of infinities" associated with renormalization.
This model leads to the same type of Bethe logarithm (an essential part of the Lamb shift calculation), vindicating Jaynes' claim that two different physical models can be mathematically isomorphic to each other and therefore yield the same results, a point also apparently made by Scott and Moore on the issue of causality.
Relationship to quantum field theory
This universal absorber theory is mentioned in the chapter titled "Monster Minds" in Feynman's autobiographical work Surely You're Joking, Mr. Feynman! and in Vol. II of the Feynman Lectures on Physics. It led to the formulation of a framework of quantum mechanics using a Lagrangian and action as starting points, rather than a Hamiltonian, namely the formulation using Feynman path integrals, which proved useful in Feynman's earliest calculations in quantum electrodynamics and quantum field theory in general. Both retarded and advanced fields appear respectively as retarded and advanced propagators and also in the Feynman propagator and the Dyson propagator. In hindsight, the relationship between retarded and advanced potentials shown here is not so surprising in view of the fact that, in quantum field theory, the advanced propagator can be obtained from the retarded propagator by exchanging the roles of field source and test particle (usually within the kernel of a Green's function formalism). In quantum field theory, advanced and retarded fields are simply viewed as mathematical solutions of Maxwell's equations whose combinations are decided by the boundary conditions.
See also
Abraham–Lorentz force
Causality
Paradox of radiation of charged particles in a gravitational field
Retrocausality
Symmetry in physics and T-symmetry
Transactional interpretation
Two-state vector formalism
Notes
Sources
Electromagnetism
Richard Feynman | Wheeler–Feynman absorber theory | [
"Physics"
] | 2,920 | [
"Electromagnetism",
"Physical phenomena",
"Fundamental interactions"
] |
3,063,086 | https://en.wikipedia.org/wiki/Cyanidin | Cyanidin is a natural organic compound. It is a particular type of anthocyanidin (its glycoside forms are called anthocyanins). It is a pigment found in many red berries including grapes, bilberry, blackberry, blueberry, cherry, chokeberry, cranberry, elderberry, hawthorn, loganberry, açai berry and raspberry. It can also be found in other fruits such as apples and plums, and in red cabbage and red onion. It has a characteristic reddish-purple color, though this changes with pH; solutions of the compound are red at pH < 3, violet at pH 7-8, and blue at pH > 11. In certain fruits, the highest concentrations of cyanidin are found in the seeds and skin. Cyanidin has been found to be a potent sirtuin 6 (SIRT6) activator.
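The pH-dependent colors just listed can be captured in a small lookup. This is purely an illustrative helper (the function name is an assumption, and the text does not specify colors for the intermediate pH ranges, which here return None):

```python
def cyanidin_color(ph: float):
    """Color of a cyanidin solution for the pH ranges stated in the text:
    red below pH 3, violet at pH 7-8, blue above pH 11.
    pH ranges the text does not specify return None."""
    if ph < 3:
        return "red"
    if 7 <= ph <= 8:
        return "violet"
    if ph > 11:
        return "blue"
    return None
```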
List of cyanidin derivatives
Antirrhinin (cyanidin-3-rutinoside or 3-C-R), found in black raspberry
Cyanidin-3-xylosylrutinoside, found in black raspberry
Cyanidin-3,4′-di-O-β-glucopyranoside, found in red onion
Cyanidin-4′-O-β-glucoside, found in red onion
Chrysanthemin (cyanidin-3-O-glucoside), found in blackcurrant pomace
Ideain (cyanidin 3-O-galactoside), found in Vaccinium species
Cyanin (cyanidin-3,5-O-diglucoside), found in red wine
Biosynthesis
Cyanidin can be synthesized in berry plants through the shikimate pathway and polyketide synthase (PKS) III. The shikimate pathway is a biosynthetic pathway that uses the starting materials phosphoenolpyruvic acid (PEP) and erythrose 4-phosphate to form shikimic acid, which then reacts further to form specific aromatic amino acids. L-phenylalanine, which is necessary for the production of cyanidin, is synthesized through the shikimate pathway.
In the synthesis of L-phenylalanine, chorismate undergoes a Claisen rearrangement by a Chorismate mutase enzyme to form prephenate. Prephenate undergoes dehydration, decarboxylation, and transamination with Pyridoxal phosphate (PLP) and alpha-Ketoglutaric acid to form L-phenylalanine (figure 1).
L-phenylalanine then undergoes an elimination of the primary amine with phenylalanine ammonia-lyase (PAL) to form cinnamate. Through an oxidation with molecular oxygen and NADPH, a hydroxyl group is added to the para position of the aromatic ring. The compound then reacts with Coenzyme A (CoA), CoA ligase, and ATP to attach CoA to the carboxylic acid group. The compound reacts with naringenin-chalcone synthase and three malonyl-CoA molecules, which add six carbon atoms and three more keto groups to the ring through PKS III. Aureusidin synthase catalyses the aromatization and cyclization of the newly added carbonyl groups and facilitates the release of CoA. The compound then spontaneously cyclizes to form naringenin (figure 2).
Naringenin is then converted to cyanidin through several oxidizing and reducing steps. First naringenin is reacted with two equivalents of oxygen, alpha-ketoglutaric acid, and flavanone 3-hydroxylase to form dihydrokaempferol. The compound then reacts with NADPH and dihydroflavonol 4-reductase to form leucopelargonidin, which is further oxidized with oxygen, alpha-ketoglutaric acid, and anthocyanidin synthase. This compound spontaneously loses a water molecule and a hydroxide ion to form cyanidin (figure 3).
Activation
Among many anthocyanidins studied, cyanidin most potently stimulated activity of the sirtuin 6 enzyme.
References
Anthocyanidins | Cyanidin | [
"Chemistry",
"Materials_science"
] | 926 | [
"Titration",
"PH indicators",
"Chromism",
"Chemical tests",
"Equilibrium chemistry"
] |
3,063,191 | https://en.wikipedia.org/wiki/Nielsen%20realization%20problem | The Nielsen realization problem is a question asked by Jakob Nielsen about whether finite subgroups of mapping class groups can act on surfaces, which was answered positively by Steven Kerckhoff.
Statement
Given an oriented surface, we can divide the group Diff(S), the group of diffeomorphisms of the surface to itself, into isotopy classes to get the mapping class group π0(Diff(S)). The conjecture asks whether a finite subgroup of the mapping class group of a surface can be realized as the isometry group of a hyperbolic metric on the surface.
The mapping class group acts on Teichmüller space. An equivalent way of stating the question asks whether every finite subgroup of the mapping class group fixes some point of Teichmüller space.
History
Jakob Nielsen (1932) asked whether finite subgroups of mapping class groups can act on surfaces.
Kravetz (1959) claimed to solve the Nielsen realization problem, but his proof depended on trying to show that Teichmüller space (with the Teichmüller metric) is negatively curved. Linch (1971) pointed out a gap in the argument, and Masur (1975) showed that Teichmüller space is not negatively curved. Kerckhoff (1983) gave a correct proof that finite subgroups of mapping class groups can act on surfaces using left earthquakes.
References
Geometric topology
Homeomorphisms
Theorems in topology | Nielsen realization problem | [
"Mathematics"
] | 258 | [
"Mathematical theorems",
"Homeomorphisms",
"Theorems in topology",
"Geometric topology",
"Topology",
"Mathematical problems"
] |
3,063,351 | https://en.wikipedia.org/wiki/Nubian%20Sandstone%20Aquifer%20System | The Nubian Sandstone Aquifer System (NSAS) is the world's largest known fossil water aquifer system. It is located underground in the eastern end of the Sahara desert and spans the political boundaries of four countries in north-eastern Africa. The NSAS covers a land area of just over two million km2, including north-western Sudan, north-eastern Chad, south-eastern Libya, and most of Egypt. Containing an estimated 150,000 km3 of groundwater, the NSAS is highly significant as a potential water resource for future development programs in these countries. The Great Man-Made River (GMMR) project in Libya makes use of the system, extracting an estimated 2.4 km3 of fresh water from the aquifer per year for consumption and agriculture.
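A rough back-of-the-envelope reading of the figures above (a deliberately naive calculation: constant extraction at the GMMR rate, zero recharge, and ignoring all other users) gives the implied depletion horizon:

```python
def depletion_horizon_years(volume_km3=150_000.0, extraction_km3_per_year=2.4):
    """Years until exhaustion of the stated ~150,000 km^3 of groundwater
    at the GMMR's estimated ~2.4 km^3/year extraction rate.
    Assumes constant extraction and no recharge -- a gross simplification."""
    return volume_km3 / extraction_km3_per_year
```

Under these assumptions the stated volumes imply a horizon on the order of 60,000 years, which illustrates why the GMMR alone does not threaten the aquifer on human timescales even though the water is essentially non-renewable.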
Characteristics
Since 2001, the Nubian Sandstone aquifer situated between the Toshka and Abu Simbel areas of Egypt has undergone intensive drilling and development as part of a land reclamation project. Drilling information was used to conduct a variety of studies regarding the hydrogeological setting of the area's aquifer. Results indicated that lithological characteristics and tectonic settings have a substantial effect on groundwater flow patterns and the area's overall aquifer potentiality, which is considered relatively low when compared to neighboring areas in eastern Oweinat or Dakhla.
Geology
The aquifer is largely composed of hard ferruginous sandstone with great shale and clay intercalation, having a thickness that ranges between 140 and 230 meters. Groundwater type varies from fresh to slightly brackish (salinity ranges from 240 to 1300 ppm). The ion dominance ordering shows that sodium cation is most commonly predominating over calcium and magnesium – whereas chloride is predominant over sulfate and bicarbonate. The groundwater is of meteoric origin (the term meteoric water refers to water that originated as precipitation; most groundwater is meteoric in origin). High concentrations of sodium, chloride, and sulfates reflect the leaching and dissolution processes of gypsiferous shales and clay, in addition to a lengthy duration of water residence. Two recharge locations have been identified by Reika Yokochi et al.: one 38,000 years ago originating from the Mediterranean, and the second dated at around 361,000 years ago from the tropical Atlantic.
International development projects
Since 2006, the International Atomic Energy Agency has been working in cooperation with the four NSAS countries to help increase understanding of the aquifer's complexities through the IAEA-UNDP-GEF Nubian Project. Project partners include the United Nations Development Programme (UNDP)/Global Environment Facility (GEF), IAEA, United Nations Educational, Scientific and Cultural Organization (UNESCO) and government representatives from the NSAS countries. The project's long-term goal is establishing rational and equitable management of the NSAS as a productive way of advancing socio-economic development in the region and protecting biodiversity and land resources.
See also
Lake Ptolemy
African humid period
References
Bibliography
Essay and Maps: Groundwater Resources of the Nubian Aquifer System
Dahab, K.A., El Sayed, E.A. Study of Hydrogeological Conditions of the Nubian Sandstone Aquifer in the Area Between Abu Simbel & Toschka, Western Desert, Egypt. American Geophysical Union, Spring 2001
Aquifers
Aquifers of Africa
Springs of Africa
Sahara
Geography of Libya
Geology of Libya
Springs of Libya
Springs of Chad
Springs of Egypt
Springs of Sudan | Nubian Sandstone Aquifer System | [
"Environmental_science"
] | 729 | [
"Hydrology",
"Aquifers"
] |
3,063,673 | https://en.wikipedia.org/wiki/Control%20variates | The control variates method is a variance reduction technique used in Monte Carlo methods. It exploits information about the errors in estimates of known quantities to reduce the error of an estimate of an unknown quantity.
Underlying principle
Let the unknown parameter of interest be μ, and assume we have a statistic m such that the expected value of m is μ: E[m] = μ, i.e. m is an unbiased estimator for μ. Suppose we calculate another statistic t such that E[t] = τ is a known value. Then

m* = m + c(t − τ)

is also an unbiased estimator for μ for any choice of the coefficient c.
The variance of the resulting estimator is

Var(m*) = Var(m) + c² Var(t) + 2c Cov(m, t).

By differentiating the above expression with respect to c, it can be shown that choosing the optimal coefficient

c* = −Cov(m, t) / Var(t)

minimizes the variance of m*. (Note that this coefficient is the same as the coefficient obtained from a linear regression.) With this choice,

Var(m*) = (1 − ρ_{m,t}²) Var(m),

where

ρ_{m,t} = Corr(m, t)

is the correlation coefficient of m and t. The greater the value of |ρ_{m,t}|, the greater the variance reduction achieved.
In the case that Cov(m, t), Var(t), and/or ρ_{m,t} are unknown, they can be estimated across the Monte Carlo replicates. This is equivalent to solving a certain least squares system; therefore this technique is also known as regression sampling.
When the expectation of the control variable, E[t] = τ, is not known analytically, it is still possible to increase the precision in estimating μ (for a given fixed simulation budget), provided that two conditions are met: 1) evaluating t is significantly cheaper than computing m; 2) the magnitude of the correlation coefficient ρ_{m,t} is close to unity.
Example
We would like to estimate

I = ∫₀¹ 1/(1 + x) dx

using Monte Carlo integration. This integral is the expected value of f(U), where

f(U) = 1/(1 + U)

and U follows a uniform distribution [0, 1].

Using a sample of size n, denote the points in the sample as u₁, ..., uₙ. Then the estimate is given by

I ≈ (1/n) Σᵢ f(uᵢ).

Now we introduce g(U) = 1 + U as a control variate with a known expected value E[g(U)] = 3/2 and combine the two into a new estimate

I ≈ (1/n) Σᵢ f(uᵢ) + c((1/n) Σᵢ g(uᵢ) − 3/2).
Using n realizations of U and an estimated optimal coefficient c*, the variance was significantly reduced after using the control variates technique compared with the plain Monte Carlo estimate. (The exact result is I = ln 2 ≈ 0.69315.)
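The technique can be reproduced numerically. The sketch below (plain Python, standard library only; the sample size and seed are arbitrary choices made here) estimates the integral with and without a control variate of the form g(U) = 1 + U, whose mean 3/2 is known exactly:

```python
import random

def control_variate_estimate(n=50000, seed=0):
    """Monte Carlo estimate of integral_0^1 1/(1+x) dx = ln 2 ~ 0.6931,
    plain and with the control variate g(U) = 1 + U (known mean 3/2)."""
    rng = random.Random(seed)
    u = [rng.random() for _ in range(n)]
    f = [1.0 / (1.0 + x) for x in u]   # f(U): the integrand
    g = [1.0 + x for x in u]           # g(U): the control variate

    mean_f = sum(f) / n
    mean_g = sum(g) / n

    # Estimate the optimal coefficient c* = -Cov(f, g) / Var(g) from the sample.
    cov_fg = sum((a - mean_f) * (b - mean_g) for a, b in zip(f, g)) / (n - 1)
    var_g = sum((b - mean_g) ** 2 for b in g) / (n - 1)
    c_star = -cov_fg / var_g

    plain = mean_f                                  # ordinary Monte Carlo
    controlled = mean_f + c_star * (mean_g - 1.5)   # control-variate estimate
    return plain, controlled
```

Because f(U) and g(U) are strongly (negatively) correlated, the controlled estimate is typically far closer to ln 2 than the plain one for the same sample.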
See also
Antithetic variates
Importance sampling
Notes
References
Ross, Sheldon M. (2002) Simulation 3rd edition
Averill M. Law & W. David Kelton (2000), Simulation Modeling and Analysis, 3rd edition.
S. P. Meyn (2007) Control Techniques for Complex Networks, Cambridge University Press. . Downloadable draft (Section 11.4: Control variates and shadow functions)
Monte Carlo methods
Statistical randomness
Computational statistics
Variance reduction | Control variates | [
"Physics",
"Mathematics"
] | 502 | [
"Monte Carlo methods",
"Computational statistics",
"Computational mathematics",
"Computational physics"
] |
3,063,823 | https://en.wikipedia.org/wiki/Architecture%20of%20Kievan%20Rus%27 | The architecture of Kievan Rus' comes from the medieval state of Kievan Rus', which incorporated parts of what is now modern Ukraine, Russia, and Belarus, and was centered on Kiev and Novgorod. It represents the earliest period of Russian and Ukrainian architecture, built on the foundations of Byzantine culture but with many innovations and distinctive architectural features. Most remains are Russian Orthodox churches or parts of the gates and fortifications of cities.
After the disintegration of Kievan Rus' followed by the Mongol invasion in the first half of the 13th century, the architectural tradition continued in the principalities of Novgorod, Vladimir-Suzdal, and Galicia-Volhynia, and eventually had a direct influence on Russian, Ukrainian, and Belarusian architecture. The Old Russian architecture of churches originates in pre-Christian Slavic building traditions.
Church architecture
The great churches of Kievan Rus', built after the adoption of Christianity in 988, were the first examples of monumental architecture in the East Slavic lands. The architectural style of the Kievan state, which quickly established itself, was strongly influenced by Byzantine architecture. Early Eastern Orthodox churches were mainly made of wood, with the simplest form of church becoming known as a cell church. Major cathedrals often featured scores of small domes, which some art historians have taken as an indication of what pagan Slavic temples may have looked like. The 10th-century Church of the Tithes in Kiev was the first religious building to be made of stone. The earliest Kievan churches were built and decorated with frescoes and mosaics by Byzantine masters.
Another great example of an early church of Kievan Rus' was the thirteen-domed Saint Sophia Cathedral in Kiev (1037–54), built by Yaroslav the Wise. Much of its exterior has been altered with time, extending over the area and eventually acquiring 25 domes.
Saint Sophia Cathedral in Novgorod (1045–1050), on the other hand, expressed a new style that exerted a strong influence on Russian church architecture. Its austere thick walls, small narrow windows, and helmeted cupolas have much in common with the Romanesque architecture of Western Europe.
Even further departure from Byzantine models is evident in the succeeding cathedrals of Novgorod: St Nicholas's (1113), St Anthony's (1117–19), and St George's (1119). The monastery architecture of the period is also of note. The 12th–13th centuries were a period of feudal division of Kievan Rus' into princedoms in nearly permanent feud with one another, and cathedrals multiplied in the emerging princedoms and at the courts of local princes (knyazes).
By the end of the 12th century, the divide of the country was final, and the new centers of power adapted the Kievan style to their own traditions. In the northern principality of Vladimir-Suzdal the local churches were built of white stone. The Suzdal style is accordingly also known as "white-stone architecture". The first white-stone church was the St. Boris and Gleb Church commissioned by Yuri Dolgoruky, a church-fortress in Kideksha near Suzdal, at the supposed site where the princes (knyazes) Boris and Gleb stopped on their pilgrimage to Kiev. The white-stone churches mark the highest point of pre-Mongolian Rus' architecture. The most important churches in Vladimir are the Assumption Cathedral (built 1158–60, enlarged 1185–98, frescoes 1408) and St Demetrios Cathedral (built 1194–97).
In the western splinter of the Kingdom of Galicia-Volhynia, churches in a traditional Kievan style were built for some time, but eventually the style began to drift towards the Central European Romanesque tradition. The white-stone masonry of the Galician school of architecture was likely the inspiration for the development of a similar style in Vladimir-Suzdal.
Celebrated as these structures are, contemporaries were even more impressed by the churches of Southern Rus', particularly the Svirskaya Church of Smolensk (1191–94). As the southern structures have been either ruined or rebuilt, reconstruction of their original appearance has been a source of contention among art historians. The most memorable reconstruction is the Piatnytska Church (1196–99) in Chernigov (modern Chernihiv, Ukraine), by Peter Baranovsky.
Secular architecture
There were very few examples of secular (non-religious) architecture in Kievan Rus. Golden Gates of Vladimir, despite much 18th-century restoration, could be regarded as an authentic monument of the pre-Mongolian period.
In Kyiv, the capital of Ukraine, no secular monuments survived aside from pieces of walls and ruins of gates. The Golden Gates of Kyiv were destroyed completely over the years, with only the ruins remaining. In the 20th century a museum was erected above the ruins. It closely resembles the gates of the Kievan Rus' period but is not itself a monument of that time.
One of the best examples, the fortress of Bilhorod Kyivskyi, still lies underground awaiting major excavation. In the 1940s, the archaeologist Nikolai Voronin discovered the well-preserved remains of Andrei Bogolyubsky's palace in Bogolyubovo, dating from 1158 to 1165.
Examples
Examples in Belarus
Examples in Russia
Examples in Ukraine
See also
List of buildings of pre-Mongol Kievan Rus'
Ukrainian architecture
List of Russian church types
Old Russian ornament
References
External links
Directory of Orthodox Architecture in Russia - photogallery of church architecture
Culture of Kievan Rus'
Architecture by region
Architectural history
Architecture in Belarus
Architecture in Russia
Architecture in Ukraine
Architecture in Ukraine by period or style
Architecture in Kyiv | Architecture of Kievan Rus' | [
"Engineering"
] | 1,162 | [
"Architecture by region",
"Architectural history",
"Architecture"
] |
3,064,022 | https://en.wikipedia.org/wiki/Outline%20of%20the%20creation%E2%80%93evolution%20controversy | The following outline is provided as an overview of and topical guide to the creation–evolution controversy.
Essence
Creationism, and more specifically:
Creation science, Intelligent design, Neo-Creationism, Old Earth and Young Earth creationism
Evolution, and more specifically:
Natural selection, Common descent, Origins of life, Age of the Earth/Universe
Intelligent design
Objections to evolution
History
History of creationism
History of evolutionary thought
Reaction to Darwin's theory
Arguments
Entropy and life
Evidence of common descent
Evolutionary argument against naturalism
Fine-tuned universe
Irreducible complexity
Specified complexity
Transitional fossil (commonly known as a missing link)
Acceptance
Evolution as theory and fact
Level of support for evolution
Teach the Controversy
Wedge strategy
Supporters of evolution:
A Scientific Support for Darwinism
List of scientific societies rejecting intelligent design
Project Steve
Clergy Letter Project
Supporters of creation or intelligent design
A Scientific Dissent From Darwinism
Answers in Genesis
Discovery Institute
Physicians and Surgeons who Dissent from Darwinism
Politics
Intelligent design in politics
Politics of creationism
Specific religious views
Ahmadiyya views on evolution
Evolution and the Roman Catholic Church
Hindu views on evolution
Jainism and non-creationism
Jewish views on evolution
Mormon views on evolution
Public education
Creation and evolution in public education
Creation and evolution in public education in the United States
Butler Act
Scopes trial, 1925
Epperson v. Arkansas, 1968
Daniel v. Waters, 1975
Segraves v. State of California, 1981
McLean v. Arkansas, 1982
Edwards v. Aguillard, 1987
Webster v. New Lenox School District, 1990
Freiler v. Tangipahoa Parish Board of Education, 1994
Kansas evolution hearings, 2005
Kitzmiller v. Dover Area School District, 2005
Selman v. Cobb County School District, 2005
See also
References
External links
Talk.origins Index to Creationist Claims
Creation-evolution controversy
Creation-evolution controversy, topics | Outline of the creation–evolution controversy | [
"Biology"
] | 381 | [
"Creationism",
"Biology theories",
"Obsolete biology theories"
] |
3,064,285 | https://en.wikipedia.org/wiki/Mutual%20authentication | Mutual authentication or two-way authentication (not to be confused with two-factor authentication) refers to two parties authenticating each other at the same time in an authentication protocol. It is a default mode of authentication in some protocols (IKE, SSH) and optional in others (TLS).
Mutual authentication is a desired characteristic in verification schemes that transmit sensitive data, in order to ensure data security. Mutual authentication can be accomplished with two types of credentials: usernames and passwords, and public key certificates.
Mutual authentication is often employed in the Internet of Things (IoT). Writing effective security schemes in IoT systems is challenging, especially when schemes are desired to be lightweight and have low computational costs. Mutual authentication is a crucial security step that can defend against many adversarial attacks, which otherwise can have large consequences if IoT systems (such as e-Healthcare servers) are hacked. In scheme analyses done of past works, a lack of mutual authentication had been considered a weakness in data transmission schemes.
Process steps and verification
Schemes that have a mutual authentication step may use different methods of encryption, communication, and verification, but they all share one thing in common: each entity involved in the communication is verified. If Alice wants to communicate with Bob, they will both authenticate the other and verify that it is who they are expecting to communicate with before any data or messages are transmitted. A mutual authentication process that exchanges user IDs may be implemented as follows:
Alice sends a message encrypted with Bob's public key to Bob to show that Alice is a valid user.
Bob verifies the message:
Bob checks the format and timestamp. If either is incorrect or invalid, the session is aborted.
The message is then decrypted with Bob's secret key, giving Alice's ID.
Bob checks if the message matches a valid user. If not, the session is aborted.
Bob sends Alice a message back to show that Bob is a valid user.
Alice verifies the message:
Alice checks the format and timestamp. If either is incorrect or invalid, the session is aborted.
The message is then decrypted with Alice's secret key, giving Bob's ID.
Alice checks if the message matches a valid user. If not, the session is aborted.
At this point, both parties are verified to be who they claim to be and safe for the other to communicate with. Lastly, Alice and Bob will create a shared secret key so that they can continue communicating in a secure manner.
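The verification steps above can be sketched in a few lines. This is a toy illustration only: it substitutes a shared-key HMAC for the public-key encryption described in the steps, and the user names, key, and delay limit are all made up.

```python
# Toy sketch of the mutual-authentication checks described above.
# Real deployments use public-key encryption; an HMAC stands in here
# so the sketch stays dependency-free.
import hmac, hashlib, time

MAX_DELAY = 30  # seconds of allowed timestamp skew (illustrative value)

def make_token(sender_id, key):
    """Build an authentication message: sender ID, timestamp, MAC over both."""
    ts = time.time()
    mac = hmac.new(key, f"{sender_id}|{ts}".encode(), hashlib.sha256).hexdigest()
    return {"id": sender_id, "ts": ts, "mac": mac}

def verify_token(token, key, valid_users):
    """Run the checks from the steps above; any failure aborts the session."""
    if set(token) != {"id", "ts", "mac"}:                # format check
        return False
    if abs(time.time() - token["ts"]) > MAX_DELAY:       # freshness / replay check
        return False
    expected = hmac.new(key, f"{token['id']}|{token['ts']}".encode(),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, token["mac"]):  # authenticity check
        return False
    return token["id"] in valid_users                    # known-user check

key = b"shared-demo-secret"
users = {"alice", "bob"}

# Alice -> Bob, then Bob -> Alice; only if both pass is the session established.
a_to_b = make_token("alice", key)
b_to_a = make_token("bob", key)
session_ok = verify_token(a_to_b, key, users) and verify_token(b_to_a, key, users)
```

Note how the abort conditions mirror the listed steps: a malformed message, a stale timestamp, a forged MAC, or an unknown user each terminates the session before any data is exchanged.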
To verify that mutual authentication has occurred successfully, Burrows–Abadi–Needham logic (BAN logic) is a well-regarded and widely accepted method, because it verifies that a message came from a trustworthy entity. BAN logic first assumes an entity is not to be trusted, and then verifies its legality.
Defenses
Mutual authentication supports zero trust networking because it can protect communications against adversarial attacks, notably:
Man-in-the-middle attack: Man-in-the-middle (MITM) attacks are when a third party wishes to eavesdrop on or intercept a message, and sometimes alter the intended message for the recipient. The two parties openly receive messages without verifying the sender, so they do not realize an adversary has inserted themselves into the communication line. Mutual authentication can prevent MITM attacks because both the sender and recipient verify each other before sending their message keys, so if one of the parties is not verified to be who they claim to be, the session will end.
Replay attack: A replay attack is similar to a MITM attack in that older messages are replayed out of context to fool the server. However, this does not work against schemes using mutual authentication, because timestamps are a verification factor used in the protocols. If the change in time is greater than the maximum allowed time delay, the session is aborted. Similarly, messages can include a randomly generated number to keep track of when a message was sent.
Spoofing attack: Spoofing attacks rely on using false data to pose as another user in order to gain access to a server or be identified as someone else. Mutual authentication can prevent spoofing attacks because the server will authenticate the user as well, and verify that they have the correct session key before allowing any further communication and access.
Impersonation attack: Impersonation attacks are malicious attacks in which a user or individual pretends to be an authorized user to gain unauthorized access to a system. When each party authenticates the other, they send each other a certificate that only the other party knows how to unscramble, verifying themselves as a trusted source. In this way, adversaries cannot mount impersonation attacks, because they do not have the correct certificate to act as if they were the other party.
Mutual authentication also ensures information integrity because if the parties are verified to be the correct source, then the information received is reliable as well.
mTLS
By default, the TLS protocol only proves the identity of the server to the client using X.509 certificates; authentication of the client to the server is left to the application layer. TLS also offers client-to-server authentication using client-side X.509 certificates. Because this requires provisioning certificates to the clients and makes for a less user-friendly experience, it is rarely used in end-user applications.
Mutual TLS authentication (mTLS) is more often used in business-to-business (B2B) applications, where a limited number of programmatic and homogeneous clients are connecting to specific web services, the operational burden is limited, and security requirements are usually much higher as compared to consumer environments.
mTLS is also used in microservices-based applications based on runtimes such as Dapr, via systems like SPIFFE.
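As a concrete configuration sketch, Python's standard `ssl` module turns ordinary TLS into mTLS by requiring a client certificate on the server side. The certificate and CA file names below are placeholders, and the `load_*` calls are commented out because the X.509 material is assumed to exist only in a real deployment.

```python
# Sketch of mTLS configuration with Python's standard ssl module.
# File paths are placeholders, not real certificates.
import ssl

def server_context():
    """Server side: present our own cert AND require a client certificate."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.verify_mode = ssl.CERT_REQUIRED          # this flag makes TLS mutual
    # ctx.load_cert_chain("server.pem", "server.key")   # server identity
    # ctx.load_verify_locations("clients-ca.pem")       # CA vetting clients
    return ctx

def client_context():
    """Client side: verify the server and present our own certificate."""
    ctx = ssl.create_default_context(ssl.Purpose.SERVER_AUTH)
    # ctx.load_cert_chain("client.pem", "client.key")   # client identity
    # ctx.load_verify_locations("server-ca.pem")
    return ctx

srv = server_context()
cli = client_context()
```

The design choice is symmetric trust: each side loads its own certificate chain and a CA bundle used to vet the peer, so the handshake fails unless both identities verify.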
Lightweight schemes vs. secured schemes
While lightweight schemes and secure schemes are not mutually exclusive, adding a mutual authentication step to data transmissions protocols can often increase performance runtime and computational costs. This can become an issue for network systems that cannot handle large amounts of data or those that constantly have to update for new real-time data (e.g. location tracking, real-time health data).
Thus, it becomes a desired characteristic of many mutual authentication schemes to have lightweight properties (e.g. have a low memory footprint) in order to accommodate the system that is storing a lot of data. Many systems implement cloud computing, which allows quick access to large amounts of data, but sometimes large amounts of data can slow down communication. Even with edge-based cloud computing, which is faster than general cloud computing due to a closer proximity between the server and user, lightweight schemes allow for more speed when managing larger amounts of data. One solution to keep schemes lightweight during the mutual authentication process is to limit the number of bits used during communication.
Applications that solely rely on device-to-device (D2D) communication, where multiple devices can communicate locally in close proximities, removes the third party network. This in turn can speed up communication time. However, the authentication still occurs through insecure channels, so researchers believe it is still important to ensure mutual authentication occurs in order to keep a secure scheme.
Schemes may sacrifice a better runtime or storage cost when ensuring mutual authentication in order to prioritize protecting the sensitive data.
Password-based schemes
In mutual authentication schemes that require a user's input password as part of the verification process, there is a higher vulnerability to hackers because the password is human-made rather than a computer-generated certificate. While applications could simply require users to use a computer-generated password, it is inconvenient for people to remember. User-made passwords and the ability to change one's password are important for making an application user-friendly, so many schemes work to accommodate the characteristic. Researchers note that a password based protocol with mutual authentication is important because user identities and passwords are still protected, as the messages are only readable to the two parties involved.
However, a negative aspect about password-based authentication is that password tables can take up a lot of memory space. One way around using a lot of memory during a password-based authentication scheme is to implement one-time passwords (OTP), which is a password sent to the user via SMS or email. OTPs are time-sensitive, which means that they will expire after a certain amount of time and that memory does not need to be stored.
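A time-based OTP of the kind described can be sketched with the standard library alone. This follows the general shape of TOTP (RFC 6238) rather than any scheme named in this article; the secret and 30-second window are illustrative.

```python
# Minimal time-based one-time password (TOTP-style) sketch: the code is
# derived from a shared secret and the current 30-second window, so it
# expires on its own and no password table needs to be stored.
import hmac, hashlib, struct, time

def totp(secret, t=None, step=30, digits=6):
    counter = int((time.time() if t is None else t) // step)
    msg = struct.pack(">Q", counter)              # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                    # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

secret = b"shared-otp-secret"
code_now = totp(secret)   # valid only for the current 30 s window
```

Because the counter changes every `step` seconds, a code captured by an attacker becomes useless once the window rolls over, which is what makes OTPs time-sensitive.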
Multi-factor authentication
More recent schemes use stronger authentication than passwords alone. While password-based authentication is considered "single-factor" authentication, schemes are beginning to implement smart card (two-factor) or biometric-based (three-factor) authentication. Smart cards are simple to implement and easy to use for authentication, but still carry a risk of tampering. Biometrics have grown more popular than password-based schemes because it is more difficult to copy or guess session keys when using biometrics, though encrypting noisy biometric data can be difficult. Due to these security risks and limitations, schemes can still employ mutual authentication regardless of how many authentication factors are added.
Certificate based schemes and system applications
Mutual authentication is often found in schemes employed in the Internet of Things (IoT), where physical objects are incorporated into the Internet and can communicate via IP address. Authentication schemes can be applied to many types of systems that involve data transmission. As the Internet's presence in mechanical systems increases, writing effective security schemes for large numbers of users, objects, and servers can become challenging, especially when needing schemes to be lightweight and have low computational costs. Instead of password-based authentication, devices will use certificates to verify each other's identities.
Radio networks
Mutual authentication can be satisfied in radio network schemes, where data transmissions through radio frequencies are secure after verifying the sender and receiver.
Radio frequency identification (RFID) tags are commonly used for object detection, and many manufacturers are incorporating them into their warehouse systems for automation. This allows for a faster way to keep up with inventory and track objects. However, keeping track of items in a system with RFID tags that transmit data to a cloud server increases the chances of security risks, as there are now more digital elements to keep track of. A three-way mutual authentication can occur between the RFID tags, the tag readers, and the cloud network that stores this data, in order to keep RFID tag data secure and safe from manipulation.
Similarly, an alternate RFID tag and reader system that assigns designated readers to tags has been proposed for extra security and low memory cost. Instead of considering all tag readers as one entity, only certain readers can read specific tags. With this method, if a reader is breached, it will not affect the whole system. Individual readers will communicate with specific tags during mutual authentication, which runs in constant time as readers use the same private key for the authentication process.
Many e-Healthcare systems that remotely monitor patient health data use wireless body area networks (WBAN) that transmit data through radio frequencies. This is beneficial for patients who should not be disturbed while being monitored, and can reduce the workload for medical workers, allowing them to focus on more hands-on tasks. However, a large concern for healthcare providers and patients about remote health data tracking is that sensitive patient data is transmitted through unsecured channels, so authentication occurs between the medical body area network user (the patient), the Healthcare Service Provider (HSP), and the trusted third party.
Cloud based computing
e-Healthcare clouds are another way to store patient data collected remotely. Clouds are useful for storing large amounts of data, such as medical information, that can be accessed by many devices whenever needed. Telecare Medical Information Systems (TMIS), an important way for medical patients to receive healthcare remotely, can ensure secured data with mutual authentication verification schemes. Blockchain is one way that has been proposed to mutually authenticate the user to the database, by authenticating with the main mediBchain node and keeping patient anonymity.
Fog-cloud computing is a networking system that can handle large amounts of data, but still has limitations regarding computational and memory cost. Mobile edge computing (MEC) is considered to be an improved, more lightweight fog-cloud computing networking system, and can be used for medical technology that also revolves around location-based data. Due to the large physical range required of locational tracking, 5G networks can send data to the edge of the cloud to store data. An application like smart watches that track patient health data can be used to call the nearest hospital if the patient shows a negative change in vitals.
Fog node networks can be implemented in car automation, keeping data about the car and its surrounding states secure. By authenticating the fog nodes and the vehicle, vehicular handoff becomes a safe process and the car’s system is safe from hackers.
Machine to machine verification
Many systems that do not involve a human user also have protocols that mutually authenticate between parties. In unmanned aerial vehicle (UAV) systems, a platform authentication occurs rather than user authentication. Mutual authentication during vehicle communication prevents one vehicle's system from being breached, which could then affect the whole system negatively. For example, a system of drones can be employed for agricultural work and cargo delivery, but if one drone is breached, the whole system has the potential to collapse.
External links
Two types of Mutual Authentication
References
Authentication methods
Computer access control | Mutual authentication | [
"Engineering"
] | 2,772 | [
"Cybersecurity engineering",
"Computer access control"
] |
3,064,421 | https://en.wikipedia.org/wiki/British%20Power%20International | British Power International (BPI) provides design and advisory solutions to the power sector and is based in Colchester, Essex. BPI is part of the Freedom Group, who has recently been acquired by NG Bailey.
The company's design practice focuses on power system planning, design, asset management and provides associated project management, safety and quality assurance services. Its consulting practice provides advice on the technical, financial, safety and environmental aspects of power generation, transmission, distribution and supply in a range of regulatory frameworks and market conditions. Based in the UK it operates worldwide.
Customers
The company has a broad customer base. BPI has worked for the UK Government and UK Regulatory Authority (Ofgem) as an adviser and auditor of electricity companies’ performance. It provides design services to electricity network operators in the UK.
Internationally it works for NGOs, including the Asian Development Bank, as well as governments, regulatory authorities, and actual and potential investors in the power sector.
Community
The company has recognised the shortage of qualified engineers in the UK and has taken initiatives to encourage students to take an interest in the electricity sector. BPI has supported the work of both Colchester Institute in setting up a course specifically aimed at preparing students for a career in the power sector, and the Masters Programme at the University of Newcastle.
History
Formed in 1979 as British Electricity International, it was the international arm of the UK's electricity supply industry, exporting technical and commercial expertise worldwide. Following the privatisation of the electricity industry, ownership passed to a successor company and from 1996 (renamed as British Power International) it became part of Eastern Group, one of the UK's leading power companies.
In 1999 it became an independent consultancy practice (British Power International Limited).
In 2008 Spice plc acquired British Power International Ltd. In September 2011 Spice Limited rebranded as EnServe Group.
References
International engineering consulting firms
Engineering consulting firms of the United Kingdom
Electrical engineering companies of the United Kingdom
British companies established in 1979
1979 establishments in England | British Power International | [
"Engineering"
] | 399 | [
"Engineering consulting firms",
"International engineering consulting firms"
] |
3,064,522 | https://en.wikipedia.org/wiki/Predictive%20maintenance | Predictive maintenance techniques are designed to help determine the condition of in-service equipment in order to estimate when maintenance should be performed. This approach claims more cost savings over routine or time-based preventive maintenance, because tasks are performed only when warranted. Thus, it is regarded as condition-based maintenance carried out as suggested by estimations of the degradation state of an item.
The main appeal of predictive maintenance is to allow convenient scheduling of corrective maintenance, and to prevent unexpected equipment failures. By taking into account measurements of the state of the equipment, maintenance work can be better planned (spare parts, people, etc.) and what would have been "unplanned stops" are transformed to shorter and fewer "planned stops", thus increasing plant availability. Other potential advantages include increased equipment lifetime, increased plant safety, fewer accidents with negative impact on environment, and optimized spare parts handling.
Predictive maintenance differs from preventive maintenance because it takes into account the current condition of equipment (based on measurements), instead of average or expected life statistics, to predict when maintenance will be required. Machine learning approaches can be used to forecast the equipment's future states.
Some of the main components that are necessary for implementing predictive maintenance are data collection and preprocessing, early fault detection, fault detection, time to failure prediction, and maintenance scheduling and resource optimization. Predictive maintenance has been considered to be one of the driving forces for improving productivity and one of the ways to achieve "just-in-time" in manufacturing.
Overview
Predictive maintenance evaluates the condition of equipment by performing periodic (offline) or continuous (online) equipment condition monitoring. The ultimate goal of the approach is to perform maintenance at a scheduled point in time when the maintenance activity is most cost-effective and before the equipment loses performance within a threshold. This results in a reduction in unplanned downtime costs because of failure, where costs can be in the hundreds of thousands per day depending on industry. In energy production, in addition to loss of revenue and component costs, fines can be levied for non-delivery, increasing costs even further. This is in contrast to time- and/or operation count-based maintenance, where a piece of equipment gets maintained whether it needs it or not. Time-based maintenance is labor intensive, ineffective in identifying problems that develop between scheduled inspections, and therefore is not cost-effective.
The "predictive" component of predictive maintenance stems from the goal of predicting the future trend of the equipment's condition. This approach uses principles of statistical process control to determine at what point in the future maintenance activities will be appropriate.
Most predictive inspections are performed while equipment is in service, thereby minimizing disruption of normal system operations. Adoption of predictive maintenance can result in substantial cost savings and higher system reliability. Prolonged repairs, by contrast, extend downtime and Mean Time to Repair (MTTR) and cause production losses, which hurt profitability, disrupt service continuity, and reduce customer satisfaction; these pressures grow as equipment ages and maintenance requirements intensify.
Reliability-centered maintenance emphasizes the use of predictive maintenance techniques in addition to traditional preventive measures. When properly implemented, it provides companies with a tool for achieving lowest asset net present costs for a given level of performance and risk.
One goal is to transfer the predictive maintenance data to a computerized maintenance management system so that the equipment condition data is sent to the right equipment object to trigger maintenance planning, work order execution, and reporting. Unless this is achieved, the predictive maintenance solution is of limited value, at least if the solution is implemented on a medium to large plant with tens of thousands of pieces of equipment. In 2010, the mining company Boliden implemented a combined distributed control system and predictive maintenance solution integrated with the plant's computerized maintenance management system on an object-to-object level, transferring equipment data using protocols such as the Highway Addressable Remote Transducer Protocol, IEC 61850, and OLE for Process Control.
Technologies
To evaluate equipment condition, predictive maintenance utilizes nondestructive testing technologies such as infrared, acoustic (partial discharge and airborne ultrasonic), corona detection, vibration analysis, sound level measurements, oil analysis, and other specific online tests. A new approach in this area is to utilize measurements on the actual equipment in combination with measurement of process performance, measured by other devices, to trigger equipment maintenance. This is primarily available in collaborative process automation systems (CPAS). Site measurements are often supported by wireless sensor networks to reduce the wiring cost.
Vibration analysis is most productive on high-speed rotating equipment and can be the most expensive component of a PdM program to get up and running. Vibration analysis, when properly done, allows the user to evaluate the condition of equipment and avoid failures. The latest generation of vibration analyzers comprises more capabilities and automated functions than its predecessors. Many units display the full vibration spectrum of three axes simultaneously, providing a snapshot of what is going on with a particular machine. But despite such capabilities, not even the most sophisticated equipment successfully predicts developing problems unless the operator understands and applies the basics of vibration analysis.
In certain situations, strong background noise interferences from several competing sources may mask the signal of interest and hinder the industrial applicability of vibration sensors. Consequently, motor current signature analysis (MCSA) is a non-intrusive alternative to vibration measurement which has the potential to monitor faults from both electrical and mechanical systems.
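The spectral idea behind vibration analysis and MCSA can be illustrated with a toy direct DFT that projects a sampled signal onto a few candidate frequencies. This is only a sketch: real systems compute full FFT spectra over much longer records, and the signal here is synthetic.

```python
# Toy spectral check in the spirit of vibration / current signature
# analysis: measure signal power at candidate frequencies and report
# the dominant one.
import math

def tone_power(signal, fs, freq):
    """Power of `signal` (sampled at fs Hz) at one candidate frequency."""
    re = sum(s * math.cos(2 * math.pi * freq * i / fs)
             for i, s in enumerate(signal))
    im = sum(s * math.sin(2 * math.pi * freq * i / fs)
             for i, s in enumerate(signal))
    return (re * re + im * im) / len(signal)

fs = 1000                                            # sample rate, Hz
sig = [math.sin(2 * math.pi * 50 * i / fs)           # strong 50 Hz component
       + 0.3 * math.sin(2 * math.pi * 120 * i / fs)  # weaker 120 Hz fault tone
       for i in range(fs)]                           # one second of samples

dominant = max([50, 60, 120, 180], key=lambda f: tone_power(sig, fs, f))
```

In a monitoring context, the interesting quantity is usually not the dominant line frequency but the growth over time of weaker sideband or fault tones like the 120 Hz component here.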
Remote visual inspection is the oldest form of non-destructive testing and provides a cost-efficient primary assessment. Essential information and defects can be deduced from the external appearance of the piece, such as folds, breaks, cracks, and corrosion. Remote visual inspection has to be carried out in good conditions with sufficient lighting (at least 350 lux). When the part of the piece to be inspected is not directly accessible, an instrument made of mirrors and lenses called an endoscope is used. Hidden defects with external irregularities may indicate a more serious defect inside.
Acoustical analysis can be done on a sonic or ultrasonic level. New ultrasonic techniques for condition monitoring make it possible to "hear" friction and stress in rotating machinery, which can predict deterioration earlier than conventional techniques. Ultrasonic technology is sensitive to high-frequency sounds that are inaudible to the human ear and distinguishes them from lower-frequency sounds and mechanical vibration. Machine friction and stress waves produce distinctive sounds in the upper ultrasonic range.
Changes in these friction and stress waves can suggest deteriorating conditions much earlier than technologies such as vibration or oil analysis. With proper ultrasonic measurement and analysis, it's possible to differentiate normal wear from abnormal wear, physical damage, imbalance conditions, and lubrication problems based on a direct relationship between asset and operating conditions.
Sonic monitoring equipment is less expensive, but it also has fewer uses than ultrasonic technologies. Sonic technology is useful only on mechanical equipment, while ultrasonic equipment can detect electrical problems and is more flexible and reliable in detecting mechanical problems.
Infrared monitoring and analysis has the widest range of application (from high- to low-speed equipment), and it can be effective for spotting both mechanical and electrical failures; some consider it to currently be the most cost-effective technology.
Oil analysis is a long-term program that, where relevant, can eventually be more predictive than any of the other technologies. It can take years for a plant's oil program to reach this level of sophistication and effectiveness.
Analytical techniques performed on oil samples can be classified in two categories: used oil analysis and wear particle analysis. Used oil analysis determines the condition of the lubricant itself, assesses its quality, and checks its suitability for continued use. Wear particle analysis determines the mechanical condition of the machine components that are lubricated: it identifies the composition of the solid material present and evaluates particle type, size, concentration, distribution, and morphology.
The use of Model Based Condition Monitoring for predictive maintenance programs is becoming increasingly popular over time. This method involves spectral analysis on the motor's current and voltage signals and then compares the measured parameters to a known and learned model of the motor to diagnose various electrical and mechanical anomalies. This process of "model based" condition monitoring was originally designed and used on NASA's space shuttle to monitor and detect developing faults in the space shuttle's main engine. It allows for the automation of data collection and analysis tasks, providing round the clock condition monitoring and warnings about faults as they develop. Other predictive maintenance methods are related to smart testing strategies.
Applications
Environmental monitoring
Detect changes of the calibration distribution (i.e., statistical distribution of pollutants and environmental conditions) for low-cost gas sensor systems.
Railway
Detect warning signs before they cause downtime for linear, fixed and mobile assets.
Improving safety and track void detection through vehicle cab-based monitoring systems, which can also identify the type of track asset the void is located under and indicate the severity of the void.
Health monitoring of point machines (devices used to operate railway turnouts) can aid in detecting early symptoms of degradation prior to failure.
Manufacturing
Early fault detection and diagnosis in the manufacturing industry.
Manufacturers increasingly collect big data from Internet of Things (IoT) sensors in their factories and products, and apply algorithms to the collected data to detect warning signs of expensive failures before they occur.
Oil and gas
Oil and gas companies often lack visibility into the condition of their equipment, especially in remote offshore and deep-water locations.
Big data can provide insight to oil and gas companies; in this way, equipment failures and the optimal lifetime of the system and its components can be analyzed and predicted.
See also
Computerized maintenance management system
Intelligent maintenance system
Production flow analysis
RCASE
Root cause analysis
References
Safety engineering
Maintenance | Predictive maintenance | [
"Engineering"
] | 2,028 | [
"Safety engineering",
"Systems engineering",
"Maintenance",
"Mechanical engineering"
] |
3,064,553 | https://en.wikipedia.org/wiki/Cayley%E2%80%93Bacharach%20theorem | In mathematics, the Cayley–Bacharach theorem is a statement about cubic curves (plane curves of degree three) in the projective plane. The original form states:
Assume that two cubics C1 and C2 in the projective plane meet in nine (different) points, as they do in general over an algebraically closed field. Then every cubic that passes through any eight of the points also passes through the ninth point.
A more intrinsic form of the Cayley–Bacharach theorem reads as follows:
Every cubic curve over an algebraically closed field that passes through a given set of eight points also passes through (counting multiplicities) a ninth point, which depends only on the eight given points.
A related result on conics was first proved by the French geometer Michel Chasles and later generalized to cubics by Arthur Cayley and Isaak Bacharach.
Details
If seven of the points lie on a conic, then the ninth point can be chosen on that conic, since any cubic through the seven points will contain the whole conic on account of Bézout's theorem. In other cases, we have the following.
If no seven of the eight points are co-conic, then the vector space of cubic homogeneous polynomials that vanish on (the affine cones of) the eight points (with multiplicity for double points) has dimension two.
In that case, every cubic through the eight points also passes through the intersection of any two different cubics through them, which consists of at least nine points (over the algebraic closure) on account of Bézout's theorem. These points cannot all be among the eight given ones, which yields the ninth point.
Since degenerate conics are a union of at most two lines, there are always four out of seven points on a degenerate conic that are collinear. Consequently:
If no seven of the eight points lie on a non-degenerate conic, and no four of them lie on a line, then the vector space of cubic homogeneous polynomials that vanish on (the affine cones of) the eight points has dimension two.
On the other hand, assume three of the points, say P1, P2, P3, are collinear and no seven of the eight points P1, ..., P8 are co-conic. Then no five points of P1, ..., P8 and no three points of P4, ..., P8 are collinear. Since every cubic through the eight points will contain the whole line through P1, P2, P3 on account of Bézout's theorem, the vector space of cubic homogeneous polynomials that vanish on (the affine cones of) P1, ..., P8 is isomorphic to the vector space of quadratic homogeneous polynomials that vanish on (the affine cones of) P4, ..., P8, which has dimension two.
Although the sets of conditions for both dimension two results are different, they are both strictly weaker than full general positions: three points are allowed to be collinear, and six points are allowed to lie on a conic (in general two points determine a line and five points determine a conic). For the Cayley–Bacharach theorem, it is necessary to have a family of cubics passing through the nine points, rather than a single one.
According to Bézout's theorem, two different cubic curves over an algebraically closed field which have no common irreducible component meet in exactly nine points (counted with multiplicity). The Cayley–Bacharach theorem thus asserts that the last point of intersection of any two members in the family of curves does not move if eight intersection points (without seven co-conic ones) are already prescribed.
Applications
A special case is Pascal's theorem, in which case the two cubics in question are all degenerate: given six points on a conic (a hexagon), consider the lines obtained by extending opposite sides – this yields two cubics of three lines each, which intersect in 9 points – the 6 points on the conic, and 3 others. These 3 additional points lie on a line, as the conic plus the line through any two of the points is a cubic passing through 8 of the points.
A second application is Pappus's hexagon theorem, similar to the above, but the six points are on two lines instead of on a conic.
Finally, a third case is found in proving the associativity of elliptic curve point addition. Let a first cubic contain the three lines BC, O(A+B) and A(B+C), and a second cubic contain the three lines AB, O(B+C) and C(A+B). The following eight points are common to both cubics: A, B, C, A+B, -A-B, B+C, -B-C, O. Hence their ninth points must coincide, -A-(B+C) = -(A+B)-C, giving the associativity.
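The associativity this argument establishes can also be verified mechanically on a small example. The sketch below uses the curve y² = x³ + 3 over F₁₁ (an arbitrary choice, not from the source), enumerates every point by brute force, and checks all triples under the standard chord-and-tangent group law:

```python
# Brute-force check of associativity for the group law on the elliptic
# curve y^2 = x^3 + a*x + b over the prime field F_p. The particular
# curve (a = 0, b = 3, p = 11) is an arbitrary small example.
P_MOD, A, B = 11, 0, 3
O = None  # the point at infinity, the group identity

def add(P, Q):
    """Chord-and-tangent addition of points on the curve."""
    if P is O:
        return Q
    if Q is O:
        return P
    x1, y1 = P
    x2, y2 = Q
    if x1 == x2 and (y1 + y2) % P_MOD == 0:
        return O  # P + (-P) = O (also covers doubling a 2-torsion point)
    if P == Q:
        s = (3 * x1 * x1 + A) * pow(2 * y1, -1, P_MOD) % P_MOD  # tangent slope
    else:
        s = (y2 - y1) * pow(x2 - x1, -1, P_MOD) % P_MOD          # chord slope
    x3 = (s * s - x1 - x2) % P_MOD
    return (x3, (s * (x1 - x3) - y1) % P_MOD)

# Enumerate every affine point on the curve, plus O.
points = [O] + [(x, y) for x in range(P_MOD) for y in range(P_MOD)
                if (y * y - x ** 3 - A * x - B) % P_MOD == 0]

assert all(add(add(P, Q), R) == add(P, add(Q, R))
           for P in points for Q in points for R in points)
print(f"associativity holds for all {len(points)}**3 triples")
```

Such a check is of course no substitute for the Cayley–Bacharach argument, which proves associativity for every curve at once.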
Dimension counting
One can understand the Cayley–Bacharach theorem, and why it arises for degree 3, by dimension counting. Simply stated, nine points determine a cubic, but in general define a unique cubic. Thus if the nine points lie on more than one cubic, equivalently on the intersection of two cubics (as 3 × 3 = 9), they are not in general position – they are overdetermined by one dimension – and thus cubics passing through them satisfy one additional constraint, as reflected in the "eight implies nine" property. The general phenomenon is called superabundance; see Riemann–Roch theorem for surfaces.
Details
Formally, first recall that given two curves of degree d, they define a pencil (one-parameter linear system) of degree d curves by taking projective linear combinations of the defining equations; this corresponds to two points determining a projective line in the parameter space of curves, which is simply projective space.
The Cayley–Bacharach theorem arises for high degree because the number of intersection points of two curves of degree d, namely d² (by Bézout's theorem), grows faster than the number of points needed to define a curve of degree d, which is given by
d(d + 3)/2 = (d + 1)(d + 2)/2 − 1.
These first agree for d = 3, which is why the Cayley–Bacharach theorem occurs for cubics, and for higher degree d² is greater, hence the higher degree generalizations.
In detail, the number of points required to determine a curve of degree d is the number of monomials of degree d, minus 1 from projectivization. For the first few values of d these two counts yield:
d = 1: 2 and 1: two points determine a line, two lines intersect in a point,
d = 2: 5 and 4: five points determine a conic, two conics intersect in four points,
d = 3: 9 and 9: nine points determine a cubic, two cubics intersect in nine points,
d = 4: 14 and 16.
Thus these first agree for d = 3, and the number of intersections is larger when d > 3.
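The two counts in this list come directly from the formulas d(d + 3)/2 and d²; a few lines of code (a trivial illustration) reproduce the table:

```python
def points_to_determine(d):
    """Monomials of degree d in three variables, minus 1 for projectivization."""
    return (d + 1) * (d + 2) // 2 - 1   # equals d*(d + 3)//2

def intersection_points(d):
    """Bezout: two degree-d curves meet in d^2 points (with multiplicity)."""
    return d * d

for d in range(1, 5):
    print(d, points_to_determine(d), intersection_points(d))
# 1 2 1
# 2 5 4
# 3 9 9
# 4 14 16
```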
The meaning of this is that the 9 points of intersection of two cubics are in special position with respect to cubics, a fortiori for higher degree, but unlike for lower degree: two lines intersect in a point, which is trivially in general linear position, and two quadratics intersect in four points, which (assuming the quadratics are irreducible so no three points are collinear) are in general quadratic position because five points determine a quadratic, and any four points (in general linear position) have a pencil of quadratics through them, since the system is underdetermined. For cubics, nine points determine a cubic, but in general they determine a unique cubic – thus having two different cubics pass through them (and thus a pencil) is special – the solution space is one dimension higher than expected, and thus the solutions satisfy an additional constraint, namely the "8 implies 9" property.
More concretely, because the vector space of homogeneous polynomials of degree three in three variables has dimension 10, the system of cubic curves passing through eight (different) points is parametrized by a vector space of dimension at least 10 − 8 = 2 (the vanishing of the polynomial at one point imposes a single linear condition). It can be shown that the dimension is exactly two if no four of the points are collinear and no seven points lie on a conic. The Cayley–Bacharach theorem can be deduced from this fact.
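This dimension count can be checked on a concrete configuration. In the sketch below, the choice of eight points on the curve y = x³ is our own (a line meets that curve in at most three points and a conic in at most six, so no four of the points are collinear and no seven are co-conic); each point imposes one linear condition on the ten cubic monomial coefficients, and exact rational Gaussian elimination confirms that the solution space has dimension 10 − 8 = 2:

```python
from fractions import Fraction

# The ten cubic monomials x^i y^j z^k with i + j + k = 3.
monomials = [(i, j, 3 - i - j) for i in range(4) for j in range(4 - i)]
assert len(monomials) == 10

# Eight distinct points on y = x^3, projectively (t, t^3, 1). A line meets
# this curve in at most 3 points and a conic in at most 6, so no four of
# them are collinear and no seven lie on a conic.
points = [(Fraction(t), Fraction(t) ** 3, Fraction(1)) for t in range(1, 9)]

# Each point gives one linear condition on the 10 monomial coefficients.
rows = [[x ** i * y ** j * z ** k for (i, j, k) in monomials]
        for (x, y, z) in points]

def rank(mat):
    """Rank via exact Gaussian elimination over the rationals."""
    mat = [row[:] for row in mat]
    r = 0
    for col in range(len(mat[0])):
        pivot = next((i for i in range(r, len(mat)) if mat[i][col] != 0), None)
        if pivot is None:
            continue
        mat[r], mat[pivot] = mat[pivot], mat[r]
        for i in range(len(mat)):
            if i != r and mat[i][col] != 0:
                f = mat[i][col] / mat[r][col]
                mat[i] = [a - f * b for a, b in zip(mat[i], mat[r])]
        r += 1
    return r

dim = 10 - rank(rows)
print(dim)  # 2: the cubics through these eight points form a pencil plus scalars
```

The two-dimensional solution space is spanned by the defining cubic yz² − x³ itself and one further independent cubic, which is exactly the pencil the theorem needs.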
See also
Linear system of divisors
References
Footnotes
Bibliography
Michel Chasles, Traité des sections coniques, Gauthier-Villars, Paris, 1885.
Edward D. Davis, Anthony V. Geramita, and Ferruccio Orecchia, Gorenstein algebras and Cayley–Bacharach theorem, Proceedings of the American Mathematical Society 93 (1985), 593–597.
David Eisenbud, Mark Green, and Joe Harris, Cayley–Bacharach theorems and conjectures, Bulletin of the American Mathematical Society 33 (1996), no. 3, 295—324.
Algebraic curves
Theorems in projective geometry
Theorems in algebraic geometry | Cayley–Bacharach theorem | [
"Mathematics"
] | 1,797 | [
"Theorems in algebraic geometry",
"Theorems in projective geometry",
"Theorems in geometry"
] |
3,064,768 | https://en.wikipedia.org/wiki/Suicide%20legislation | Suicide is a crime in some parts of the world. However, while suicide has been decriminalized in many countries, the act is almost universally stigmatized and discouraged. In some contexts, suicide could be utilized as an extreme expression of liberty, as is exemplified by its usage as an expression of devout dissent towards perceived tyranny or injustice which occurred occasionally in cultures such as ancient Rome, medieval Japan, or today's Tibet Autonomous Region.
While a person who has died by suicide is beyond the reach of the law, there can still be legal consequences regarding treatment of the corpse or the fate of the person's property or family members. The associated matters of assisting a suicide and attempting suicide have also been dealt with by the laws of some jurisdictions. Some countries criminalise suicide attempts.
History
In ancient Athens, a person who had died of suicide (without the approval of the state) was denied the honours of a normal burial. The person would be buried alone, on the outskirts of the city, without a headstone or marker.
A criminal ordinance issued by Louis XIV in 1670 was far more severe in its "punishment" of the already-dead body: the corpse was drawn through the streets, face down, and then hung or thrown on a garbage heap. Additionally, all of the person's property was confiscated; this measure was intended to deter suicide by punishing heirs financially.
The Interments (felo de se) Act 1882 abolished the legal requirement in England of burying suicides at crossroads.
Assisted suicide
In many jurisdictions, it is a crime to assist others, directly or indirectly, in taking their own lives. Such legislation requires manufacturers of weapons to refuse sales to those deemed at potential risk of suicide. In some jurisdictions, it is also illegal to encourage people to attempt suicide, though the classification of the crime and its punishment varies. Sometimes an exception applies for physician assisted suicide, under strict conditions.
Laws in individual jurisdictions (table)
Africa
North and South Americas
Asia
Europe
Oceania
Laws in individual jurisdictions
Australia
Victoria
In the Australian state of Victoria, while suicide itself is no longer a crime, a survivor of a suicide pact can be charged with manslaughter rather than murder if they killed the deceased party. Also, it is a crime to counsel, incite, or aid and abet another in attempting suicide, and the law explicitly allows any person to use "such force as may reasonably be necessary" to prevent another from committing suicide.
On 29 November 2017, the state of Victoria passed the Voluntary Assisted Dying Act, making it legal for a doctor to assist a terminally ill patient with less than six months to live to end their own life. The law came into effect on 19 June 2019.
Queensland
On 17 September 2021, the state of Queensland passed the Voluntary Assisted Dying Act 2021. The law went into effect on 1 January 2023.
Past laws
Australian Capital Territory
Australian Capital Territory (ACT) governments had regularly advocated for the right to legalise euthanasia-related schemes between 1997 and 2022, while the federal ban was in effect. Shortly after the federal ban was repealed, the ACT government confirmed it would seek to introduce legislation into the ACT Legislative Assembly in 2023 to permit voluntary assisted dying. A formal consultation period was opened by the government in February 2023, which culminated in a report endorsing the establishment of a voluntary assisted dying scheme, published on 29 June 2023. On 31 October 2023, the Voluntary Assisted Dying Bill 2023 was introduced into the Legislative Assembly and immediately referred to a select committee for further consultation, to report back by 29 February 2024. Under the current legislation, a person would be eligible for voluntary assisted dying if they are aged over 18, are seeking it voluntarily with decision-making capability, are intolerably suffering from an advanced, progressive condition expected to cause death, and have lived in the ACT for at least 12 months or have a significant Canberra connection.
New South Wales
On 21 September 2017 National Party MLC Trevor Khan introduced the Voluntary Assisted Dying Bill 2017 into the New South Wales Parliament. The Bill was modelled on the Oregon Death With Dignity Act, and was developed by a cross party working group that considered 72 "substantial" submissions. The Bill contained what advocates labelled a "raft of safeguards" including a seven-person oversight board to review all assisted deaths. The upper house debated the bill throughout several sittings in November 2017, and on 16 November the bill was voted down 20 votes to 19.
In October 2021 independent MLA Alex Greenwich introduced the Voluntary Assisted Dying Bill into the lower house of the Parliament. The legislation was subjected to a cross-party conscience vote, after Premier and Liberal Party leader Dominic Perrottet indicated he would grant Liberal members a conscience vote. The legislation was passed in the Legislative Assembly on 26 November 2021 by 52 votes to 32, and proceeded to the Legislative Council. The bill passed the Legislative Council by 23 votes to 15 on 19 May 2022, with amendments attached, that were agreed to by the Assembly that same day. The legislation received royal assent on 27 May 2022, and will go into effect 18 months thereafter.
Under the provisions of the legislation, a person may make a request for a voluntary assisted death to a specialist doctor, which is lodged with the Voluntary Assisted Dying Board. If the doctor is satisfied that the person has the capacity to make the decision and is doing so voluntarily and determines that the person meets the criteria (i.e: they have a terminal illness that will result in death within six months, or a neurodegenerative condition that will result in death within 12 months, and whose suffering is such that it creates a painful condition that cannot be tolerably relieved), they can approve the request. The same process must then be followed by a second independent doctor. The person may then make a written request declaring their intention to end their life, which must be witnessed by two people and then be submitted to the board. A final request must be made five days later and a review done by the first doctor, who can then apply to the Voluntary Assisted Dying Board to allow access to a substance to end their patient's life. The person may administer the relevant substance themselves or have a health practitioner do it.
Northern Territory
Euthanasia was legalised in Australia's Northern Territory by the Rights of the Terminally Ill Act 1995. It passed the Northern Territory Legislative Assembly by a vote of 15 to 10. In August 1996 a repeal bill was brought before the Parliament but was defeated by 14 votes to 11. The law was later voided by the federal Euthanasia Laws Act 1997, which was in effect until 13 December 2022 and prevented the parliaments of territories (specifically the Northern Territory, the Australian Capital Territory and Norfolk Island) from legalising euthanasia or assisted dying. Before the federal override occurred, three people died through physician-assisted suicide under the legislation, aided by Dr Philip Nitschke. The first was a carpenter, Bob Dent, who died on 22 September 1996.
Queensland
In November 2018, the Premier of Queensland, Annastacia Palaszczuk, launched an inquiry considering the possible legalisation of voluntary assisted dying in the state. The inquiry also took into account care of the aged, end of life, and palliative care.
In May 2021, Palaszczuk announced that voluntary assisted dying legislation would be introduced to the Queensland Parliament for consideration. The bill would allow euthanasia, if the patient meets the following criteria:
Has an eligible condition that is advanced and progressive, with the potential for death within the subsequent 12 months;
Is capable of making a decision with sound mind;
Is acting voluntarily and without coercion;
Is at least 18 years old; and
Is a resident of Australia and has lived in Queensland for at least twelve months.
On 16 September 2021, the Queensland Legislative Assembly passed the Voluntary Assisted Dying Act 2021 with 61 votes in favour and 31 opposed. The legislation was subject to a conscience vote. It received royal assent on 23 September 2021 and went into effect on 1 January 2023.
South Australia
In November 2016, the South Australian House of Assembly narrowly rejected a private member's bill which would have legalised a right to request voluntary euthanasia in circumstances where a person is in unbearable pain and suffering from a terminal illness. The bill was the first ever euthanasia bill to pass a second reading stage (27 votes to 19) though the bill was rejected during the clauses debate of the bill (23 votes all, with the Speaker's casting vote against the bill).
In late June 2021, a voluntary euthanasia bill similar to that of other states passed the Parliament of South Australia. The legislation mirrors most of the provisions of the Victorian law, though also allows private hospitals and individual practitioners to conscientiously object from participating in the scheme, provided they refer patients to a place where they can access the scheme. Residents in aged care and retirement villages can also access the scheme in their own homes or units. The Voluntary Assisted Dying Act 2021 went into effect on 31 January 2023.
Tasmania
Tasmania came close to legalising voluntary euthanasia in November 2013, when a Greens-initiated voluntary euthanasia bill was narrowly defeated in the House of Assembly by a vote of 13–12. The bill would have allowed terminally ill Tasmanians to end their lives 10 days after making three separate requests to their doctor. Although both major parties allowed a conscience vote, all ten Liberals voted against the legislation, with Labor splitting seven in favour and three against, and all five Greens voting in favour.
In December 2019, independent Legislative Council member Mike Gaffney announced he would introduce a private member's bill to legalise voluntary assisted dying the following year. The End of Life Choices (Voluntary Assisted Dying) Bill was introduced to the Council on 27 August and was passed on 10 November 2020, without a formal vote being recorded. It proceeded to the Legislative Assembly, where it was passed with amendments attached on 4 March 2021 by 16 votes to 6. After the Council approved of the Assembly's amendments, the legislation received royal assent on 22 April 2021. The legislation went into effect on 23 October 2022.
Under the provisions of the legislation, in order to access the scheme a person must be at least 18 years of age, have decision-making capacity, be acting voluntarily and be suffering intolerably from a medical condition that is advanced, incurable, irreversible and will cause the person's death in the next six months, or 12 months for neurodegenerative disorders. The person must also be an Australian citizen or have resided in the country for at least three continuous years, and for at least 12 months in Tasmania immediately before making their first request. In total three separate requests must be made to access the scheme, each of which comes with progressively more stringent checks and balances.
Victoria
Since 19 June 2019, Victoria permits assisted dying. On 20 September 2017, the Voluntary Assisted Dying Bill 2017 was introduced into the Victorian Parliament by the Andrews Labor Government, permitting assisted suicide. The bill was modelled on the recommendations of an expert panel chaired by former Australian Medical Association president Professor Brian Owler. The bill passed the parliament, with amendments made in the Legislative Council, on 29 November 2017. The upper house voted in favour 22 votes to 18. The lower house voted in favour 47 votes to 37. In passing the bill, Victoria became the first state to legislate for voluntary assisted dying (VAD). The law received royal assent on 5 December 2017 and came into effect on 19 June 2019. Implementation of the legislation was an ongoing process that took approximately 18 months. Challenges noted by the Medical Journal of Australia included restricting access to those who were eligible while ensuring that eligible people were not unfairly prevented from accessing the scheme, translating the legislation into appropriate clinical practice, and supporting and managing doctors with conscientious objections.
Under the provisions of the legislation, assisted suicide (otherwise referred to as voluntary assisted dying) may be available in Victoria under the following conditions:
A person must be suffering from an incurable, advanced and progressive disease, illness or medical condition, and experiencing intolerable suffering.
The condition must be assessed by two medical practitioners to be expected to cause death within six months (an exception exists for a person suffering from a neurodegenerative condition, where instead the condition must be expected to cause death within 12 months).
A person must be over the age of 18 and have lived in Victoria for at least 12 months and have decision-making capacity.
Though mental illness or disability are not grounds for access, people who meet all other criteria and who have a disability or mental illness will not be denied access to assisted dying.
Other processes and safeguards associated with the scheme are in place.
Western Australia
In November 2018 the McGowan Government announced it would introduce an assisted dying bill early in the new year.
On 10 December 2019, the Voluntary Assisted Dying Act 2019 passed the Western Australian Parliament. The legislation had passed the Legislative Council by 24 votes to 11, having previously passed the Legislative Assembly 45 votes to 11. Under the legislation, an eligible person would have to be terminally ill with a condition that is causing intolerable suffering and is likely to cause death within six months, or 12 months for a neurodegenerative condition. The person would have to make two verbal requests and one written request, with each request signed off by two independent doctors. Self-administration of lethal medication is then permitted, though in a departure from the Victorian system, a patient can choose for a medical practitioner to administer the drug. The legislation was to go into effect on a day fixed by proclamation, with the government advising of an 18-month implementation period; the law went into effect on 1 July 2021.
Legislation decriminalizing suicide in Australian States and Territories
Canada
The common-law crimes of attempting suicide and of assisting suicide were codified in Canada when Parliament enacted the Criminal Code in 1892. It carried a maximum penalty of 2 years' imprisonment. Eighty years later, in 1972, Parliament repealed the offence of attempting suicide from the Criminal Code based on the argument that a legal deterrent was unnecessary. The prohibition on assisting suicide remained, as s 241 of the Criminal Code:
Counselling or aiding suicide
241. Every one who
(a) counsels a person to commit suicide, or
(b) aids or abets a person to commit suicide,
whether suicide ensues or not, is guilty of an indictable offence and liable to imprisonment for a term not exceeding fourteen years.
However, the law against assisted suicide, including physician-assisted suicide, was the subject of much debate including two reports of the Law Reform Commission of Canada in 1982 and 1983, though these did not support changing the law.
In 1993, the offence of assisted suicide survived a constitutional challenge in the Supreme Court of Canada, in the case of Rodriguez v. British Columbia (Attorney General). The plaintiff, Sue Rodriguez, had been diagnosed with amyotrophic lateral sclerosis (ALS) in early 1991. She wished to be able to die by suicide at a time of her own choosing but would require assistance to do so because her physical condition prevented her from acting alone. By a 5-4 majority, the Court held that the prohibition on assisted suicide did not infringe s 7 of the Canadian Charter of Rights and Freedoms, which provides constitutional protection for liberty and security of the person. The majority held that while the law did affect those rights, it did so in a manner consistent with the principles of fundamental justice. The majority also held that the prohibition on assisted suicide did not infringe the Charter's prohibition against cruel and unusual treatment or punishment. Assuming the prohibition did discriminate on the basis of disability, the majority held that the infringement was a justifiable restriction under s 1 of the Canadian Charter of Rights and Freedoms.
In 1995 the Senate issued a report on assisted suicide entitled Of Life and Death. In 2011, the Royal Society of Canada published its report on end-of-life decision-making. In the report it recommended that the Criminal Code be modified so as to permit assistance in dying under some circumstances. In 2012, a Select Committee on Dying with Dignity of the Quebec National Assembly produced a report recommending amendments to legislation to recognize medical aid in dying as being an appropriate component of end-of-life care. That report resulted in An Act respecting end-of-life care, which came into force on December 10, 2015.
On June 15, 2012, in Carter v Canada (AG), the British Columbia Supreme Court ruled that the criminal offence prohibiting physician assistance of suicide was unconstitutional on the grounds that denying people access to assisted suicide in hard cases was contrary to the Charter of Rights and Freedoms guarantee of equality under Section 15.
This decision was subsequently overturned by the majority of the British Columbia Court of Appeal (2:1) on the basis that the issue had already been decided by the Supreme Court of Canada in the Rodriguez case, invoking stare decisis.
A landmark Supreme Court of Canada decision on February 6, 2015 overturned the 1993 Rodriguez decision that had ruled against this method of dying. The unanimous decision in the further appeal of Carter v Canada (AG) stated that a total prohibition of physician-assisted death is unconstitutional. The court's ruling limits exculpation of physicians engaging in physician-assisted death to hard cases of "a competent adult person who clearly consents to the termination of life and has a grievous and irremediable medical condition, including an illness, disease or disability, that causes enduring suffering that is intolerable to the individual in the circumstances of his or her condition." The ruling was suspended for 12 months to allow the Canadian parliament to draft a new, constitutional law to replace the existing one.
Specifically, the Supreme Court held that the current legislation was overbroad in that it prohibits "physician‑assisted death for a competent adult person who (1) clearly consents to the termination of life and (2) has a grievous and irremediable medical condition (including an illness, disease or disability) that causes enduring suffering that is intolerable to the individual in the circumstances of his or her condition." The court decision includes a requirement that there must be stringent limits that are "scrupulously monitored." This will require the death certificate to be completed by an independent medical examiner, not the treating physician, to ensure the accuracy of reporting the cause of death.
The federal government of 2015 subsequently requested a six-month extension for implementation; the arguments for this request were scheduled to be heard by the Supreme Court in January 2016.
The Canadian Medical Association (CMA) reported that not all doctors would be willing to help a patient die. The belief in late 2015 was that no physician would be forced to do so. The CMA was already offering educational sessions to members as to the process that would be used after the legislation had been implemented.
India
Section 309 of the Indian Penal Code deals with punishment for attempted suicide. The Mental Healthcare Act 2017 greatly limits the scope for the section to be applied. The act states, "Any person who attempts to commit suicide shall be presumed, unless proved otherwise, to have severe stress and shall not be tried and punished under the said Code". State governments are required to provide adequate care and rehabilitation for such individuals so as to prevent a recurrence of an attempt at suicide.
Iran
The act of suicide has not been criminalized in the penal law of the Islamic Republic of Iran. However, no one is allowed to ask another to kill him/her. In addition, threatening to kill oneself is not an offense under the law; however, if such a threat is made by a prisoner, it is considered a violation of prison regulations and the offender may be punished under penal law.
According to Article 836 of the civil law of the Islamic Republic of Iran, if a suicidal person prepares for suicide and writes a testament, the will is by law considered void if he/she dies; if he/she does not die, the will is officially accepted and can be carried out.
According to the theory of "borrowed crime", suicide itself is not a crime in penal law and thus any type of assistance in an individual's suicide is not considered a crime and the assistant is not punished. Assisting in suicide is considered a crime only when it becomes the "cause" of the suicidal person's death; for example, when someone takes advantage of someone else's unawareness or simplicity and convinces him/her to kill him/herself. In such cases assisting in suicide is treated as murder and the offender is punished accordingly. In addition, assisting in suicide is considered a crime under section 2 of Article 15 of the cyber crimes law of the Islamic Republic of Iran, which was enacted on June 15, 2009. According to the mentioned article, any type of encouragement, stimulation, invitation, simplification of access to lethal substances and/or methods, or teaching of suicide with the help of computers or any other media network is considered assisting in suicide and is thus punished by imprisonment from 91 days up to one year, or fines from five to 20 million Iranian rials, or both.
Ireland
Attempted suicide is not a criminal offence in Ireland and, under Irish law, self-harm is not generally seen as a form of attempted suicide. It was decriminalized in 1993. Assisted suicide and euthanasia are illegal; this was challenged in the High Court in 2012, but assisted suicide remains illegal in Ireland.
Malaysia
Under section 309 of the Penal Code of Malaysia, whoever attempts to commit suicide, and does any act towards the commission of such offence, shall be punished with imprisonment for a term which may extend to one year, or with fine, or with both. There are ongoing efforts to decriminalize attempted suicide, although rights groups and non-governmental organisations such as the local chapter of Befrienders note that progress has been slow. Proponents of decriminalization argue that suicide legislation may deter people from seeking help, and may even strengthen the resolve of would-be suicides to end their lives to avoid prosecution. The first reading of a bill to repeal section 309 of the Penal Code was tabled in Parliament in April 2023, bringing Malaysia one step closer to decriminalizing attempted suicide.
On 22 May 2023, the Dewan Rakyat unanimously passed a bill to decriminalize suicide, and the upper house (Dewan Negara) passed it on 21 June 2023.
Netherlands
In the Netherlands, being present and giving moral support during someone's suicide is not a crime; neither is supplying general information on suicide techniques. However, it is a crime to participate in the preparation for or execution of a suicide, including supplying lethal means or instruction in their use.
New Zealand
As with many other western societies, New Zealand has had no laws against suicide in itself, as a personal and unassisted act, since the Crimes Act 1961. Assisted suicide and voluntary euthanasia became legal in certain circumstances in November 2021.
Norway
Neither suicide nor attempted suicide is illegal in Norway. However, complicity is.
Romania
Suicide itself is not illegal in Romania; however, encouraging or facilitating the suicide of another person is a criminal offense, punishable by up to 20 years' imprisonment, depending on the circumstances.
Russian Federation
In Russia, a person whose mental disorder "poses a direct danger to themself" can be committed to a psychiatric hospital. After such hospitalization, a citizen of the Russian Federation may face medical restrictions such as revocation of a driver's license or refusal to issue one; such citizens are also barred from serving in the army, the police, and other law enforcement agencies, among many other restrictions on employment.
In practice, this happens as follows: a person who has attempted suicide and is detained by the police, for example, is taken to the station, a psychiatric ambulance is called, and the psychiatrist on duty who arrives at the scene decides whether the detained citizen needs hospitalization. If hospitalized in a psychiatric hospital, the patient is placed in a ward of enhanced supervision for the first three days, then transferred to the suicidology department. In most cases, such citizens are kept in hospital for no more than one month; in rare cases longer, but very rarely are they discharged less than a month after hospitalization.
Incitement to suicide:
Inciting someone to suicide by threats, cruel treatment, or systematic humiliation is punishable by up to 5 years in prison. (Article 110 of the Criminal Code of the Russian Federation)
Federal law of Russian Federation no. 139-FZ of 2012-07-28 prescribes censorship of information about methods of suicide on the Internet. According to a website created by the Pirate Party of Russia, some pages with suicide jokes have been blacklisted, which may have led to blocking of an IP address of Wikia.
Singapore
Suicide has been decriminalized since 5 May 2019, with the passing of the Criminal Law Reform Act, which repealed Section 309 of the Singapore Penal Code. The law took effect on 1 January 2020.
South Africa
South African courts, including the Appellate Division, have ruled that suicide and attempted suicide are not crimes under the Roman-Dutch law, or that if they ever were crimes, they have been abrogated by disuse. Attempted suicide was from 1886 to 1968 a crime in the Transkei, a former bantustan, under the Transkeian Territories Penal Code.
United Kingdom
England, Wales and Northern Ireland
Suicide was never a statutory criminal offence. English common law perceived suicide as an immoral, criminal offence against God and also against the Crown. The common law offence of felo de se was used to punish people who had attempted suicide and their surviving relatives. A person who had died by suicide could have been denied burial, or their estate forfeited to the Crown, while survivors of suicide attempts could be punished by probation orders (by far the most common sanction), imprisonment, or, more rarely, a fine. Posthumous punishment stopped in the 19th century, and appetite for punishing survivors of suicide attempts waned until this was decriminalized by the passing of the Suicide Act 1961 and the Criminal Justice Act (Northern Ireland) 1966; these same acts made it an offence to assist in a suicide.
With respect to civil law, the simple act of suicide is lawful, but the consequences of dying by suicide may turn an individual event into an unlawful act. In Reeves v Commissioner of Police of the Metropolis [2000] 1 AC 360, a man in police custody hanged himself (a cell door defect enabled the hanging) and was held equally liable with the police for the loss suffered by his widow; the practical effect was to reduce the police's damages liability by 50%. In 2009, the House of Lords ruled that the law concerning the treatment of people who accompanied those who died of assisted suicide was unclear, following Debbie Purdy's case that this lack of clarity was a breach of her human rights. (In her case, as someone with multiple sclerosis, she wanted to know whether her husband would be prosecuted for accompanying her abroad, where she might eventually wish to die of assisted suicide if her illness progressed.)
Scotland
Suicide was never a statutory criminal offence. Under Scots law, survivors of suicide attempts may be arrested and prosecuted for associated common law offences such as breach of the peace or culpable and reckless conduct. Although the Scottish Government has never legislated to formally decriminalize suicide, a 2009 Appeal Court case, which found that a breach of the peace must have an element of disruption to the community, substantially reduced the likelihood of securing a successful prosecution for suicide attempts. Subsequently, the Crown Office and Procurator Fiscal Service instructed Police Scotland to deal with cases of attempted suicide which come to their notice by means other than arrest, even where an offence such as breach of the peace may have been committed, and Police Scotland has advised its officers accordingly. Despite these recommendations, occasional arrests and prosecutions for suicide attempts continue. Consequential liability upon a person attempting suicide (or, if dead, their estate) might arise under civil law where it parallels the civil liabilities recognized in the (English law) Reeves case mentioned above.
Assisting a suicide in Scotland can in some circumstances constitute murder, culpable homicide, or no offence depending upon the facts of each case. No modern examples of cases devoid of direct application of intentional or unintentional harm (such as helping a person to inject themselves) seem to be available; it was noted in a consultation preceding the introduction of the Assisted Suicide (Scotland) Bill that "the law appears to be subject to some uncertainty, partly because of a lack of relevant case law".
United States
In the United States of America, some topics are determined by federal law whereas others differ across states. The information on suicide prevention legislation will be discussed at the federal level first and will be followed by those states that have some form of legislation.
Federal legislation
In 2004, Congress passed the Garrett Lee Smith Memorial Act (GLSMA). The GLSMA made federal funding available for the first time to states, tribes, and colleges across the nation to implement community-based youth and young adult suicide prevention programs. Many of these programs had goals based on the National Suicide Prevention Strategy that was designed in 2001, including increased community-based prevention and stigma reduction among others.
In October 2020, the National Suicide Hotline Designation Act came into effect. It mandated a transition from a 10-digit hotline number to a universal 3-digit number intended to be familiar and recognizable to everyone. In addition, in May 2021, the Suicide Prevention Act passed the House and was under consideration by the Senate; it would authorize a pilot program to intensify surveillance of self-harm and establish a grant program to provide more self-harm and suicide prevention services across the country.
On July 16, 2022, the US transitioned the National Suicide Hotline from the former 10-digit number to the 988 Suicide & Crisis Lifeline, linking the National Suicide Hotline, the Veterans Crisis Line, and a network of more than 200 state and local call centers run through the Substance Abuse and Mental Health Services Administration (SAMHSA).
State legislation
Historically, various states listed the act of suicide as a felony, but these policies were sparsely enforced. In the late 1960s, 18 U.S. states had no laws against suicide. By the late 1980s, 30 of the 50 states had no laws against suicide or suicide attempts, but every state had laws declaring it to be a felony to aid, advise, or encourage another person to suicide. By the early 1990s only two states still listed suicide as a crime, and these have since removed that classification. In some U.S. states, suicide is still considered an unwritten "common law crime," as stated in Blackstone's Commentaries. (So held the Virginia Supreme Court in 1992. Wackwitz v. Roy, 418 S.E.2d 861 (Va. 1992)). As a common law crime, suicide can bar recovery for the late suicidal person's family in a lawsuit unless the suicidal person can be proven to have been "of unsound mind." That is, the suicide must be proven to have been an involuntary act of the victim in order for the family to be awarded monetary damages by the court. This can occur when the family of the deceased sues the caregiver (perhaps a jail or hospital) for negligence in failing to provide appropriate care. Some American legal scholars look at the issue as one of personal liberty. According to Nadine Strossen, former President of the ACLU, "The idea of government making determinations about how you end your life, forcing you...could be considered cruel and unusual punishment in certain circumstances, and Justice Stevens in a very interesting opinion in a right-to-die [case] raised the analogy."
As of 2019 suicide is illegal in Maryland, and has been prosecuted at least ten times between 2009 and 2019.
In New York State in 1917, while suicide was "a grave public wrong", an attempt to commit suicide was a felony, punishable by a maximum penalty of two years' imprisonment.
A 2018 bill in Virginia to decriminalize suicide attempts failed to pass, and has not been reintroduced as of 2019.
Physician-assisted suicide is legal in ten states (Oregon, Washington, Montana, Vermont, California, Colorado, Hawaii, New Jersey, Maine, and New Mexico) and Washington D.C. as of 2024. For the terminally ill, it is legal in the state of Oregon under the Oregon Death with Dignity Act. In Washington state, it became legal in 2009, when a law modeled after the Oregon act, the Washington Death with Dignity Act was passed. A patient must be diagnosed as having less than six months to live, be of sound mind, make a request orally and in writing, have it approved by two different doctors, then wait 15 days and make the request again. A doctor may prescribe a lethal dose of a medication but may not administer it.
California
The State of California has introduced several bills related to suicide over the last couple of years, most of which are related to youth. In 2016, Assembly Bill 2246 was passed, which required school districts to have a suicide prevention policy that addresses the needs of their highest-risk pupils in grades 7 to 12. Since then, the Bill has been amended twice. First, in 2018, AB 2639 was passed, which required school districts to update their policy once every five years. Then, in 2019, AB 1767 was passed. Because of this amendment, districts serving kindergarten to 6th grade will also have to have a suicide prevention policy.
Lastly, also in 2019, the governor signed AB 984. This Bill allows people to send their excess tax payments to a special Suicide Prevention Fund. This fund is supposed to award grants and help fund crisis centers. In California, medical facilities are empowered or required to commit anyone whom they believe to be suicidal for evaluation and treatment.
Utah
The State of Utah has passed the most bills relating to suicide prevention as of 2021, with a total of 21 suicide-related bills. A large number of these bills have been for school-based suicide prevention, including suicide prevention training for all school staff (HB 501), grant awards for programs in elementary schools to increase peer-to-peer suicide prevention (HB 346), and an expanded scope to specifically include the suicide risk of youth not accepted by family, especially LGBTQ youth (HB 393). Other bills have included topics such as increased attention for suicide prevention in substance use treatment (HB 346), bereavement services (HB 336), and suicide prevention programs related to firearm use (HB 17). Moreover, the Utah Division of Substance Abuse and Mental Health (DSAMH) has Zero Suicides as one of their policies, using this as a framework to guide their actions.
See also
Legality of euthanasia
State-assisted suicide
Suicide prevention
Notes
References
External links
Large Europe majorities for assisted suicide: survey
Should suicide be legal? - Wikidebate at Wikiversity
Abetment to Suicide
Legislation
Statutory law | Suicide legislation | [
"Biology"
] | 7,221 | [
"Behavior",
"Human behavior",
"Suicide"
] |
3,064,797 | https://en.wikipedia.org/wiki/End-of-Transmission-Block%20character | End-of-Transmission-Block (ETB) is a communications control character used to indicate the end of a block of data for communications purposes. ETB is used for segmenting data into blocks when the block structure is not necessarily related to the processing function.
In ASCII, ETB is code point 23 (0x17, or ^W in caret notation) in the C0 control code set. In EBCDIC, ETB is code point 0x26.
Unicode also includes a character for the visual representation of the character: U+2417 ␗ SYMBOL FOR END OF TRANSMISSION BLOCK.
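The numeric identities above can be checked directly in Python. The block-segmentation framing in the last few lines is purely illustrative (the block size and payload are invented for the example, not taken from any particular protocol):

```python
# ETB (End-of-Transmission-Block) is ASCII code point 23 (0x17).
ETB = "\x17"
assert ord(ETB) == 23 == 0x17

# Caret notation maps C0 controls 0x00-0x1F to ^@ .. ^_ by adding 0x40,
# so 0x17 becomes '^W'.
caret = "^" + chr(ord(ETB) + 0x40)
print(caret)  # -> ^W

# Segmenting a payload into ETB-terminated blocks (hypothetical framing):
payload = b"HELLOWORLD"
block_size = 4
blocks = [payload[i:i + block_size] + b"\x17"
          for i in range(0, len(payload), block_size)]
print(blocks)  # -> [b'HELL\x17', b'OWOR\x17', b'LD\x17']
```

Note that the block boundaries here are arbitrary, matching the article's point that ETB framing need not be related to the processing function.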
References
Nichols A., Nichols et al.: Data Communications for Microcomputers (1982)
ASCII
Control characters | End-of-Transmission-Block character | [
"Technology"
] | 138 | [
"Computing stubs"
] |
3,064,846 | https://en.wikipedia.org/wiki/Philosophy%20of%20suicide | In ethics and other branches of philosophy, suicide poses difficult questions, answered differently by various philosophers. The French Algerian essayist, novelist, and playwright Albert Camus (1913–1960) began his philosophical essay The Myth of Sisyphus with the famous line "There is but one truly serious philosophical problem and that is suicide." ().
Philosophical stances on suicide can be divided into two broad groups. Religious philosophy almost universally condemns suicide, while nonreligious stances tend towards toleration, with some seeing it as laudatory, depending on circumstance. Utilitarianism offers a less clear-cut stance. For example, using Jeremy Bentham's hedonistic calculus, one may conclude that although suicide offers utility by ending personal suffering, the grief it causes others may outweigh that utility. The calculation cannot be settled at a purely philosophical level.
Arguments against suicide
Since the modern era, common philosophical opinion of suicide has reflected a widespread belief in Western societies that suicide is immoral and unethical. One popular argument is that many of the reasons for suicide, such as depression, emotional pain, or economic hardship, are transitory and can be ameliorated by therapy and by changing some aspects of one's life. A common adage in the discourse surrounding suicide prevention sums up this view: "Suicide is a permanent solution to a temporary problem." The counterargument is that while emotional pain may seem transitory to most people, in some cases it may be extremely difficult or even impossible to resolve, even through counseling or lifestyle change, depending on the severity of the affliction and the person's ability to cope with their pain. Examples include incurable disease and lifelong mental illness.
Absurdism
Camus saw the goal of absurdism in establishing whether suicide is a necessary response to a world which appears to be mute both on the question of God's existence (and what such an existence might answer) and on our search for meaning and purpose in the world. For Camus, suicide was the rejection of freedom. He thought that fleeing from the absurdity of reality into illusions, religion, or death is not the way out. Instead of fleeing the absurd meaninglessness of life, we should embrace life passionately.
Existentialist Sartre describes the position of Meursault, the protagonist of Camus' The Stranger who is condemned to death, in the following way:
Christian-inspired philosophy
Christian theology almost universally condemns suicide as being a crime against God. G. K. Chesterton calls suicide "the ultimate and absolute evil, the refusal to take an interest in existence". He argues that a person who kills himself, as far as he is concerned, destroys the entire world (apparently exactly repeating Maimonides' view).
Liberalism
John Stuart Mill argued, in his influential essay "On Liberty", that since the sine qua non of liberty is the power of the individual to make choices, any choice that one might make that would deprive one of the ability to make further choices should be prevented. Thus, for Mill, selling oneself into slavery should be prevented in order to avoid precluding the ability to make further choices. Concerning these matters, Mill writes in "On Liberty":
"Not only persons are not held to engagements which violate the rights of third parties, but it is sometimes considered a sufficient reason for releasing them from an engagement, that it is injurious to themselves. In this and most other civilized countries, for example, an engagement by which a person should sell himself, or allow himself to be sold, as a slave, would be null and void; neither enforced by law nor by opinion. The ground for thus limiting his power of voluntarily disposing of his own lot in life, is apparent, and is very clearly seen in this extreme case. The reason for not interfering, unless for the sake of others, with a person's voluntary acts, is consideration for his liberty. His voluntary choice is evidence that what he so chooses is desirable, or at the least endurable, to him, and his good is on the whole best provided for by allowing him to take his own means of pursuing it. But by selling himself for a slave, he abdicates his liberty; he forgoes any future use of it, beyond that single act. He therefore defeats, in his own case, the very purpose which is the justification of allowing him to dispose of himself. He is no longer free; but is thenceforth in a position which has no longer the presumption in its favour, that would be afforded by his voluntarily remaining in it. The principle of freedom cannot require that he should be free not to be free. It is not freedom, to be allowed to alienate his freedom."
It could be argued that suicide prevents further choices in the same way slavery does. However, there is a significant difference between not having any further involvement in decisions about one's life and not having any further life to make decisions about: suicide essentially removes the condition of being alive, not the condition of making choices about one's life.
Mill believes the individual to be the best guardian of their own interests. He uses the example of a man about to cross a broken bridge: we can forcibly stop that person and warn him of the danger, but ultimately should not prevent him from crossing the bridge—for only he knows the worth of his life balanced against the danger of crossing the bridge.
Too much should not be read into "disposing of his own lot in life" in the passage, as this is not necessarily talking about anything other than slavery. Indeed, it would be odd if Mill had intended it to be about suicide but not explored the issue fully.
Deontology
From a deontological perspective, Immanuel Kant argues against suicide in Fundamental Principles of The Metaphysic of Morals. In accordance with the second formulation of his categorical imperative, Kant argues that, "He who contemplates suicide should ask himself whether his action can be consistent with the idea of humanity as an end in itself." Kant's theory looks at the act only, and not at its outcomes and consequences, and claims that one is ethically required to consider whether one would be willing to universalise the act: to claim everyone should behave that way. Kant argues that choosing to commit suicide entails considering oneself as a means to an end, which he rejects: a person, he says, must not be used "merely as means, but must in all actions always be considered as an end in himself." Furthermore, Kant argues that, since objective morality is grounded in one's own ability to reason, suicide is wrong because it involves removing that ability through ending one's life, thereby creating a kind of practical contradiction.
Social contract
The social contract, according to Jean-Jacques Rousseau, is such that every man has "a right to risk his own life in order to preserve it."
Hobbes and Locke reject the right of individuals to take their own life. Hobbes claims in his Leviathan that natural law forbids every man "to do, that which is destructive of his life, or take away the means of preserving the same." Breaking this natural law is irrational and immoral. Hobbes also states that it is intuitively rational for men to want felicity and to fear death most.
Aristotle
Aristotle, in his discussion of courage in the Nicomachean Ethics, maintains that committing suicide to avoid pain or other undesirable circumstances is a cowardly act. In a later chapter he further argues that suicide is unlawful and is an act committed against the interests of the state.
Plotinus
The neoplatonist philosopher Plotinus (205-270) devoted a short treatise (Ennead I, 9 = treatise 16) to the question of the legitimacy of suicide (On Suicide). Discussing Platonist and Stoic arguments, he concludes that suicide is not allowed except when one understands that one is losing one's reason. Plotinus also deals in this treatise with the temptation, for the Platonist philosopher (cf. Phaedo), to join the intelligible absolute and to liberate his soul from his body through the medium of suicide.
Neutral and situational stances
Honor
Japan has a form of suicide called seppuku, which is considered an honorable way to redeem oneself for transgressions or personal defeats. It was widely accepted in the days of the Samurai and even before that. It was generally seen as a privilege granted only to the samurai class; civilian criminals would thus not have this 'honor' and be executed. In this historical perspective, suicide reflects a cultural view of suicide as noble, acceptable, and even brave, rather than cowardly and wrong.
Utilitarianism
Utilitarianism can be used as a justification for or as an argument against suicide. For example, through Jeremy Bentham's hedonistic calculus, it can be concluded that although the death of a depressed person ends their suffering, the person's family and friends may grieve as well, and their pain may outweigh the release of depression of the individual through suicide.
Arguments that suicide is permissible
There are arguments in favor of allowing an individual to choose between life and death by suicide. Those in favor of suicide as a personal choice reject the thought that suicide is always or usually irrational, but is instead a solution to real problems; a line of last resort that can legitimately be taken when the alternative is considered worse. They believe that no being should be made to suffer unnecessarily, and suicide provides an escape from suffering.
Idealism
Herodotus wrote: "When life is so burdensome, death has become for man a sought-after refuge". Schopenhauer affirmed: "They tell us that suicide is the greatest act of cowardice... that suicide is wrong; when it is quite obvious that there is nothing in the world to which every man has a more unassailable title than to his own life and person."
Schopenhauer's main work, The World as Will and Representation, occasionally uses the act in its examples. He denied that suicide was immoral and saw it as one's right to take one's life. In an allegory, he compared ending one's life, when subject to great suffering, to waking up from sleep when experiencing a terrible nightmare. However, most suicides were seen as an act of the will, as it takes place when one denies life's pains, and is thus different from ascetic renunciation of the will, which denies life's pleasures.
According to Schopenhauer, moral freedom—the highest ethical aim—is to be obtained only by a denial of the will to live. Far from being a denial, suicide is an emphatic assertion of this will. For it is in fleeing from the pleasures, not from the sufferings of life, that this denial consists. When a man destroys his existence as an individual, he is not by any means destroying his will to live. On the contrary, he would like to live if he could do so with satisfaction to himself; if he could assert his will against the power of circumstance; but circumstance is too strong for him.
Schopenhauer also addressed arguments against suicide. "That a man who no longer wishes to live for himself must go on living merely as a machine for others to use is an extravagant demand."
Libertarianism
Libertarianism asserts that a person's life belongs only to them, and no other person has the right to force their own ideals that life must be lived. Rather, only the individual involved can make such a decision, and whatever decision they make should be respected.
Philosopher and psychiatrist Thomas Szasz goes further, arguing that suicide is the most basic right of all. If freedom is self-ownership—ownership over one's own life and body—then the right to end that life is the most basic of all. If others can force you to live, you do not own yourself and belong to them.
Jean Améry, in his book On Suicide: a Discourse on Voluntary Death (originally published in German in 1976), provides a moving insight into the suicidal mind. He argues forcefully and almost romantically that suicide represents the ultimate freedom of humanity, justifying the act with phrases such as "we only arrive at ourselves in a freely chosen death" and lamenting "ridiculously everyday life and its alienation". Améry killed himself in 1978.
Nihilism
Philosophical thinking in the 19th and 20th centuries has led, in some cases, beyond thinking in terms of pro-choice, to the point that suicide is no longer a last resort, or even something that one must justify, but something that one must justify not doing. Many forms of nihilistic thinking essentially begin with the premise that life is objectively meaningless, and proceed to the question of why one should not just kill oneself; they then answer this question by suggesting that the individual has the power to give personal meaning to life.
Stoicism
Although George Lyman Kittredge states that "the Stoics held that suicide is cowardly and wrong," the most famous stoics—Seneca the Younger, Epictetus, and Marcus Aurelius—maintain that death by one's own hand is always an option and frequently more honorable than a life of protracted misery.
The Stoics accepted that suicide was permissible for the wise person in circumstances that might prevent them from living a virtuous life. Plutarch held that accepting life under tyranny would have compromised Cato's self-consistency () as a Stoic and impaired his freedom to make the honorable moral choices. Suicide could be justified if one fell victim to severe pain or disease, but otherwise suicide would usually be seen as a rejection of one's social duty.
Confucianism
Confucianism holds that failure to follow certain values is worse than death; hence, suicide can be morally permissible, and even praiseworthy, if it is done for the sake of those values. The Confucian emphasis on loyalty, self-sacrifice, and honour has tended to encourage altruistic suicide. Confucius wrote, "For gentlemen of purpose and men of ren while it is inconceivable that they should seek to stay alive at the expense of ren, it may happen that they have to accept death in order to have ren accomplished." Mencius wrote:
Other arguments
David Hume wrote an essay entitled Of Suicide in 1755 (although it was not published until the year after his death, in 1777). Most of it is concerned with the claim that suicide is an affront to God. Hume argues that suicide is no more a rebellion against God than is saving the life of someone who would otherwise die, or changing the position of anything in one's surroundings. He spends much less time dismissing arguments that it is an affront to one's duty to others or to oneself. Hume claims that suicide can be compared to retiring from society and becoming a total recluse, which is not normally considered to be immoral. As for duty to self, Hume takes it to be obvious that there can be times when suicide is desirable, though he also thinks it ridiculous that anyone would consider suicide unless they first considered every other option.
Those who support the right to die argue that suicide is acceptable under certain circumstances, such as incurable disease and old age. The idea is that although life is, in general, good, people who face irreversible suffering should not be forced to continue suffering.
Leo Tolstoy wrote in his short work A Confession that after an existential crisis, he considered various options and determined that suicide would be the most logically consistent response in a world where God does not exist. However, he then decided to look less at logic and more towards trying to explain God using a mystical approach in that, for one, he describes God as life. He states that this new understanding of God would allow him to live meaningfully.
Leonard Peikoff states in his book Objectivism: The Philosophy of Ayn Rand:
Bioethicist Jacob Appel has criticized "arbitrary" ethical systems that allow patients to refuse care when they are physically ill, while denying the mentally ill the right to suicide.
See also
Advocacy of suicide
Altruistic suicide
Assisted suicide
Antinatalism
Émile Durkheim's Suicide (1897)
Fatalism
Micromort
Nihilism
Pessimism
Right to die
Johan Robeck
Self-immolation
Suicide attack
Thomas Szasz
References
Further reading
External links
Paterson, Craig, "A History of Ideas Concerning Suicide, Assisted Suicide and Euthanasia"
Video lectures from Yale University delivered by Shelly Kagan (requires Adobe Flash) from www.videolectures.net:
Suicide, Part I: The Rationality of Suicide
Suicide, Part II: Deciding Under Uncertainty
Suicide, Part III: The Morality of Suicide
Philosophy of death
Applied ethics
Social philosophy
Suicide
Suicide | Philosophy of suicide | [
"Biology"
] | 3,484 | [
"Behavior",
"Suicide",
"Human behavior",
"Applied ethics"
] |
3,064,920 | https://en.wikipedia.org/wiki/Glucose-6-phosphate%20dehydrogenase | Glucose-6-phosphate dehydrogenase (G6PD or G6PDH) () is a cytosolic enzyme that catalyzes the chemical reaction
D-glucose 6-phosphate + NADP+ ⇌ 6-phospho-D-glucono-1,5-lactone + NADPH + H+
This enzyme participates in the pentose phosphate pathway (see image), a metabolic pathway that supplies reducing energy to cells (such as erythrocytes) by maintaining the level of the reduced form of the co-enzyme nicotinamide adenine dinucleotide phosphate (NADPH). The NADPH in turn maintains the level of glutathione in these cells that helps protect the red blood cells against oxidative damage from compounds like hydrogen peroxide. Of greater quantitative importance is the production of NADPH for tissues involved in biosynthesis of fatty acids or isoprenoids, such as the liver, mammary glands, adipose tissue, and the adrenal glands. G6PD reduces NADP+ to NADPH while oxidizing glucose-6-phosphate. Glucose-6-phosphate dehydrogenase is also an enzyme in the Entner–Doudoroff pathway, a type of glycolysis.
Clinically, an X-linked genetic deficiency of G6PD makes a human prone to non-immune hemolytic anemia.
Species distribution
G6PD is widely distributed in many species from bacteria to humans. Multiple sequence alignment of over 100 known G6PDs from different organisms reveal sequence identity ranging from 30% to 94%. Human G6PD has over 30% identity in amino acid sequence to G6PD sequences from other species. Humans also have two isoforms of a single gene coding for G6PD. Moreover, at least 168 disease-causing mutations in this gene have been discovered. These mutations are mainly missense mutations that result in amino acid substitutions, and while some of them result in G6PD deficiency, others do not seem to result in any noticeable functional differences. Some scientists have proposed that some of the genetic variation in human G6PD resulted from generations of adaptation to malarial infection.
Other species experience a variation in G6PD as well. In higher plants, several isoforms of G6PDH have been reported, which are localized in the cytosol, the plastidic stroma, and peroxisomes. A modified F420-dependent (as opposed to NADP+-dependent) G6PD is found in Mycobacterium tuberculosis, and is of interest for treating tuberculosis. The bacterial G6PD found in Leuconostoc mesenteroides was shown to be reactive toward 4-hydroxynonenal, in addition to G6P.
Enzyme structure
G6PD is generally found as a dimer of two identical monomers (see main thumbnail). Depending on conditions, such as pH, these dimers can themselves dimerize to form tetramers. Each monomer in the complex has a substrate binding site that binds to G6P, and a catalytic coenzyme binding site that binds to NADP+/NADPH using the Rossman fold. For some higher organisms, such as humans, G6PD contains an additional NADP+ binding site, called the NADP+ structural site, that does not seem to participate directly in the reaction catalyzed by G6PD. The evolutionary purpose of the NADP+ structural site is unknown. As for size, each monomer is approximately 500 amino acids long (514 amino acids for humans).
Functional and structural conservation between human G6PD and Leuconostoc mesenteroides G6PD points to 3 widely conserved regions on the enzyme: a 9 residue peptide in the substrate binding site, RIDHYLGKE (residues 198-206 on human G6PD), a nucleotide-binding fingerprint, GxxGDLA (residues 38-44 on human G6PD), and a partially conserved sequence EKPxG near the substrate binding site (residues 170-174 on human G6PD), where "x" denotes a variable amino acid. The crystal structure of G6PD reveals an extensive network of electrostatic interactions and hydrogen bonding involving G6P, 3 water molecules, 3 lysines, 1 arginine, 2 histidines, 2 glutamic acids, and other polar amino acids.
The proline at position 172 is thought to play a crucial role in positioning Lys171 correctly with respect to the substrate, G6P. In the two crystal structures of normal human G6PD, Pro172 is seen exclusively in the cis conformation, while in the crystal structure of one disease causing mutant (variant Canton R459L), Pro172 is seen almost exclusively in the trans conformation.
With access to crystal structures, some scientists have tried to model the structures of other mutants. For example, in German ancestry, where enzymopathy due to G6PD deficiency is rare, mutation sites on G6PD have been shown to lie near the NADP+ binding site, the G6P binding site, and near the interface between the two monomers. Thus, mutations in these critical areas are possible without completely disrupting the function of G6PD. In fact, it has been shown that most disease causing mutations of G6PD occur near the NADP+ structural site.
NADP+ structural site
The NADP+ structural site is located greater than 20Å away from the substrate binding site and the catalytic coenzyme NADP+ binding site. Its purpose in the enzyme catalyzed reaction has been unclear for many years. For some time, it was thought that NADP+ binding to the structural site was necessary for dimerization of the enzyme monomers. However, this was shown to be incorrect. On the other hand, it was shown that the presence of NADP+ at the structural site promotes the dimerization of dimers to form enzyme tetramers. It was also thought that the tetramer state was necessary for catalytic activity; however, this too was shown to be false. The NADP+ structural site is quite different from the NADP+ catalytic coenzyme binding site, and contains the nucleotide-binding fingerprint.
The structural site bound to NADP+ possesses favorable interactions that keep it tightly bound. In particular, there is a strong network of hydrogen bonding with electrostatic charges being diffused across multiple atoms through hydrogen bonding with 4 water molecules (see figure). Moreover, there is an extremely strong set of hydrophobic stacking interactions that result in overlapping π systems.
The structural site has been shown to be important for maintaining the long term stability of the enzyme. More than 40 severe class I mutations involve mutations near the structural site, thus affecting the long term stability of these enzymes in the body, ultimately resulting in G6PD deficiency. For example, two severe class I mutations, G488S and G488V, drastically increase the dissociation constant between NADP+ and the structural site by a factor of 7 to 13. With the proximity of residue 488 to Arg487, it is thought that a mutation at position 488 could affect the positioning of Arg487 relative to NADP+, and thus disrupt binding.
Regulation
G6PD converts G6P into 6-phosphoglucono-δ-lactone and is the rate-limiting enzyme of the pentose phosphate pathway. Thus, regulation of G6PD has downstream consequences for the activity of the rest of the pentose phosphate pathway.
Glucose-6-phosphate dehydrogenase is stimulated by its substrate G6P. The usual ratio of NADPH/NADP+ in the cytosol of tissues engaged in biosyntheses is about 100/1. Increased utilization of NADPH for fatty acid biosynthesis will dramatically increase the level of NADP+, thus stimulating G6PD to produce more NADPH. According to two older publications, yeast G6PD is inhibited by long-chain fatty acids, which might reflect product inhibition of fatty acid synthesis, a process that requires NADPH.
G6PD is negatively regulated by acetylation on lysine 403 (Lys403), an evolutionarily conserved residue. The K403-acetylated G6PD is incapable of forming active dimers and displays a complete loss of activity. Mechanistically, acetylating Lys403 sterically hinders NADP+ from entering the NADP+ structural site, which reduces the stability of the enzyme. Cells sense extracellular oxidative stimuli to decrease G6PD acetylation in a SIRT2-dependent manner. The SIRT2-mediated deacetylation and activation of G6PD stimulates the pentose phosphate pathway to supply cytosolic NADPH to counteract oxidative damage and protect mouse erythrocytes.
Regulation can also occur through genetic pathways. The isoform, G6PDH, is regulated by transcription and posttranscription factors. Moreover, G6PD is one of a number of glycolytic enzymes activated by the transcription factor hypoxia-inducible factor 1 (HIF1).
Clinical significance
G6PD is remarkable for its genetic diversity. Many variants of G6PD, mostly produced from missense mutations, have been described with wide-ranging levels of enzyme activity and associated clinical symptoms. Two transcript variants encoding different isoforms have been found for this gene.
Glucose-6-phosphate dehydrogenase deficiency is very common worldwide, and causes acute hemolytic anemia in the presence of simple infection, ingestion of fava beans, or reaction with certain medicines, antibiotics, antipyretics, and antimalarials.
Cell growth and proliferation are affected by G6PD. Pharmacologically ablating G6PD has been shown to overcome cross-tolerance of breast cancer cells to anthracyclines. G6PD inhibitors are under investigation to treat cancers and other conditions. In vitro cell proliferation assays indicate that the G6PD inhibitors DHEA (dehydroepiandrosterone) and ANAD (6-aminonicotinamide) effectively decrease the growth of AML cell lines. G6PD is hypomethylated at K403 in acute myeloid leukemia; SIRT2 activates G6PD to enhance NADPH production and promote leukemia cell proliferation.
See also
Glucose-6-phosphate dehydrogenase deficiency
Genetic resistance to malaria
References
Further reading
External links
- G6PD Deficiency Website
ATSDR - G6PD Deficiency
EC 1.1.1
NADPH-dependent enzymes
Enzymes of known structure
Pentose phosphate pathway | Glucose-6-phosphate dehydrogenase | [
"Chemistry"
] | 2,229 | [
"Carbohydrate metabolism",
"Pentose phosphate pathway"
] |
3,065,502 | https://en.wikipedia.org/wiki/Bacterial%20outer%20membrane | The bacterial outer membrane is found in gram-negative bacteria. Gram-negative bacteria form two lipid bilayers in their cell envelopes - an inner membrane (IM) that encapsulates the cytoplasm, and an outer membrane (OM) that encapsulates the periplasm.
The composition of the outer membrane is distinct from that of the inner cytoplasmic cell membrane - among other things, the outer leaflet of the outer membrane of many gram-negative bacteria includes a complex lipopolysaccharide whose lipid portion acts as an endotoxin - and in some bacteria such as E. coli it is linked to the cell's peptidoglycan by Braun's lipoprotein.
Porins can be found in this layer.
Outer membrane proteins
Outer membrane proteins are membrane proteins with key roles associated with bacterial cell structure and morphology; cell membrane homeostasis; the uptake of nutrients; protection of the cell from toxins including antibiotics; and virulence factors including adhesins, exotoxins, and biofilm formation. There are a number of outer membrane proteins that are specifically virulence-related.
Outer membrane proteins consist of two major classes of protein - transmembrane proteins and lipoproteins. The transmembrane proteins form channels or pores in the membrane, called porins, as well as actively pumping efflux channels.
The outer membranes of a bacterium can contain a huge number of proteins. In E. coli, for example, there are around 500,000 in the membrane.
Bacterial outer membrane proteins typically have a unique beta barrel structure that spans the membrane. The beta barrels fold to expose a hydrophobic surface before their insertion into the outer membrane. Beta barrels vary in sequence and in size, which ranges from 8 to 36 beta strands. A subset of OMPs have a periplasmic or an extracellular link to their beta barrel structure. An outer membrane protein is translocated across the inner membrane through the Sec machinery, and finally inserted into the outer membrane by the barrel assembly machinery complex.
Biogenesis
The biogenesis of the outer membrane requires that the individual components are transported from the site of synthesis to their final destination outside the inner membrane by crossing both hydrophilic and hydrophobic compartments. The machinery and the energy source that drive this process are not yet fully understood. The lipid A-core moiety and the O-antigen repeat units are synthesized at the cytoplasmic face of the inner membrane and are separately exported via two independent transport systems, namely, the O-antigen transporter Wzx (RfbX) and the ATP binding cassette (ABC) transporter MsbA that flips the lipid A-core moiety from the inner leaflet to the outer leaflet of the inner membrane. O-antigen repeat units are then polymerised in the periplasm by the Wzy polymerase and ligated to the lipid A-core moiety by the WaaL ligase.
The LPS transport machinery is composed of LptA, LptB, LptC, LptD, LptE. This is supported by the fact that depletion of any one of these proteins blocks the LPS assembly pathway and results in very similar outer membrane biogenesis defects. Moreover, the location of at least one of these five proteins in every cellular compartment suggests a model for how the LPS assembly pathway is organised and ordered in space.
LptC is required for the translocation of lipopolysaccharide (LPS) from the inner membrane to the outer membrane. LptE forms a complex with LptD, which is involved in the assembly of LPS in the outer leaflet of the outer membrane and is essential for envelope biogenesis.
Clinical significance
If lipid A, part of the lipopolysaccharide, enters the circulatory system, it causes a toxic reaction by activating toll-like receptor 4 (TLR4). Lipid A is very pathogenic and not immunogenic. However, the polysaccharide component is very immunogenic, but not pathogenic, causing an aggressive response by the immune system. The sufferer will have a high temperature and respiration rate and a low blood pressure. This may lead to endotoxic shock, which may be fatal. The bacterial outer membrane is physiologically shed as the bounding membrane of outer membrane vesicles in cultures, as well as in animal tissues at the host–pathogen interface, and is implicated in the translocation of gram-negative microbial biochemical signals to host or target cells.
See also
Host–pathogen interaction
Maltoporin
OMPdb
Outer membrane efflux proteins
Outer mitochondrial membrane
References
Membrane biology
Prokaryotic cell anatomy
Protein families | Bacterial outer membrane | [
"Chemistry",
"Biology"
] | 976 | [
"Protein families",
"Membrane biology",
"Protein classification",
"Molecular biology"
] |
3,065,512 | https://en.wikipedia.org/wiki/Hyaline | A hyaline substance is one with a glassy appearance. The word is derived from , and .
Histopathology
Hyaline cartilage is named after its glassy appearance on fresh gross pathology. On light microscopy of H&E stained slides, the extracellular matrix of hyaline cartilage looks homogeneously pink, and the term "hyaline" is used to describe similarly homogeneously pink material besides the cartilage. Hyaline material is usually acellular and proteinaceous. For example, arterial hyaline is seen in aging, high blood pressure, diabetes mellitus and in association with some drugs (e.g. calcineurin inhibitors). It is bright pink with PAS staining.
Ichthyology and entomology
In ichthyology and entomology, hyaline denotes a colorless, transparent substance, such as unpigmented fins of fishes or clear insect wings.
Botany
In botany, hyaline refers to thin and translucent plant parts, such as the margins of some sepals, bracts and leaves.
See also
Hyaline arteriolosclerosis
Hyaloid canal, which passes through the eye
Hyalopilitic
Hyaloserositis
Infant respiratory distress syndrome, previously known as hyaline membrane disease
References
Taber's Cyclopedic Medical Dictionary, 19th Edition. Donald Venes ed. 1997 F.A. Davis. Page 1008.
Histopathology
Fungal morphology and anatomy | Hyaline | [
"Chemistry"
] | 306 | [
"Histopathology",
"Microscopy"
] |
3,065,729 | https://en.wikipedia.org/wiki/Network%20Admission%20Control | Network Admission Control (NAC) refers to Cisco's version of network access control, which restricts access to the network based on identity or security posture. When a network device (switch, router, wireless access point, DHCP server, etc.) is configured for NAC, it can force user or machine authentication prior to granting access to the network. In addition, guest access can be granted to a quarantine area for remediation of any problems that may have caused authentication failure. This is enforced through an inline custom network device, changes to an existing switch or router, or a restricted DHCP class. A typical (non-free) WiFi connection is a form of NAC. The user must present some sort of credentials (or a credit card) before being granted access to the network.
In its initial phase, the Cisco Network Admission Control (NAC) functionality enables Cisco routers to enforce access privileges when an endpoint attempts to connect to a network. This access decision can be on the basis of information about the endpoint device, such as its current antivirus state. The antivirus state includes information such as version of antivirus software, virus definitions, and version of scan engine.
Network admission control systems allow noncompliant devices to be denied access, placed in a quarantined area, or given restricted access to computing resources, thus keeping insecure nodes from infecting the network.
The key component of the Cisco Network Admission Control program is the Cisco Trust Agent, which resides on an endpoint system and communicates with Cisco routers on the network. The Cisco Trust Agent collects security state information, such as what antivirus software is being used, and communicates this information to Cisco routers. The information is then relayed to a Cisco Secure Access Control Server (ACS) where access control decisions are made. The ACS directs the Cisco router to perform enforcement against the endpoint.
This Cisco product has been marked End of Life since November 30, 2011, which is Cisco's terminology for a product that is no longer developed or supported.
Posture assessment
Besides user authentication, authorization in NAC can be based upon compliance checking. This posture assessment is the evaluation of system security based on the applications and settings that a particular system is using. These might include Windows registry settings or the presence of security agents such as anti-virus or personal firewall. NAC products differ in their checking mechanisms:
802.1X Extensible Authentication Protocol
Microsoft Windows AD domain authentication - login credentials
Cisco NAC Appliance L2 switch or L3 authentication
Pre-installed security agent
Web-based security agent
Network packet signatures or anomalies
External network vulnerability scanner
External database of known systems
Agent-less posture assessment
Most NAC vendors require the 802.1X supplicant (client or agent) to be installed. Some, including Hexis' NetBeat NAC, Trustwave, and Enterasys offer an agent-less posture checking. This is designed to handle the "Bring Your Own Device" or "BYOD" scenario to:
Detect and fingerprint all network attached devices, whether wired or wireless
Determine if these devices have common vulnerabilities and exposures (aka "CVEs")
Quarantine rogue devices as well as those infected with new malware
The agent-less approach works heterogeneously across almost all network environments and with all network device types.
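The agent-less workflow above can be sketched as a simple admission decision. The following Python sketch is purely illustrative — the device fields, policy labels, and function names are assumptions for this example, not any vendor's API:

```python
# Hypothetical sketch of agent-less network admission logic.
# Device attributes and policy names are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class Device:
    mac: str
    fingerprint: str            # e.g. an OS guess from passive fingerprinting
    known: bool                 # present in the external database of known systems
    cves: list = field(default_factory=list)  # CVE IDs from a vulnerability scan

def admission_decision(dev: Device) -> str:
    """Map posture information to an access decision."""
    if not dev.known:
        return "quarantine"     # rogue device: deny access
    if dev.cves:
        return "restricted"     # known but vulnerable: remediation network
    return "allow"              # compliant device: full access

print(admission_decision(Device("aa:bb", "linux", known=False)))             # quarantine
print(admission_decision(Device("cc:dd", "windows", True, ["CVE-2024-1"])))  # restricted
print(admission_decision(Device("ee:ff", "macos", True)))                    # allow
```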
See also
Access control
Network Access Protection
Cisco NAC Appliance
PacketFence
References
External links
Network Admission Control - Cisco Systems
Agent-less Network Admission Control - NetClarity, Inc.
FastNAC Next-generation NAC
Computer network security | Network Admission Control | [
"Engineering"
] | 752 | [
"Cybersecurity engineering",
"Computer networks engineering",
"Computer network security"
] |
3,065,940 | https://en.wikipedia.org/wiki/Optical%20vortex | An optical vortex (also known as a photonic quantum vortex, screw dislocation or phase singularity) is a zero of an optical field; a point of zero intensity. The term is also used to describe a beam of light that has such a zero in it. The study of these phenomena is known as singular optics.
Explanation
In an optical vortex, light is twisted like a corkscrew around its axis of travel. Because of the twisting, the light waves at the axis itself cancel each other out. When projected onto a flat surface, an optical vortex looks like a ring of light, with a dark hole in the center. The vortex is given a number, called the topological charge, according to how many twists the light does in one wavelength. The number is always an integer, and can be positive or negative, depending on the direction of the twist. The higher the number of the twist, the faster the light is spinning around the axis.
This spinning carries orbital angular momentum with the wave train, and will induce torque on an electric dipole. Orbital angular momentum is distinct from the more commonly encountered spin angular momentum, which produces circular polarization. Orbital angular momentum of light can be observed in the orbiting motion of trapped particles. Interfering an optical vortex with a plane wave of light reveals the spiral phase as concentric spirals. The number of arms in the spiral equals the topological charge.
Optical vortices are studied by creating them in the lab in various ways. They can be generated directly in a laser, or a laser beam can be twisted into a vortex using any of several methods, such as computer-generated holograms, spiral-phase delay structures, or birefringent vortices in materials.
Properties
An optical singularity is a zero of an optical field. The phase in the field circulates around these points of zero intensity (giving rise to the name vortex). Vortices are points in 2D fields and lines in 3D fields (as they have codimension two). Integrating the phase of the field around a path enclosing a vortex yields an integer multiple of 2π. This integer is known as the topological charge, or strength, of the vortex.
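The phase-integration definition of topological charge can be checked numerically. The following minimal NumPy sketch (an illustration; the grid size, loop radius, and charge are arbitrary) builds a synthetic vortex field exp(i·3θ) and sums wrapped phase differences around a closed loop:

```python
import numpy as np

def topological_charge(field, cx, cy, radius, samples=400):
    """Estimate the topological charge of a 2D complex field by summing
    wrapped phase differences around a circle centred on (cx, cy)."""
    angles = np.linspace(0.0, 2.0 * np.pi, samples, endpoint=False)
    xs = (cx + radius * np.cos(angles)).astype(int)
    ys = (cy + radius * np.sin(angles)).astype(int)
    phases = np.angle(field[ys, xs])
    # Wrap each step into (-pi, pi] so the sum counts net windings.
    steps = np.angle(np.exp(1j * np.diff(np.concatenate([phases, phases[:1]]))))
    return int(round(steps.sum() / (2.0 * np.pi)))

# Synthetic vortex of charge m = 3 on an n x n grid.
n = 256
y, x = np.mgrid[0:n, 0:n]
theta = np.arctan2(y - n // 2, x - n // 2)
field = np.exp(1j * 3 * theta)

print(topological_charge(field, n // 2, n // 2, radius=60))  # -> 3
```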
A hypergeometric-Gaussian mode (HyGG) has an optical vortex in its center. The beam, which has the form
is a solution to the paraxial wave equation (see paraxial approximation, and the Fourier optics article for the actual equation) consisting of the Bessel function. Photons in a hypergeometric-Gaussian beam have an orbital angular momentum of mħ. The integer m also gives the strength of the vortex at the beam's centre. Spin angular momentum of circularly polarized light can be converted into orbital angular momentum.
Creation
Several methods exist to create hypergeometric-Gaussian modes, including with a spiral phase plate, computer-generated holograms, mode conversion, a q-plate, or a spatial light modulator.
Static spiral phase plate(s) or mirror(s) are spiral-shaped pieces of crystal or plastic that are engineered specifically to the desired topological charge and incident wavelength. They are efficient, yet expensive. Adjustable spiral phase plates can be made by moving a wedge between two sides of a cracked piece of plastic. Off-axis spiral phase mirrors can be used to mode convert high-power and ultra-short lasers.
Computer-generated holograms (CGHs) are the calculated interferogram between a plane wave and a Laguerre-Gaussian beam which is transferred to film. The CGH resembles a common Ronchi linear diffraction grating, save a "fork" dislocation. An incident laser beam creates a diffraction pattern with vortices whose topological charge increases with diffraction order. The zero order is Gaussian, and the vortices have opposite helicity on either side of this undiffracted beam. The number of prongs in the CGH fork is directly related to the topological charge of the first diffraction order vortex. The CGH can be blazed to direct more intensity into the first order. Bleaching transforms it from an intensity grating to a phase grating, which increases efficiency.
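A fork grating of this kind can be computed as the thresholded interference of a tilted plane wave with a vortex phase. A minimal NumPy sketch (the grid size, carrier period, and charge values are illustrative assumptions):

```python
import numpy as np

def fork_hologram(n=512, charge=2, carrier_period=16):
    """Binary 'fork' grating: interference pattern of a tilted plane wave
    with a vortex phase of the given topological charge, thresholded to 0/1."""
    y, x = np.mgrid[0:n, 0:n] - n // 2
    theta = np.arctan2(y, x)
    kx = 2.0 * np.pi / carrier_period        # carrier spatial frequency
    pattern = np.cos(kx * x - charge * theta)  # 2 + 2*cos(...) up to scaling
    return (pattern > 0).astype(np.uint8)

holo = fork_hologram()
print(holo.shape, holo.min(), holo.max())  # (512, 512) 0 1
```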
Mode conversion requires Hermite-Gaussian (HG) modes, which can easily be made inside the laser cavity or externally by less accurate means. A pair of astigmatic lenses introduces a Gouy phase shift which creates an LG beam with azimuthal and radial indices dependent upon the input HG.
A spatial light modulator is a computer-controlled electronic liquid-crystal device which can create dynamic vortices, arrays of vortices, and other types of beams by creating a hologram of varying refractive indices. This hologram may be a fork pattern, a spiral phase plate, or some similar pattern with non-zero topological charge.
A deformable mirror made of segments can be used to create vortices dynamically (at rates of up to a few kHz), even if illuminated by high-power lasers.
A q-plate is a birefringent liquid crystal plate with an azimuthal distribution of the local optical axis, which has a topological charge q at its center defect. The q-plate with topological charge q can generate a charge vortex based on the input beam polarization.
An s-plate is a similar technology to a q-plate, using a high-intensity UV laser to permanently etch a birefringent pattern into silica glass with an azimuthal variation in the fast axis with topological charge of s. Unlike a q-plate, which may be wavelength tuned by adjusting the bias voltage on the liquid crystal, an s-plate only works for one wavelength of light.
At radio frequencies it is trivial to produce a (non optical) electromagnetic vortex. Simply arrange a one wavelength or greater diameter ring of antennas such that the phase shift of the broadcast antennas varies by an integral multiple of 2π around the ring.
Nanophotonic metasurfaces can enable transverse phase modulation to create optical vortices. The vortex beams can be generated in either free space or on an integrated photonic chip.
A spiral lens can “[incorporate] the elements necessary to make an optical vortex directly into its surface.” Spiralizing a diopter can achieve multifocality, allowing—for instance in ophthalmic applications—increased acuity over a wide range of focal distances and light levels.
Detection
An optical vortex, being fundamentally a phase structure, cannot be detected from its intensity profile alone. Furthermore, as vortex beams of the same order have roughly identical intensity profiles, they cannot be solely characterized from their intensity distributions. As a result, a wide range of interferometric techniques are employed.
The simplest of the techniques is to interfere a vortex beam with an inclined plane wave, which results in a fork-like interferogram. By making a count of the number of forks in the pattern and their relative orientations, the vortex order and its corresponding sign can be precisely estimated.
A vortex beam can be deformed into its characteristic lobe structure while passing through a tilted lens. This happens as a result of self-interference between different phase points in a vortex. A vortex beam of order ℓ will be split into ℓ + 1 lobes, roughly around the depth of focus of a tilted convex lens. Furthermore, the orientation of the lobes (right or left diagonal) determines the positive and negative orbital angular momentum orders.
A vortex beam generates a lobe structure when interfered with a vortex of opposite sign. This technique offers no mechanism to characterize the signs, however. This technique can be employed by placing a Dove prism in one of the paths of a Mach–Zehnder interferometer, pumped with a vortex profile.
Applications
There are a broad variety of applications of optical vortices in diverse areas of communications and imaging.
Extrasolar planets have only recently been directly detected, as their parent star is so bright. Progress has been made in creating an optical vortex coronagraph to directly observe planets with too low a contrast ratio to their parent to be observed with other techniques.
Optical vortices are used in optical tweezers to manipulate micrometer-sized particles such as cells. Such particles can be rotated in orbits around the axis of the beam using OAM. Micro-motors have also been created using optical vortex tweezers.
Optical vortices can significantly improve communication bandwidth. For instance, twisted radio beams could increase radio spectral efficiency by using the large number of vortical states. The amount of phase front ‘twisting’ indicates the orbital angular momentum state number, and beams with different orbital angular momentum are orthogonal. Such orbital angular momentum based multiplexing can potentially increase the system capacity and spectral efficiency of millimetre-wave wireless communication.
Similarly, early experimental results for orbital angular momentum multiplexing in the optical domain have shown results over short distances, but longer distance demonstrations are still forthcoming. The main challenge that these demonstrations have faced is that conventional optical fibers change the spin angular momentum of vortices as they propagate, and may change the orbital angular momentum when bent or stressed. So far stable propagation of up to 50 meters has been demonstrated in specialty optical fibers. Free-space transmission of orbital angular momentum modes of light over a distance of 143 km has been demonstrated to be able to support encoding of information with good robustness.
Current computers use electronics that have two states, zero and one. Quantum computing could use light to encode and store information. Optical vortices theoretically have an infinite number of states in free space, as there is no limit to the topological charge. This could allow for faster data manipulation. The cryptography community is also interested in optical vortices for the promise of higher bandwidth communication discussed above.
In optical microscopy, optical vortices may be used to achieve spatial resolution beyond normal diffraction limits using a technique called Stimulated Emission Depletion (STED) Microscopy. This technique takes advantage of the low intensity at the singularity in the center of the beam to deplete the fluorophores around a desired area with a high-intensity optical vortex beam without depleting fluorophores in the desired target area.
Optical vortices can be also directly (resonantly) transferred into polariton fluids of light and matter to study the dynamics of quantum vortices upon linear or nonlinear interaction regimes.
Optical vortices can be identified in the non-local correlations of entangled photon pairs.
See also
Orbital angular momentum of light
References
External links
Video of propagation simulation of Vortex Diffractive Optical Element from near field to far field by Holo/Or .
Optical vortices and optical tweezers at the University of Glasgow.
Singular Optics Master list by Grover Swartzlander Jr., University of Arizona, Tucson.
Optical vortex coronograph, Gregory Foo, et al., University of Arizona, Tucson.
Optical tweezers, David Grier, New York University.
Selected Publications on Optical Vortices at Australian National University.
Physical optics
Orbital angular momentum of waves
Vortices | Optical vortex | [
"Physics",
"Chemistry",
"Mathematics"
] | 2,272 | [
"Physical phenomena",
"Physical quantities",
"Vortices",
"Angular momentum of light",
"Waves",
"Orbital angular momentum of waves",
"Dynamical systems",
"Fluid dynamics",
"Angular momentum",
"Moment (physics)"
] |
3,066,287 | https://en.wikipedia.org/wiki/Silent%20service%20code | In the United States, the silent service code is a way for a diner to communicate to waitstaff during a meal to indicate whether the diner is finished with their plate. This is intended to prevent situations where the server might remove a plate of food and utensils prematurely.
The code is almost always taught during business dining etiquette classes.
Signals
To indicate they have finished with their plate, a diner places their napkin to the left of their plate and places their utensils together at the "4-o'clock" position on their plate. The code is applicable to most types of table service; even without waitstaff, the host or hosts may find it informative in judging when to clear away a course or the meal.
Utensils crossed on a plate signify that a diner is still eating. If a diner must leave during a course, placing their napkin on their chair indicates they are not finished.
See also
Eating utensil etiquette
Nonverbal communication
References
Etiquette
Nonverbal communication
Serving and dining | Silent service code | [
"Biology"
] | 209 | [
"Etiquette",
"Behavior",
"Human behavior"
] |
3,066,350 | https://en.wikipedia.org/wiki/Microstate%20%28statistical%20mechanics%29 | In statistical mechanics, a microstate is a specific configuration of a system that describes the precise positions and momenta of all the individual particles or components that make up the system. Each microstate has a certain probability of occurring during the course of the system's thermal fluctuations.
In contrast, the macrostate of a system refers to its macroscopic properties, such as its temperature, pressure, volume and density. Treatments of statistical mechanics define a macrostate as follows: a particular set of values of energy, the number of particles, and the volume of an isolated thermodynamic system is said to specify a particular macrostate of it. In this description, microstates appear as different possible ways the system can achieve a particular macrostate.
A macrostate is characterized by a probability distribution of possible states across a certain statistical ensemble of all microstates. This distribution describes the probability of finding the system in a certain microstate. In the thermodynamic limit, the microstates visited by a macroscopic system during its fluctuations all have the same macroscopic properties.
In a quantum system, the microstate is simply the value of the wave function.
Microscopic definitions of thermodynamic concepts
Statistical mechanics links the empirical thermodynamic properties of a system to the statistical distribution of an ensemble of microstates. All macroscopic thermodynamic properties of a system may be calculated from the partition function, which sums over all of its microstates.
At any moment a system is distributed across an ensemble of microstates, each labeled by an index i, and having a probability of occupation p_i and an energy E_i. If the microstates are quantum-mechanical in nature, then these microstates form a discrete set as defined by quantum statistical mechanics, and E_i is an energy level of the system.
Internal energy
The internal energy of the macrostate is the mean over all microstates of the system's energy, U = Σ_i p_i E_i.
This is a microscopic statement of the notion of energy associated with the first law of thermodynamics.
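As a numerical illustration (not part of the original article), the ensemble average U = Σ_i p_i E_i can be computed directly once the occupation probabilities are known; the three energy levels below are invented for the example, with canonical (Boltzmann) probabilities assumed.

```python
import math

def boltzmann_probabilities(energies, kT):
    """Canonical occupation probabilities p_i = exp(-E_i / kT) / Z."""
    weights = [math.exp(-e / kT) for e in energies]
    z = sum(weights)  # partition function
    return [w / z for w in weights]

def internal_energy(energies, probabilities):
    """U = sum_i p_i * E_i, the mean energy over the microstates."""
    return sum(p * e for p, e in zip(probabilities, energies))

# Three invented energy levels, in units where k_B * T = 1
levels = [0.0, 1.0, 2.0]
p = boltzmann_probabilities(levels, kT=1.0)
U = internal_energy(levels, p)
```

Lower-lying levels carry the larger occupation probabilities, so the mean energy sits well below the midpoint of the spectrum.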
Entropy
For the more general case of the canonical ensemble, the absolute entropy depends exclusively on the probabilities of the microstates and is defined as S = −k_B Σ_i p_i ln p_i,
where k_B is the Boltzmann constant. For the microcanonical ensemble, consisting of only those microstates with energy equal to the energy of the macrostate, this simplifies to S = k_B ln W,
with W the number of microstates. This form for entropy appears on Ludwig Boltzmann's gravestone in Vienna.
The second law of thermodynamics describes how the entropy of an isolated system changes in time. The third law of thermodynamics is consistent with this definition, since zero entropy means that the macrostate of the system reduces to a single microstate.
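The entropy formulas can be checked numerically. The sketch below (illustrative, not from the article) computes the Gibbs entropy S = −k_B Σ_i p_i ln p_i and confirms that for W equally likely microstates it reduces to Boltzmann's S = k_B ln W, and that a single microstate gives zero entropy, consistent with the third law.

```python
import math

K_B = 1.380649e-23  # Boltzmann constant in J/K

def gibbs_entropy(probabilities, k=K_B):
    """S = -k * sum_i p_i * ln(p_i); terms with p_i = 0 contribute nothing."""
    return -k * sum(p * math.log(p) for p in probabilities if p > 0)

# Microcanonical case: W equally probable microstates
W = 1000
uniform = [1.0 / W] * W
s_uniform = gibbs_entropy(uniform)
s_single = gibbs_entropy([1.0])  # one microstate: zero entropy
```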
Heat and work
Heat and work can be distinguished if we take the underlying quantum nature of the system into account.
For a closed system (no transfer of matter), heat in statistical mechanics is the energy transfer associated with a disordered, microscopic action on the system, associated with jumps in occupation numbers of the quantum energy levels of the system, without change in the values of the energy levels themselves.
Work is the energy transfer associated with an ordered, macroscopic action on the system. If this action acts very slowly, then the adiabatic theorem of quantum mechanics implies that this will not cause jumps between energy levels of the system. In this case, the internal energy of the system only changes due to a change of the system's energy levels.
The microscopic, quantum definitions of heat and work are the following: δW = Σ_i p_i dE_i and δQ = Σ_i E_i dp_i, so that dU = δW + δQ.
The two above definitions of heat and work are among the few expressions of statistical mechanics where the thermodynamic quantities defined in the quantum case find no analogous definition in the classical limit. The reason is that classical microstates are not defined in relation to a precise associated quantum microstate, which means that when work changes the total energy available for distribution among the classical microstates of the system, the energy levels (so to speak) of the microstates do not follow this change.
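These definitions — work as a shift of the energy levels at fixed occupations (δW = Σ_i p_i dE_i), heat as a shift of the occupations at fixed levels (δQ = Σ_i E_i dp_i) — can be verified to first order on a toy two-level system; all numbers below are invented for the illustration.

```python
def mean_energy(p, E):
    """U = sum_i p_i * E_i for occupations p and energy levels E."""
    return sum(pi * ei for pi, ei in zip(p, E))

E = [0.0, 1.0]         # energy levels
p = [0.7, 0.3]         # occupation probabilities
dE = [0.0, 0.001]      # small shift of the levels ("work")
dp = [-0.002, 0.002]   # small shift of the occupations ("heat"); sums to zero

work = sum(pi * dei for pi, dei in zip(p, dE))   # δW = Σ p_i dE_i
heat = sum(ei * dpi for ei, dpi in zip(E, dp))   # δQ = Σ E_i dp_i

E_new = [ei + dei for ei, dei in zip(E, dE)]
p_new = [pi + dpi for pi, dpi in zip(p, dp)]
dU = mean_energy(p_new, E_new) - mean_energy(p, E)
# dU equals work + heat up to a second-order dp·dE term
```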
The microstate in phase space
Classical phase space
The description of a classical system of F degrees of freedom may be stated in terms of a 2F-dimensional phase space, whose coordinate axes consist of the F generalized coordinates q_i of the system and its F generalized momenta p_i. The microstate of such a system will be specified by a single point in the phase space. But for a system with a huge number of degrees of freedom its exact microstate usually is not important. So the phase space can be divided into cells of size h0 = Δq_i Δp_i, each treated as a microstate. Now the microstates are discrete and countable, and the internal energy U no longer has an exact value but lies between U and U + δU, with δU ≪ U.
The number of microstates Ω that a closed system can occupy is proportional to its phase space volume: Ω(U) = (1/h0^F) ∫ 1_[U, U+δU](H(x)) d^F q d^F p, where 1_[U, U+δU](H(x)) is an indicator function: it is 1 if the Hamilton function H(x) at the point x = (q, p) in phase space is between U and U + δU, and 0 if not. The constant h0^F makes Ω(U) dimensionless. For an ideal gas, Ω(U) is proportional to V^N U^(3N/2 − 1) δU.
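The cell-counting construction can be made concrete for a single particle with the (illustrative) Hamiltonian H(q, p) = (q² + p²)/2: the sketch below divides the 2-dimensional phase space into cells of area h0² and counts those whose center falls in the energy shell [U, U + δU). For this Hamiltonian the shell area is exactly 2π δU, so the count should approach 2π δU / h0².

```python
import math

def count_microstates(U, dU, h0=0.02, q_max=3.0):
    """Count phase-space cells of area h0*h0 whose center has
    H(q, p) = (q**2 + p**2) / 2 in the energy shell [U, U + dU)."""
    n = int(2 * q_max / h0)
    count = 0
    for i in range(n):
        q = -q_max + (i + 0.5) * h0
        for j in range(n):
            p = -q_max + (j + 0.5) * h0
            if U <= 0.5 * (q * q + p * p) < U + dU:
                count += 1
    return count

omega = count_microstates(U=1.0, dU=0.2)
expected = 2 * math.pi * 0.2 / 0.02**2  # shell area / cell area
```

Widening the shell (larger δU) increases the count, as the text's Ω(U) suggests.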
In this description, the particles are distinguishable. If the position and momentum of two particles are exchanged, the new state will be represented by a different point in phase space. In this case a single point will represent a microstate. If a subset of M particles are indistinguishable from each other, then the M! possible permutations or possible exchanges of these particles will be counted as part of a single microstate. The set of possible microstates is also reflected in the constraints upon the thermodynamic system.
For example, in the case of a simple gas of N particles with total energy U contained in a cube of volume V, in which a sample of the gas cannot be distinguished from any other sample by experimental means, a microstate will consist of the above-mentioned N! points in phase space, and the set of microstates will be constrained to have all position coordinates lying inside the box, and the momenta lying on a hyperspherical surface in momentum coordinates of radius √(2mU). If, on the other hand, the system consists of a mixture of two different gases, samples of which can be distinguished from each other, say A and B, then the number of microstates is increased, since two points in which an A and B particle are exchanged in phase space are no longer part of the same microstate. Two particles that are identical may nevertheless be distinguishable based on, for example, their location. (See configurational entropy.) If the box contains identical particles, and is at equilibrium, and a partition is inserted, dividing the volume in half, particles in one box are now distinguishable from those in the second box. In phase space, the N/2 particles in each box are now restricted to a volume V/2, and their energy restricted to U/2, and the number of points describing a single microstate will change: the phase space description is not the same.
This has implications in both the Gibbs paradox and correct Boltzmann counting. With regard to Boltzmann counting, it is the multiplicity of points in phase space which effectively reduces the number of microstates and renders the entropy extensive. With regard to Gibbs paradox, the important result is that the increase in the number of microstates (and thus the increase in entropy) resulting from the insertion of the partition is exactly matched by the decrease in the number of microstates (and thus the decrease in entropy) resulting from the reduction in volume available to each particle, yielding a net entropy change of zero.
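The effect of correct Boltzmann counting can be checked numerically. In units with k_B = 1 and all particle-independent constants dropped, the configurational entropy of an ideal gas is S = N ln V − ln N!; the illustrative sketch below shows that the N! term makes S extensive (doubling N and V doubles S, up to small finite-size corrections), whereas counting the permutations of identical particles separately does not.

```python
from math import lgamma, log

def entropy_ideal(N, V, distinguishable=False):
    """Configurational entropy (k_B = 1, constant terms dropped):
    N*ln(V) - ln(N!) with correct Boltzmann counting; N*ln(V) if the
    N! permutations of identical particles are (wrongly) counted."""
    s = N * log(V)
    if not distinguishable:
        s -= lgamma(N + 1)  # lgamma(N + 1) == ln(N!)
    return s

N, V = 10_000, 50.0
s1 = entropy_ideal(N, V)                           # correct counting
s2 = entropy_ideal(2 * N, 2 * V)                   # doubled system
d1 = entropy_ideal(N, V, distinguishable=True)     # wrong counting
d2 = entropy_ideal(2 * N, 2 * V, distinguishable=True)
```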
See also
Quantum statistical mechanics
Degrees of freedom (physics and chemistry)
Ergodic hypothesis
Phase space
References
External links
Some illustrations of microstates vs. macrostates
Statistical mechanics | Microstate (statistical mechanics) | [
"Physics"
] | 1,638 | [
"Statistical mechanics"
] |
3,066,436 | https://en.wikipedia.org/wiki/Howdah | A howdah, or houdah (), derived from the Arabic (), which means "bed carried by a camel", also known as hathi howdah (, ), is a carriage which is positioned on the back of an elephant, or occasionally some other animal such as a camel, used most often in the past to carry wealthy people during progresses or processions, hunting or in warfare. It was also a symbol of wealth for the owner and as a result might be elaborately decorated, even with expensive gemstones.
Notable howdahs are the Golden Howdah, on display at the Napier Museum at Thiruvananthapuram, which was used by the Maharaja of Travancore and that is used traditionally during the Elephant Procession of the famous Mysore Dasara. The Mehrangarh Fort Museum in Jodhpur, Rajasthan, has a gallery of royal howdahs.
Today, howdahs are used mainly for tourist or commercial purposes in South East Asia and are the subject of controversy as animal rights groups and organizations, such as Millennium Elephant Foundation, openly criticize their use, citing evidence that howdahs can cause permanent damage to an elephant's spine, lungs, and other organs and can significantly shorten the animal's life.
History
A passage from the Roman historian Curtius describes the lifestyles of ancient Indian kings during the "Second urbanisation" (c. 600 – c. 200 BCE), who rode in chariots mounted on elephants, or in howdahs, when going on distant expeditions.
Howdah Gallery, Mehrangarh Fort Museum
The Mehrangarh Fort Museum, Jodhpur, has a gallery dedicated to an array of Hathi Howdah, used by the Maharaja of Mewar, mostly for ceremonial occasions.
Howdah for armies
References in literature
The American author Herman Melville, in Chapter 42 ("The Whiteness of the Whale") of Moby Dick (1851), writes: "To the native Indian of Peru, the continual sight of the snow-howdahed Andes conveys naught of dread, except, perhaps, in the mere fancy of the eternal frosted desolateness reigning at such vast altitudes, and the natural conceit of what a fearfulness it would be to lose oneself in such inhuman solitudes." It also appears in Chapter 11 of Jules Verne's classic adventure novel Around the World in Eighty Days (1873), in which we are told: "The Parsee, who was an accomplished elephant driver, covered his back with a sort of saddle-cloth, and attached to each of his flanks some curiously uncomfortable howdahs." It is mentioned in the first chapter of Ben-Hur: "Exactly at noon the dromedary, of its own will, stopped, and uttered the cry or moan, peculiarly piteous, by which its kind always protest against an overload, and sometimes crave attention and rest. The master thereupon bestirred himself, waking, as it were, from sleep. He threw the curtains of the houdah up, looked at the sun, surveyed the country on every side long and carefully, as if to identify an appointed place."
Tolkien wrote in The Lord of the Rings of the Mûmakil (Elephants) of Harad with howdahs on their backs.
Elephant and castle symbol
A derived symbol used in Europe is the "elephant and castle": an elephant carrying a castle on its back, being used especially to symbolize strength. The symbol was used in Europe in classical antiquity and more recently has been used in England since the 13th century, and in Denmark since at least the 17th century.
In antiquity, the Romans made use of war elephants, and turreted elephants feature on the coinage of Juba II of Numidia, in the 1st century BC. Elephants were used in the Roman campaigns against the Celtiberians in Hispania, against the Gauls, and against the Britons, the ancient historian Polyaenus writing, "Caesar had one large elephant, which was equipped with armor and carried archers and slingers in its tower. When this unknown creature entered the river, the Britons and their horses fled and the Roman army crossed over." However, he may have confused this incident with the use of a similar war elephant in Claudius' final conquest of Britain.
Alternatively, modern uses may derive from later contacts with howdahs. Fanciful images of war elephants with elaborate castles on their back date to 12th century Spain, as at right.
Notably, 13th century English use may come from the elephant given by Louis IX of France to Henry III of England, for his menagerie in the Tower of London in 1254, this being the first elephant in England since Claudius.
Today the symbol is most known in the United Kingdom from the Elephant and Castle intersection in south London. This derives its name from a pub established by 1765, in a building previously known as The White Horse and used by a smith or farrier. It has been claimed that the premises had been associated with the Cutlers' Company; however, the company has stated that it never had an association with the area. Meanwhile, the use of the symbol by the Cutlers, owing to the presence of ivory in sword and cutlery handles, is just one of diverse worldwide uses of the term over a long period. These include the titles of several other public houses in London. Stephen Humphrey, a historian of the Elephant and Castle, addresses the various origin theories and demonstrates that the naming of the pub that subsequently gave its name to the area was random.
The elephant and castle symbol has been used since the 13th century in the coat of arms of the city of Coventry, and forms the heraldic crest of the Corbet family, feudal barons of Caus, of Caus Castle in Shropshire, powerful marcher lords. It was used in the 17th century by the Royal African Company, which led to its use on the guinea gold coin.
The symbol of an elephant and castle has also been used in the Order of the Elephant, the highest order in Denmark, since 1693.
Camel howdah
In Persia, a camel howdah used to be a common means of transport.
Turkmens traditionally used Kejebe/کجوه on Camels, mainly used for carrying women in long distances or weddings, now it is only rented for weddings.
See also
Litter
Mahout, the driver of an elephant
Howdah pistols, large handguns used to defend howdahs from predators
Persian war elephants
References
External links
Animal-powered vehicles
Animal equipment
Elephants in India
Camels
Culture of India
Hindi words and phrases
Livestock
de:Sänfte#Spezielle Sänften | Howdah | [
"Biology"
] | 1,357 | [
"Animal equipment",
"Animals"
] |
3,066,455 | https://en.wikipedia.org/wiki/Semantide | Semantides (or semantophoretic molecules) are biological macromolecules that carry genetic information or a transcript thereof. Three different categories or semantides are distinguished: primary, secondary and tertiary. Primary Semantides are genes, which consist of DNA. Secondary semantides are chains of messenger RNA, which are transcribed from DNA. Tertiary semantides are polypeptides, which are translated from messenger RNA. In eukaryotic organisms, primary semantides may consist of nuclear, mitochondrial or plastid DNA. Not all primary semantides ultimately form tertiary semantides. Some primary semantides are not transcribed into mRNA (non-coding DNA) and some secondary semantides are not translated into polypeptides (non-coding RNA). The complexity of semantides varies greatly. For tertiary semantides, large globular polypeptide chains are most complex while structural proteins, consisting of repeating simple sequences, are least complex. The term semantide and related terms were coined by Linus Pauling and Emile Zuckerkandl. Although semantides are the major type of data used in modern phylogenetics, the term itself is not commonly used.
Related terms
Isosemantic
DNA or RNA molecules that differ in base sequence but translate into identical polypeptide chains are referred to as isosemantic.
Episemantic
Molecules that are synthesized by enzymes (tertiary semantides) are referred to as episemantic molecules. Episemantic molecules have a larger variety in types than semantides, which only consist of three types (DNA, RNA or polypeptides). Not all polypeptides are tertiary semantides. Some, mainly small polypeptides, can also be episemantic molecules.
Asemantic
Molecules that are not produced by an organism are referred to as asemantic molecules, because they do not contain any genetic information. Asementic molecules may be changed into episemantic molecules by anabolic processes. Asemantic molecules may also become semantic molecules when they integrate into a genome. Certain viruses and episomes have this ability.
Referring to a molecule as semantic, episemantic or asemantic only applies with respect to a specific organism. A semantic molecule for one organism may be asemantic for another.
Research applications
Semantides are used as phylogenetic information for studying the evolutionary history of organisms. Primary semantides are also used in comparative biodiversity analyses. However, since extracellular DNA can persist for some time, these types of analysis cannot distinguish active from inactive or dead organisms.
The extent to which biological macromolecules are informative for studying evolutionary history differs. The more complex a molecule, the more informative it is for phylogenetics. Primary and secondary semantides contain the most information. In tertiary semantides, some information is lost, because many amino acids are coded for by more than one codon.
Episemantic molecules (e.g. carotenoids) are also informative for phylogenetics. However, the distributions of these molecules do not correlate perfectly with phylogenies based on semantides. Therefore, independent confirmation is often still needed. The more enzymes involved in a synthesis pathway, the more unlikely that such pathways have evolved separately. Therefore, for episemantic molecules, molecules that are synthesized from the least complex asemantic molecules are the most informative in phylogenetics. However, different pathways may synthesize similar or even identical molecules. For example, in animals, plants and other eukaryotes, different pathways have been found for vitamin C synthesis. Therefore, certain molecules should not be used for studying phylogenetic relationships.
Although asemantic molecules could indicate some quantitative or qualitative features of a group of organisms, they are considered to be unreliable and uninformative for phylogenetics.
Analyses using different semantides may yield conflicting phylogenies. However, if the phylogenies are congruent, then there is more support for the evolutionary relationship. By analyzing larger sequences (e.g. complete mitochondrial genome sequences), phylogenies can be constructed, which are more resolved and have more support.
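As a toy illustration of how semantide sequences feed into distance-based phylogenetic analyses (the sequences and taxon names below are invented for the example), the simplest measure is the p-distance, the fraction of aligned sites at which two sequences differ:

```python
def p_distance(seq_a, seq_b):
    """Fraction of sites that differ between two aligned sequences."""
    if len(seq_a) != len(seq_b):
        raise ValueError("sequences must be aligned to equal length")
    diffs = sum(a != b for a, b in zip(seq_a, seq_b))
    return diffs / len(seq_a)

# Invented aligned sequence fragments for three taxa
seqs = {
    "taxon_A": "ACGTACGTACGT",
    "taxon_B": "ACGTACGAACGT",
    "taxon_C": "ACCTATGAACTT",
}
d_ab = p_distance(seqs["taxon_A"], seqs["taxon_B"])
d_ac = p_distance(seqs["taxon_A"], seqs["taxon_C"])
# taxon_A is much closer to taxon_B than to taxon_C
```

A matrix of such pairwise distances is the input to distance-based tree-building methods such as neighbor joining.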
Examples
Semantides often used in studies are common to most organisms and are known to only change slowly over time. Examples of these macromolecules are:
ATPase
Cytochrome b
Cytochrome c oxidase subunit I
Heat shock protein genes
Histone H3
RecA
Recombination activating gene 1
Ribonuclease P RNA
Ribosomal DNA (e.g. 28S rDNA)
Ribosomal RNA (e.g. 16S rRNA)
References
Phylogenetics | Semantide | [
"Biology"
] | 1,004 | [
"Bioinformatics",
"Phylogenetics",
"Taxonomy (biology)"
] |
3,066,490 | https://en.wikipedia.org/wiki/Indy%20%28software%29 | Indy is a music discovery tool for computers with an Internet connection. It uses collaborative filtering to automatically download music the user is likely to enjoy listening to. Indy is similar to iRATE radio, but does not include a built-in file manager. Indy automatically downloads music from public websites and continuously plays new titles. The user can rate the titles during or after playback. These ratings are then matched against the tastes of other listeners, and newly downloaded titles are meant to be more likely to be enjoyable to the listener.
Every track can be given a rating from 1 to 5 stars. If the song is still playing when the user assigns the score, a rating lower than 3 stars skips to the next title. Indy's user interface is highly simplified. The makers of the program recommend to only use Indy to discover music, and to utilize other applications to organize and repeatedly listen to it.
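Indy's actual matching algorithm is not documented here, but "matching ratings against the tastes of other listeners" describes user-based collaborative filtering in general. The following sketch is purely illustrative — the similarity measure, ratings, and track names are all invented for the example.

```python
from math import sqrt

def cosine_similarity(a, b):
    """Similarity of two users' rating dicts over commonly rated tracks."""
    common = set(a) & set(b)
    if not common:
        return 0.0
    dot = sum(a[t] * b[t] for t in common)
    norm_a = sqrt(sum(a[t] ** 2 for t in common))
    norm_b = sqrt(sum(b[t] ** 2 for t in common))
    return dot / (norm_a * norm_b)

def predict_rating(user, others, track):
    """Similarity-weighted average of other listeners' ratings for a track."""
    num = den = 0.0
    for other in others:
        if track in other:
            w = cosine_similarity(user, other)
            num += w * other[track]
            den += w
    return num / den if den else None

me = {"song1": 5, "song2": 4, "song3": 1}
listeners = [
    {"song1": 5, "song2": 5, "song4": 4},  # tastes like mine
    {"song1": 1, "song3": 5, "song4": 1},  # tastes unlike mine
]
score = predict_rating(me, listeners, "song4")  # weighted toward the similar listener
```

Tracks predicted to score highly would be queued for download and playback first.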
Indy is written in Java and therefore runs on most modern operating systems.
As of October 3, 2007, the official homepage contains only an image and the text "Revver is undergoing maintenance at the moment. We promise to return soon."
External links
Official homepage
Review of Indy published on infoAnarchy, November 3, 2005.
Recommender systems | Indy (software) | [
"Technology"
] | 248 | [
"Information systems",
"Recommender systems"
] |
3,066,922 | https://en.wikipedia.org/wiki/Skene%20%28theatre%29 | In the theatre of ancient Greece, the skene was the structure at the back of a stage. The word means 'tent' or 'hut', and it is thought that the original structure for these purposes was a tent or light building of wood and was a temporary structure. It was initially a very light structure or just cloth hanging from a rope, but over the course of time the skene underwent fundamental changes. First, it became a permanent building, whose roof could sometimes be used to make speeches, and as time passed it was raised up from the level of the orchestra, creating a , or "space in front of the ". The facade of the was behind the orchestra and provided a space for supporting stage scenery.
During the Roman period, the skene had become a large and complex, elaborately decorated stone building on several levels. Actors emerged from the parodoi and could use its steps and balconies to speak from. It was also where costumes were stored and to which the periaktoi (painted panels serving as the background) were connected.
Classical Greece
Ancient Greek theatre began in the 6th century BC and traces its origins to religious rituals such as the Festival of Dionysus and choral odes to the gods known as dithyrambs. Early Greek theatres were simple open air structures built on the slope of a hill. The Theatre of Dionysus in Athens is thought to have been the first purpose-built theatre. Around the middle of the 5th century BC, the skene began to appear in Greek theatre. Placing a skene behind the orchestra – where the performers acted, played, and danced – broke what is thought to have been the original theatre-in-the-round nature of Greek theatre. The skene also served as another "hidden stage". At times some of the action went on inside, in which case it was up to the audience to decide what was happening based on the noises coming from the inside. It was a convention of the dramas of the classic period that characters never died on stage, instead usually retreating into the skene to do so.
At some point at Athens in the Classical period a small stoa colonnade was constructed behind the scene-building, with its back to the theatre; it would have provided a permanent backdrop for the action.
Hellenistic period
The Hellenistic period started around the time of Alexander the Great's death in 323 BC and lasted until the Roman victory at the Battle of Actium in 31 BC. As Ancient Greece began to change from a culture consisting of ethnic and city-state Greeks to one governed by large monarchies, theatre architecture, including the stage buildings, began to experience significant changes. In the 4th century BC, the skene became a permanent stone structure and the stage was raised off the ground. In surviving examples this stage seems to have been raised by 2.5–4 m above the orchestra, and to have been 2–4 m deep, terminated by the skene.
As the Greek chorus declined in importance compared to a smaller group of main actors, the chorus remained in the orchestra to perform, while the main actors generally performed from the stage on top of the proskenion. This important change occurred in the Hellenistic period, between the 3rd and 1st centuries BC. The skene itself became increasingly elaborate, and was also available as a place for actors to declaim from, so that the performers between them had three levels available. "The roof of the skene was called the theologeion ('god-speaking'), from which one might assume that its primary use was for the advent of deities, either at the start or close of the drama." Most theatres still standing today date from the Hellenistic period.
Roman period
In Roman theatres, scaenae frons ('facade of the scaena') is the term for the elaborately decorated stone screens, rising two or three stories, that the skene had now become. By the 1st century BC, the skene was as elaborate as its Roman development, which dispensed with the orchestra altogether, leaving a relatively low facade, often decorated, and a wide stage or pulpitum behind, ending in an elaborate scaenae frons with three or more doors, and sometimes three stories. The evolution of the actor, who assumed an individual part and answered to the chorus (the Greek word for actor, hypokrites, means 'answerer'), introduced into drama a new form, the alternation of acted scenes, or episodes. The scaenae frons no longer supported painted sets in the Greek manner, but relied for effect on elaborate permanent architectural decoration and consisted of a series of complex stone buildings. To each side there was a versura. The episcaenium was the upper floor of the scaena, which might be deepened to give a third stage level, seen through openings. The interior of the building behind the facade remained normally outside the view of the audience, and fulfilled the original function as a changing room and place for props.
Surviving examples
Notes
References
Boardman, John ed., The Oxford History of Classical Art, 1993, OUP, .
"Grove" = Anastasia N. Dinsmoor, William B. Dinsmoor jr, "Architecture; Theatres" section in Thomas Braun, et al. "Greece, ancient". Grove Art Online, Oxford Art Online, Oxford University Press. Web. 22 Mar. 2016.
"Perseus", Perseus Encyclopedia, Skene.
External links
Ancient Roman Theatre - http://www.crystalinks.com/rometheaters.html
Ancient Greek theatre
Parts of a theatre
History of theatre
Greek words and phrases | Skene (theatre) | [
"Technology"
] | 1,099 | [
"Parts of a theatre",
"Components"
] |
3,067,110 | https://en.wikipedia.org/wiki/Talking%20ATM | A Talking ATM is a type of automated teller machine (ATM) that provides audible instructions so that persons who cannot read an ATM screen can independently use the machine. All audible information is delivered privately through a standard headphone jack on the face of the machine or a separately attached telephone handset. Information is delivered to the customer either through pre-recorded sound files or via text-to-speech speech synthesis.
History
The world's first talking ATM for the blind was an NCR machine unveiled by the Royal Bank of Canada on October 22, 1997, at a bank branch on the corner of Bank Street and Queen Street in Ottawa, Ontario. The talking ATM was a result of concerns Chris and Marie Stark, two blind customers, raised with the bank beginning in 1984. Their concerns turned into a discrimination complaint with the Canadian Human Rights Commission in 1991. The machine was manufactured by NCR and adapted by Ottawa-based T-Base Communications at a cost of about $500,000 Canadian dollars.
Usage
A user plugs a standard headset into the jack, and can hear instructions such as "press 1 for withdrawal", "press 2 for deposit." There is an audible orientation for first time users, and audible information describing the location of features such as the number keypad, deposit slot, and card slot.
With the increasing processing power available inside ATMs today, most ATM manufacturers provide the ability to connect headsets to their ATMs. Speech features are now available from lower-cost ATM producers, which means that the technology should gradually appear in off-premises ATM installations as equipment wears out and is replaced.
By country
Talking ATMs in Australia
National Australia Bank and Westpac have deployed talking ATMs.
Talking ATMs in Canada
By 2002 Royal Bank had 15 talking ATMs in operation and announced that an additional 250 units would be installed.
Relevant legislation and standards
Canadian Human Rights Act
Canadian Standards Association: CAN/CSA-B651.2-07 (R2012) – Accessible Design for Self-Service Interactive Devices.
Talking ATMs in the Philippines
Metrobank uses talking ATMs.
Talking ATMs in Turkey
Yapı ve Kredi Bankası implemented the first Talking ATMs in Turkey in December 2010. The Talking ATM function is specifically designed for visually impaired or partially sighted customers of Yapi Kredi or other banks. Utilising the text-to-speech technology, customers can perform cash withdrawal or balance inquiry transactions via Talking ATMs. The audible transaction starts when a headphone plug is connected to the Talking ATM's headphone jack, and is terminated for security when the jack is disconnected. Optionally, the customer may select to mask the account information on the ATM screen.
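The plug-in/plug-out behaviour described above amounts to a small state machine. The sketch below models it illustratively — it is not based on any vendor's firmware, and all names are invented.

```python
class TalkingAtmSession:
    """Headphone-driven audio session: plugging in a headset starts a
    private audio session (optionally masking the screen); unplugging
    terminates it immediately for security."""

    def __init__(self):
        self.audio_active = False
        self.screen_masked = False

    def headphone_plugged(self, mask_screen=False):
        self.audio_active = True
        self.screen_masked = mask_screen

    def headphone_unplugged(self):
        # End the audio session and restore the screen
        self.audio_active = False
        self.screen_masked = False

atm = TalkingAtmSession()
atm.headphone_plugged(mask_screen=True)   # session active, screen hidden

atm2 = TalkingAtmSession()
atm2.headphone_plugged()
atm2.headphone_unplugged()                # session ended on unplug
```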
Talking ATMs in the UK
Barclays launched talking ATMs first, with over 80% of its 4,100 ATMs offering the functionality. This included the 800 ATMs at ASDA superstores which are operated by Barclays. In addition, Northern Bank in the UK has deployed 85 talking cash machines out of its estate of over 200, which amounts to 40 per cent of its estate. Most recent machines installed by banks include a standard audio jack for blind persons to interact with the machine, but these facilities have not yet been enabled.
In 2011 the UK's leading charity for blind and partially sighted people, RNIB, launched a campaign to get major banks to install talking cash machines.
They had also been working with LOCOG from 2009 to ensure that talking cashpoints would be provided in the Olympic Park for the London 2012 Olympics and Paralympics. However, a few weeks before the Games, sponsor and sole provider Visa announced that they would only be able to install the necessary software in time on two machines.
Following consultation and collaboration with the RNIB, during 2013, Nationwide Building Society, the world's largest mutual organisation, started introducing voice-guided transactions across its network of 1,300 ATMs.
In April 2013 RBS publicly announced as part of their 2012 Sustainability report that they would be installing Talking ATMs from 2014 onwards, as part of a wider ATM upgrade.
Talking ATMs in the US
The first public actions in the United States to achieve ATM access for the blind occurred in June 1999. On June 3, Mellon Bank and PNC Bank were sued in federal courts in Philadelphia and Pittsburgh respectively. On June 25, 1999, Wells Fargo became the first major bank in the United States to commit to installing talking ATMs. In a legal settlement with blind community leaders, the bank agreed to install a talking ATM at all of its 1,500 ATM locations in California. The company has subsequently installed talking ATMs at all ATM locations in all states. In July 1999, Citibank agreed to pilot five talking ATMs in and around San Francisco and Los Angeles. The Citibank machine represented a unique engineering and research challenge as it uses a touch screen interface and has no function keys to offer access to the blind. All Citibank locations with this kind of machine have been adapted with talking functionality.
The first talking ATM in the United States was a Diebold machine installed on October 1, 1999, in San Francisco's City Hall by the San Francisco Federal Credit Union. Like the Royal Bank machine, it was adapted by T-Base Communications. In March 2000, Bank of America became the first financial institution to commit to installing a talking ATM at all of its ATM locations nationwide. A legal settlement called for the installation of hundreds of machines with later negotiations for a schedule for the remainder.
By 2012, there were in excess of 100,000 talking ATM's in the USA.
Relevant legislation and standards
Americans with Disabilities Act of 1990
Talking ATMs in India
In 2012, Union Bank of India, one of the leading public sector banks, unveiled India's first accessible talking ATM in Vastrapur, Ahmedabad, Gujarat, on 6 June 2012, for visually and physically disabled people. Union Bank of India did pioneering work on talking ATMs in India, and its talking ATM model and workflow set a benchmark. The bank has also developed accessible talking ATM usage manuals in DAISY and electronic Braille formats. A list of talking ATM locations and the accessible instruction manuals can be downloaded from the bank's website.
On October 4, 2012, State Bank of India, India's largest public sector bank, launched its first talking ATM in New Delhi. State Bank of India has since deployed talking ATMs on a large scale across India, inaugurating its 5,555th talking ATM on 1 July 2014.
Several more banks – namely Bank of Baroda, Corporation Bank, Citibank, HSBC, and other public sector, private and cooperative banks in India – have either deployed talking ATMs for the blind or taken initiatives toward them.
A repository of talking ATM addresses of banks in India is made available by the 'Talking ATM India Locator' website, a voluntary, non-commercial service.
See also
ATM Industry Association (ATMIA)
Automated cash handling
Braille
Security of Automated Teller Machines
Self service
Verification and validation
References
Automated teller machines
Assistive technology | Talking ATM | [
"Engineering"
] | 1,421 | [
"Automation",
"Automated teller machines"
] |
3,067,159 | https://en.wikipedia.org/wiki/Tom%27s%20Hardware | Tom's Hardware is an online publication owned by Future plc and focused on technology. It was founded in 1996 by Thomas Pabst. It provides articles, news, price comparisons, videos and reviews on computer hardware and high technology. The site features coverage on CPUs, motherboards, RAM, PC cases, graphic cards, display technology, power supplies and displays, storage, smartphones, tablets, gaming, consoles, and computer peripherals.
Tom's Hardware has a forum and featured blogs.
History
Tom's Hardware was founded in 1996 as Tom's Hardware Guide in Canada by Thomas Pabst. It started using the domain tomshardware.com in September 1997 and was followed by several foreign language versions, including Italian, French, Finnish and Russian based on franchise agreements.
While the initial testing labs were in Germany and California, much of Tom's Hardware's testing now occurs in New York and a facility in Ogden, Utah owned by its parent company. In April 2007, the site was acquired by the French company Bestofmedia Group. In July 2013, that company was acquired by TechMediaNetwork, Inc., which changed its name to Purch in April 2014. Purch's consumer brands, including Tom's Hardware, were acquired by Future in 2018.
The site celebrated its 20th anniversary in May 2016. Beyond continuous publication of the website, it is known for its overclocking championships and other contests.
Editors
Avram Piltch is the current editor-in-chief of Tom's Hardware. Prior to starting the position in 2018, he worked for sister sites Tom's Guide and Laptop Mag. Prior to that, John A. Burek, formerly of Computer Shopper, briefly held the role.
Burek succeeded Fritz Nelson, who served from August 2014 through 2017. Other former editors-in-chief include Chris Angelini (July 2008 – July 2014), Patrick Schmid (2005–2006), David Strom (2005), Omid Rahmat (1999–2003) and founder Thomas Pabst (1996–2001).
Related publications
Tom's Hardware is owned by Future plc, which also owns a number of other websites. In technology, those include Tom's Guide (formerly Gear Digest), Laptop Mag and AnandTech, as well as science sites like LiveScience and Space.com.
In March 2018 the German spin-off was to be closed because of the new data/privacy laws, but continued as an independent site (tomshw.de), with an exclusive licence for the local usage of the brand name.
In July 2019 the licence was returned. The German CEO and editor-in-chief of gotIT! Tech Media GmbH then started a new website, Igor'sLAB, and his own YouTube channel.
Tom's Guide
Tom's Guide (formerly known as GearDigest) is an online publication owned by Future that focuses on technology, with editorial teams in the US, UK and Australia. Tom's Guide was launched in 2007 by Bestofmedia, which was subsequently acquired by TechMediaNetwork in 2013; in 2014, TechMediaNetwork changed its name to Purch, which was acquired by Future in 2018. Primarily focused on news, reviews, price comparisons, how-tos and guides, Tom's Guide also features opinion articles and deals content.
The site features coverage on CPUs, motherboards, RAM, PC cases, graphic cards, display technology, displays, storage, smartphones, tablets, gaming, consoles, fitness and health, home, smart home, streaming, security and computer peripherals.
It is the second-largest US consumer technology news and review site, with 68.4 million visits in September 2022.
History
Tom’s Guide was originally launched as Gear Digest by Bestofmedia before being re-named to Tom's Guide. The publication was subsequently acquired by TechMediaNetwork in 2013; in 2014, TechMediaNetwork changed its name to Purch, which was then acquired by Future in 2018.
The site celebrated its 15th anniversary in 2022. Beyond continuous publication of the website, it is known for its annual CES awards and Tom's Guide Awards that are held in June and July each year.
Editors
Mark Spoonauer is the current Global Editor-in-Chief and has been since 2013. Before that, he worked as the Editor-in-Chief of Laptop Mag since 2003.
Mike Prospero is the current US Editor-in-Chief alongside Managing Editors Philip Michaels, Jason England, Nick Pino and Senior Deals Editor Louis Ramirez.
See also
CNET
TechCrunch
List of Internet forums
References
External links
Tom's Guide
Future plc
Magazines established in 1996
Computing websites
American technology news websites
Computer magazines published in the United States | Tom's Hardware | [
"Technology"
] | 1,065 | [
"Computing websites"
] |
3,067,258 | https://en.wikipedia.org/wiki/Nano-ITX | Nano-ITX is a computer motherboard form factor first proposed by VIA Technologies at CeBIT in March 2003, and implemented in late 2005. Nano-ITX boards measure , and are fully integrated, very low power consumption motherboards with many uses, but targeted at smart digital entertainment devices such as DVRs, set-top boxes, media centers, car PCs, and thin devices. Nano-ITX motherboards have slots for SO-DIMM.
There are four Nano-ITX motherboard product lines so far, VIA's EPIA N, EPIA NL, EPIA NX, and the VIA EPIA NR. These boards are available from a wide variety of manufacturers supporting numerous different CPU platforms.
Udoo has released at least one Nano-ITX board: the Udoo Bolt.
See also
Mini-ITX
Pico-ITX
Mobile-ITX
EPIA, mini-ITX and nano-ITX motherboards from VIA
Ultra-Mobile PC
Minimig, an open-source re-implementation of an Amiga 500 in the Nano-ITX form factor
References
External links
Jetway Computer Corp. J8F9 AMD Nano-ITX Mainboards Nano ITX Manufacturer, Mainboard OEMs, Daughterboards etc.
VIA EPIA N-Series Nano-ITX Mainboard
VIA EPIA NL-Series Nano-ITX Mainboard
VIA EPIA NX-Series Nano-ITX Mainboard
VIA EPIA NR-Series Nano-ITX Mainboard
Digital video recorders
IBM PC compatibles
Motherboard form factors
Set-top box | Nano-ITX | [
"Technology"
] | 323 | [
"Digital video recorders",
"Computing stubs",
"Recording devices",
"Computer hardware stubs"
] |
3,067,278 | https://en.wikipedia.org/wiki/Heisenberg%27s%20microscope | Heisenberg's microscope is a thought experiment proposed by Werner Heisenberg that has served as the nucleus of some commonly held ideas about quantum mechanics. In particular, it provides an argument for the uncertainty principle on the basis of the principles of classical optics.
The concept was criticized by Heisenberg's mentor Niels Bohr, and theoretical and experimental developments have suggested that Heisenberg's intuitive explanation of his mathematical result might be misleading. While the act of measurement does lead to uncertainty, the loss of precision is less than that predicted by Heisenberg's argument when measured at the level of an individual state. The formal mathematical result remains valid, however, and the original intuitive argument has also been vindicated mathematically when the notion of disturbance is expanded to be independent of any specific state.
Heisenberg's argument
Heisenberg supposes that an electron is like a classical particle, moving in the x direction along a line below the microscope. Let the cone of light rays leaving the microscope lens and focusing on the electron make an angle ε with the electron. Let λ be the wavelength of the light rays. Then, according to the laws of classical optics, the microscope can only resolve the position of the electron up to an accuracy of Δx ≈ λ / sin ε.
An observer perceives an image of the particle because the light rays strike the particle and bounce back through the microscope to the observer's eye. We know from experimental evidence that when a photon strikes an electron, the latter has a Compton recoil with momentum proportional to h/λ, where h is the Planck constant. However, the extent of the "recoil cannot be exactly known, since the direction of the scattered photon is undetermined within the bundle of rays entering the microscope." In particular, the electron's momentum in the x direction is only determined up to Δp_x ≈ (h/λ) sin ε.
Combining the relations for Δx and Δp_x, we thus have
Δx · Δp_x ≈ (λ / sin ε) · (h/λ) sin ε = h,
which is an approximate expression of Heisenberg's uncertainty principle.
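As a numerical sanity check on the argument above, the product of the two uncertainty estimates can be evaluated directly; the wavelength and aperture angle cancel, leaving Planck's constant regardless of the values chosen (a sketch; the function names are illustrative):

```python
import math

h = 6.62607015e-34  # Planck constant, in J*s

def position_uncertainty(wavelength, epsilon):
    """Classical-optics resolution limit: delta-x ~ lambda / sin(epsilon)."""
    return wavelength / math.sin(epsilon)

def momentum_uncertainty(wavelength, epsilon):
    """Photon-recoil spread: delta-p_x ~ (h / lambda) * sin(epsilon)."""
    return (h / wavelength) * math.sin(epsilon)

# Wavelength and angle cancel in the product, so delta-x * delta-p_x ~ h:
for lam, eps in [(5e-7, 0.3), (1e-10, 1.0)]:
    product = position_uncertainty(lam, eps) * momentum_uncertainty(lam, eps)
    print(math.isclose(product, h))  # → True
```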
Analysis of argument
Although the thought experiment was formulated as an introduction to Heisenberg's uncertainty principle, one of the pillars of modern physics, it attacks the very premises under which it was constructed, thereby contributing to the development of an area of physics—namely, quantum mechanics—that redefined the terms under which the original thought experiment was conceived.
Some interpretations of quantum mechanics question whether an electron actually has a determinate position before it is disturbed by the measurement used to establish said determinate position. Under the Copenhagen interpretation, an electron has some probability of showing up at any point in the universe, though the probability that it will be far from where one expects becomes very low at great distances from the neighborhood in which it is originally found. In other words, the "position" of an electron can only be stated in terms of a probability distribution, as can predictions of where it may move.
See also
Atom localization
Quantum mechanics
Basics of quantum mechanics
Interpretation of quantum mechanics
Philosophical interpretation of classical physics
Schrödinger's cat
Uncertainty principle
Quantum field theory
Electromagnetic radiation
References
Sources
External links
History of Heisenberg's Microscope
Lectures on Heisenberg's Microscope
Thought experiments in quantum mechanics
Werner Heisenberg | Heisenberg's microscope | [
"Physics"
] | 626 | [
"Quantum mechanics",
"Thought experiments in quantum mechanics"
] |
3,067,384 | https://en.wikipedia.org/wiki/European%20Community%20number | The European Community number (EC number) is a unique seven-digit identifier that was assigned to substances for regulatory purposes within the European Union by the European Commission. The EC Inventory comprises three individual inventories, EINECS, ELINCS and the NLP list.
Structure
The EC number may be written in a general form as NNN-NNN-R, where each N represents a digit and R is a check digit. The check digit is calculated using the ISBN method: R is the weighted sum 1·N1 + 2·N2 + 3·N3 + 4·N4 + 5·N5 + 6·N6, taken modulo 11.
If the remainder R is equal to 10, that combination of digits is not used for an EC number. To illustrate, the EC number of dexamethasone is 200-003-9. N1 is 2, N2 through N5 are 0, and N6 is 3.
The weighted sum is 1×2 + 2×0 + 3×0 + 4×0 + 5×0 + 6×3 = 20, and the remainder 20 mod 11 = 9, which is the check digit.
There is a set of 181 ELINCS numbers (EC numbers starting with 4) for which the checksum computed by the above algorithm is 10; these numbers were not skipped but were issued with a check digit of 1.
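The check-digit rule can be sketched in a few lines of Python (the function name here is illustrative, not part of any official tooling):

```python
def ec_check_digit(digits: str) -> int:
    """Check digit for an EC number, given its first six digits as a string.

    Weighted sum 1*N1 + 2*N2 + ... + 6*N6, taken modulo 11.
    A remainder of 10 means the combination is not used for an EC number.
    """
    if len(digits) != 6 or not digits.isdigit():
        raise ValueError("expected exactly six digits")
    return sum((i + 1) * int(d) for i, d in enumerate(digits)) % 11

# Dexamethasone, EC number 200-003-9: 1*2 + 6*3 = 20, and 20 mod 11 = 9.
print(ec_check_digit("200003"))  # → 9
```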
EC Inventory
The EC Inventory includes the substances in the following inventories. The content of these inventories is fixed and official.
List numbers
The European Chemicals Agency (ECHA) also applies the EC number format to what it calls a "List number". These numbers are assigned under the REACH Regulation without being legally recognised; they are not official because they have not been published in the Official Journal of the European Union. List numbers are administrative tools only and shall not be used for any official purposes.
See also
Registration, Evaluation, Authorisation and Restriction of Chemicals
European chemical Substances Information System
CAS registry number
References
External links
Chemical numbering schemes
European Union law
Regulation of chemicals in the European Union | European Community number | [
"Chemistry",
"Mathematics"
] | 367 | [
"Regulation of chemicals in the European Union",
"Regulation of chemicals",
"Chemical numbering schemes",
"Mathematical objects",
"Numbers"
] |
3,067,617 | https://en.wikipedia.org/wiki/Allegations%20of%20Iraqi%20mobile%20weapons%20laboratories | During the lead-up to the Iraq War, the United States had alleged that Iraq owned bioreactors, and other processing equipment to manufacture and process biological weapons that can be moved from location to location either by train or vehicle. Subsequent investigations failed to find any evidence of Iraq having access to a mobile weapons lab.
In the run-up to the 2003 invasion of Iraq, the main rationale for the Iraq War was the allegation that Iraq had failed to transparently and verifiably cease its weapons of mass destruction (WMD) program. In February 2003, Secretary of State Colin Powell gave a presentation before the United Nations showing a computer image of what were purported to be mobile facilities for producing biological agents. He said Iraq had as many as 18 mobile facilities for making anthrax and botulinum toxin, stating "they can produce enough dry, biological agent in a single month to kill thousands upon thousands of people." Powell based the assertion on accounts of at least four Iraqi defectors, including a chemical engineer who had supervised one of the facilities and had been present during production runs of a biological agent.
Following the invasion of Iraq, two trailers were found and initially described as the alleged mobile labs.
Intelligence sources
In the CIA briefings in the days before the 2003 United Nations Security Council presentation, Colin Powell insisted that all information included in the report had to be solid. "Powell and I were both suspicious because there were no pictures of the mobile labs," said Wilkerson, Powell's chief of staff. Powell demanded multiple sources, and the two CIA men present, George Tenet, then the CIA director, and John E. McLaughlin, then the CIA deputy director, claimed to have multiple eyewitness accounts and supporting evidence. Wilkerson claims that the two said, "This is it, Mr. Secretary. You can't doubt this one."
The information behind the mobile vehicles came from multiple informants, but the main and most important one was known as Curveball, an Iraqi refugee in Germany.
He claimed that after he had graduated at the top of his chemical engineering class at Baghdad University in 1994, he worked for "Dr. Germ," the pseudonym of British-trained microbiologist Rihab Rashid Taha, and led a team that built mobile labs to create biological WMD. Curveball was never actually interviewed by American intelligence, and in May 2004, over a year after the invasion of Iraq, the CIA concluded formally that Curveball's information was fabricated. Furthermore, on June 26, 2006, the Washington Post reported that "the CIA acknowledged that Curveball was a con artist who drove a taxi in Iraq and spun his engineering knowledge into a fantastic but plausible tale about secret bioweapons factories on wheels."
With information about the mobile labs in hand, the Bush administration asked Ahmed Chalabi's Iraqi National Congress (INC) if they knew anything about this "threat". The INC provided an Iraqi defector, Mohammad Harith, who claimed that while working for the Iraqi government he had purchased seven Renault refrigerated trucks to be converted into mobile biological weapons laboratories. The INC used James Woolsey, former director of the CIA, to directly contact Deputy Assistant Defense Secretary Linton Wells of the Defense Intelligence Agency (DIA) with information about Mohammad Harith's account, so as to avoid any scrutiny by the CIA. Harith was met by a DIA debriefer, who concluded that his account "seemed accurate, but much of it appeared embellished" and that he apparently "had been coached on what information to provide." However, the line about Harith being coached was removed, a note that he had passed a lie detector test was added, and his account became official evidence of mobile bio-labs, even being used by Bush in his January 2003 State of the Union message. Later, Harith's evidence, like Curveball's, was labeled with a fabricator notice.
A third source, another asylum seeker reporting through Defense HUMINT channels, claimed in June 2001 that Iraq had mobile biological weapons laboratories; however, in October 2003, after the war, the source recanted his testimony.
A fourth source existed but all information and details regarding the report are still classified.
All the sources depended on Curveball's account and were seen as supporting it. When Tenet called Powell in late summer 2003, seven months after the U.N. speech, he admitted that all of the CIA's claims about Iraqi weapons that Powell had used in his speech were wrong. "They had hung on for a long time, but finally Tenet called Powell to say, 'We don't have that one, either,' " Wilkerson recalled. "The mobile labs were the last thing to go."
Investigations
May 13, 2003, it was reported that a second suspected mobile weapons lab had been found in Iraq on April 19, 2003.
May 27, 2003, a fact finding mission to Iraq sent its report to Washington unanimously declaring that the trailers had nothing to do with biological weapons. The report was 'shelved'.
May 28, 2003, the Central Intelligence Agency released a report on the supposed mobile weapons labs, stating "Despite the lack of confirmatory samples, we nevertheless are confident that this trailer is a mobile BW production plant."
May 29, 2003, President George W. Bush declared that the United States had found the weapons of mass destruction claimed to be in Iraq, in the form of mobile labs for manufacturing biological weapons.
June 2, 2003, in the UK, Susan Watts broadcast a report on the influential BBC2 Newsnight programme which included an anonymous expert's (Dr David Kelly's) opinion on whether the mobile weapons labs were for biological weapons. Dr Kelly was by then only 40% certain the trailers were labs.
June 5, 2003, Dr. David Kelly, one of Britain's foremost experts on biological weapons, visited Iraq to examine the trailers and take photographs.
June 7, 2003, Judith Miller reported that some scientists had doubts about the trailers in the piece "Some experts doubt trailers were germ lab" (Judith Miller and William J. Broad, New York Times).
June 8, 2003, The Observer newspaper picked up on the story with its piece "Blow to Blair over 'mobile labs': Saddam's trucks were for balloons, not germs", placing more pressure on Prime Minister Tony Blair over the lack of weapons of mass destruction found in Iraq.
June 15, 2003, It was revealed that the trailers discovered were for the production of hydrogen to fill artillery balloons, as the Iraqis had insisted all along. The artillery balloons were used to get detailed weather data to be used to accurately direct artillery shelling.
June 20, 2003:
June 23, 2003:
July 17/18, 2003: Dr. David Kelly, a key source for many of the newspaper articles doubting the mobile weapons labs, was found dead. An inquiry into his death, the Hutton Inquiry, found his death to be suicide.
Dick Cheney's continued support for the allegations
Powell's retraction
Powell later retracted the claims about the mobile laboratories that he had made in his United Nations presentation.
The Pentagon produced a secret report in 2003 entitled Final Technical Engineering Exploitation Report on Iraqi Suspected Biological Weapons-Associated Trailers that found that the trailers were impractical for biological agent production and almost certainly designed and built for the generation of hydrogen.
See also
Curveball (informant)
Iraqi aluminum tubes
Niger uranium forgeries
Plame affair
Ukraine bioweapons conspiracy theory
References
Iraq and weapons of mass destruction
Military equipment
Biological warfare facilities
Causes and prelude of the Iraq War
Propaganda in the Iraq War | Allegations of Iraqi mobile weapons laboratories | [
"Biology"
] | 1,499 | [
"Biological warfare facilities",
"Biological warfare"
] |
3,067,624 | https://en.wikipedia.org/wiki/Quantitative%20psychology | Quantitative psychology is a field of scientific study that focuses on the mathematical modeling, research design and methodology, and statistical analysis of psychological processes. It includes tests and other devices for measuring cognitive abilities. Quantitative psychologists develop and analyze a wide variety of research methods, including those of psychometrics, a field concerned with the theory and technique of psychological measurement.
Psychologists have long contributed to statistical and mathematical analysis, and quantitative psychology is now a specialty recognized by the American Psychological Association. Doctoral degrees are awarded in this field in a number of universities in Europe and North America, and quantitative psychologists have been in high demand in industry, government, and academia. Their training in both social science and quantitative methodology provides a unique skill set for solving both applied and theoretical problems in a variety of areas.
History
Quantitative psychology has its roots in early experimental psychology when, in the nineteenth century, the scientific method was first systematically applied to psychological phenomena. Notable contributions included E. H. Weber's studies of tactile sensitivity (1830s), Fechner's development and use of psychophysical methods (1850–1860), and Helmholtz's research on vision and audition beginning after 1850. Wilhelm Wundt is often called the "founder of experimental psychology", because he called himself a psychologist and opened a psychological laboratory in 1879 where many researchers came to study. The work of these individuals and many others dispelled the assertion, by theorists such as Immanuel Kant, that psychology could not become a science because precise experiments on the human mind were impossible.
Intelligence testing
Intelligence testing has long been an important branch of quantitative psychology. The nineteenth-century English statistician Francis Galton, a pioneer in psychometrics, was the first to create a standardized test of intelligence, and he was among the first to apply statistical methods to the study of human differences and their inheritance. He came to believe that intelligence is largely determined by heredity, and he also hypothesized that other measures such as the speed of reflexes, muscle strength, and head size are correlated with intelligence. He established the world's first mental testing center in 1882; in the following year, he published his observations and theories in "Inquiries into Human Faculty and Its Development".
Statistical techniques
Statistical methods are the quantitative tools most used by psychologists. Karl Pearson introduced the correlation coefficient and the chi-squared test. The 1900–1920 period saw the t-test (Student, 1908), the ANOVA (Fisher, 1925) and a non-parametric correlation coefficient (Spearman, 1904). A large number of tests were developed in the latter half of the 20th century (e.g., all multivariate tests). More recently, popular multivariate techniques were developed—including the hierarchical linear model, structural equation modeling, and independent component analysis.
In 1946, psychologist Stanley Smith Stevens organized levels of measurement into four scales (nominal, ordinal, interval, and ratio) in a paper that is still often cited. Jacob Cohen, a New York University professor of psychology, analyzed quantitative methods involving statistical power and effect size, which helped to lay foundations for current statistical meta-analysis and the methods of estimation statistics. He gave his name to Cohen's kappa and Cohen's d.
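Two of the quantities mentioned above, the correlation coefficient and Cohen's d, can be computed directly from their textbook definitions; this sketch uses only the standard library (sample data and function names are illustrative):

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def cohens_d(x, y):
    """Cohen's d: standardized mean difference using a pooled standard deviation."""
    nx, ny = len(x), len(y)
    mx, my = sum(x) / nx, sum(y) / ny
    vx = sum((a - mx) ** 2 for a in x) / (nx - 1)
    vy = sum((b - my) ** 2 for b in y) / (ny - 1)
    pooled = math.sqrt(((nx - 1) * vx + (ny - 1) * vy) / (nx + ny - 2))
    return (mx - my) / pooled

# Perfectly linear data has a correlation of exactly 1:
print(round(pearson_r([1, 2, 3, 4], [2, 4, 6, 8]), 3))  # → 1.0
```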
Education and training
Undergraduate
Training for quantitative psychology can begin informally at the undergraduate level. Many graduate schools recommend that students have some coursework in psychology and complete the full college sequence of calculus (including multivariate calculus) and a course in linear algebra. Quantitative coursework in other fields such as economics and research methods and statistics courses for psychology majors are also helpful. Historically, however, students without all these courses have been accepted if other aspects of their application show promise. Some schools also offer formal minors in areas related to quantitative psychology. For example, the University of Kansas offers a minor in "Social and Behavioral Sciences Methodology" that provides advanced training in research methodology, applied data analysis, and practical research experience relevant to quantitative psychology. Coursework in computer science is also useful. Mastery of an object-oriented programming language or learning to write code in R, SAS, or SPSS is useful for the type of data analysis performed in graduate school.
Graduate
Quantitative psychologists may possess a doctoral degree or a master's degree. Due to its interdisciplinary nature and depending on the research focus of the university, these programs may be housed in a school's college of education or in their psychology department. Programs that focus especially in educational research and psychometrics are often part of education or educational psychology departments. These programs may therefore have different names mentioning "research methods" or "quantitative methods", such as the "Research and Evaluation Methodology" Ph.D. from the University of Florida or the "Quantitative Methods" degree at the University of Pennsylvania. However, some universities may have separate programs in their two colleges. For example, the University of Washington has a "Quantitative psychology" degree in their psychology department and a separate "Measurement & Statistics" Ph.D. in their college of education. Others, such as Vanderbilt University's Ph.D. in Psychological Sciences, are jointly housed across two psychology departments.
Universities with a mathematical focus include McGill University's "Quantitative Psychology and Modeling" program and Purdue University's "Mathematical and Computational Psychology" degrees. Students with an interest in modeling biological or functional data may go into related fields such as biostatistics or computational neuroscience.
Doctoral programs typically accept students with only bachelor's degrees, although some schools may require a master's degree before applying. After the first two years of studies, graduate students typically earn a Master of Arts in Psychology, Master of Science in Statistics or Applied Statistics, or both. For example, most students in the University of Minnesota's "Quantitative and Psychometric Methods" Ph.D. program are also Master of Science students in the School of Statistics. Additionally, several universities offer minor concentrations in quantitative methods, such as New York University.
Companies that produce standardized tests such as College Board, Educational Testing Service, and American College Testing are some of the largest private sector employers of quantitative psychologists. These companies also often provide internships to students in graduate school.
Shortage of qualified applicants
In 1990, an influential paper titled "Graduate Training in Statistics, Methodology, and Measurement in Psychology" was published in the American Psychologist journal. This article discussed the need for increased and up-to-date training in quantitative methods for psychology graduate programs in the United States. In August 2005, the American Psychological Association expressed the need for more quantitative psychologists in the industry—for every PhD awarded in the subject, there were about 2.5 quantitative psychologist position openings. Due to a lack of applicants in the field, the APA created a Task Force to study the state of quantitative psychology and predict its future. Domestic U.S. applicants are especially lacking. The majority of international applicants come from Asian countries, especially South Korea and China. In response to the lack of qualified applicants, the APA Council of Representatives authorized a special task force in 2006. The task force was chaired by Leona S. Aiken from Arizona State University.
Research areas
Quantitative psychologists generally have a main area of interest. Notable research areas in psychometrics include item response theory and computer adaptive testing, which focus on education and intelligence testing. Other research areas include structural equation modeling, social network analysis, human decision science, statistical genetics, and modeling psychological processes through time series analysis, such as in fMRI data collection.
Two common types of psychometric tests are aptitude tests, which are supposed to measure raw intellectual suitability for a purpose, and personality tests, which aim to assess tendencies toward certain thoughts, feelings, or behaviors.
Item response theory is based on the application of related mathematical models to testing data. Because it is generally regarded as superior to classical test theory, it is the preferred method for developing scales in the United States, especially when optimal decisions are demanded, as in so-called high-stakes tests, e.g., the Graduate Record Examination (GRE) and Graduate Management Admission Test (GMAT).
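The mathematical models behind item response theory can be illustrated with the two-parameter logistic (2PL) form, a standard textbook model in which the probability of a correct response depends on examinee ability θ, item discrimination a, and item difficulty b (a minimal sketch; the names are illustrative):

```python
import math

def two_pl(theta, a, b):
    """2PL item response function: P(correct) = 1 / (1 + exp(-a * (theta - b)))."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# When ability equals item difficulty, the model gives exactly a 50% chance:
print(two_pl(theta=0.0, a=1.5, b=0.0))  # → 0.5

# Higher ability raises the probability of a correct response:
print(two_pl(theta=2.0, a=1.5, b=0.0) > two_pl(theta=1.0, a=1.5, b=0.0))  # → True
```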
Professional organizations
Quantitative psychology is served by several scientific organizations. These include the Psychometric Society, Division 5 of the American Psychological Association (Quantitative and Qualitative Methods), the Society of Multivariate Experimental Psychology, and the European Society for Methodology. Associated disciplines include statistics, mathematics, educational measurement, educational statistics, sociology, and political science. Several scholarly journals reflect the efforts of scientists in these areas, notably Psychometrika, Psychological Methods, Multivariate Behavioral Research, Journal of Mathematical Psychology, and Structural Equation Modeling (journal).
Notable people
The following is a select list of quantitative psychologists or people who have contributed to the field:
See also
List of schools for quantitative psychology
Mathematical psychology
Measuring the Mind
Network neuroscience
Psychophysics
Psychometrics
Psychometrika
Multivariate Behavioral Research
Journal of Mathematical Psychology
Structural Equation Modeling (journal)
Quantitative psychological research
WinBUGS
References
Further reading
External links
APA Division 5: Quantitative and Qualitative Methods
The Psychometric Society
The Society of Multivariate Experimental Psychology
The European Society for Methodology
Society for Mathematical Psychology
Quantitative psychology
Applied statistics
Psychometrics
Quantitative analysis of behavior
Quantitative research | Quantitative psychology | [
"Mathematics",
"Biology"
] | 1,856 | [
"Behavior",
"Applied mathematics",
"Quantitative analysis of behavior",
"Behaviorism",
"Quantitative psychology",
"Applied statistics"
] |
3,067,713 | https://en.wikipedia.org/wiki/Maintenance%20Operations%20Protocol | The Maintenance Operation Protocol (MOP) is used for utility services such as uploading and downloading system software, remote testing and problem diagnosis. It was a proprietary protocol of Digital Equipment Corporation.
MOP frames can be one of the following commands:
See also
Reverse Address Resolution Protocol (RARP)
References
Digital Equipment Corporation
System administration
Booting | Maintenance Operations Protocol | [
"Technology"
] | 70 | [
"Information systems",
"System administration"
] |
3,067,780 | https://en.wikipedia.org/wiki/Chan%20King-ming | Chan King-ming is a Hong Kong politician and academic. He served as the vice-chairman of the Democratic Party of Hong Kong from 2004 to 2006. He is also an associate professor in the department of biochemistry and Environmental Science Program of the Chinese University of Hong Kong.
Academic career
Chan King-ming earned his Bachelor of Science and Master of Philosophy degrees at the Chinese University of Hong Kong, and his doctoral degree from Memorial University of Newfoundland in St. John's, Newfoundland, Canada.
He is now director of the Environmental Science Program at the Chinese University of Hong Kong. He teaches many different courses including Current Environmental Issues, Biochemical Toxicology and Introduction to Environmental Science in the Environmental Science Program and Molecular Endocrinology in the Biochemistry Programme. Trained as a molecular biologist for his PhD and post-doctoral research, Professor Chan's research interests include gene regulation, aquatic toxicology, marine biotechnology and environmental biochemistry and environmental policy. Prof. Chan is also chairman of CUTA (Chinese University Teachers Association), trustee of Shaw College Board of trustees, Member of Assembly of Fellows, Shaw College, and warden of Hostel 2, Shaw College.
Political career
Chan is a founding member of the Democratic Party. He was elected as chairman of the New Territories East Branch in 1999, and later became the party's minister of organization affairs and central committee member. He ran for the chairmanship election in 2004 but lost to Lee Wing-tat. He was then elected vice-chairman of the party. He also served as a part-time member of Central Policy Unit of the Hong Kong Government between 2004 and 2006.
He ran again for the chairmanship in December 2006, but lost to Albert Ho. He did not seek the vice-chairmanship in the 2006 election. In 2010, the Democratic Party decided to support the government's political reform package, which expanded the Legislative Council from 30 to 35 seats in each of the geographical and functional constituencies by adopting the idea of "super-district councillors", to be elected across the territory after nomination by district councillors. Younger members of the Democratic Party, including Chan, believed that such a proposal could not deliver any significant progress towards democratic development in the local political agenda.
In December 2010, Chan quit the party over the electoral reform dispute and founded the Neo Democrats, of which he is the incumbent convenor. The Neo Democrats campaigned in the 2011 District Council election and won a total of 8 seats.
Affiliations
Chan is a current member of the Professional Teachers' Union (HK), Hong Kong Marine Biological Association, Society of Toxicology (SOT) in the US, American Fisheries Society, American Physiological Society, etc.
He is now chairman (elected) of the Teachers' Association of Chinese University (2011–12). He also serves as warden of Student Hostel 2 of Shaw College, member of the board of trustees of Shaw College, and member of the Assembly of Fellows, Shaw College, Chinese University.
References
Year of birth missing (living people)
Living people
Biochemists
Academic staff of the Chinese University of Hong Kong
Hong Kong educators
Democratic Party (Hong Kong) politicians
Neo Democrats politicians
Members of the Election Committee of Hong Kong, 2000–2005
Members of the Election Committee of Hong Kong, 2007–2012
Members of the Election Committee of Hong Kong, 2012–2017 | Chan King-ming | [
"Chemistry",
"Biology"
] | 672 | [
"Biochemistry",
"Biochemists"
] |
3,068,051 | https://en.wikipedia.org/wiki/O-Phenylenediamine | o-Phenylenediamine (OPD) is an organic compound with the formula C6H4(NH2)2. This aromatic diamine is an important precursor to many heterocyclic compounds. OPD is a white compound although samples appear darker owing to oxidation by air. It is isomeric with m-phenylenediamine and p-phenylenediamine.
Preparation
Commonly, 2-nitrochlorobenzene is treated with ammonia to generate 2-nitroaniline, whose nitro group is then reduced:
ClC6H4NO2 + 2 NH3 → H2NC6H4NO2 + NH4Cl
H2NC6H4NO2 + 3 H2 → H2NC6H4NH2 + 2 H2O
In the laboratory, the reduction of the nitroaniline is effected with zinc powder in ethanol, followed by purification of the diamine as the hydrochloride salt. Darkened, impure samples can be purified by treating an aqueous solution with sodium dithionite and activated carbon.
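As a sanity check, the two preparation equations above can be verified for atom balance with a short script. This is an illustrative sketch, not part of the source article; the helper names `atoms` and `balanced` are mine:

```python
import re
from collections import Counter

def atoms(formula):
    """Atom counts for a flat formula such as 'H2NC6H4NO2' (no parentheses)."""
    counts = Counter()
    for symbol, n in re.findall(r'([A-Z][a-z]?)(\d*)', formula):
        counts[symbol] += int(n or 1)
    return counts

def balanced(lhs, rhs):
    """Each side is a list of (stoichiometric coefficient, formula) pairs."""
    def side_total(side):
        total = Counter()
        for coeff, formula in side:
            for symbol, n in atoms(formula).items():
                total[symbol] += coeff * n
        return total
    return side_total(lhs) == side_total(rhs)

# ClC6H4NO2 + 2 NH3 -> H2NC6H4NO2 + NH4Cl
print(balanced([(1, 'ClC6H4NO2'), (2, 'NH3')],
               [(1, 'H2NC6H4NO2'), (1, 'NH4Cl')]))   # True
# H2NC6H4NO2 + 3 H2 -> H2NC6H4NH2 + 2 H2O
print(balanced([(1, 'H2NC6H4NO2'), (3, 'H2')],
               [(1, 'H2NC6H4NH2'), (2, 'H2O')]))     # True
```

Both equations come out balanced, confirming the stoichiometry as written.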
Reactions and uses
o-Phenylenediamine condenses with ketones and aldehydes to give various valuable products. Its reaction with formic acid produces benzimidazole. Other carboxylic acids give 2-substituted benzimidazoles. The fungicides benomyl and fuberidazole are made in this manner. Thiophanate-methyl is another fungicide produced from o-phenylenediamine. Condensation with potassium ethylxanthate gives 2-mercaptobenzimidazole. With nitrous acid, o-phenylenediamine condenses to give benzotriazole, a corrosion inhibitor.
Quinoxalinedione may be prepared by condensation of o-phenylenediamine with dimethyl oxalate. Mercaptoimidazoles, obtained by condensation with xanthate esters, are commonly used as antioxidants in rubber production. Condensation of substituted o-phenylenediamines with diketones yields various pharmaceuticals.
OPD is a ligand in coordination chemistry. Oxidation of metal-phenylenediamine complexes affords the diimine derivatives. OPD condenses with salicylaldehyde to give chelating Schiff base ligands.
Safety
With an LD50 of 44 mg/L (in water), o-phenylenediamine is about 1000 times less toxic than the para-isomer. Anilines are typically handled as if they are carcinogenic. For many applications, OPD has been replaced by safer alternatives such as 3,3',5,5'-tetramethylbenzidine.
References
Diamines
Anilines
Chelating agents | O-Phenylenediamine | [
"Chemistry"
] | 608 | [
"Chelating agents",
"Process chemicals"
] |
3,068,097 | https://en.wikipedia.org/wiki/Houndstooth | Houndstooth is a pattern of alternating light and dark checks used on fabric. It is also known as hounds tooth check, hound's tooth (and similar spellings), dogstooth, dogtooth or dog's tooth. The duotone pattern is characterized by a tessellation of light and dark solid checks alternating with light-and-dark diagonally-striped checks—similar in pattern to gingham plaid but with diagonally-striped squares in place of gingham's blended-tone squares. Traditionally, houndstooth uses black and white, although other contrasting colour combinations may be used.
History
The oldest Bronze Age houndstooth textiles found so far are from the Hallstatt Celtic salt mine, Austria, 1500-1200 BC. One of the best known early occurrences of houndstooth is the Gerum Cloak, a garment uncovered in a Swedish peat bog, dated to between 360 and 100 BC. Contemporary houndstooth checks may have originated as a pattern in woven tweed cloth from the Scottish Lowlands, but are now used in many other woven fabrics aside from wool. The traditional houndstooth check is made with alternating bands of four dark and four light threads in both warp and weft/filling, woven in a simple 2:2 twill, two over/two under the warp, advancing one thread each pass. In an early reference to houndstooth, De Pinna, a New York City–based men's and women's high-end clothier founded in 1885, included houndstooth checks along with gun club checks and Scotch plaids as part of its 1933 spring men's suits collection. The term houndstooth itself is not recorded before 1936.
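The threading rule just described (4-and-4 colour bands interlaced in a 2:2 twill) fully determines the pattern, so it can be rendered programmatically. The following Python sketch is illustrative only; the function name and the `#`/`.` characters are my choices, not any weaving convention:

```python
def houndstooth(rows=16, cols=16):
    """Render a classic houndstooth check as text: '#' = dark, '.' = light.

    Threads are coloured in repeating bands of 4 dark / 4 light, and the
    cloth is a 2/2 twill: each warp thread passes over two weft threads,
    then under two, advancing one thread per pass.
    """
    def colour(k):
        return '#' if k % 8 < 4 else '.'   # 4 dark, then 4 light threads

    grid = []
    for i in range(rows):            # weft (horizontal) threads
        row = []
        for j in range(cols):        # warp (vertical) threads
            warp_on_top = (j - i) % 4 < 2  # the 2/2 twill interlacing
            row.append(colour(j) if warp_on_top else colour(i))
        grid.append(''.join(row))
    return grid

for line in houndstooth(8, 16):
    print(line)
```

The visible colour at each crossing is that of whichever thread is on top, which is what produces the characteristic jagged "tooth" instead of plain gingham squares; the whole pattern repeats every 8 threads in each direction.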
Oversized houndstooth patterns were also employed prominently at Alexander McQueen's Fall 2009 Collection, entitled Horn of Plenty. The patterns were a reference to Christian Dior's signature tweed suits.
Houndstooth patterns, especially black-and-white houndstooth, have long been associated regionally with the University of Alabama (UA). This is because the longtime UA football coach Paul "Bear" Bryant often (though not exclusively) wore black-and-white houndstooth fedoras. The university has attempted to assert a trademark for houndstooth, especially when used in conjunction with other symbols of the school and its football team, a legal strategy that has been largely unsuccessful.
The Australian department store David Jones's branding—a black-on-white houndstooth pattern—is one of the most recognised corporate identities in Australia. A government-sponsored panel judged it in 2006 as one of Australia's top-10 favourite trademarks. The iconic design was the result of a 1967 rebranding exercise by chairman Charles Lloyd Jones, Jr., who wished that the store would be so well known by the design as to not require the use of the name on the packaging. It was allegedly inspired by the houndstooth design on a Miss Dior perfume bottle belonging to his mother, Hannah Jones. On 25 July 2016, David Jones introduced a new logo with a revised font style and removed references to the houndstooth online.
Variations
A smaller-scale version of the pattern can be referred to as puppytooth.
In popular culture
Ricky, from the popular Canadian comedy franchise Trailer Park Boys, can often be seen wearing a houndstooth button-up shirt. The pattern has become an unofficial branding element for the series, appearing on a range of merchandise.
Gallery
See also
Argyle
Check
Glen plaid
Tartan
Twill
Tweed
References
External links
Textile patterns
Visual motifs
Tessellation | Houndstooth | [
"Physics",
"Mathematics"
] | 715 | [
"Visual motifs",
"Tessellation",
"Symbols",
"Euclidean plane geometry",
"Planes (geometry)",
"Symmetry"
] |