| id (int64) | url (string) | text (string) | source (string) | categories (string) | token_count (int64) |
|---|---|---|---|---|---|
3,641,488 | https://en.wikipedia.org/wiki/Mother%20Albania%20%28statue%29 | Mother Albania is a statue located at the National Martyrs' Cemetery of Albania, in Albania, dedicated in 1972.
The statue represents the country as a mother guarding over the eternal slumber of those who gave their lives for her. There are up to 28,000 graves of Albanian partisans in the cemetery, all of whom perished during World War II. The massive statue holds a wreath of laurels and a star. The cemetery was also the resting place of former leader Enver Hoxha, who was subsequently disinterred and given a more humble grave in another public cemetery.
The statue is made of concrete and is the work of the sculptors Kristaq Rama, Muntaz Dhrami (1936–) and Shaban Hadëri (1928–2010). It stands atop a 3-metre pedestal; engraved on the pedestal are the words "Lavdi e përjetshme dëshmorëve të atdheut" ("Eternal glory to the martyrs of the fatherland").
Gallery
See also
Tirana
Landmarks in Tirana
Tourism in Albania
Albania
History of Albania
National Martyrs' Cemetery of Albania
External links
Monuments and memorials in Albania
National symbols of Albania
National personifications
Colossal statues
1971 sculptures
Outdoor sculptures in Tirana
1971 establishments in Albania | Mother Albania (statue) | Physics,Mathematics | 257 |
70,855,438 | https://en.wikipedia.org/wiki/Toxungen | A toxungen comprises a secretion or other bodily fluid containing one or more biological toxins that is transferred by one animal to the external surface of another animal via a physical delivery mechanism with or without direct contact between the secreting animal and the victim. Toxungens can be delivered through spitting, spraying, or smearing. As one of three categories of biological toxins, toxungens can be distinguished from poisons, which are passively transferred via ingestion, inhalation, or absorption across the skin, and venoms, which are delivered through a wound generated by direct contact in the form of a bite, sting, or other such action. Toxungen use offers the evolutionary advantage of delivering toxins into the target's tissues without the need for physical contact. Animals that deploy toxungens are referred to as toxungenous.
Taxonomic distribution
Toxungens have evolved in a variety of animals, including flatworms, insects, arachnids, cephalopods, amphibians, and reptiles.
Toxungen use possibly also exists in birds, as a number of species deploy defensive secretions from their stomachs, uropygial glands, or cloacas, and some anoint themselves with heterogeneously acquired chemicals from millipedes, caterpillars, beetles, plant materials, and even manufactured pesticides. Some of the described substances may be toxic, at least to ectoparasites, which would qualify them as toxungens.
Toxungen use might also exist in several mammal groups. Slow lorises (genus Nycticebus), which comprise several species of nocturnal primates in Southeast Asia, produce a secretion in their brachial glands (a scent gland near the armpit) that possesses apparent toxicity. When the secretion is licked and combined with saliva, their bite introduces the mixture into a wound, which can cause sometimes severe tissue injury to conspecifics and other aggressors, thereby functioning as a venom. They can also rub the secretion on their fur or lick their offspring before stashing them in a secure location, in which case it potentially functions as a toxungen. Skunks and several other members of Mephitidae and Mustelidae spray a noxious and potentially injurious secretion from their anal sacs when threatened. High concentrations of the spray can be toxic, with rare accounts of spray victims suffering injury and even death.
Although the extinct theropod Dilophosaurus was portrayed in the original Jurassic Park and Jurassic World Dominion films as capable of spitting a toxic secretion, no evidence exists to suggest that any dinosaur possessed either a toxungen or venom.
Classification of toxin deployment
Some animals use their toxins in multiple ways, and can be classified as poisonous, toxungenous, and/or venomous. Examples include the scorpion Parabuthus transvaalicus, which is both toxungenous (can spray its toxins) and venomous (can inject its toxins), and the snake Rhabdophis tigrinus, which is poisonous (sequesters toad and/or firefly toxins in its nuchal gland tissues that are toxic if consumed by a predator), toxungenous (the nuchal glands are pressurized and can spray the toxins when ruptured), and venomous (toxic oral gland secretions can be injected via the teeth). Even humans can be considered facultatively poisonous, toxungenous, and venomous because they sometimes make use of toxins by all three means for research and development (e.g., biomedical purposes), agriculture (e.g., spraying insecticides), and nefarious reasons (to kill other animals, including humans).
Evolution and function
Toxungen deployment offers a key evolutionary advantage compared to poisons and venoms. Poisons and venoms require direct contact with the target animal, which puts the toxin-possessing animal at risk of injury and death from a potentially dangerous enemy. Evolving the capacity to spit or spray a toxic secretion can reduce this risk by delivering the toxins from a distance.
Toxins used as toxungens can be acquired by several means. Many species synthesize their own toxins and store them within glands, but others acquire their toxins exogenously from other species. Two examples illustrate exogenous acquisition. Snakes of the genus Rhabdophis sequester their nuchal gland toxins from their diet of toads and/or fireflies. Blue-ringed octopuses (genus Hapalochlaena) acquire tetrodotoxin, the highly toxic non-proteinaceous component of their salivary glands that can be ejected into the water to subdue nearby prey, via accumulation from food resources and/or symbiotic tetrodotoxin-producing bacteria.
Toxungens are most commonly used for defensive purposes, but can be used in other contexts as well. Examples of toxungen use for predation include the blue-ringed octopus, which can squirt its secretion into water to immobilize or kill its prey, and ants of the genus Crematogaster that cooperatively subdue their prey by seizing, spread-eagling, and then smearing their toxins onto the prey's surface. Toxungens can also be used for communication and hygiene. Many hymenopterans possess a secretion used as a venom (injected for predation and/or defense) that can also be sprayed to communicate alarm among nestmates, to mark a trail used for food gathering, or to keep their brood free of parasites.
Because of their unique delivery system, toxungens may be chemically adapted to better penetrate body surfaces. Arthropods that spray or smear their secretion onto insect prey enhance toxin penetration by including a spreading agent that additionally enhances toxicity. Some spitting cobras have modified their secretion so that the cardiotoxins are more injurious to eye membranes.
References
Animal physiology
Toxins | Toxungen | Biology,Environmental_science | 1,254 |
56,993,176 | https://en.wikipedia.org/wiki/Diana%20Trujillo | Lady Diana Trujillo Pomerantz (born 1983) is a Colombian-American aerospace engineer at the NASA Jet Propulsion Laboratory. She currently leads the engineering team at JPL responsible for the robotic arm of the Perseverance rover. On February 18, 2021, Trujillo hosted the first ever Spanish-language NASA transmission of a planetary landing, for the Perseverance rover landing on Mars.
Early life and education
Trujillo was born in 1983, in Cali, Colombia. Her mother was a medical student when she got pregnant and had to leave her studies to look after her daughter. Trujillo attended Colegio Internacional Cañaverales, a bilingual school accredited by the International Baccalaureate, formerly the International Baccalaureate Organization (IBO). During her school years, she had an interest in science and questioned the roles that are traditionally associated with women.
Uncertain but determined to overcome the economic difficulties that her family faced in Colombia, Trujillo moved to the United States at the age of seventeen with only $300. In order to improve her language skills, she started English lessons at Miami Dade College while working as a housekeeper, among other jobs.
Trujillo enrolled initially at the University of Florida to pursue studies in aerospace engineering, inspired by a magazine article about the role of women working on aerospace missions and confident in her strong mathematical skills. While studying at the university, she decided to apply for the NASA Academy, becoming the first Hispanic immigrant woman admitted to the program. She was one of the two participants to get a job offer from NASA. During her work at the Academy, she met NASA robotics expert Brian Roberts, who convinced her to move to Maryland with the aim of increasing her chances in the aerospace industry. Trujillo attended the University of Maryland, where she was part of Roberts' research team, focusing on robots in space operations. In 2007, she earned a bachelor's degree in Aerospace Engineering from the University of Maryland. Her story was turned into a children's science book titled "Mars Science Lab - Engineer" by Kari Cornell and Fatima Khan. She was a member of Sigma Gamma Tau.
Career
Trujillo joined NASA in 2007, working at Goddard Space Flight Center on the Constellation program and the Jet Propulsion Laboratory on human and robotic space missions. She has served many roles, including Surface Sampling System Activity Lead and Dust Removal Tool Lead Systems Engineer. She was responsible for ensuring that Curiosity's sampling fulfilled its science objectives dust-free whilst maintaining operational safety. The Dust Removal Tool took her six months to develop, and brushes the dust off the surface of Mars to allow scientists to investigate the surface below. It was used on Curiosity's 151st day on Mars. In 2009 she was appointed telecom systems engineer for the Curiosity rover, responsible for the communications between the spacecraft and scientists on Earth. She has also been Flight Ground Systems Engineer and Vehicle System Testbed Mars Surface Lead. She was at the Jet Propulsion Laboratory when the rover landed on Mars. In 2014, Trujillo was promoted to Mission Lead. That year, she was listed as one of the 20 most influential Latinos in the technology industry.
Trujillo worked as flight director on the Mars 2020 Perseverance Rover robotic arm and in February 2021, she hosted NASA's first Spanish-language planetary landing show.
She has been involved in several initiatives to inspire young women from Latin America and African-American women to pursue careers in science and engineering. She took part in a discussion about Hidden Figures at the University of Southern California alongside Octavia Spencer and Pharrell Williams. She has been a mentor for the Brooke Owens Fellowship, which was co-founded by her husband, Will Pomerantz.
In June 2020, Trujillo was appointed to the Brooke Owens Fellowship's Executive Board. She was awarded the Jet Propulsion Laboratory Bruce Murray Award for Excellence in Education and Public Engagement. She was featured on CBS' 2018 celebration of Women's History Month.
Personal life
Trujillo married Will Pomerantz in 2009. They have two children.
Awards
2011 Shared the STEM Award from the Hispanic Heritage Foundation.
2021 Awarded the rank of Commander (Comendador) in the Order of Boyaca, the highest honor that can be awarded to Colombian citizens for exceptional service to Colombia
2021 Awarded by the Congress of Colombia the order of merit Policarpa Salavarrieta on March 8, 2021.
2019 City of STEM Icon Award.
2017 Named one of Los 22 Más by the Colombian Embassy in the United States.
2017 Bruce Murray Award for Excellence in Education and Public Engagement.
References
External links
Aerospace engineers
Colombian women engineers
University of Maryland, Baltimore alumni
21st-century women engineers
Mars 2020
NASA people
Colombian emigrants to the United States
1983 births
Living people
People from Cali | Diana Trujillo | Engineering | 959 |
75,152,655 | https://en.wikipedia.org/wiki/Parochial%20altruism | Parochial altruism is a concept in social psychology, evolutionary biology, and anthropology that describes altruism towards an in-group, often accompanied by hostility towards an out-group. It is a combination of altruism, defined as behavior done for the benefit of others without direct effect on the self, and parochialism, which refers to having a limited viewpoint. Together, these concepts create parochial altruism, or altruism which is limited in scope to one's in-group. Parochial altruism is closely related to the concepts of in-group favoritism and out-group discrimination. Research has suggested that parochial altruism may have evolved in humans to promote high levels of in-group cooperation, which is advantageous for group survival. Parochial altruism is often evoked to explain social behaviors within and between groups, such as why people are cooperative within their social groups and why they may be aggressive towards other social groups.
History
The concept of parochial altruism was first suggested by Charles Darwin. In his book "The Descent of Man," Darwin observed that competition between groups of the same species and cooperation within groups were important evolutionary forces that influenced human behavior. While Darwin first described the general concept of parochial altruism, the term itself was coined in 2007 by economists Jung-Kyoo Choi and Samuel Bowles.
Following Darwin's initial theories, modern researchers in fields such as evolutionary biology and social psychology began investigating the evolution of group dynamics and altruism. Bowles and fellow economist Herbert Gintis were particularly influential in this work, proposing a co-evolution between warfare and in-group altruism.
In addition to this work on evolution, a set of influential studies conducted with indigenous groups in Papua New Guinea made major contributions to the study of parochial altruism. These studies demonstrated how social norms and behaviors surrounding cooperation are often shaped by parochialism. Specifically, altruistic behaviors were found to be limited to one's own ethnic, racial, or language group. This work revealed that individuals were more likely to protect members of their in-group, even if doing so required aggression toward out-group members.
Definition and characteristics
Parochial altruism refers to a form of altruistic behavior that is exhibited preferentially towards members of one's own group, often accompanied by hostility towards those outside the group. This phenomenon is characterized by a combination of "in-group love" and "out-group hate".
More broadly, altruism can manifest in different forms, ranging from small acts of kindness, like helping a stranger or a friend in need, to more significant sacrifices, such as donating an organ to save another's life. Evolutionary biologists, ethologists, and psychologists have investigated the roots of altruism, suggesting that it may have evolved as a means of enhancing the survival of one's kin (kin selection) or as a strategy to receive a reciprocal benefit from another individual (the norm of reciprocity). Altruism is often contrasted with ethical egoism, the view that individuals should act in their own self-interest. The complexity of human motivation makes the distinction between altruism and self-interest difficult to identify, and this is an ongoing debate within psychology and philosophy alike.
Evolutionary theories
Kin Selection Theory
Kin selection is a theory in evolutionary biology that may offer a foundational framework to help explain the mechanisms underlying parochial altruism. In 1964, evolutionary biologist William Donald Hamilton proposed a theory and mathematical formula, commonly referred to as Hamilton's Rule. The rule posits that evolutionary processes may favor altruistic behaviors when they benefit close genetic relatives, thereby indirectly promoting the transmission of shared genes. Hamilton's Rule is described by the formula C < r × B, where C represents the cost to the altruist, r is the genetic relatedness between the altruist and the receiver, and B is the benefit to the receiver. In essence, kin selection suggests that individuals are more likely to perform altruistic acts if the cost to themselves is outweighed by the benefit to their relatives. It suggests that individuals may be evolutionarily predisposed to exhibit altruistic behaviors towards members of their own group, especially if those group members are close genetic relatives.
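Hamilton's inequality is easy to evaluate numerically. The following minimal sketch (with hypothetical fitness values chosen purely for illustration) applies the rule at two degrees of relatedness:

```python
def altruism_favored(cost: float, relatedness: float, benefit: float) -> bool:
    """Hamilton's Rule: altruistic behavior is favored when C < r * B."""
    return cost < relatedness * benefit

# A hypothetical act costing the altruist 1 fitness unit and conferring
# 3 units on the recipient:
print(altruism_favored(1.0, 0.5, 3.0))    # True: full sibling (r = 0.5)
print(altruism_favored(1.0, 0.125, 3.0))  # False: first cousin (r = 0.125)
```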
Reciprocity
The norm of reciprocity states that people tend to respond to others in the same way that they have been treated. For example, kind and altruistic behavior will be responded to with more kind and altruistic behavior, while unkind and aggressive behavior will be responded to with more unkind and aggressive behavior. This principle, central to the theory of reciprocal altruism introduced by Robert Trivers in 1971, suggests that altruistic behaviors within a group are reciprocated, thereby reinforcing group cohesion and mutual support. This idea has been applied to group cooperation, which suggests that reciprocity is evolutionarily advantageous, particularly in the context of an in-group. Reciprocal altruism extends beyond kin selection, as it benefits individuals based on their previous actions, not just genetic relatedness. Reciprocity has been observed in a wide range of species, indicating its evolutionary advantage in fostering cooperation among non-kin group members. In the context of parochial altruism, the expectation of reciprocity fosters social connection and a sense of mutual obligation that is preferential to the in-group.
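One classical formalization of this norm is the tit-for-tat strategy in the iterated prisoner's dilemma. The sketch below uses the standard textbook payoff values (T=5, R=3, P=1, S=0); the simulation is illustrative and not drawn from any particular study.

```python
# Iterated prisoner's dilemma between two tit-for-tat players: each
# cooperates on the first round, then repeats the partner's previous move.
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(partner_moves):
    """Cooperate first; afterwards copy the partner's last move."""
    return "C" if not partner_moves else partner_moves[-1]

def play(rounds=10):
    seen_by_a, seen_by_b = [], []  # moves each player has observed so far
    score_a = score_b = 0
    for _ in range(rounds):
        a, b = tit_for_tat(seen_by_a), tit_for_tat(seen_by_b)
        pa, pb = PAYOFF[(a, b)]
        score_a, score_b = score_a + pa, score_b + pb
        seen_by_a.append(b)  # A observes B's move, and vice versa
        seen_by_b.append(a)
    return score_a, score_b

print(play())  # (30, 30): mutual cooperation is sustained for all 10 rounds
```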
Co-evolution with war
Evolutionary theorists have suggested that the human capacity for altruism may have co-evolved with warfare. This theory argues that in-group altruism, a core component of parochial altruism, would have increased chances of success in warfare. Groups who were willing to sacrifice for each other would be more cohesive and cooperative, thus conferring advantages in warfare. Ultimately, greater success in warfare would lead to greater genetic success. Conversely, the pressures and demands of warfare may have intensified the need for in-group altruism and exacerbated parochialism. This process may have led to a bidirectional relationship between warfare and parochial altruism, with each element reinforcing the other. The idea of war and altruism being intricately interconnected may also help explain the high frequency of intergroup conflicts observed in ancient human societies.
Group Selection Theory
The idea of parochial altruism may seem counterintuitive from the standpoint of individual-level selection, given that parochialism is often dangerous to the individual. To explain this, theorists often reference group selection theory, which suggests that natural selection operates at the group level, not just among individuals. Specifically, behavior that is beneficial to a group, even if it is costly to an individual, may be selected because it increases the overall survival chances and genetic success of a group. Group selection theory suggests that individual behaviors and decisions may be shaped by the needs of the group. For example, an individual may choose to sacrifice themselves by attacking an out-group, if they perceive a benefit to their in-group. This theory has faced considerable criticism and is not universally accepted in the field.
Third party punishment
Third-party punishment occurs when an individual who was not directly affected by a transgression punishes the transgressor. This form of punishment is influential in maintaining social order and reinforcing group norms, even if it incurs a personal cost to the punisher, and it is an integral component of enforcing social norms across societies. Research on parochial altruism often employs third-party punishment experiments, in which individuals prove more likely to protect norm violators from their in-groups and to punish those from an out-group. This bias in third-party punishment is a hallmark of parochial altruism. Such experiments often use economic games, such as the dictator game or the prisoner's dilemma, to measure punishment. Furthermore, researchers have identified neural mechanisms for social cognition that seem to specifically modulate third-party norm enforcement. One such study found that participants determining punishment for out-group transgressors show greater activity and connectivity in a network of brain regions that modulate sanction-related decisions, while participants determining punishment for in-group transgressors show greater activity and connectivity in brain regions that modulate mentalizing.
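To illustrate how such experiments are typically scored, the sketch below implements a bare-bones dictator game with a third-party punishment stage. The endowments and the 1:3 punishment ratio are common parameter choices in this literature, not values from any specific study.

```python
def third_party_punishment(dictator_keeps, punish_spent,
                           endowment=10, punisher_endowment=5, ratio=3):
    """One dictator-game round followed by third-party punishment.

    The dictator splits `endowment` with a passive recipient; an uninvolved
    third party may pay `punish_spent` points to deduct `ratio` times that
    amount from the dictator. All parameter values are illustrative.
    """
    recipient = endowment - dictator_keeps
    dictator = dictator_keeps - ratio * punish_spent
    punisher = punisher_endowment - punish_spent
    return dictator, recipient, punisher

# A selfish 9/1 split punished with 2 points costs the dictator 6 points:
print(third_party_punishment(dictator_keeps=9, punish_spent=2))  # (3, 1, 3)
```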
Cross-cultural perspectives
Like many psychological phenomena, parochial altruism may manifest uniquely across different cultural contexts. Research has revealed that cultures vary in both the intensity and the expression of in-group favoritism and out-group hostility. These differences are likely the result of norms, societal structures, and historical factors that vary among cultures. Joseph Henrich and colleagues conducted a large-scale research study examining cross-cultural variation in economic and dictator games across 15 small-scale societies. Their studies revealed that economic and social environments influence altruistic behavior towards in-group members. For example, they found that societies with a higher level of market integration and adherence to religion showed more fairness in economic games. This suggests that there is a moral component of altruism that is influenced by culture and is distinct from the in-group and out-group model of parochial altruism. Additionally, theories about the coevolution of parochial altruism and war suggest that social structures and organization may play a role in shaping parochial altruism. Societies with strong clan or tribal affiliations, and particularly those with more frequent conflict, tend to exhibit more pronounced parochial altruism, reinforcing cooperation and unity within the social group. Historical and ecological factors may also influence the extent of parochial altruism within societies. In regions with a history of intergroup conflict or scarce resources that must be fought over, groups may exhibit stronger in-group loyalty and out-group aggression as an adaptive response to the environment.
Psychological and sociological implications
Individual psychology
Parochial altruism influences individuals through its impact on social identity and perception. Social identity theory suggests that individuals derive a sense of self from their group memberships. Parochial altruism can reinforce a social identity when individuals behave more altruistically toward their own in-group. Similarly, in-group favoritism and out-group hostility are central to parochial altruism, and shape how individuals perceive and interact with others. Individuals are more likely to view in-group members as trustworthy and likable, and to view out-group members as suspicious and hostile. Thus, parochial altruism is an example of how group membership shapes individual attitudes and interpersonal dynamics.
Within-group relations
Parochial altruism influences within-group relations by fostering a sense of unity and cooperation among group members. This is achieved through the in-group favoritism that is characteristic of parochial altruism, whereby individuals selectively behave altruistically towards members of their own group. Research on social identity illustrates how these in-group biases reinforce a sense of shared identity and collective goals. Social identity theory further posits that enhanced group cooperation can increase group morale and self-esteem, strengthening the social bonds among group members.
Intergroup relations
Contrary to within-group relations, parochial altruism influences intergroup relations through increased tension and conflict between in-groups and out-groups. This is driven by the out-group hostility component of parochial altruism, where individuals are more likely to punish out-group members and treat them with aggression when compared with in-group members. Research illustrates that these out-group biases that are characteristic of parochial altruism can lead to prejudice, discrimination, and intergroup conflict.
Animal models
The study of parochial altruism extends beyond human societies, with various animal models providing insight into the evolutionary origins and mechanisms of this behavior. In the animal kingdom, parochial altruism has been observed within the context of territorial defense and resource allocation within social groups. For example, chimpanzees have been observed to exhibit behaviors that mirror human parochial altruism, such as defending their group's territory against outsiders and favoring group members in food-sharing and grooming practices. These behaviors are directed towards enhancing the survival of in-group members, similar to the in-group favoritism and out-group hostility characteristic of human parochial altruism. Similar behavior has been observed in vampire bats, who demonstrate reciprocal altruism within their social groups by sharing meals with kin and non-kin group members, but not with other bats.
Criticism and controversy
While the concept of parochial altruism has been influential in explaining social behaviors like in-group altruism and out-group hostility, it has also received criticism. Specifically, the evolutionary basis of parochial altruism has been questioned for the theory's reliance on group selection. Group selection posits that natural selection operates at the group level, favoring traits that are beneficial for the group rather than the individual. This concept contrasts with the traditional and more scientifically supported view of Darwinian selection, which occurs at the individual level and promotes traits beneficial to individual organisms. This debate over group selection is a longstanding issue in evolutionary biology, and group selection theory has faced critiques from scientists such as Richard Dawkins and Steven Pinker, who argue that there is not sufficient evidence to support it. An alternative theory, multi-level selection, was proposed by David Sloan Wilson and Elliott Sober as a modern interpretation of group selection.
Field studies on parochial altruism during conflict have also illustrated the need for a more nuanced understanding of parochial altruism. Researchers conducted studies before, during, and after riots in Northern Ireland, investigating how the conflict influenced real-world measures of cooperation, such as charity and school donations. The findings revealed that conflict was associated with reductions in all types of altruism, including both in-group and out-group, challenging the notion that inter-group conflict unconditionally promotes parochial altruism. Instead, they suggest that conflict may lead to a reduction in all types of cooperation. Critics have argued that the co-evolution of war and altruism is an oversimplification, which also fails to explain peaceful interactions between groups, defensive strategies, and sex differences in parochial altruism.
Future directions
Emerging research seeks to investigate the neural basis of parochial altruism, using modern technologies such as neuroimaging and neurobiological approaches. Studies utilizing functional magnetic resonance imaging (fMRI) have identified specific brain regions that are activated during in-group versus out-group interactions, indicating a potential neural basis for parochial decision-making. Other research studies have examined how neuroendocrine factors, such as oxytocin and testosterone, may influence in-group favoritism and out-group hostility. A study by De Dreu et al. demonstrated that intranasal administration of oxytocin increased in-group trust and cooperation, as well as aggression toward perceived out-group threats. Other studies have illustrated that testosterone is associated with parochial altruism in humans and may modulate the neural systems associated with it.
See also
Altruism
In-group and out-group
In-group favoritism
Social identity theory
Cooperation
Kin selection
Reciprocal altruism
Group selection
Evolutionary game theory
Moral psychology
Intergroup relations
References
Altruism | Parochial altruism | Biology | 3,097 |
9,613,870 | https://en.wikipedia.org/wiki/Susan%20Solomon | Susan Solomon is an American atmospheric chemist, who worked for most of her career at the National Oceanic and Atmospheric Administration (NOAA). In 2011, Solomon joined the faculty at the Massachusetts Institute of Technology, where she serves as the Ellen Swallow Richards Professor of Atmospheric Chemistry & Climate Science. Solomon, with her colleagues, was the first to propose the chlorofluorocarbon free radical reaction mechanism that is the cause of the Antarctic ozone hole. Her most recent book, Solvable: How We Healed the Earth, and How We Can Do It Again (2024), focuses on solutions to current problems, as do books by data scientist Hannah Ritchie, marine biologist Ayana Elizabeth Johnson, and climate scientist Katharine Hayhoe.
Solomon is a member of the U.S. National Academy of Sciences, the European Academy of Sciences, and the French Academy of Sciences.
In 2002, Discover magazine recognized her as one of the 50 most important women in science.
In 2008, Solomon was selected by Time magazine as one of the 100 most influential people in the world. She also serves on the Science and Security Board for the Bulletin of the Atomic Scientists.
Biography
Early life
Solomon was born in Chicago, Illinois. Her interest in science began as a child watching The Undersea World of Jacques Cousteau. In high school she placed third in a national science competition, with a project that measured the percentage of oxygen in a gas mixture.
Solomon received a B.S. degree in chemistry from the Illinois Institute of Technology in 1977. She then received an M.S. in chemistry in 1979 followed by a Ph.D. in 1981 in atmospheric chemistry, both from the University of California, Berkeley.
Personal life
Solomon married Barry Sidwell in 1988. She is Jewish.
Work
Solomon was the head of the Chemistry and Climate Processes Group of the National Oceanic and Atmospheric Administration Chemical Sciences Division until 2011. In 2011, she joined the faculty of the Department of Earth, Atmospheric and Planetary Sciences at the Massachusetts Institute of Technology.
Books
The Coldest March: Scott's Fatal Antarctic Expedition, Yale University Press, 2002 – Depicts the tale of Captain Robert Falcon Scott's failed 1912 Antarctic expedition, specifically applying the comparison of modern meteorological data with that recorded by Scott's expedition in an attempt to shed new light on the reasons for the demise of Scott's polar party.
Aeronomy of the Middle Atmosphere: Chemistry and Physics of the Stratosphere and Mesosphere, 3rd Edition, Springer, 2005 – Describes the atmospheric chemistry and physics of the middle atmosphere.
The Ozone Hole
Solomon, working with colleagues at the NOAA Earth System Research Laboratories, postulated the mechanism whereby the Antarctic ozone hole is created: a heterogeneous reaction of ozone and chlorofluorocarbon-derived free radicals on the surface of ice particles in the high-altitude clouds that form over Antarctica. In 1986 and 1987 Solomon led the National Ozone Expedition to McMurdo Sound, where the team gathered the evidence to confirm the accelerated reactions. Solomon was the sole leader of the expedition, and the only woman on the team. Her team measured levels of chlorine oxide in the atmosphere 100 times higher than expected, released by the decomposition of chlorofluorocarbons by ultraviolet radiation.
Solomon later showed that volcanoes could accelerate the reactions caused by chlorofluorocarbons, and so increase the damage to the ozone layer. Her work formed the basis of the U.N. Montreal Protocol, an international agreement to protect the ozone layer by regulating damaging chemicals. Solomon has also presented some research which suggests that implementation of the Montreal Protocols is having a positive effect.
For her critical contribution to saving the ozone layer, Solomon was a winner of the 2021 Future of Life Award along with Joe Farman and Stephen O. Andersen. Jim Hansen, former Director of the NASA Goddard Institute for Space Studies and Director of Columbia University's Program on Climate Science, Awareness and Solutions said, "In Farman, Solomon and Andersen we see the tremendous impact individuals can have not only on the course of human history, but on the course of our planet's history. My hope is that others like them will emerge in today's battle against climate change." Professor Guus Velders, a climate scientist at Utrecht University said, "Susan Solomon is a deserving recipient of the Future of Life Award. Susan not only explained the processes behind the formation of the ozone hole, she also played an active role as an interface between the science and policy of the Montreal Protocol."
The Coldest March – A book
Drawing on records of English explorer and navy officer Robert Falcon Scott, Solomon also wrote and spoke about Scott's 1911 expedition in The Coldest March: Scott's Fatal Antarctic Expedition to counter a longstanding argument that blamed Scott for his and his crew's demise during that expedition. Scott attributed the party's fate to unforeseen weather conditions – a claim that has been contested by British journalist and author Roland Huntford, who claimed that Scott was a prideful and under-prepared leader. Solomon has defended Scott and said that "modern data side squarely with Scott", describing the weather conditions in 1911 as unusual.
In a voluminous book (778 pages, 150+21 figures, 1444 references, 23 maps, 39 tables and 2 schemes) recently published by the theoretical physicist Krzysztof Sienicki, Chapter 4 examines Susan Solomon's analysis of the Terra Nova Expedition and argues that it contains numerous errors and misrepresentations. Below is a concise summary of the key criticisms:
1. Data Manipulation and Cherry-Picking: Dr. Solomon is accused of selectively presenting temperature data to falsely suggest that Captain Roald Amundsen experienced more favorable conditions than Captain Scott. Specifically, she omitted data points that contradicted her argument, such as temperatures above the long-term mean (pages 174–179 and 227–244),
2. Fabrication of Meteorological Data: The chapter claims that Solomon fabricated temperature data to support her thesis of an "Extreme Cold Snap." She is accused of falsifying temperature trends and extending analysis periods to include unrelated warm days in order to "warm up" the data (pages 248–182 and 702–715),
3. Logical Fallacies: Solomon is critiqued for employing the Gambler’s fallacy, cherry-picking, and affirming the consequent to support her conclusions about the weather conditions faced by Captain Scott (pages 165–198 and 210–229),
4. Misrepresentation of Statistical Methods: Solomon allegedly failed to conduct proper statistical error analysis, hypothesis testing, and probability distribution analysis, which undermines the credibility of her conclusions (pages 192–200 and 700–710),
5. Misinterpretation of Historical Data: Solomon is accused of attributing modern weather station data incorrectly to the conditions of 1912. This includes comparing non-interchangeable geographical locations and inaccurately interpreting automated weather station readings (pages 165–170 and 255–289),
6. Subjective Assessments and Bias: The chapter accuses Solomon of dismissing Captain Scott's responsibility by attributing his failures solely to luck and weather, which is labeled as an overly subjective and biased approach (pages 179–181 and 220–223),
7. Errors in Critical Figures and Tables: The document identifies discrepancies in Solomon's figures and tables, noting that none of them accurately represent the true meteorological data from the Terra Nova expedition (pages 178–211 and 702–711).
For a summary of Solomon's errors and manipulations, see also Chapter 17 (p. 658) and the following sections:
Appendix 2 (p. 658): Errors and Fallacies in Drs. Solomon and Stearns' paper, "On the Role of the Weather in the Deaths of R. F. Scott and his Companions."
Appendix 3 (p. 668): Data Dragging and Fabrication in Dr. Solomon's book, "The Coldest March: Scott's Fatal Antarctic Expedition."
Intergovernmental Panel on Climate Change
Solomon served on the Intergovernmental Panel on Climate Change. She was a contributing author for the Third Assessment Report. She was also co-chair of Working Group I for the Fourth Assessment Report.
Awards
1991 – Henry G. Houghton Award for research in physical meteorology, awarded by the American Meteorological Society
1994 – Solomon Saddle, a snow saddle in Antarctica named in her honor
1994 – Solomon Glacier, an Antarctic glacier named in her honor
1999 – National Medal of Science, awarded by the President of the United States
2000 – Carl-Gustaf Rossby Research Medal, awarded by the American Meteorological Society
2004 – Blue Planet Prize, awarded by the Asahi Glass Foundation
2006 – V. M. Goldschmidt Award
2006 – Inducted into the Colorado Women's Hall of Fame
2007 – William Bowie Medal, awarded by the American Geophysical Union
2007 – Prix Georges Lemaître
2007 – As a member of IPCC, which received half of the Nobel Peace Prize in 2007, she shared a stage receiving the prize with Al Gore (who received the other half).
2008 – Grande Médaille (Great Medal) of the French Academy of Sciences
2008 – Foreign Member of the Royal Society
2008 – Member of the American Philosophical Society
2009 – Volvo Environment Prize, awarded by the Royal Swedish Academy of Sciences
2009 – Inducted into the National Women's Hall of Fame
2010 – Service to America Medal, awarded by the Partnership for Public Service
2012 – Vetlesen Prize, for work on the ozone hole, shared with Jean Jouzel. She was the first woman to receive this prize.
2013 – BBVA Foundation Frontiers of Knowledge Award in the Climate Change category
2015 – Honorary Doctorate (honoris causa) from Brown University.
2017 – Arthur L. Day Prize and Lectureship by the National Academy of Sciences for substantive work in atmospheric chemistry and climate change
2018 – Bakerian Lecture
2018 – Crafoord Prize in Geosciences
2019 – Made one of the members of the inaugural class of the Government Hall of Fame
2021 – On 31 July she was appointed as ordinary Member of the Pontifical Academy of Sciences
2021 – 2021 Future of Life Award (Ozone Layer)
2021 – NAS Award for Chemistry in Service to Society
2023 – Honorary Doctorate from Duke University
2023 – Female Innovator Prize from the VinFuture Foundation
References
External links
Oral History Interview with Susan Solomon. (1997-09-05). American Meteorological Society Oral History Project. UCAR Archives.
1956 births
Living people
American geophysicists
Atmospheric chemists
American women chemists
Illinois Institute of Technology alumni
UC Berkeley College of Chemistry alumni
Carl-Gustaf Rossby Research Medal recipients
Members of the French Academy of Sciences
Foreign members of the Royal Society
Members of the United States National Academy of Sciences
National Oceanic and Atmospheric Administration personnel
National Medal of Science laureates
20th-century American women scientists
21st-century American women scientists
Women geophysicists
20th-century American chemists
21st-century American scientists
Members of Academia Europaea
Intergovernmental Panel on Climate Change contributing authors
Recipients of the V. M. Goldschmidt Award
Vetlesen Prize winners | Susan Solomon | Chemistry | 2,291 |
32,682,492 | https://en.wikipedia.org/wiki/FPGA%20Mezzanine%20Card | FPGA Mezzanine Card (FMC) is an ANSI/VITA (VMEbus International Trade Association) 57.1 standard that defines I/O mezzanine modules with connection to an FPGA or other device with re-configurable I/O capability. It specifies a low profile connector and compact board size for compatibility with several industry standard slot card, blade, low profile motherboard, and mezzanine form factors.
Specifications
The FMC specification defines:
I/O mezzanine modules, which connect to carrier cards
A family of high-speed connectors for I/O mezzanine modules
Supporting up to 10 Gbit/s transmission with adaptively equalized I/O
Supporting single ended and differential signaling up to 2 Gbit/s
Numerous I/O available
The electrical connectivity of the I/O mezzanine module high-speed connector
Supporting a wide range of signaling standards
System configurable I/O functionality
FPGA intimacy
The mechanical properties of the I/O mezzanine module
Minimal size
Scalable from low end to high performance applications
Conduction and ruggedized support
The FMC specification defines two module sizes: single width (69 mm) and double width (139 mm). The depth of both is about 76.5 mm. The FMC mezzanine module uses a high pin count (HPC) 400-pin high-speed array connector. A mechanically compatible low pin count (LPC) connector with 160 pins can also be used with any of the form factors in the standard.
LPC vs. HPC
FMC allows for two sizes of connector, Low Pin Count (LPC) and High Pin Count (HPC), each offering different (maximum) levels of connectivity, analogous to how some PMC boards have a 32-bit interface while others have a 64-bit interface by using an additional connector. "The LPC connector provides 68 user-defined, single-ended signals or 34 user-defined, differential pairs. The HPC connector provides 160 user-defined, single-ended signals (or 80 user-defined, differential pairs), 10 serial transceiver pairs, and additional clocks. The HPC and LPC connectors use the same mechanical connector. The only difference is which signals are actually populated. Thus, cards with LPC connectors can be plugged into HPC sites, and if properly designed, HPC cards can offer a subset of functionality when plugged into an LPC site."
FMC Geographical Address feature
FMC provides a geographical address using two pins (GA1:GA0) that are typically used by a mezzanine device to determine which FMC connector on a carrier it is attached to. For carriers that have only one FMC connector, the default geographical address is 00.

Some FMC mezzanine cards may attach other devices to the I2C bus and address them through a system controller, using the geographical address as a chip select. This is not strictly in adherence with the FMC specification.
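As an illustration, carrier software might fold the geographical address into the I2C address of the mezzanine's FRU EEPROM. The sketch below assumes a 24-series EEPROM whose two low address inputs are wired to GA1:GA0, a common but not universal arrangement; the function name is hypothetical.

```python
def fmc_eeprom_address(ga1: int, ga0: int) -> int:
    """7-bit I2C address of a mezzanine FRU EEPROM, assuming a 24-series
    part (fixed 0b1010 prefix) whose two low address inputs are wired to
    the carrier's GA1:GA0 pins (a common, but not universal, arrangement).
    """
    assert ga1 in (0, 1) and ga0 in (0, 1)
    return 0b1010000 | (ga1 << 1) | ga0

# A carrier with a single FMC site drives GA1:GA0 = 00 by default:
print(hex(fmc_eeprom_address(0, 0)))  # 0x50
```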
See also
VPX
Expansion slot
CRUVI FPGA daughtercard standard with FMC option
References
American National Standards Institute standards | FPGA Mezzanine Card | Technology | 638 |
5,536,187 | https://en.wikipedia.org/wiki/Topopolis | A topopolis is a proposed tube-shaped space habitat, rotating to produce artificial gravity via centrifugal force on the inner surface, which is extended into a loop around the local planet or star. The concept was invented by writer Patrick Gunkel.
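For a sense of the required spin, the inner-surface gravity follows from the centripetal relation g = ω²r. The sketch below applies this to an assumed tube radius of 2 km; both the radius and the 1 g target are illustrative choices, not figures from the sources cited here.

```python
import math

def rotation_period_for_gravity(radius_m: float, g: float = 9.81) -> float:
    """Rotation period (seconds) giving centripetal acceleration g at
    distance radius_m from the spin axis: g = omega^2 * r, T = 2*pi/omega."""
    omega = math.sqrt(g / radius_m)  # angular velocity in rad/s
    return 2 * math.pi / omega

# An assumed 2 km tube radius needs roughly a 90-second rotation for 1 g:
print(round(rotation_period_for_gravity(2_000)))  # ~90
```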
Varieties of topopolises and similar fictional structures
A topopolis has been compared to an O'Neill cylinder, or a McKendree cylinder, that has been extended in length so that it encircles a star. A "normal" topopolis would be hundreds of millions of kilometers long and at least several kilometers in diameter.
A topopolis can be looped several times around the local star, in a geometric figure known as a torus knot. Topopolises are also called cosmic spaghetti.
A topopolis with a large enough diameter could theoretically contain multiple levels of concentric cylinders.
Larry Niven (1974) mentioned the idea in a much-reprinted magazine article "Bigger Than Worlds".
Examples in novels
In Matter, Iain M. Banks (2008) depicts a topopolis that loops around its system's star many times in various braidings, and houses trillions of sapient residents. The topopolis is so massive that stray gases from the system collect, by gravitation alone, within the major spacing of the braids, producing a slight atmosphere between the strands that the author describes as a "haze".
Dennis E. Taylor's 2020 novel Heaven's River features an alien civilization inhabiting a topopolis.
See also
Big dumb object
Ringworld
References
External links
Megastructures
Fictional space stations | Topopolis | Technology | 334 |
52,596,516 | https://en.wikipedia.org/wiki/XML%20log | XML log or XML logging is used by many computer programs to record the program's operations. An XML logfile records a description of the operations performed by a program during its session. The log normally includes: a timestamp, the program's settings during the operation, what was completed during the session, the files or directories used, and any errors that may have occurred. In computing, a logfile records either events that occur in an operating system or in other running software; it may also log messages between different users of a communication program. XML is short for Extensible Markup Language; the XML standard is maintained by the World Wide Web Consortium and serves as the basis for many other data standards (see List of XML markup languages).
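As an illustration, a program might emit one XML record per operation along the lines of the following sketch. The element and attribute names here are hypothetical, since XML logging has no single standardized schema; each program defines its own.

```python
# Minimal sketch of writing one XML log record with the standard library.
# The element and attribute names are hypothetical; XML logging has no
# single standardized schema, so each program defines its own.
import xml.etree.ElementTree as ET
from datetime import datetime, timezone

record = ET.Element("logRecord",
                    timestamp=datetime.now(timezone.utc).isoformat())
ET.SubElement(record, "operation").text = "copy"
ET.SubElement(record, "file").text = "/data/input.csv"
ET.SubElement(record, "status").text = "completed"
ET.SubElement(record, "error").text = ""  # empty when no error occurred

print(ET.tostring(record, encoding="unicode"))
```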
See also
List of XML markup languages
List of XML schemas
Comparison of data serialization formats
Binary XML
EBML
WBXML
XHTML
XML Protocol
References
External links
W3C XML homepage
XML 1.0 Specification
Retrospective on Extended Reference Concrete Syntax by Rick Jelliffe
XML, Java and the Future of the Web (1997) by Jon Bosak
The official W3C Markup Validation Service: http://validator.w3.org/
The XML FAQ originally for the W3C's XML SIG by Peter Flynn
Computer file formats
Computer logging
Markup languages
Open formats | XML log | Technology | 279 |
997,579 | https://en.wikipedia.org/wiki/14%20Herculis | 14 Herculis or 14 Her is a K-type main-sequence star in the constellation Hercules. It is also known as HD 145675. With an apparent magnitude of 6.61, the star can be seen only very faintly with the naked eye. As of 2021, 14 Herculis is known to host two exoplanets in orbit around the star.
Stellar components
14 Herculis is an orange dwarf star of the spectral type K0V. The star has about 98 percent of the mass, 97 percent of the radius, and only 67 percent of the luminosity of the Sun. The star appears to be 2.7 times as enriched with elements heavier than hydrogen (based on its abundance of iron) as the Sun, and it may have been the most metal-rich star known as of 2001.
Planetary system
In 1998 a planet, 14 Herculis b, was discovered orbiting 14 Herculis via the radial velocity method; the discovery was formally published in 2003. The planet has an eccentric orbit with a period of 4.8 years. In 2005, a possible second planet was proposed, designated 14 Herculis c. The parameters of this planet were very uncertain, but an initial analysis suggested that it was in a 4:1 resonance with the inner planet, with an orbital period of almost 19 years at an orbital distance of 6.9 AU. The existence of 14 Herculis c was confirmed in 2021, along with a rough orbit determination.
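The quoted orbital distance for the outer planet is roughly consistent with Kepler's third law, as the following sketch shows using the stellar mass given above and neglecting the planet's own mass:

```python
def semi_major_axis_au(period_years: float, stellar_mass_msun: float) -> float:
    """Kepler's third law in solar units: a^3 = P^2 * M, planet mass neglected."""
    return (period_years ** 2 * stellar_mass_msun) ** (1 / 3)

# A ~19-year period around a 0.98 solar-mass star gives roughly 7 AU,
# in line with the ~6.9 AU quoted for 14 Herculis c:
print(round(semi_major_axis_au(19, 0.98), 1))  # ~7.1
```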
A 2021 study combining radial velocity and astrometry found that the planetary orbits are not coplanar, which may indicate a strong planet-planet scattering event in the past. Subsequent astrometric studies have found differing results; a 2022 study found inclinations consistent with aligned orbits, while a 2023 study again found misaligned orbits. The latter study also found signs of a third candidate planet with a period of about 10 years, but this signal is most likely related to the star's magnetic activity cycle.
Direct imaging of the outer planet 14 Herculis c with the James Webb Space Telescope is planned.
See also
47 Ursae Majoris
List of stars in Hercules
Lists of exoplanets
References
External links
Herculis, 014
Hercules (constellation)
145675
079248
0614
BD+44 2549
K-type main-sequence stars
Planetary systems with two confirmed planets | 14 Herculis | Astronomy | 484 |
44,595,611 | https://en.wikipedia.org/wiki/Hj%C3%A4rtats%20hj%C3%A4ltar | Hjärtats hjältar ("The Heroes of the Heart") was the 2004 edition of Sveriges Radio's Christmas Calendar.
Plot
Julius and Juliana are twins. Julius wants to become a better ice hockey player. He practices a lot, not even caring about Christmas. Worried, Juliana visits the school nurse Vera, who says practicing too hard is dangerous. Vera has invented a shrinking machine, which she shows Juliana. Juliana is shrunk down and enters Julius's body, travelling inside a small yellow, submarine-like vehicle.
References
2004 radio programme debuts
2004 radio programme endings
Fiction about size change
Sports fiction
Sveriges Radio's Christmas Calendar | Hjärtats hjältar | Physics,Mathematics | 134 |
4,746,264 | https://en.wikipedia.org/wiki/Uranium%20pentafluoride | Uranium pentafluoride is the inorganic compound with the chemical formula UF5. It is a pale yellow paramagnetic solid. The compound has attracted interest because it is related to uranium hexafluoride, which is widely used to produce uranium fuel. It crystallizes in two polymorphs, called α- and β-UF5.
Synthesis and structure
Uranium pentafluoride is an intermediate in the conversion of uranium tetrafluoride to volatile UF6:
2UF4 + F2 → 2UF5
2UF5 + F2 → 2UF6
It can be produced by reduction of the hexafluoride with carbon monoxide at elevated temperatures.
2UF6 + CO → 2UF5 + COF2
Other reducing agents have been examined.
The α form is a linear coordination polymer consisting of chains of octahedral uranium centers in which one of the five fluoride anions forms a bridge to the next uranium atom. The structure is reminiscent of that of vanadium pentafluoride.
In the β form, the uranium centers adopt a square antiprismatic structure. The β polymorph gradually converts to α at 130 °C.
Monomeric UF5
Of theoretical interest, molecular UF5 can be generated as a transient monomer by UV-photolysis of uranium hexafluoride. It is thought to adopt a square pyramidal geometry.
References
Uranium(V) compounds
Nuclear materials
Fluorides
Actinide halides
Inorganic polymers
Coordination polymers | Uranium pentafluoride | Physics,Chemistry | 317 |
7,202,520 | https://en.wikipedia.org/wiki/Aluminium%E2%80%93lithium%20alloys | Aluminium–lithium alloys (Al–Li alloys) are a set of alloys of aluminium and lithium, often also including copper and zirconium. Since lithium is the least dense elemental metal, these alloys are significantly less dense than aluminium. Commercial Al–Li alloys contain up to 2.45% lithium by mass.
Crystal structure
Alloying with lithium reduces structural mass by three effects:
Displacement A lithium atom is lighter than an aluminium atom; each lithium atom then displaces one aluminium atom from the crystal lattice while maintaining the lattice structure. Every 1% by mass of lithium added to aluminium reduces the density of the resulting alloy by 3% and increases the stiffness by 5% (a rough calculation applying this rule is sketched after this list). This effect works up to the solubility limit of lithium in aluminium, which is 4.2%.
Strain hardening Introducing another type of atom into the crystal strains the lattice, which helps block dislocations. The resulting material is thus stronger, which allows less of it to be used.
Precipitation hardening When properly aged, lithium forms a metastable Al3Li phase (δ') with a coherent crystal structure. These precipitates strengthen the metal by impeding dislocation motion during deformation. The precipitates are not stable, however, and care must be taken to prevent overaging with the formation of the stable AlLi (β) phase. This also produces precipitate free zones (PFZs) typically at grain boundaries and can reduce the corrosion resistance of the alloy.
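A back-of-the-envelope application of the displacement rule above (3% density reduction and 5% stiffness gain per 1% lithium by mass, valid only up to the 4.2% solubility limit) might look like the following sketch; the baseline figures for pure aluminium are standard handbook values.

```python
def al_li_estimates(li_mass_pct: float, al_density=2.70, al_modulus=70.0):
    """Rule-of-thumb estimates for a binary Al-Li alloy: each 1% Li by mass
    cuts density by ~3% and raises stiffness by ~5%. Baselines are standard
    handbook values for pure aluminium (2.70 g/cm^3, 70 GPa)."""
    if not 0 <= li_mass_pct <= 4.2:
        raise ValueError("rule of thumb holds only up to ~4.2% Li")
    density = al_density * (1 - 0.03 * li_mass_pct)   # g/cm^3
    modulus = al_modulus * (1 + 0.05 * li_mass_pct)   # GPa
    return density, modulus

# A 2.45% Li alloy, the commercial maximum quoted above:
print(al_li_estimates(2.45))  # (~2.50 g/cm^3, ~78.6 GPa)
```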
The crystal structures of Al3Li and Al–Li, while based on the FCC crystal system, are very different. Al3Li shows almost the same-size lattice structure as pure aluminium, except that lithium atoms are present in the corners of the unit cell. The Al3Li structure is known as the AuCu3, L12, or Pm3̄m structure and has a lattice parameter of 4.01 Å. The Al–Li structure is known as the NaTl, B32, or Fd3̄m structure, in which the lithium and aluminium sublattices each assume diamond structures, and it has a lattice parameter of 6.37 Å. The interatomic spacing for Al–Li (3.19 Å) is smaller than in either pure lithium or aluminium.
Usage
Al–Li alloys are primarily of interest to the aerospace industry for their weight advantage. On narrow-body airliners, Arconic (formerly Alcoa) claims up to 10% weight reduction compared to composites, leading to up to 20% better fuel efficiency, at a lower cost than titanium or composites. Aluminium–lithium alloys were first used in the wings and horizontal stabilizer of the North American A-5 Vigilante military aircraft. Other Al–Li alloys have been employed in the lower wing skins of the Airbus A380, the inner wing structure of the Airbus A350, the fuselage of the Bombardier CSeries (where the alloys make up 24% of the fuselage), the cargo floor of the Boeing 777X, and the fan blades of the Pratt & Whitney PurePower geared turbofan aircraft engine. They are also used in the fuel and oxidizer tanks in the SpaceX Falcon 9 launch vehicle, Formula One brake calipers, and the AgustaWestland EH101 helicopter.
The third and final version of the US Space Shuttle's external tank was principally made of Al–Li 2195 alloy. In addition, Al–Li alloys are also used in the Centaur Forward Adapter in the Atlas V rocket, in the Orion Spacecraft, and were to be used in the planned Ares I and Ares V rockets (part of the cancelled Constellation program).
Al–Li alloys are generally joined by friction stir welding. Some Al–Li alloys, such as Weldalite 049, can be welded conventionally; however, this property comes at the price of density: Weldalite 049 has about the same density as 2024 aluminium and a 5% higher elastic modulus. Al–Li is also produced in very wide rolls, which can reduce the number of joins.
Although aluminium–lithium alloys are generally superior to aluminium–copper or aluminium–zinc alloys in ultimate strength-to-weight ratio, their poor fatigue strength under compression remains a problem, which is only partially solved as of 2016. Also, high costs (around 3 times or more than for conventional aluminium alloys), poor corrosion resistance, and strong anisotropy of mechanical properties of rolled aluminium–lithium products has resulted in a paucity of applications.
Al-Li alloy powder is used in the production of lightweight sporting goods, including bicycles, tennis rackets, golf clubs, and baseball bats. Its high strength combined with reduced weight significantly enhances performance, speed, and maneuverability. It is also used in the automobile industry as body panels, chassis parts, and suspension components.
List of aluminium–lithium alloys
Aside from its formal four-digit designation derived from its element composition, an aluminium–lithium alloy is also associated with a particular generation, based primarily on when it was first produced and secondarily on its lithium content. The first generation lasted from the initial background research in the early 20th century to the alloys' first aircraft applications in the middle of the 20th century. Consisting of alloys that were meant to replace the popular 2024 and 7075 alloys directly, the second generation of Al–Li had a high lithium content of at least 2%; this characteristic produced a large reduction in density but resulted in some negative effects, particularly in fracture toughness. The third generation is the current generation of Al–Li product that is available, and it has gained wide acceptance by aircraft manufacturers, unlike the previous two generations. This generation has reduced lithium content, of 0.75–1.8%, to mitigate those negative characteristics while retaining some of the density reduction.
First-generation alloys (1920s–1960s)
Second-generation alloys (1970s–1980s)
Third-generation alloys (1990s–2010s)
Other alloys
1424 aluminium alloy
1429 aluminium alloy
1441K aluminium alloy
1445 aluminium alloy
V-1461 aluminium alloy
V-1464 aluminium alloy
V-1469 aluminium alloy
V-1470 aluminium alloy
2094 aluminium alloy
2095 aluminium alloy (Weldalite 049)
2097 aluminium alloy
2197 aluminium alloy
8025 aluminium alloy
8091 aluminium alloy
8093 aluminium alloy
CP 276
Production sites
Key world producers of aluminium–lithium alloy products are Arconic, Constellium, and Kamensk-Uralsky Metallurgical Works.
Arconic Technical Center (Upper Burrell, Pennsylvania, USA)
Arconic Lafayette (Indiana, USA); annual capacity of aluminium–lithium, capable of casting round and rectangular ingot for rolled, extruded, and forged applications
Arconic Kitts Green (United Kingdom)
Rio Tinto Alcan Dubuc Plant (Canada); capacity
Constellium Issoire (Puy-de-Dôme), France; annual capacity of
Kamensk-Uralsky Metallurgical Works (KUMZ)
Aleris (Koblenz, Germany)
FMC Corporation - FMC spun off its lithium division into Livent, which has now (2024) merged to form Arcadium (https://arcadiumlithium.com/)
Southwest Aluminium (PRC)
See also
Aluminium alloy
Magnesium–lithium alloys
GLARE
Carbon fiber reinforced plastic (CFRP)
References
Bibliography
External links
Lithium
Lithium | Aluminium–lithium alloys | Chemistry | 1,515 |
2,842,330 | https://en.wikipedia.org/wiki/Essential%20complexity | Essential complexity is a numerical measure defined by Thomas J. McCabe, Sr., in his highly cited, 1976 paper better known for introducing cyclomatic complexity. McCabe defined essential complexity as the cyclomatic complexity of the reduced CFG (control-flow graph) after iteratively replacing (reducing) all structured programming control structures, i.e. those having a single entry point and a single exit point (for example if-then-else and while loops) with placeholder single statements.
McCabe's reduction process is intended to simulate the conceptual replacement of control structures (and the actual statements they contain) with subroutine calls, hence the requirement for the control structures to have a single entry and a single exit point. (Nowadays a process like this would fall under the umbrella term of refactoring.) All structured programs evidently have an essential complexity of 1 as defined by McCabe, because they can all be iteratively reduced to a single call to a top-level subroutine. As McCabe explains in his paper, his essential complexity metric was designed to provide a measure of how far off this ideal (of being completely structured) a given program is. Thus essential complexity numbers greater than 1, which can only be obtained for non-structured programs, indicate that such programs are further away from the structured programming ideal.
To avoid confusion between various notions of reducibility to structured programs, it's important to note that McCabe's paper briefly discusses and then operates in the context of a 1973 paper by S. Rao Kosaraju, which gave a refinement (or alternative view) of the structured program theorem. The seminal 1966 paper of Böhm and Jacopini showed that all programs can be [re]written using only structured programming constructs, (aka the D structures: sequence, if-then-else, and while-loop), however, in transforming a random program into a structured program additional variables may need to be introduced (and used in the tests) and some code may be duplicated.
In their paper, Böhm and Jacopini conjectured, but did not prove, that it was necessary to introduce such additional variables for certain kinds of non-structured programs in order to transform them into structured programs. An example of a program that (we now know) requires such additional variables is a loop with two conditional exits inside it. In order to address the conjecture of Böhm and Jacopini, Kosaraju defined a more restrictive notion of program reduction than the Turing equivalence used by Böhm and Jacopini. Essentially, Kosaraju's notion of reduction imposes, besides the obvious requirement that the two programs must compute the same value (or not finish) given the same inputs, that the two programs must use the same primitive actions and predicates, the latter understood as expressions used in the conditionals. Because of these restrictions, Kosaraju's reduction does not allow the introduction of additional variables; assigning to these variables would create new primitive actions, and testing their values would change the predicates used in the conditionals. Using this more restrictive notion of reduction, Kosaraju proved Böhm and Jacopini's conjecture, namely that a loop with two exits cannot be transformed into a structured program without introducing additional variables, and went further, proving that programs containing multi-level breaks (from loops) form a hierarchy: for every n, one can find a program with multi-level breaks of depth n that cannot be reduced to a program with multi-level breaks of depth less than n, again without introducing additional variables.
McCabe notes in his paper that, in view of Kosaraju's results, he intended to find a way to capture the essential properties of non-structured programs in terms of their control-flow graphs. He proceeds by first identifying the control-flow graphs corresponding to the smallest non-structured programs (these include branching into a loop, branching out of a loop, and their if-then-else counterparts), which he uses to formulate a theorem analogous to Kuratowski's theorem. Thereafter he introduces his notion of essential complexity in order to give a scale answer (a "measure of the structuredness of a program" in his words) rather than a yes/no answer to the question of whether a program's control-flow graph is structured or not. Finally, the notion of reduction used by McCabe to shrink the CFG is not the same as Kosaraju's notion of reducing flowcharts: the reduction defined on the CFG does not know or care about the program's inputs; it is simply a graph transformation.
For example, the following C program fragment has an essential complexity of 1, because the inner if statement and the for loop can each be reduced; i.e., it is a structured program.
for (i = 0; i < 3; i++) {
if (a[i] == 0) b[i] += 2;
}
The following C program fragment has an essential complexity of four; its CFG is irreducible. The program finds the first row of z which is all zero and puts that index in i; if there is none, it puts -1 in i.
for (i = 0; i < m; i++) {
for (j = 0; j < n; j++) {
if (z[i][j] != 0)
goto non_zero;
}
goto found;
non_zero:
}
i = -1;
found:
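For contrast, here is a sketch of a structured rewrite of the same search. Exactly as Kosaraju's result predicts for a loop with two exits, the rewrite introduces additional variables (the names first_zero_row, result, and all_zero are chosen here purely for illustration):

/* Structured version: returns the index of the first all-zero row of
   z[m][n], or -1 if there is none; its essential complexity is 1. */
int first_zero_row(int m, int n, int z[m][n]) {
    int result = -1;                       /* extra variable: remembers the answer */
    for (int i = 0; i < m && result == -1; i++) {
        int all_zero = 1;                  /* extra variable: replaces the goto */
        for (int j = 0; j < n && all_zero; j++) {
            if (z[i][j] != 0)
                all_zero = 0;              /* record the exit condition instead of jumping */
        }
        if (all_zero)
            result = i;
    }
    return result;
}

Every control structure above has a single entry and a single exit, so the whole function iteratively reduces to a single node; the price is that the new flag variables now appear in the loop predicates, which is precisely what Kosaraju's notion of reduction forbids.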
The idea of CFG reducibility by successive collapses of sub-graphs (ultimately to a single node for well-behaved CFGs) is also used in modern compiler optimization. However the notion from structured programming of single-entry and single-exit control structure is replaced with that of natural loop, which is defined as a "single-entry, multiple-exit loop, with only a single branch back to the entry from within it". The areas of the CFG that cannot be reduced to natural loops are called improper regions; these regions end up having a fairly simple definition: multiple-entry, strongly connected components of the CFG. The simplest improper region is thus a loop with two entry points. Multiple exits do not cause analysis problems in modern compilers. Improper regions (multiple-entries into loops) do cause additional difficulties in optimizing code.
See also
History of software engineering
Decision-to-decision path
Cyclomatic complexity
References
Software project management
Software metrics | Essential complexity | Mathematics,Engineering | 1,345 |
59,129,761 | https://en.wikipedia.org/wiki/Prothrombin%20fragment%201%2B2 | Prothrombin fragment 1+2 (F1+2), also written as prothrombin fragment 1.2 (F1.2), is a polypeptide fragment of prothrombin (factor II) generated by the in vivo cleavage of prothrombin into thrombin (factor IIa) by the enzyme prothrombinase (a complex of factor Xa and factor Va). It is released from the N-terminus of prothrombin. F1+2 is a marker of thrombin generation and hence of coagulation activation. It is considered the best marker of in vivo thrombin generation.
F1+2 levels can be quantified with blood tests and are used in the diagnosis of hyper- and hypocoagulable states and in the monitoring of anticoagulant therapy. F1+2 was initially determined with a radioimmunoassay, but is now measured with several enzyme-linked immunosorbent assays.
The molecular weight of F1+2 is around 41 to 43 kDa. Its biological half-life is 90 minutes and it persists in blood for a few hours after formation. The half-life of F1+2 is relatively long, which makes it more reliable for measuring ongoing coagulation than other markers like thrombin–antithrombin complexes and fibrinopeptide A. Concentrations of F1+2 in healthy individuals range from 0.44 to 1.11 nM.
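To make the half-life figure concrete, here is a small illustrative calculation assuming idealized first-order elimination (a simplification of real clearance kinetics):

#include <math.h>
#include <stdio.h>

int main(void) {
    const double half_life_h = 1.5;  /* 90 minutes */
    /* Fraction of F1+2 remaining after t hours under first-order decay. */
    for (int t = 1; t <= 6; t++) {
        double remaining = pow(0.5, t / half_life_h);
        printf("after %d h: %.1f%% remains\n", t, remaining * 100.0);
    }
    return 0;
}

After six hours, i.e. four half-lives, only about 6% of the fragment remains, consistent with it persisting in blood for a few hours.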
F1+2 levels increase with age. Levels of F1+2 have been reported to be elevated in venous thromboembolism, protein C deficiency, protein S deficiency, atrial fibrillation, unstable angina, acute myocardial infarction, acute stroke, atherosclerosis, peripheral arterial disease, and in smokers. Anticoagulants have been found to reduce F1+2 levels. F1+2 levels are increased with pregnancy and by ethinylestradiol-containing birth control pills. Conversely, they do not appear to be increased with estetrol- or estradiol-containing birth control pills. However, F1+2 levels have been reported to be increased with oral estrogen-based menopausal hormone therapy, whereas transdermal estradiol-based menopausal hormone therapy appears to result in less or no consistent increase.
References
Blood tests
Coagulation system | Prothrombin fragment 1+2 | Chemistry,Biology | 498 |
74,751,551 | https://en.wikipedia.org/wiki/CoRoT-26b | CoRoT-26b is a gas giant exoplanet that orbits a G-type star, CoRoT-26. It has a mass of 0.52 Jupiters, takes 4.2 days to complete one orbit of its star, and is 0.0526 AU from its star. Its discovery was announced in 2013.
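As a hedged consistency check (an illustration, not a figure from the discovery paper), Kepler's third law in solar units, M ≈ a³/P², lets one back out the approximate mass of the host star from the quoted orbit:

#include <stdio.h>

int main(void) {
    const double a_au = 0.0526;           /* semi-major axis, AU */
    const double p_yr = 4.2 / 365.25;     /* orbital period, years */
    /* Kepler's third law with the planet's mass neglected:
       M_star = a^3 / P^2, in solar masses. */
    double m_star = (a_au * a_au * a_au) / (p_yr * p_yr);
    printf("implied stellar mass: ~%.2f solar masses\n", m_star);
    return 0;
}

The result, roughly 1.1 solar masses, is consistent with a G-type host star.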
References
Exoplanets discovered in 2013
Exoplanets discovered by CoRoT
Transiting exoplanets
Ophiuchus | CoRoT-26b | Astronomy | 96 |
40,276,418 | https://en.wikipedia.org/wiki/LitSat-1 | LitSat-1 was one of the first two Lithuanian satellites (the other being Lituanica SAT-1). It was launched aboard the second Cygnus spacecraft, along with 28 Flock-1 CubeSats, on an Antares 120 carrier rocket flying from Pad 0B at the Mid-Atlantic Regional Spaceport on Wallops Island. The launch was scheduled for December 2013, but was rescheduled to, and took place on, 9 January 2014. The satellite was deployed from the International Space Station via the NanoRacks CubeSat Deployer on February 28, 2014. Three Lithuanian words were broadcast from space: "Lietuva myli laisvę" ("Lithuania loves freedom"). The launch of the satellites Lituanica SAT-1 and LitSat-1 was broadcast live in Lithuania.
On 6 March 2014 the satellite radio station of Kaunas University of Technology (KTU) established a two-way connection with LitSat-1 for the first time.
References
External links
Official website
2014 in Lithuania
Spacecraft launched in 2014
Satellites orbiting Earth
First artificial satellites of a country
Satellites of Lithuania
Student satellites
Satellites deployed from the International Space Station | LitSat-1 | Astronomy | 229 |
911,658 | https://en.wikipedia.org/wiki/Reserve%20requirement | Reserve requirements are central bank regulations that set the minimum amount that a commercial bank must hold in liquid assets. This minimum amount, commonly referred to as the commercial bank's reserve, is generally determined by the central bank on the basis of a specified proportion of deposit liabilities of the bank. This rate is commonly referred to as the cash reserve ratio or shortened as reserve ratio. Though the definitions vary, the commercial bank's reserves normally consist of cash held by the bank and stored physically in the bank vault (vault cash), plus the amount of the bank's balance in that bank's account with the central bank. A bank is at liberty to hold in reserve sums above this minimum requirement, commonly referred to as excess reserves.
In some areas, such as the euro area and the UK, a tightening of reserve requirements in the home country has been found to be associated with higher lending by foreign branches. For this reason, the reserve ratio is sometimes used by a country's monetary authority as a tool in monetary policy, to influence the country's money supply by limiting or expanding the amount of lending by the banks. Monetary authorities increase the reserve requirement only after careful consideration, because an abrupt change may cause liquidity problems for banks with low excess reserves; they generally prefer to use other monetary policy instruments to implement their monetary policy. In many countries (Brazil, China, India, and Russia being exceptions), reserve requirements are generally not altered frequently in implementing a country's monetary policy because of the short-term disruptive effect on financial markets. In several countries, including the United States, there are today zero reserve requirements.
Policy objective
One of the critical functions of a country's central bank is to maintain public confidence in the banking system, as under a fractional-reserve banking system banks are not expected to hold cash to cover all deposits liabilities in full. One of the mechanisms used by most central banks to further this objective is to set a reserve requirement to ensure that banks have, in normal circumstances, sufficient cash on hand in the event that large deposits are withdrawn, which may precipitate a bank run. The central bank in some jurisdictions, such as the European Union, does not require reserves to be held during the day, while in others, such as the United States, the central bank does not set a reserve requirement at all.
Bank deposits are usually of a relatively short-term duration, and may be “at call”, while loans made by banks tend to be longer-term, resulting in a risk that customers may at any time collectively wish to withdraw cash out of their accounts in excess of the bank reserves. The reserves only provide liquidity to cover withdrawals within the normal pattern. Banks and the central bank expect that in normal circumstances only a proportion of deposits will be withdrawn at the same time, and that the reserves will be sufficient to meet the demand for cash. However, banks routinely find themselves in a shortfall situation or may experience an unexpected bank run, when depositors wish to withdraw more funds than the reserves held by the bank. In that event, the bank experiencing the liquidity shortfall may routinely borrow short-term funds in the interbank lending market from banks with a surplus. In exceptional situations, the central bank may provide funds to cover the short-term shortfall as lender of last resort. When the bank liquidity problem exceeds the central bank’s desire to continue as "lender of last resort", as happened during the global financial crisis of 2007-2008, the government may try to restore confidence in the banking system, for example, by providing government guarantees.
Effects on money supply
Textbook view
Many textbooks describe a system in which reserve requirements act as a tool of a country's monetary policy, though these descriptions bear little resemblance to reality, and many central banks impose no such requirements. The requirement commonly assumed in textbooks is 10%, though almost no central bank, and no major central bank, actually imposes such a ratio.
With higher reserve requirements, there would be fewer funds available to banks for lending. Under this view, the money multiplier compounds the effect of bank lending on the money supply. The multiplier effect on the money supply is governed by the following formulas:
M1 = MB × m : the definitional relationship between the monetary base MB (bank reserves plus currency held by the non-bank public) and the narrowly defined money supply M1;
m = (1 + c) / (c + R) : the derived formula for the money multiplier m, the factor by which lending and re-lending leads M1 to be a multiple of the monetary base;
where, notationally,
c = the currency ratio: the ratio of the public's holdings of currency (undeposited cash) to the public's holdings of demand deposits; and
R = the total reserve ratio (the ratio of legally required plus non-required reserve holdings of banks to demand deposit liabilities of banks).
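A minimal numerical sketch of this textbook formula (the 20% currency ratio and 10% reserve ratio are assumed purely for illustration, not any central bank's actual policy):

#include <stdio.h>

int main(void) {
    const double c = 0.20;  /* currency ratio (assumed for illustration) */
    const double R = 0.10;  /* total reserve ratio (assumed for illustration) */
    double m = (1.0 + c) / (c + R);   /* textbook money multiplier */
    double mb = 100.0;                /* monetary base, arbitrary units */
    printf("m = %.2f, so M1 = %.0f from a base of %.0f\n", m, mb * m, mb);
    return 0;
}

With these assumed ratios the multiplier is 4, turning a base of 100 into an M1 of 400.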
This limit on the money supply does not apply in the real world.
Endogenous money view
Central banks dispute the money multiplier theory of the reserve requirement and instead consider money as endogenous. See endogenous money.
Jaromir Benes and Michael Kumhof of the IMF Research Department report that the "deposit multiplier" of the undergraduate economics textbook, where monetary aggregates are created at the initiative of the central bank, through an initial injection of high-powered money into the banking system that gets multiplied through bank lending, turns the actual operation of the monetary transmission mechanism on its head. Benes and Kumhof assert that in most cases where banks ask for replenishment of depleted reserves, the central bank obliges. Under this view, reserves therefore impose no constraints, as the deposit multiplier is simply, in the words of Kydland and Prescott (1990), a myth. Under this theory, private banks almost fully control the money creation process.
Required reserves
China
The People's Bank of China uses changes in the reserve requirement as an inflation-fighting tool, and raised the reserve requirement ten times in 2007 and eleven times since the beginning of 2010.
India
The Reserve Bank of India uses changes in the cash reserve ratio (CRR) as a liquidity management tool, and hiked it alongside the statutory liquidity ratio (SLR) to navigate the 2008 financial crisis. The RBI has also introduced, and later withdrawn, an incremental cash reserve ratio (I-CRR) over and above the CRR for managing liquidity.
Countries and districts without reserve requirements
Canada, the UK, New Zealand, Australia, Sweden and Hong Kong have no reserve requirements.
This does not mean that banks can—even in theory—create money without limit. On the contrary, banks are constrained by capital requirements, which are arguably more important than reserve requirements even in countries that have reserve requirements.
A commercial bank's overnight reserves are not permitted to become negative. The central bank will step in to lend a bank funds if necessary so that this does not happen. Historically, a central bank might have run out of reserves to lend to banks with liquidity problems and so had to suspend redemptions, but this can no longer happen to modern central banks because of the end of the gold standard worldwide, which means that all nations use a fiat currency.
A zero reserve requirement cannot be explained by a theory that holds that monetary policy works by varying the quantity of money using the reserve requirement.
Even in the United States, which retained formal reserve requirements until 2020, the notion of controlling the money supply by targeting the quantity of base money fell out of favor many years ago, and now the pragmatic explanation of monetary policy refers to targeting the interest rate to control the broad money supply. (See also Regulation D (FRB).)
United Kingdom
In the United Kingdom, commercial banks are called clearing banks with direct access to the clearing system.
The Bank of England, the central bank for the United Kingdom, previously set a voluntary reserve ratio rather than a minimum reserve requirement. In theory, this meant that commercial banks could retain zero reserves. The average cash reserve ratio across the entire United Kingdom banking system, though, was higher during that period, at about 0.15%.
From 1971 to 1980, the commercial banks all agreed to a reserve ratio of 1.5%. In 1981 this requirement was abolished.
From 1981 to 2009, each commercial bank set out its own monthly voluntary reserve target in a contract with the Bank of England. Both shortfalls and excesses of reserves relative to the commercial bank's own target over an averaging period of one day would result in a charge, incentivising the commercial bank to stay near its target, a system known as reserves averaging.
Upon the parallel introduction of quantitative easing and interest on excess reserves in 2009, banks were no longer required to set out a target, and so were no longer penalised for holding excess reserves; indeed, they were proportionally compensated for holding all their reserves at the Bank Rate (the Bank of England now uses the same interest rate for its bank rate, its deposit rate and its interest rate target). In the absence of an agreed target, the concept of excess reserves does not really apply to the Bank of England any longer, so it is technically incorrect to call its new policy "interest on excess reserves".
Canada
Canada abolished its reserve requirement in 1992.
Australia
Australia abolished "statutory reserve deposits" in 1988, which were replaced with 1% non-callable deposits.
United States
In the Thomas Amendment to the Agricultural Adjustment Act of 1933, the Fed was granted the authority to set reserve requirements jointly with the president as one of several provisions that sought to mitigate or prevent deflation. The power was granted to the Fed, without presidential consent, in the Banking Act of 1935. Under the International Banking Act of 1978, the same reserve ratios would apply to branches of foreign banks operating in the United States.
The United States removed reserve requirements for nonpersonal time deposits and eurocurrency liabilities on Dec 27, 1990 and for net transaction accounts on March 27, 2020, thus eliminating reserve requirements altogether. Before that, the Board of Governors of the Federal Reserve System used to set reserve requirements (“liquidity ratio”) based on categories of deposit liabilities ("Net Transaction Accounts" or "NTAs") of depository institutions, such as commercial banks including U.S. branches of a foreign bank, savings and loan association, savings bank, and credit union. For a time, checking accounts were subject to reserve requirements, whereas there was no reserve requirement on savings accounts and time deposit accounts of individuals. The Board for some time set a zero reserve requirement for banks with eligible deposits up to , 3% for banks up to , and 10% thereafter. The total removal of reserve requirements followed the Federal Reserve's shift to an "ample-reserves" system, in which the Federal Reserve Banks pay member banks interest on excess reserves held by them.
The total amount of all NTAs held by customers with U.S. depository institutions, plus the U.S. paper currency and coin currency held by the nonbank public, is called M1.
Reserve requirements by country
The reserve ratios set in each country and district vary. The following list is non-exhaustive:
See also
Bank regulation
Basel accords
Capital requirement
Capital adequacy ratio
Criticism of the Federal Reserve
Excess reserves
Financial repression
Fractional-reserve banking
Full-reserve banking
Great Contraction
Islamic banking
Monetary policy of central banks
Money creation
Money supply
Negative interest on excess reserves
Statutory liquidity ratio
Tier 1 capital
Tier 2 capital
References
External links
Title 12 of the Code of Federal Regulations (12CFR) Part 204--Reserve Requirements of Depository Institutions (Regulation D) (See Section §204.4 for current reserve requirements.)
Reserve Requirements - Fedpoints - Federal Reserve Bank of New York (May 2007)
Reserve Requirements - The Federal Reserve Board
Hussman Funds - Why the Federal Reserve is Irrelevant - August 2001
Don't mention the reserve ratio
Banking
Monetary policy
Financial ratios
Financial economics
Capital requirement | Reserve requirement | Mathematics | 2,393 |
36,338,734 | https://en.wikipedia.org/wiki/Chromium%20hydride | Chromium hydrides are compounds of chromium and hydrogen, and possibly other elements. Intermetallic compounds with not-quite-stoichiometric quantities of hydrogen exist, as well as highly reactive molecules. When present at low concentrations, hydrogen and certain other elements alloyed with chromium act as softening agents that enable the movement of dislocations that would otherwise not occur in the crystal lattice of chromium atoms.
The hydrogen in typical chromium hydride alloys may contribute only a few hundred parts per million in weight at ambient temperatures. Varying the amount of hydrogen and other alloying elements, and their form in the chromium hydride either as solute elements, or as precipitated phases, expedites the movement of dislocations in chromium, and thus controls qualities such as the hardness, ductility, and tensile strength of the resulting chromium hydride.
Material properties
Even in the narrow range of concentrations that make up chromium hydride, mixtures of hydrogen and chromium can form a number of different structures, with very different properties. Understanding such properties is essential to making quality chromium hydride. At room temperature, the most stable form of pure chromium is the body-centered cubic (BCC) structure α-chromium. It is a fairly hard metal that can dissolve only a small concentration of hydrogen.
Chromium hydride can occur as a dull brown or dark grey solid in two different crystalline forms: face-centered cubic, with formula CrH~2, or close-packed hexagonal, with formula CrH~1. Chromium hydride is important in chrome plating, being an intermediate in the formation of the chromium plate.
An apparently unusual allotrope of chromium in a hexagonal crystal form was investigated by Ollard and Bradley by X-ray crystallography; however, they failed to notice that it contained hydrogen.
The hexagonal close packed crystalline substance they discovered actually contains CrHx with x between 0.5 and 1. The lattice for the hexagonal form had unit cell dimensions a=0.271 nm and c=0.441 nm. The crystal form has been described as anti-NiAs structure and is known as the β-phase. Also known as ε-CrH, the space group is Fmm with hydrogen only in octahedral sites.
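As a small consistency check on the quoted lattice constants (an illustrative calculation, not a figure from the original crystallographic papers), the axial ratio can be compared with the ideal close-packed value:

#include <math.h>
#include <stdio.h>

int main(void) {
    const double a_nm = 0.271, c_nm = 0.441;   /* quoted unit cell dimensions */
    double axial = c_nm / a_nm;                /* measured c/a ratio */
    double ideal = sqrt(8.0 / 3.0);            /* ideal hcp c/a, about 1.633 */
    printf("c/a = %.3f vs ideal hcp %.3f\n", axial, ideal);
    return 0;
}

The ratio, about 1.627, sits close to the ideal hexagonal close-packed value of 1.633, consistent with the close-packed description of the phase.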
A face-centered cubic (fcc) phase of chromium hydride can also be produced when chromium is electrodeposited. Cloyd A. Snavely used chromate in sugar syrup cooled to about 5 °C and with a current density of 1290 amperes per square meter. The unit cell dimension in the material was 0.386 nm. The material is brittle and easily decomposed by heat. The composition is CrHx, with x between 1 and 2. For current density above 1800 amps per square meter and at low temperatures, the hexagonal close-packed form was made, but if the current was lower or the temperature was higher, then regular body-centered cubic chromium metal was deposited. The condition for preferring the formation of face-centered cubic chromium hydride is a high pH. The fcc form of CrH has hydrogen atoms in octahedral sites in the P63/mmc spacegroup.
Face-centered cubic CrH had the composition CrH1.7, but in theory it would be CrH2 if the substance were pure and all the tetrahedral sites were occupied by hydrogen atoms. The solid substance CrH2 appears as a dull grey or brown colour. Its surface is easily scratched, but that is due to the brittleness of the hydride.
Face-centered cubic chromium hydride also forms temporarily when chromium metal is etched with hydrochloric acid.
The hexagonal form spontaneously changes to normal chromium in 40 days, whereas the other form (face-centered cubic) changes to the body-centered cubic form of chromium in 230 days at room temperature. Ollard had already noticed that hydrogen is evolved during this transformation, but was not sure that the hydrogen was an essential component of the substance, as electrodeposited chromium usually contained hydrogen. Colin G. Fink observed that if the hexagonal form was heated in a flame, the hydrogen would quickly burn off.
Electroplating chromium metal from a chromate solution involves the formation of chromium hydride. If the temperature is high enough, the chromium hydride rapidly decomposes as it forms, yielding microcrystalline body-centered cubic chromium. Therefore, to ensure that the hydride decomposes sufficiently rapidly and smoothly, chromium must be plated at a suitably high temperature (roughly 60 °C to 75 °C, depending on conditions). As the hydride decomposes, the plated surface cracks; the cracking can be controlled, and there may be up to 40 cracks per millimeter. Substances on the plating surface, mostly chromium sesquioxide, are sucked into the cracks as they form. The cracks heal over, and newer electroplated layers crack differently. When observed with a microscope, the electroplated chromium appears to be in the form of crystals with 120° and 60° angles, but these are the ghosts of the original hydride crystals; the actual crystals that finally form in the coating are much smaller and consist of body-centered cubic chromium.
Superhexagonal chromium hydride has also been produced by exposing chromium films to hydrogen under high pressure and temperature.
In 1926 T. Weichselfelder and B. Thiede claimed to have prepared solid chromium trihydride by reacting hydrogen with chromium chloride and phenylmagnesium bromide in ether, forming a black precipitate.
Solid hexagonal CrH can burn in air with a bluish flame. It is ignitable with a burning match.
Related alloys
The hydrogen content of chromium hydride is between zero and a few hundred parts per million in weight for plain chromium-hydrogen alloys. These values vary depending on alloying elements, such as iron, manganese, vanadium, titanium and so on.
Alloys with significantly more than a few hundred parts per million hydrogen content can be formed, but require extraordinarily high pressures to be stable. Under such conditions, the hydrogen content may contribute up to 0.96% of the alloy's weight, at which point it reaches what is called a line compound phase boundary. As the hydrogen content moves beyond the line compound boundary, the chromium–hydrogen system ceases to behave as an alloy and instead forms a series of non-metallic stoichiometric compounds, each succeeding one requiring still higher pressure for stability. The first such compound found is dichromium hydride (), where the chromium-to-hydrogen ratio is 1/0.5, corresponding to a hydrogen content of 0.96%. Two of these compounds are metastable at ambient pressures, meaning that they decompose over extended lengths of time rather than instantaneously; of the two, chromium(I) hydride is several times more stable. Both compounds are stable at cryogenic temperatures, persisting indefinitely, although precise details are not known.
Other materials are often added to the chromium–hydrogen mixture to produce chromium hydride alloys with desired properties. Titanium in chromium hydride makes the β-chromium form of the chromium–hydrogen solution more stable.
References
Further reading
Chromium alloys
Metal hydrides | Chromium hydride | Chemistry | 1,624 |
39,859,865 | https://en.wikipedia.org/wiki/Salford%20Acoustics | Salford Acoustics offers acoustics and audio engineering courses, undertakes public and industrial research in acoustics, carries out commercial testing, and undertakes activities to engage the public in acoustic science and engineering. It is based in two locations: (i) 3 km west of Manchester city centre, UK, in the Newton Building on the Peel Park Campus of the University of Salford, and (ii) on the banks of the Manchester Ship Canal in Manchester at MediaCityUK.
History and current structure
The first acoustic laboratories were established in Salford in 1965; in the early 1970s the Department of Applied Acoustics was formed. In 1996 the university merged with University College Salford and a Department of Acoustic and Audio Engineering was formed. A couple of years later, this joined with another department to form Acoustic and Electronic Engineering. Finally, the university twice reduced the number of schools in the organisation. Salford Acoustics first joined the School of Computing, Science and Engineering and later this was merged into the School of Science, Engineering and Environment. Research work comes under the auspices of the Acoustics Research Centre.
Programmes
The Department of Applied Acoustics first taught an undergraduate degree in 1975, namely the BSc (Hons) in Electroacoustics, later renamed the BEng (Hons) Acoustics. In 1993, Salford Acoustics set up the BEng (Hons) in Audio Technology. These two undergraduate degrees are now taught under a single banner, BEng Audio Acoustics, with two pathways representing the different interests of the cohort. Salford Acoustics has also taught master's degrees in acoustic engineering and audio for many decades, currently offering an MSc in Audio Acoustics and an MSc in Environmental Acoustics. The Acoustics Research Centre offers master's and doctoral research degrees.
Research
Rating
In REF2021, the feedback from the Engineering Panel (UoA12) noted, ‘outstanding impact demonstrated … live sports audio’. The Acoustics Research Centre achieved the top research rating of 6* in RAE 2001 as part of the Research Institute for the Built and Human Environment's submission to Unit Of Assessment 30, Architecture and the Built Environment. In 2008, the RAE submission including the Acoustics Research Centre finished top of Research Fortnight’s ‘Research Power’ table for Architecture & the Built Environment. 90% of the research was graded at international standard and 25% at world-leading.
Sub-disciplines
Research is carried out in the following sub-disciplines of acoustic engineering and science
Archaeoacoustics
Architectural and building acoustics
Audio signal processing
Auralization
Electroacoustics
Environmental noise
Noise control
Outdoor sound propagation
Psychoacoustics
Remote sensing using sound
Sound reproduction
Soundscapes
Surround sound systems
Vibration and dynamics
Public engagement
Examples of public engagement work include:
The search for the Worst Sound in the World. Engineering and Physical Sciences Research Council GrantRef:EP/D000068/1.
Development of extensive curriculum materials on physics and acoustics for schools (EPSRC GrantRefs:GR/S23919/01, EP/D507030/1, P/D054729/1, EP/E033806/1, EP/G020116/1)
The search for the Sonic Wonders of the World
Laboratories
Most of Salford's Acoustics and Audio Laboratories are based on the Peel Park campus, but some are at MediaCityUK:
Audio production suites
Radio studios
Recording studios
Anechoic chamber
2x Semi-anechoic chambers
Reverberation chamber
Transmission suite
Listening room
Commercial work
Salford Acoustics is a calibration and test house for construction, government, military, audio R&D and the motor industry.
Current staff
Awards
Notable staff
Trevor Cox, (Professor of Acoustic Engineering and Broadcaster)
Olga Umnova
Alumni and Former Staff
The following past members of Salford Acoustics have been President of the Institute of Acoustics:
Teli Chinelis, Acoustician and Expert Witness with Finch Consulting Ltd,
Theo Hutchcraft, Hurts
Dr Guy Nicholson, Applications Manager at Apple
Tom Wrigglesworth, stand-up comedian
Nick Zacharov, co-author of Perceptual Audio Evaluation
Velma Allen, Director, Technical Publications at Citrix Systems
Mark Bailey, Director of Sales, EMEA, at QSC Audio Products, LLC
Asa Beattie, Senior Engineer at Technicolor
Tony Churnside, Creative Technologist at BBC, Technical. Director at The Radiophonic Workshop
Kelvin Griffiths, Company Director at Electroacoustic Design Ltd
Ian Bromilow, Principal at Vanguardia Consulting
Rachel Canham, Partner, WBM Consultants in Noise & Vibration
Chris Chittock, managing director at Dragonfly Acoustics Ltd
Richard Collman, managing director at Acoustical Control Engineers Ltd
Matt Desborough, Director, Content Services (EMEA) at Dolby Laboratories
Chris Dilworth, Director (Acoustics) at AWN Consulting
Matthew Dore, Senior Manager, Sound and Acoustics at Philips Consumer Lifestyle
Ian Etchells, Principal Consultant at Red Acoustics Limited
Matthew Hyden, Principal Consultant at Temple Group
Daniel Goodhand, Owner of Goodhand Acoustics
Dr Tony Jones, managing director AIRO
Sam Liston, Director at F1 Sound Company Limited
Paul Malpas, Director at Engineered Acoustic Design Ltd
Andrew Marchant, Principal Engineer at HiWave Technologies plc
Richard Metcalfe, Global Product Line Management at Harman Consumer Group International
Rick Methold, Director at Southdowns Environmental - Consultants in Acoustics, Noise and Vibration
Robert Miller, Director at F1 Acoustics Company Limited
Derek Nash, managing director at Acoustics Central
Chris Needham, Senior Software Engineer at BBC Research and Development
Rohan Ramadorai, Principal at Atkins Limited
Andrew Parkin, Acoustics Partner at Cundall
Richard Perkins, Technical Director - Acoustics at PB
Martin Raisborough, Technical Director at WSP Group
Russell Richardson, Director at RBA Acoustics
Darren Rose, Senior R&D Specialist, Electronics at Genelec Oy
Mark Scaife, Head of Acoustics - Middle East at WSP Group
Richard Sherwood, Director at Sound Reduction Systems Ltd
Simon Shilton, Director, Acustica Ltd
Vicky Stewart, Principal Acoustician at Atkins
Martin Stone, Senior Software Engineer at the BBC
Phil Stollery, Global Product Marketing Manager at Brüel & Kjær,
Tim Stubbs, managing director at PCB Piezotronics
Ryan Swales, Director at RS Acoustic Engineering Ltd
James Trow, associate director at AMEC Environment and Infrastructure UK Ltd
Susan Witterick, Director at dBx Acoustics Limited
See also
University of Salford
References
External links
Acoustics
Audio engineering schools
University of Salford
Articles containing video clips | Salford Acoustics | Physics,Engineering | 1,334 |
446,836 | https://en.wikipedia.org/wiki/360%20%28number%29 | 360 (three hundred [and] sixty) is the natural number following 359 and preceding 361.
In mathematics
360 is the 13th highly composite number and one of only seven numbers such that no number less than twice as much has more divisors; the others are 1, 2, 6, 12, 60, and 2520 .
360 is also the 6th superior highly composite number, the 6th colossally abundant number, a refactorable number, a 5-smooth number, and a Harshad number in decimal since the sum of its digits (9) is a divisor of 360.
360 is divisible by the number of its divisors (24), and it is the smallest number divisible by every natural number from 1 to 10, except 7. Furthermore, one of the divisors of 360 is 72, which is the number of primes below it.
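These divisibility facts are easy to verify mechanically; a short illustrative check in C:

#include <stdio.h>

int main(void) {
    int n = 360, divisors = 0;
    /* Count divisors of 360 by trial division; the total is 24. */
    for (int d = 1; d <= n; d++)
        if (n % d == 0)
            divisors++;
    printf("360 has %d divisors\n", divisors);
    /* 360 is divisible by every integer from 1 to 10 except 7. */
    for (int k = 1; k <= 10; k++)
        printf("360 %% %d == %d\n", k, n % k);
    return 0;
}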
360 is the sum of twin primes (179 + 181) and the sum of four consecutive powers of three (9 + 27 + 81 + 243).
The sum of Euler's totient function φ(x) over the first thirty-four integers is 360.
360 is a triangular matchstick number.
360 is the product of the first two unitary perfect numbers, 6 and 60: 6 × 60 = 360.
There are 360 even permutations of 6 elements. They form the alternating group A6.
A turn is divided into 360 degrees for angular measurement; a 360° angle is also called a round angle. This unit choice divides round angles into equal sectors measured in integer rather than fractional degrees, and many angles commonly appearing in planimetrics have an integer number of degrees. For a simple, non-self-intersecting quadrilateral, the sum of the internal angles always equals 360 degrees.
Integers from 361 to 369
361
centered triangular number, centered octagonal number, centered decagonal number, member of the Mian–Chowla sequence. There are also 361 positions on a standard 19 × 19 Go board.
362
sum of squares of divisors of 19 (1 + 361 = 362), Mertens function returns 0, nontotient, noncototient.
363
364
tetrahedral number, sum of twelve consecutive primes (11 + 13 + 17 + 19 + 23 + 29 + 31 + 37 + 41 + 43 + 47 + 53), Mertens function returns 0, nontotient.
It is a repdigit in bases three (111111), nine (444), twenty-five (EE), twenty-seven (DD), fifty-one (77), and ninety (44); the sum of six consecutive powers of three (1 + 3 + 9 + 27 + 81 + 243); and the twelfth non-zero tetrahedral number.
365
365 is the number of days in a common year.
366
sphenic number, Mertens function returns 0, noncototient, number of complete partitions of 20, 26-gonal and 123-gonal. There are also 366 days in a leap year.
367
367 is a prime number, Perrin number, happy number, prime index prime and a strictly non-palindromic number.
368
It is a Leyland number: 3^5 + 5^3 = 243 + 125 = 368.
369
References
Sources
Wells, D. (1987). The Penguin Dictionary of Curious and Interesting Numbers (p. 152). London: Penguin Group.
External links
Integers | 360 (number) | Mathematics | 710 |
24,107,029 | https://en.wikipedia.org/wiki/C15H12O5 |
The chemical formula C15H12O5 (molar mass: 272.25 g/mol, exact mass: 272.068473 u) may refer to:
Butein, a chalcone
Butin (molecule), a flavanone
Garbanzol, a flavanonol
Glycinol (pterocarpan)
Griseoxanthone C, a xanthone
Naringenin, a flavanone
Naringenin chalcone, a chalcone
Pinobanksin, a dihydroflavonol
Thunberginol C, an isocoumarin
Thunberginol G, an isocoumarin | C15H12O5 | Chemistry | 161 |
2,561,743 | https://en.wikipedia.org/wiki/Balm%20of%20Gilead | Balm of Gilead was a rare perfume used medicinally that was mentioned in the Hebrew Bible and named for the region of Gilead, where it was produced. The expression stems from William Tyndale's language in the King James Bible of 1611 and has come to signify a universal cure in figurative speech. The tree or shrub producing the balm is commonly identified as Commiphora gileadensis. However, some botanical scholars have concluded that the actual source was a terebinth tree in the genus Pistacia.
History
Hebrew Bible
In the Bible, balsam is designated by various names: (bosem), (besem), (ẓori), נָטָף (nataf), which all differ from the terms used in rabbinic literature.
After having cast Joseph into a pit, his brothers noticed a caravan on its way from Gilead to Egypt, "with their camels bearing spicery, and balm, and myrrh" (Gen. ). When Jacob dispatched his embassy into Egypt, his present to the unknown ruler included "a little balm" (Gen. ). During the final years of the Kingdom of Judah, Jeremiah asks "Is there no balm in Gilead?" (Jer. 8:22). Still later, from an expression in Ezekiel , balm was one of the commodities which Hebrew merchants carried to the market of Tyre. According to 1 Kings 10:10, balsam (Hebrew: bosem) was among the many precious gifts of the Queen of Sheba to King Solomon.
Greco-Roman
In the later days of Jewish history, the neighborhood of Jericho was believed to be the only spot where the true balsam grew, and even there its culture was confined to two gardens, the one twenty acres in extent, the other much smaller (Theophrastus).
According to Josephus, the Queen of Sheba brought "the root of the balsam" as a present to King Solomon (Ant. 8.6.6).
In describing Palestine, Tacitus says that in all its productions it equals Italy, besides possessing the palm and the balsam (Hist. 5:6); and the far-famed tree excited the cupidity of successive invaders. By Pompey it was exhibited in the streets of Rome as one of the spoils of the newly conquered province in 65 BCE; and one of the wonderful trees graced the triumph of Vespasian in 79 CE. During the invasion of Titus, two battles took place at the balsam groves of Jericho, the last being to prevent the Jews in their despairing frenzy from destroying the trees. Then they became public property, and were placed under the protection of an imperial guard; but history does not record how long the two plantations survived.
According to Pliny (Hist. Nat. 12:54), the balsam-tree was indigenous only to Judea, but known to Diodorus Siculus (3:46) as a product of Arabia also. In Palestine, praised by other writers also for its balsam (Justinus, 36:3; Tacitus, Hist. 5:6; Plutarchus, Vita Anton. c. 36; Florus, Epitome bellorum 3.5.29; Dioscorides, De materia medica 1:18) this plant was cultivated in the environs of Jericho (Strabo, 16:763; Diodorus Siculus 2:48; 19:98), in gardens set apart for this use (Pliny, Hist. Nat. 12:54; see Josephus, Ant. 14.4.1; 15.4.2; War 1.6.6); and after the destruction of the state of Judea, these plantations formed a lucrative source of the Roman imperial revenue (see Diodorus Siculus 2:48).
Pliny distinguishes three different species of this plant; the first with thin, capillaceous leaves; the second a crooked scabrous shrub; and the third with smooth rind and of taller growth than the two former. He tells us that, in general, the balsam plant, a shrub, has the nearest resemblance to the grapevine, and its mode of cultivation is almost the same. The leaves, however, more closely resemble those of the rue, and the plant is an evergreen. Its height does not exceed two cubits. From slight incisions made very cautiously into the rind (Josephus, Ant. 14.4.1; War 1.6.6) the balsam trickles in thin drops, which are collected with wool into a horn, and then preserved in new earthen jars. At first it is whitish and pellucid, but afterwards it becomes harder and reddish. That is considered to be the best quality which trickles before the appearance of the fruit. Much inferior to this is the resin pressed from the seeds, the rind, and even from the stems (see Theophrastus, Hist. Plant. 9:6; Strabo 16:763; Pausanias 9.28.2). This description, which is not sufficiently characteristic of the plant itself, suits for the most part the Egyptian balsam-shrub found by Belon in a garden near Cairo. The plant, however, is not indigenous to Egypt, but the layers are brought there from Arabia Felix; Prosperus Alpinus has published a plate of it.
Dioscorides (De materia medica) attributes many medical properties to balsam, such as expelling menstrual flow; being an abortifacient; moving the urine; assisting breathing and conception; being an antidote for aconitum and snakebite; treating pleurisy, pneumonia, cough, sciatica, epilepsy, vertigo, asthma, and gripes (sharp bowel pains).
In the era of Galen, who flourished in the second century, and travelled to Palestine and Syria purposely to obtain a knowledge of this substance, it grew in Jericho and many other parts of the Holy Land.
Rabbinic literature
The terms used in rabbinic literature are different from those used in the Hebrew Bible: (kataf), (balsam), (appobalsamon), and (afarsemon).
In the Talmud, balsam appears as an ointment which was a highly praised product of the Jericho plain. However, its main use was as a topical medication rather than as a cosmetic. Rav Yehudah composed a special blessing for balsam: "Who creates the oil of our land". Young women used it as a perfume to seduce young men. After King Josiah hid away the holy anointing oil, balsam oil was used in its stead. In the messianic era, the righteous will "bathe in 13 rivers of balsam".
Christian
The Christian rite of confirmation is conferred through the anointing with chrism, which is traditionally a blend of olive oil and balsam. Balm seems to have been used everywhere for chrism at least from the sixth century.
Arab
The balsam, carried originally, says Arab tradition, from Yemen by the Queen of Sheba, as a gift to Solomon, and planted by him in the gardens of Jericho, was brought to Egypt by Cleopatra, and planted at Ain-Shemesh (Ain Shams), in a garden which all the old travellers, Arab and Christian, mention with deep interest.
The Egyptian town of Ain Shams was renowned for its balsam garden, which was cultivated under the supervision of the government. During the Middle Ages the balsam tree is said to have grown only there, though formerly it had also been a native plant in Syria. According to a Coptic tradition known also by the Muslims, it was in the spring of Ayn Shams that Mary, the mother of Jesus, washed the swaddling clothes of the latter on her way back to Judaea after her flight to Egypt. From that time onwards, the spring was beneficent, and during the Middle Ages balsam-trees could only produce their precious secretion on land watered by it. The story is reminiscent of Christian legends about the Fountain of the Virgin in Jerusalem.
Prosper Alpinus relates that forty plants were brought by a governor of Cairo to the garden there, and ten remained when Belon travelled in Egypt, but only one existed in the 18th century. By the 19th century, there appeared to be none.
Modern
The German botanist Schweinfurth (1836–1925) claimed to have reconstructed the ancient process of balsam production.
At present the tree Commiphora gileadensis grows wild in the valley of Mecca where it is called . Many strains of this species are found, some in Somalia and Yemen.
Lexicon
Hebrew tsori
In the Hebrew Bible, the balm of Gilead is tsori or tseri ( or ). It is a merchandise in Gen. 37:25 and Ez. 27:17, a gift in Gen. 43:11, and a medicament (for national disaster, in fig.) in Jer. 8:22, 46:11, 51:8. The Hebrew root z-r-h () means "run blood, bleed" (of vein), with cognates in Arabic (, an odoriferous tree or its gum), Sabaean (), Syriac (, possibly fructus pini), and Greek (, in meaning). The similar word tsori () denotes the adjective "Tyrean", i. e. from the Phoenician city of Tyre.
Many attempts have been made to identify the tsori, but none can be considered conclusive. The Samaritan Pentateuch (Gen. 37:25) and the Syriac bible (Jer. 8:22) translate it as wax (cera). The Septuagint has , "pine resin". The Arabic version and Castell hold it for theriac. Lee supposes it to be "mastich". Luther and the Swedish version have "salve", "ointment" in the passages in Jer., but in Ezek. 27:17 they read "mastic". Gesenius, Hebrew commentators (Kimchi, Junius, Tremellius, Deodatius), and the Authorized Version (except in Ezek. 27:17, rosin) have balm, balsam, Greek , Latin .
Hebrew nataph
Besides the tsori, another Hebrew word, nataph (), mentioned in Ex. 30:34, as an ingredient of the holy incense, is taken by Hebrew commentators for opobalsamum; this, however, is perhaps rather stacte.
Hebrew bosem
Another Hebrew word, (), Aramaic (), Arabic (), appears in various forms throughout the Hebrew Bible. It is usually translated as "spice, perfume, sweet odour, balsam, balsam-tree". The Greek βάλσαμον can be interpreted as a combination of the Hebrew words (בַּעַל) "lord; master; the Phoenician god Baal" and shemen (שֶׁמֶן) "oil", thus "Lord of Oils" (or "Oil of Baal").
Greek balsamon
Greek authors use the words (Theophrastus, Aristotle) for the balsam plant and its resin, while Galen, Nicander and the Geoponica consider it an aromatic herb, like mint. The word is probably Semitic. ὁπο-βάλσᾰμον (Theophrastus) is the juice of the balsam tree. βαλσαμίνη (Dioscorides) is the balsam plant. Palladius names it βάλσαμος and also has βαλσαμουργός, a preparer of balsam. Related are ξῠλο-βάλσᾰμον (Dioscorides, Strabo) "balsam-wood", and καρπο-βάλσᾰμον (Galen) "the fruit of the balsam".
Latin balsamum
Latin authors use (Tacitus, Pliny, Florus, Scribonius Largus, Celsus, Columella, Martialis) for the balsam tree and its branches or sprigs, as well as for its resin, opobalsamum (Pliny, Celsus, Scribonius Largus, Martialis, Statius, Juvenal) for the resinous juice of the balsam tree, and xylobalsamum (Pliny, Scribonius Largus, Celsus) for balsam wood, all derived from Greek.
Plants
Assuming that the tsori was a plant product, several plants have been proposed as its source.
Mastic
Celsius (in Hierobotanicon) identified the tsori with the mastic tree, Pistacia lentiscus L. The Arabic name of this plant is or , which is identical with the Hebrew . Rauwolf and Pococke found the plant occurring at Joppa.
Zukum
Rosenmüller, among others, thought that the pressed juice of the fruit of the zukum-tree (Elaeagnus angustifolia L.), or the myrobalanus of the ancients, is the substance denoted; but Rosenmüller, in another place, mentioned the balsam of Mecca (Amyris opobalsamum L., now Commiphora gileadensis (L.) C.Chr.) as being probably the tsori. Zukum oil was in very high esteem among the Arabs, who even preferred it to the balm of Mecca, as being more efficacious in wounds and bruises. Maundrell found zukum-trees near the Dead Sea. Hasselquist and Pococke found them especially in the environs of Jericho. In the 19th century, the only product in the region of Gilead which had any affinity to balm or balsam was a species of Eleagnus.
Terebinth
Bochart strongly contended that the balm mentioned in Jer. 8:22 could not possibly be that of Gilead, and considered it as the resin drawn from the terebinth. The Biblical terebinth is Hebrew eloh (), Pistacia terebinthus L.
Pine
The Greek word ῥητίνη, used in the Septuagint for translating tsori, denotes a resin of the pine, especially Pinus maritima (πεύκη). The Aramaic tserua () has been described as the fruit of Pinus pinea L., but it has also been held for stacte or storax. The Greek is a species of Pinaceae Rich.
Cancamon
The lexicographer Bar Seroshewai considered the Arabic (), a tree of Yemen known as () or (), Syriac (), Greek , Latin cancamum, mentioned by Dioscorides (De materia medica 1.32) and Pliny (Hist. Nat. 12.44; 12.98). Cancamon has been held for Commiphora kataf, but also as Aleurites laccifer (Euphorbiaceae), Ficus spec. (Artocarpeae), and Butea frondosa (Papilionaceae).
Sanskrit kunkuma () is saffron (Crocus sativus).
Balm of Mecca
Peter Forsskål (1732–1763) found the plant occurring between Mecca and Medina. He considered it to be the genuine balsam-plant and named it Amyris opobalsamum Forsk. (together with two other varieties, Amyris kataf Forsk. and Amyris kafal Forsk.). Its Arabic name is or , which is identical to the Hebrew or . Bruce found the plant occurring in Abyssinia. In the 19th century it was discovered in the East Indies also.
Linnaeus distinguished two varieties: Amyris gileadensis L. (= Amyris opobalsamum Forsk.), and Amyris opobalsamum L., the variant found by Belon in a garden near Cairo, brought there from Arabia Felix. More recent naturalists (Lindley, Wight and Walker) have included the species Amyris gileadensis L. in the genus Protium. Botanists enumerate sixteen balsamic plants of this genus, each exhibiting some peculiarity.
There is little reason to doubt that the plants of the Jericho balsam gardens were stocked with Amyris gileadensis L., or Amyris opobalsamum, which was found by Bruce in Abyssinia, the fragrant resin of which is known in commerce as the "balsam of Mecca". According to De Sacy, the true balm of Gilead (or Jericho) has long been lost, and there is only "balm of Mecca".
The accepted name of the balsam plant is Commiphora gileadensis (L.) Christ., synonym Commiphora opobalsamum.
Cedronella
Cedronella canariensis, a perennial herb in the mint family, is also known as Balm of Gilead, or Herb of Gilead.
Flammability
Balsam oil was too volatile and flammable to be used as fuel. In the Talmud, a case is cited of a woman planning and carrying out the murder of her daughter-in-law by telling her to adorn herself with balsam oil and then light the lamp (Shab. 26a).
According to the 13th-century (?) Liber Ignium (Book of Fires), balsam was an ingredient of ancient incendiaries akin to Greek fire.
References
External links
Incendiary weapons
Perfume ingredients
Traditional medicine
Resins
Biblical archaeology
Ethnobotany
Gilead
Queen of Sheba | Balm of Gilead | Physics | 3,806 |
1,216,779 | https://en.wikipedia.org/wiki/Geranic%20acid | Geranic acid, or 3,7-dimethyl-2,6-octadienoic acid, is a pheromone used by some organisms. It is a double bond isomer of nerolic acid.
Pharmacology
Choline geranate (also described as Choline And Geranic acid, or CAGE) has been developed as a novel biocompatible antiseptic material capable of penetrating skin and aiding the transdermal delivery of co-administered antibiotics.
The antibacterial properties of CAGE were analyzed against 24- and 72-hour-old biofilms of 11 clinically isolated ESKAPE pathogens (defined, respectively, as Enterococcus faecium, Staphylococcus aureus, Klebsiella pneumoniae, Acinetobacter baumannii, Pseudomonas aeruginosa, and Enterobacter spp.), including multidrug-resistant (MDR) isolates.
CAGE was observed to eradicate in vitro biofilms at concentrations as low as 3.56 mM (0.156% v:v) in as little as 2 hours, representing both improved potency and a faster rate of biofilm eradication relative to those reported for most common standard-of-care topical antiseptics in current use. In vitro time-kill studies on 24-hour-old Staphylococcus aureus biofilms indicate that CAGE exerts its antibacterial effect upon contact: a 0.1% v:v solution reduced biofilm viability by over three orders of magnitude (a 3-log10 reduction) in 15 minutes.
Furthermore, disruption of the protective layer of exopolymeric substances in mature biofilms of Staphylococcus aureus by CAGE (0.1% v:v) was observed in 120 minutes. Insight into the mechanism of action of CAGE was provided with molecular modeling studies alongside in vitro antibiofilm assays. The geranate ion and geranic acid components of CAGE are predicted to act in concert to integrate into bacterial membranes, affect membrane thinning and perturb membrane homeostasis.
References
Carboxylic acids
Pheromones
Monoterpenes | Geranic acid | Chemistry | 454 |
17,430,761 | https://en.wikipedia.org/wiki/Debt%20ratio | The debt ratio or debt-to-assets ratio is a financial ratio which indicates the percentage of a company's assets that are funded by debt. It is measured as the ratio of total debt to total assets, which is also equal to the ratio of total liabilities to total assets:

Debt ratio = Total Debt / Total Assets = Total Liabilities / Total Assets
Financial analysts and financial managers use the ratio in assessing the financial position of the firm. Companies with high debt to asset ratios are said to be highly leveraged, and are associated with greater risk. A high debt to asset ratio may also indicate a low borrowing capacity, which in turn will limit the firm's financial flexibility.
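A minimal sketch of the calculation, using hypothetical balance-sheet figures:

```python
def debt_ratio(total_liabilities: float, total_assets: float) -> float:
    """Fraction of a company's assets funded by debt."""
    return total_liabilities / total_assets

# Hypothetical firm: $40M of liabilities against $100M of assets.
ratio = debt_ratio(40_000_000, 100_000_000)
print(f"Debt ratio: {ratio:.0%}")  # Debt ratio: 40%
```

A ratio well above the norm for the firm's industry would mark it as highly leveraged in the sense described above.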
See also
Equity ratio
Debt-to-income ratio, for households
Debt-to-GDP ratio, for governments
Hamada's equation
References
Corporate Finance: European Edition, by D. Hillier, S. Ross, R. Westerfield, J. Jaffe, and B. Jordan. McGraw-Hill, 1st Edition, 2010.
Financial ratios | Debt ratio | Mathematics | 194 |
52,631,552 | https://en.wikipedia.org/wiki/Iron%20pillar%20of%20Delhi | The iron pillar of Delhi is a structure high with a diameter that was constructed by Chandragupta II (reigned c. 375–415 CE), and now stands in the Qutb complex at Mehrauli in Delhi, India.
The metals used in its construction have a rust-resistant composition. The pillar weighs more than six tonnes and is thought to have been erected elsewhere, possibly outside the Udayagiri Caves, and moved to its present location by Anangpal Tomar in the 11th century.
Physical description
The height of the pillar, from the top to the bottom of its base, is , of which is below ground. Its bell pattern capital is . It is estimated to weigh more than .
The pillar has attracted the attention of archaeologists and materials scientists because of its high resistance to corrosion and has been called a "testimony to the high level of skill achieved by the ancient Indian iron smiths in the extraction and processing of iron". The corrosion resistance results from an even layer of crystalline iron(III) hydrogen phosphate hydrate forming on the high-phosphorus-content iron, which serves to protect it from the effects of the Delhi climate.
Inscriptions
The pillar carries a number of inscriptions of different dates.
Inscription of King Chandra or Chandragupta II
The oldest inscription on the pillar is that of a king named Chandra (IAST: ), generally identified as the Gupta emperor Chandragupta II.
Inscription
The inscription covers an area of 2′ 9.5″ × 10.5″ (65.09 cm × 26.67 cm). The ancient writing is well preserved because of the corrosion-resistant iron on which it is engraved. However, during the engraving process, iron appears to have closed up over some of the strokes, making some of the letters imperfect.
It contains verses composed in the Sanskrit language, in the shardulavikridita metre. It is written in the eastern variety of the Gupta script. The letters vary from 0.3125″ to 0.5″ in size, and closely resemble the letters of the Prayagraj pillar inscription of Samudragupta. However, it has distinctive diacritics, similar to the ones in the Bilsad inscription of Kumaragupta I. While the edges of the characters on the Prayagraj inscription are more curved, the ones on the Delhi inscription have straighter edges. This can be attributed to the fact that the Prayagraj inscription was incised on softer sandstone, while the Delhi inscription is engraved on a harder material (iron).
The text has some unusual deviations from the standard Sanskrit spelling, such as:
instead of : the use of dental nasal instead of anusvāra
instead of : omission of the second t
instead of : omission of the second t
śattru instead of śatru (enemy): an extra t
Studies
In 1831, the East India Company officer William Elliott made a facsimile of the inscription. Based on this facsimile, in 1834, James Prinsep published a lithograph in the Journal of the Royal Asiatic Society of Great Britain and Ireland. However, this lithograph did not represent every single word of the inscription correctly. Some years later, British engineer T. S. Burt made an ink impression of the inscription. Based on this, in 1838, Prinsep published an improved lithograph in the same journal, with his reading of the script and translation of the text.
Decades later, Bhagwan Lal Indraji made another copy of the inscription on a cloth. Based on this copy, Bhau Daji Lad published a revised text and translation in 1875, in Journal of the Bombay Branch of the Royal Asiatic Society. This reading was the first one to correctly mention the king's name as Chandra. In 1888, John Faithfull Fleet published a critical edition of the text in Corpus Inscriptionum Indicarum.
In 1945, Govardhan Rai Sharma dated the inscription to the first half of the 5th century CE, on paleographic grounds. He observed that its script was similar to the writing on other Gupta-era inscriptions, including the ones discovered at Bilsad (415 CE), Baigram (449 CE), and Kahanum (449 CE). R. Balasubramaniam (2005) noted that the characters of the Delhi inscription closely resembled the dated inscriptions of Chandragupta II, found at Udayagiri in Madhya Pradesh.
Issuance
The inscription is undated, and contains a eulogy of a king named Candra, whose dynasty it does not mention. The identity of this king, and thus the date of the pillar, has been the subject of much debate. The various viewpoints about the identity of the issuer were assembled and analyzed in a volume edited by M. C. Joshi and published in 1989.
The king is now generally identified with the Gupta King Chandragupta II. This identification is based on several points:
The script and the poetic style of the inscription, which point to a date in the late fourth or early fifth century CE: the Gupta period.
The inscription describes the king as a devotee of the god Vishnu, and records the erection of a dhvaja ("standard", or pillar) of Vishnu, on a hill called Viṣṇupada ("hill of the footprint of Viṣṇu"). Other Gupta inscriptions also describe Chandragupta II as a Bhagavata (devotee of Vishnu). The names of the places mentioned in the inscription, such as those used for the Indian Ocean and the Bengal region, are also characteristic of the Gupta era.
The short name 'Candra' is inscribed on the archer-type gold coins of Chandragupta II, while his full name and titles appear in a separate, circular legend on the coin.
A royal seal of Chandragupta's wife Dhruvadevi contains the phrase ("Nārāyaṇa, the lord of the illustrious Viṣṇupada").
As the inscription is a eulogy and states that the king has abandoned the earth, there has been some discussion as to whether it is posthumous, i.e. whether King Chandra was dead when the record was created. Dasharatha Sharma (1938) argued that it was non-posthumous. According to B. Chhabra and G. S. Gai, the inscription states that the king's mind is "fixed upon Vishnu with devotion", and therefore, indicates that the king was alive at the time. They theorize that it may have been recorded when Chandragupta II abdicated his throne, and settled down as a vanaprastha (retiree) in Viṣṇupada.
Text
Following is the Roman script transliteration of the text:
J. F. Fleet's 1888 translation is as follows:
Due to the tablets installed on the building in 1903 by Pandit Banke Rai, the reading provided by him enjoys wide currency. However, this reading and interpretation have been challenged by more recent scholarship. The inscription has been revisited by Michael Willis in his book Archaeology of Hindu Ritual, his special concern being the nature of the king's spiritual identity after death. His reading and translation of verse 2 are as follows:
The Sanskrit portion given above can be translated as follows:
Willis concludes:
Samvat 1109 inscription
One short inscription on the pillar is associated with the Tomara king Anangpal, although it is hard to decipher. Alexander Cunningham (1862–63) read the inscription as follows:
Based on this reading, Cunningham theorized that Anangpal had moved the pillar to its current location while establishing the city of Delhi. However, his reading has been contested by the later scholars. Buddha Rashmi Mani (1997) read it as follows:
Original location
The pillar was installed as a trophy in building the Quwwat-ul-Islam mosque and the Qutb complex by Sultan Iltutmish in the 13th century. Its original location, whether on the site itself or from elsewhere, is debated.
According to the inscription of king Chandra, the pillar was erected at Vishnupadagiri (Vishnupada). J. F. Fleet (1898) identified this place with Mathura, because of its proximity to Delhi (the find spot of the inscription) and the city's reputation as a Vaishnavite pilgrimage centre. However, archaeological evidence indicates that during the Gupta period, Mathura was a major centre of Buddhism, although Vaishnavism may have existed there. Moreover, Mathura lies in plains, and only contains some small hillocks and mounds: there is no true giri (hill) in Mathura.
Based on paleographic similarity to the dated inscriptions from Udayagiri, the Gupta-era iconography, analysis of metallurgy and other evidence, Meera Dass and R. Balasubramaniam (2004) theorized that the iron pillar was originally erected at Udayagiri. According to them, the pillar, with a wheel or discus at the top, was originally located at the Udayagiri Caves. This conclusion was partly based on the fact that the inscription mentions Vishnupada-giri (IAST: Viṣṇupadagiri, meaning "hill with footprint of Viṣṇu"). This conclusion was endorsed and elaborated by Michael D. Willis in his The Archaeology of Hindu Ritual, published in 2009.
The key point in favour of placing the iron pillar at Udayagiri is that this site was closely associated with Chandragupta and the worship of Vishnu in the Gupta period. In addition, there are well-established traditions of mining and working iron in central India, documented particularly by the iron pillar at Dhar and local place names like Lohapura and Lohangī Pīr (see Vidisha). The king of Delhi, Iltutmish, is known to have attacked and sacked Vidisha in the thirteenth century and this would have given him an opportunity to remove the pillar as a trophy to Delhi, just as the Tughluq rulers brought Asokan pillars to Delhi in the 1300s.
Relocation
It is not certain when the pillar was moved to Delhi from its original location. Alexander Cunningham attributed the relocation to the Tomara king Anangpal, based on the short pillar inscription ascribed to this king. Pasanaha Chariu, an 1132 CE Jain Apabhramsha text composed by Vibudh Shridhar, states that "the weight of his pillar caused the Lord of the Snakes to tremble". The identification of this pillar with the iron pillar lends support to the theory that the pillar was already in Delhi during Anangpal's reign.
Another theory is that the relocation happened during the Muslim rule in Delhi. Some scholars have assumed that it happened around 1200 CE, when Qutb al-Din Aibak commenced the construction of the Qutb complex as a general of Muhammad of Ghor.
Finbarr Barry Flood (2009) theorizes that it was Qutb al-Din's successor Iltutmish (r. 1210–1236 CE), who moved the pillar to Delhi. According to this theory, the pillar was originally erected in Vidisha and that the pillar was moved to the Qutb complex, by Iltutmish when he attacked and sacked Vidisha in the thirteenth century.
Scientific analysis
The iron pillar in India was produced by the forge welding of pieces of wrought iron. In a report published in the journal Current Science, R. Balasubramaniam of the IIT Kanpur explains how the pillar's resistance to corrosion is due to a passive protective film at the iron-rust interface. The presence of second-phase particles (slag and unreduced iron oxides) in the microstructure of the iron, that of high amounts of phosphorus in the metal, and the alternate wetting and drying existing under atmospheric conditions are the three main factors in the three-stage formation of that protective passive film.
Lepidocrocite and goethite are the first amorphous iron oxyhydroxides that appear upon oxidation of iron. High corrosion rates are initially observed. Then, an essential chemical reaction intervenes: slag and unreduced iron oxides (second-phase particles) in the iron microstructure alter the polarisation characteristics and enrich the metal–scale interface with phosphorus, thus indirectly promoting passivation of the iron (cessation of rusting activity).
The second-phase particles act as a cathode, and the metal itself serves as anode, for a mini-galvanic corrosion reaction during environment exposure. Part of the initial iron oxyhydroxides is also transformed into magnetite, which somewhat slows down the process of corrosion. The ongoing reduction of lepidocrocite and the diffusion of oxygen and complementary corrosion through the cracks and pores in the rust still contribute to the corrosion mechanism from atmospheric conditions.
The next main agent to intervene in protection from oxidation is phosphorus, enhanced at the metal–scale interface by the same chemical interaction previously described between the slags and the metal. The ancient Indian smiths did not add lime to their furnaces. The use of limestone as in modern blast furnaces yields pig iron that is later converted into steel; in the process, most phosphorus is carried away by the slag.
The absence of lime in the slag and the use of specific quantities of wood with high phosphorus content (for example, Cassia auriculata) during the smelting induces a higher phosphorus content (> 0.1%, average 0.25%) than in modern iron produced in blast furnaces (usually less than 0.05%).
This high phosphorus content and particular repartition are essential catalysts in the formation of a passive protective film of misawite (δ-FeOOH), an amorphous iron oxyhydroxide that forms a barrier by adhering next to the interface between metal and rust. Misawite, the initial corrosion-resistance agent, was so named because of the pioneering studies of Misawa and co-workers on the effects of phosphorus and copper, and of alternating atmospheric conditions, on rust formation.
The most critical corrosion-resistance agent is iron hydrogen phosphate hydrate (FePO4·H3PO4·4H2O) in its crystalline form, which builds up as a thin layer next to the interface between metal and rust. Rust initially contains iron oxides/oxyhydroxides in their amorphous forms. Due to the initial corrosion of metal, there is more phosphorus at the metal–scale interface than in the bulk of the metal. Alternate environmental wetting and drying cycles provide the moisture for phosphoric-acid formation. Over time, the amorphous phosphate precipitates into its crystalline form (the latter being therefore an indicator of old age, as this precipitation is a rather slow process). The crystalline phosphate eventually forms a continuous layer next to the metal, which results in an excellent corrosion-resistance layer. In 1,600 years, the film has grown to just one-twentieth of a millimetre thick.
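That closing figure implies a very slow average growth rate; a trivial back-of-the-envelope check:

```python
thickness_mm = 1 / 20  # film thickness after ~1,600 years, as quoted above (mm)
years = 1600
rate_nm_per_year = thickness_mm * 1e6 / years  # 1 mm = 1e6 nm
print(f"Average film growth: about {rate_nm_per_year:.0f} nm per year")  # ~31 nm per year
```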
In 1969, in his first book, Chariots of the Gods?, Erich von Däniken cited the absence of corrosion on the Delhi pillar and the unknown nature of its creation as evidence of extraterrestrial visitation. When informed by an interviewer, in 1974, that the column was not in fact rust-free, and that its method of construction was well-understood, von Däniken responded that he no longer considered the pillar or its creation to be a mystery.
Balasubramaniam states that the pillar is "a living testimony to the skill of metallurgists of ancient India". An interview with Balasubramaniam and his work can be seen in the 2005 article by the writer and editor Matthew Veazey. Further research published in 2009 showed that corrosion has developed evenly over the surface of the pillar.
It was claimed in the 1920s that iron manufactured in Mirjati near Jamshedpur is similar to the iron of the Delhi pillar. Further work on Adivasi (tribal) iron by the National Metallurgical Laboratory in the 1960s did not verify this claim.
Evidence of a cannonball strike
A significant indentation on the middle section of the pillar, approximately from the current courtyard ground level, has been shown to be the result of a cannonball fired at close range. The impact caused horizontal fissuring of the column in the area diametrically opposite to the indentation site, but the column itself remained intact. While no contemporaneous records, inscriptions, or documents describing the event are known to exist, historians generally agree that Nadir Shah is likely to have ordered the pillar's destruction during his invasion of Delhi in 1739, as he would have considered a Hindu temple monument undesirable within an Islamic mosque complex. Alternatively, he may have sought to dislodge the decorative top portion of the pillar in search of hidden precious stones or other items of value.
No additional damage attributable to cannon fire has been found on the pillar, suggesting that no further shots were taken. Historians have speculated that ricocheting fragments of the cannonball may have damaged the nearby Quwwat-ul-Islam mosque, which suffered damage to its southwestern portion during the same period, and the assault on the pillar might have been abandoned as a result.
See also
Related topics
Ancient iron production
History of metallurgy in South Asia
Parkerizing
Serpent Column
Wootz steel
Other pillars of India
Ashoka's Major Rock Edicts
Dhar iron pillar
List of Edicts of Ashoka
Pillars of Ashoka
Heliodorus pillar
Stambha
Other similar topics
Early Indian epigraphy
Hindu temple architecture
History of India
Indian copper plate inscriptions
Indian rock-cut architecture
List of rock-cut temples in India
Outline of ancient India
South Indian Inscriptions
Tagundaing
References
Bibliography
King Chandra and the Mehrauli Pillar, M.C. Joshi, S.K. Gupta and Shankar Goyal, Eds., Kusumanjali Publications, Meerut, 1989.
The Rustless Wonder – A Study of the Iron Pillar at Delhi, T.R. Anantharaman, Vigyan Prasar New Delhi, 1996.
Delhi Iron Pillar: New Insights. R. Balasubramaniam, Aryan Books International, Delhi, and Indian Institute of Advanced Study, Shimla, 2002, Hardbound, .
The Delhi Iron Pillar: Its Art, Metallurgy and Inscriptions, M.C. Joshi, S.K. Gupta and Shankar Goyal, Eds., Kusumanjali Publications, Meerut, 1996.
The World Heritage Complex of the Qutub, R. Balasubramaniam, Aryan Books International, New Delhi, 2005, Hardbound, .
"Delhi Iron Pillar" (in two parts), R. Balasubramaniam, IIM Metal News Volume 7, No. 2, April 2004, pp. 11–17 and IIM Metal News Volume 7, No. 3, June 2004, pp. 5–13.
New Insights on the 1600-Year Old Corrosion Resistant Delhi Iron Pillar, R. Balasubramaniam, Indian Journal of History of Science 36 (2001) 1–49.
The Early use of Iron in India, Dilip K. Chakrabarti, Oxford University Press, New Delhi, 1992, .
External links
Detailed list of Publications on Delhi Iron Pillar by Balasubramaniam, IIT Kanpur
IIT team solves the pillar mystery
Corrosion resistance of Delhi iron pillar
Nondestructive evaluation of the Delhi iron pillar Current Science, Indian Academy of Sciences, Vol. 88, No. 12, 25 June 2005 (PDF)
The Delhi Iron Pillar
IIT team solves the pillar mystery, 21 Mar 2005, Times of India (About Nondestructive evaluation of the Delhi iron pillar)
"New Insights on the Corrosion Resistant Delhi Iron Pillar" by R. Balasubramaniam
5th-century inscriptions
Buildings and structures completed in the 5th century
Monumental columns in India
Monuments of National Importance in Delhi
Tourist attractions in Delhi
Mehrauli
Archaeological monuments in Delhi
Gupta and post-Gupta inscriptions
Metallurgical industry in India
History of metallurgy
Sanskrit inscriptions in India | Iron pillar of Delhi | Chemistry,Materials_science | 4,139 |
23,795,049 | https://en.wikipedia.org/wiki/Biological%20indicator%20evaluation%20resistometer | A Biological Indicator Evaluation Resistometer (BIER) vessel is a piece of equipment used to determine the time taken to reduce the survival of a given organism by 90% (also known as a 1-log reduction). The name derives from how the equipment is used.
A BIER vessel evaluates the resistance of biological indicators to moist heat (steam) sterilization. For example, if a 90% reduction is determined to take 5 minutes for the microorganism being evaluated, then a D-value of 5 is assigned. D-values are specific to the starting bioload, the substrate (the material the spores are on), and the microbial species.
BIER vessels typically cost in excess of $100,000, and thus tend to be located where biological indicators are manufactured.
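Under the usual assumption of log-linear inactivation kinetics, the D-value can be estimated from survivor counts at two exposure times; a minimal sketch with hypothetical numbers:

```python
import math

def d_value(exposure_min: float, n0: float, nt: float) -> float:
    """Decimal reduction time: minutes of exposure per 1-log10 (90%) reduction,
    assuming log-linear inactivation kinetics."""
    return exposure_min / (math.log10(n0) - math.log10(nt))

# Hypothetical run: spore count falls from 1e6 to 1e2 over 20 minutes of exposure.
print(f"D-value: {d_value(20, 1e6, 1e2):.1f} min")  # 5.0 -> "a D-value of 5"
```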
References
Microbiology
Antiseptics | Biological indicator evaluation resistometer | Chemistry,Biology | 164 |
370,644 | https://en.wikipedia.org/wiki/Electronic%20toll%20collection | Electronic toll collection (ETC) is a wireless system to automatically collect the usage fee or toll charged to vehicles using toll roads, HOV lanes, toll bridges, and toll tunnels. It is a faster alternative which is replacing toll booths, where vehicles must stop and the driver manually pays the toll with cash or a card. In most systems, vehicles using the system are equipped with an automated radio transponder device. When the vehicle passes a roadside toll reader device, a radio signal from the reader triggers the transponder, which transmits back an identifying number which registers the vehicle's use of the road, and an electronic payment system charges the user the toll.
A major advantage is the driver does not have to stop, reducing traffic delays. Electronic tolling is cheaper than a staffed toll booth, reducing transaction costs for government or private road owners. The ease of varying the amount of the toll makes it easy to implement road congestion pricing, including for high-occupancy lanes, toll lanes that bypass congestion, and city-wide congestion charges. The payment system usually requires users to sign up in advance and load money into a declining-balance account, which is debited each time they pass a toll point.
Electronic toll lanes may operate alongside conventional toll booths so that drivers who do not have transponders can pay at the booth. Open road tolling is an increasingly popular alternative which eliminates toll booths altogether; electronic readers mounted beside or over the road read the transponders as vehicles pass at highway speeds, eliminating traffic bottlenecks created by vehicles slowing down to go through a toll booth lane. Vehicles without transponders are either excluded or pay by plate – a license plate reader takes a picture of the license plate to identify the vehicle, and a bill may be mailed to the address where the car's license plate number is registered, or drivers may have a certain amount of time to pay online or by phone.
Singapore was the first city in the world to implement an electronic road toll collection system known as the Singapore Area Licensing Scheme for purposes of congestion pricing, in 1974. Since 2005, nationwide GNSS road pricing systems have been deployed in several European countries. With satellite-based tolling solutions, it is not necessary to install electronic readers beside or above the road in order to read transponders since all vehicles are equipped with On Board Units having Global Navigation Satellite System (GNSS) receivers in order to determine the distance traveled on the tolled road network - without the use of any roadside infrastructure.
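As an illustration of the principle (not of any specific national system), a GNSS on-board unit can accumulate the distance driven on the tolled network from successive position fixes; the map-matching predicate below is a placeholder for the real geofencing logic:

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two GNSS fixes, in kilometres."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))  # mean Earth radius ~6371 km

def tolled_distance_km(fixes, on_tolled_road):
    """Sum the distance over consecutive fixes whose segment lies on the
    tolled network; `on_tolled_road` stands in for real map matching."""
    total = 0.0
    for (lat1, lon1), (lat2, lon2) in zip(fixes, fixes[1:]):
        if on_tolled_road(lat1, lon1) and on_tolled_road(lat2, lon2):
            total += haversine_km(lat1, lon1, lat2, lon2)
    return total
```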
US Nobel Economics Prize winner William Vickrey was the first to propose a system of electronic tolling for the Washington Metropolitan Area in 1959. In the 1960s and the 1970s, the first prototype systems were tested. Norway has been a world pioneer in the widespread implementation of this technology, beginning in 1986. Italy was the first country to deploy a full electronic toll collection system in motorways at national scale in 1989.
History
In 1959, Nobel Economics Prize winner William Vickrey was the first to propose a similar system of electronic tolling for the Washington Metropolitan Area. He proposed that each car would be equipped with a transponder: "The transponder's personalized signal would be picked up when the car passed through an intersection, and then relayed to a central computer which would calculate the charge according to the intersection and the time of day and add it to the car's bill." In the 1960s and the 1970s, free-flow tolling was tested with fixed transponders mounted on the undersides of vehicles and readers located under the surface of the highway. These plans were, however, scrapped, and the systems never reached actual implementation. Modern toll transponders are typically mounted under the windshield, with readers located in overhead gantries.
After tests in 1974, Singapore became in 1975 the first country in the world to implement an electronic road toll collection system, the Singapore Area Licensing Scheme, for purposes of congestion pricing on its more urbanized roads. It was refined in 1998 as Electronic Road Pricing (ERP).
Italy deployed a full ETC system on motorways at national scale in 1989. Telepass, the brand name of the ETC system belonging to Autostrade S.p.A. (now Autostrade per l'Italia), was designed by Dr. Eng. Pierluigi Ceseri and Dr. Eng. Mario Alvisi and included fully operational real-time vehicle classification and enforcement via cameras interconnected with the PRA (Public Register of Automobiles) over a network of more than 3,000 km of optical fibre. Telepass introduced the concept of ETC interoperability, interconnecting 24 different Italian motorway operators and allowing users to travel between different concession areas while paying only at the end of the journey. Alvisi is considered the father of ETC on motorways because he not only co-designed Telepass but also made it the first standardized operating ETC system in the world, adopted as a European standard in 1996. He acted as a consultant for the deployment of ETC in many countries, including Japan, the United States, and Brazil. In Japan, the ETC system was deployed on all controlled-access expressways in 2001; by 2019, 92% of drivers were using it.
ETC was first introduced in Bergen, Norway, in 1986, operating together with traditional tollbooths. In 1991, Trondheim introduced the world's first use of completely unaided full-speed electronic tolling. Norway now has 25 toll roads operating with electronic fee collection (EFC), as the Norwegian technology is called (see AutoPASS). In 1995, Portugal became the first country to apply a single, universal system to all tolls in the country, the Via Verde, which can also be used in parking lots and gas stations. The United States is another country with widespread use of ETC in several states, though many U.S. toll roads maintain the option of manual collection.
As of March 2018, in Japan, a total of approximately 2.61 million vehicles are equipped with devices compliant with the ETC 2.0.
Overview
In some urban settings, automated gates are in use in electronic-toll lanes, with 5 mph (8 km/h) legal limits on speed; in other settings, 20 mph (32 km/h) legal limits are not uncommon. However, in other areas such as the Garden State Parkway in New Jersey, and at various locations in California, Florida, Pennsylvania, Delaware, and Texas, cars can travel through electronic lanes at full speed. Illinois' Open Road Tolling program features 274 contiguous miles of barrier-free roadways, where I-PASS or E-ZPass users continue to travel at highway speeds through toll plazas, while cash payers pull off the main roadway to pay at tollbooths. Currently over 80% of Illinois' 1.4 million daily drivers use an I-PASS.
Enforcement is accomplished by a combination of a camera, which takes a picture of the car, and a radio-frequency-keyed computer, which searches for a driver's windshield- or bumper-mounted transponder to verify and collect payment. The system sends a notice and fine to cars that pass through without having an active account or paying a toll.
Factors hindering full-speed electronic collection include:
significant non-participation, entailing lines in manual lanes and disorderly traffic patterns as the electronic- and manual- collection cars "sort themselves out" into their respective lanes;
problems with pursuing toll evaders;
need, in at least some current (barrier) systems, to confine vehicles in lanes, while interacting with the collection devices, and the dangers of high-speed collisions with the confinement structures;
vehicle hazards to toll employees present in some electronic-collection areas;
the fact that in some areas at some times, long lines form even to pass through the electronic-collection lanes;
costs and other issues raised when retrofitting existing toll collection facilities
opposition from unionized toll collectors.
Even if line lengths are the same in electronic lanes as in manual ones, electronic tolls save registered cars time: eliminating the stop at a window or toll machine between successive cars passing the collection machine means a fixed-length stretch of their journey past it is traveled at a higher average speed, and in a lower time. This is at least a psychological improvement, even if the length of the lines in automated lanes is sufficient to make the no-stop-to-pay savings insignificant compared to the time still lost waiting in line to pass the toll gate. Toll plazas are typically wider than the rest of the highway; reducing the need for them makes it possible to fit toll roads into tight corridors.
Despite these limitations, if delay at the toll gate is reduced, the tollbooth can serve more vehicles per hour. The greater the throughput of any toll lane, the fewer lanes required, so construction costs can be reduced. Specifically, the toll-collecting authorities have incentives to resist pressure to limit the fraction of electronic lanes in order to limit the length of manual-lane lines. In the short term, the greater the fraction of automated lanes, the lower the cost of operation (once the capital costs of automating are amortized). In the long term, the greater the relative advantage that registering and turning one's vehicle into an electronic-toll one provides, the faster cars will be converted from manual-toll use to electronic-toll use, and therefore the fewer manual-toll cars will drag down average speed and thus capacity.
In some countries, some toll agencies that use similar technology have set up (or are setting up) reciprocity arrangements, which permit one to drive a vehicle on another operator's tolled road with the tolls incurred charged to the driver's toll-payment account with their home operator. An example is the United States E-ZPass tag, which is accepted on toll roads, bridges and tunnels in fifteen states from Illinois to Maine.
In Australia, there are a number of organizations that provide tags, known as e-TAGs, that can be used on toll roads. They include Transport for NSW's E-Toll and Transurban's Linkt. A toll is debited to the customer's account with their tag provider. Some toll road operators – including Sydney's Sydney Harbour Tunnel, Lane Cove Tunnel and Westlink M7, Melbourne's CityLink and EastLink, and Brisbane's Gateway Motorway – encourage use of such tags, and apply an additional vehicle-matching fee to vehicles without a tag.
A similar device in France, called Liber-T for light vehicles and TIS-PL for HGVs, is accepted on all toll roads in the country.
In Brazil, the Sem Parar/Via-Fácil system allows customers to pass through tolls in more than 1,000 lanes in the states of São Paulo, Paraná, Rio Grande do Sul, Santa Catarina, Bahia and Rio de Janeiro. Sem Parar/Via-Fácil also allows users to enter and exit more than 100 parking lots. There are also other systems, such as via expressa, onda livre and auto expresso, that are present in the states of Rio de Janeiro, Rio Grande do Sul, Santa Catarina, Parana and Minas Gerais.
Since 2016, the National Highway Authority of Pakistan has implemented electronic toll collection on its motorway network using an RFID-based tag called the "M-TAG". The tag is attached to the windscreen of the vehicle and is automatically scanned at toll plazas on entry and exit, with the calculated toll debited from a prepaid M-TAG account.
The European Union issued the EFC-directive, which attempts to standardize European toll collection systems. Systems deployed after January 1, 2007 must support at least one of the following technologies: satellite positioning, mobile communications using the GSM-GPRS standard or 5.8 GHz microwave technology. Furthermore, the European Commission issued the Regulation on the European Electronic Toll Service (EETS) which must be implemented by all Member States from 19 October 2021. All toll roads in Ireland must support the eToll tag standard.
From 2015, the Norwegian government requires commercial trucks above 3.5 tons on its roads to have a transponder and a valid road toll subscription. Before this regulation, two-thirds of foreign trucks failed to pay road tolls.
Use in urban areas and for congestion pricing
The most revolutionary application of ETC is in the urban context of congested cities, as it allows tolls to be charged without vehicles having to slow down. This application made it feasible to concession the construction and operation of urban freeways to the private sector, and enabled the introduction or improvement of congestion pricing as a policy to restrict car travel in downtown areas.
Between 2004 and 2005, Santiago, Chile, implemented the world's first 100% full-speed electronic tolling with transponders crossing through the city's core (CBD), in a system of several concessioned urban freeways, among them the Autopista Central. The United Arab Emirates implemented a similar road toll collection system in Dubai in 2007, called Salik. Similar schemes had previously been implemented, but only on bypass or outer-ring urban freeways, in several cities around the world: Toronto in 1997 (Highway 407), several roads in Norway (AutoPASS), Melbourne in 2000 (CityLink), and Tel Aviv also in 2000 (Highway 6).
Congestion pricing or urban toll schemes charging vehicles to enter the downtown area, using ETC technology and/or cameras with video recognition of plate numbers, have been implemented in several cities around the world: Singapore in 1974 introduced the world's first successful congestion pricing scheme, implemented with manual control (see Singapore's Area Licensing Scheme) and refined in 1998 (see Singapore's Electronic Road Pricing); urban tolling followed in Norway's three major cities, Bergen (1986), Oslo (1990), and Trondheim (1991) (see Trondheim Toll Scheme); Rome in 2001, as an upgrade to the manual zone control system implemented in 1998; London in 2003, extended in 2007 (see London congestion charge); Stockholm, tested in 2006, with the charge made permanent in 2007 (see Stockholm congestion tax); and Valletta, the capital city of Malta, since May 2007.
In January 2008, Milan began a one-year trial program called Ecopass, a pollution pricing program in which low-emission-standard vehicles pay a user fee; alternative fuel vehicles and vehicles using conventional fuels but compliant with the Euro IV emission standard are exempted. The program was extended through December 2011 and in January 2012 was replaced by a congestion pricing scheme called Area C.
New York City considered the implementation of a congestion pricing scheme. New York City Council approved such a plan in 2008, but it was not implemented because the New York State Assembly did not approve it. (see New York congestion pricing)
In 2006, San Francisco transport authorities began a comprehensive study to evaluate the feasibility of introducing congestion pricing. The charge would be combined with other traffic reduction implementations, allowing money to be raised for public transit improvements and bike and pedestrian enhancements. The various pricing scenarios considered were presented in public meetings in December 2008, with final study results expected in 2009. (see San Francisco congestion pricing)
Taiwan Highway Electronic Toll Collection System (see Electronic Toll Collection (Taiwan)): In December 2013, the old toll stations were replaced by distance-based, pay-as-you-go, all-electronic toll collection on all of Taiwan's major freeways. All tolls are collected electronically by overhead gantries with multi-lane free flow, not at traditional toll booths. Taiwan was the first country to switch from flat-rate manual tolling to all-electronic, multi-lane free-flow, distance-based tolling on all of its freeways, and it has the longest ETC freeway mileage in the world.

To approximate the previous model, in which a vehicle travelling a short distance would pass no toll station, each vehicle receives 20 kilometers of free travel per day and is billed NT$1.2 per kilometer thereafter. Buses and trailers are subject to heavy-vehicle surcharges. The highway administration may alter fares (e.g. suspend the daily free allowance) during peak travel seasons to shift congestion towards midnight hours. The toll gates divide the highway into segments, each with a price determined by the distance to the next gate (interchange). A daily gate count is calculated at midnight, and the total charge is deducted within 48 hours. Each vehicle receives a further discount after the first 200 kilometers, and eTag subscribers with prepaid accounts get an additional 10% reduction. Since a subscription to ETC is not mandated by law, non-subscribers are billed by license plate recognition and mailed statements, or can pay at convenience store chains from the third day after travel.
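A minimal sketch of the daily fare logic described above; the reduced long-distance rate is an assumption (the text notes a further discount beyond 200 km but does not state the rate), and all parameter names are illustrative:

```python
def daily_toll_ntd(km: float, etag: bool,
                   free_km: float = 20.0,
                   base_rate: float = 1.2,
                   long_haul_km: float = 200.0,
                   long_haul_rate: float = 0.9) -> float:
    """Daily toll in NT$ under the distance-based scheme described above.
    `long_haul_rate` is an assumed value for the discount beyond 200 km."""
    billable = max(0.0, km - free_km)            # first 20 km per day are free
    discounted = max(0.0, km - long_haul_km)     # km beyond 200 km per day
    toll = (billable - discounted) * base_rate + discounted * long_haul_rate
    if etag:
        toll *= 0.9                              # 10% off for prepaid eTag accounts
    return round(toll, 1)

print(daily_toll_ntd(150, etag=True))  # (150-20) km * NT$1.2 * 0.9 = 140.4
```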
Use for non-toll transactions
E-ZPass in the northeastern United States can be used to pay at some airport, train, and festival parking lots, and has been tested for use in drive-thrus at private restaurants.
SunPass in Florida can be used to pay for parking at the Palm Beach International Airport, Tampa International Airport, Orlando International Airport, Fort Lauderdale-Hollywood International Airport, and the Hard Rock Stadium. Despite SunPass' interoperability with Peach Pass and E-ZPass, those systems are not accepted at these facilities.
E-Pass can also be used to pay for parking at Orlando International Airport. However, it is not compatible with other airports that use SunPass for parking.
The NTTA TollTag in Texas can be used to pay for passage and parking in the Dallas/Fort Worth International Airport. Despite TollTag's interoperability with EZ Tag, TxTag, PikePass, and K-TAG, those systems are not accepted at this facility.
Via Verde in Portugal can be used at many gas stations and car parks and at some McDonald's drive-thru restaurants.
BroBizz can be used in toll stations part of EasyGo, as well as some other places within Denmark and Scandinavia, such as for ferries, parking and car washes.
AutoPASS can be used in toll stations part of EasyGo, as well as some ferries within Norway and Scandinavia.
Sem Parar in Brazil can be used at many gas stations, in car parks at airports and shopping malls, and at some McDonald's drive-thru restaurants.
Technologies
Electronic toll collection systems rely on four major components: automated vehicle identification, automated vehicle classification, transaction processing, and violation enforcement.
The four components are somewhat independent, and, in fact, some toll agencies have contracted out functions separately. In some cases, this division of functions has resulted in difficulties. In one notable example, the New Jersey E-ZPass regional consortium's Violation Enforcement contractor did not have access to the Transaction Processing contractor's database of customers. This, together with installation problems in the automated vehicle identification system, led to many customers receiving erroneous violation notices, to customer dissatisfaction, and to a violation system whose net income, after expenses, was negative.
Automated vehicle identification
Automated vehicle identification (AVI) is the process of determining the identity of a vehicle subject to tolls. The majority of toll facilities record the passage of vehicles through a limited number of toll gates. At such facilities, the task is then to identify the vehicle in the gate area.
Some early AVI systems used barcodes affixed to each vehicle, to be read optically at the toll booth. Optical systems proved to have poor reading reliability, especially when faced with inclement weather and dirty vehicles.
Most current AVI systems rely on radio-frequency identification, where an antenna at the toll gate communicates with a transponder on the vehicle via Dedicated Short Range Communications (DSRC). RFID tags have proved to have excellent accuracy, and can be read at highway speeds. The major disadvantage is the cost of equipping each vehicle with a transponder, which can be a major start-up expense, if paid by the toll agency, or a strong customer deterrent, if paid by the customer.
To avoid the need for transponders, some systems, notably the 407 ETR (Express Toll Route) near Toronto and the A282 (M25) Dartford Crossing in the United Kingdom, use automatic number plate recognition. Here, a system of cameras captures images of vehicles passing through tolled areas, and the image of the number plate is extracted and used to identify the vehicle. This allows customers to use the facility without any advance interaction with the toll agency. The disadvantage is that fully automatic recognition has a significant error rate, leading to billing errors and the cost of transaction processing (which requires locating and corresponding with the customer) can be significant. Systems that incorporate a manual review stage have much lower error rates, but require a continuing staffing expense.
A few toll facilities cover a very wide area, making fixed toll gates impractical. The most notable of these is a truck tolling system in Germany. This system instead uses Global Positioning System location information to identify when a vehicle is located on a tolled Autobahn. Implementation of this system turned out to be far lengthier and more costly than expected.
As smart phone use becomes more commonplace, some toll road management companies have turned to mobile phone apps to inexpensively automate and expedite paying tolls from the lanes. One such example application is Alabama Freedom Pass mobile, used to link customer accounts at sites operated by American Roads LLC. The app communicates in real time with the facility transaction processing system to identify and debit customer accounts or bill a major credit card.
Automated vehicle classification
Automated vehicle classification is closely related to automated vehicle identification (AVI). Most toll facilities charge different rates for different types of vehicles, making it necessary to distinguish the vehicles passing through the toll facility.
The simplest method is to store the vehicle class in the customer record, and use the AVI data to look up the vehicle class. This is low-cost, but limits user flexibility, in such cases as the automobile owner who occasionally tows a trailer.
More complex systems use a variety of sensors. Inductive sensors embedded in the road surface can determine the gaps between vehicles, to provide basic information on the presence of a vehicle; with software processing of the inductive profile, a wide range of vehicle classes can be derived. Treadles permit counting the number of axles as a vehicle passes over them and, with offset-treadle installations, also detect dual-tire vehicles. Light-curtain laser profilers record the shape of the vehicle, which can help distinguish trucks and trailers. In modern systems, simple laser light curtains are being replaced with more technically advanced lidar systems; these safety-critical sensors, also used in autonomous vehicles, are less sensitive to environmental conditions.
Transaction processing
Transaction processing deals with maintaining customer accounts, posting toll transactions and customer payments to the accounts, and handling customer inquiries. The transaction processing component of some systems is referred to as a "customer service center". In many respects, the transaction processing function resembles banking, and several toll agencies have contracted out transaction processing to a bank.
Customer accounts may be postpaid, where toll transactions are periodically billed to the customer, or prepaid, where the customer funds a balance in the account which is then depleted as toll transactions occur. The prepaid system is more common, as the small amounts of most tolls makes pursuit of uncollected debts uneconomic. Most postpaid accounts deal with this issue by requiring a security deposit, effectively rendering the account a prepaid one.
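A toy model of the prepaid pattern described above: tolls deplete a funded balance, and a low balance triggers replenishment. Class and field names are illustrative, not any agency's actual schema:

```python
class PrepaidAccount:
    """Toy model of a prepaid toll account."""

    def __init__(self, balance: float, replenish_threshold: float = 10.0):
        self.balance = balance
        self.replenish_threshold = replenish_threshold

    def post_toll(self, amount: float) -> bool:
        """Debit a toll; returns False (for violation handling) on insufficient funds."""
        if amount > self.balance:
            return False
        self.balance -= amount
        return True

    def needs_replenishment(self) -> bool:
        return self.balance < self.replenish_threshold

acct = PrepaidAccount(balance=25.0)
acct.post_toll(3.50)
print(acct.balance, acct.needs_replenishment())  # 21.5 False
```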
Violation enforcement
A violation enforcement system (VES) is useful in reducing unpaid tolls, as an unmanned toll gate otherwise represents a tempting target for toll evasion. Several methods can be used to deter toll violators.
Police patrols at toll gates can be highly effective. In addition, in most jurisdictions, the legal framework is already in place for punishing toll evasion as a traffic infraction. However, the expense of police patrols makes their use on a continuous basis impractical, so the probability of being stopped is likely too low to be a sufficient deterrent.
A physical barrier, such as a gate arm, ensures that all vehicles passing through the toll booth have paid a toll. Violators are identified immediately, as the barrier will not permit the violator to proceed. However, barriers also force authorized customers, which are the vast majority of vehicles passing through, to slow to a near-stop at the toll gate, negating much of the speed and capacity benefits of electronic tolling. Furthermore, a violator can effectively block a toll collection lane for an indefinite time.
Automatic number plate recognition, while rarely used as the primary vehicle identification method, is more commonly used in violation enforcement. In the VES context, the number of images collected is much smaller than in the AVI context. This makes manual review, with its greater accuracy over fully automated methods, practical. However, many jurisdictions require legislative action to permit this type of enforcement, as the number plate identifies only the vehicle, not its operator, and many traffic enforcement regulations require identifying the operator in order to issue an infraction.
An example of this is the vToll system on the Illinois Tollway, which requires transponder users to enter their license plate information before using the system. If the transponder fails to read, the license plate number is matched to the transponder account, and the regular toll amount is deducted from the account rather than a violation being generated. If the license plate cannot be found in the database, the passage is processed as a violation. Illinois' toll violation system has a 7-day grace period, allowing tollway users to pay missed tolls online with no penalty within 7 days of the missed toll.
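The fallback just described reduces to a short lookup chain; a sketch under the assumption of simple in-memory dictionaries (the real system's data stores and names will differ):

```python
def process_passage(tag_id, plate, tag_accounts, plate_index, toll):
    """Charge the tag account if the transponder read succeeds, otherwise
    match the plate to a registered account, else record a violation.
    The dict-based stores are illustrative, not the actual schema."""
    account = tag_accounts.get(tag_id) if tag_id else None
    if account is None and plate:
        account = plate_index.get(plate)  # transponder misread: look up by plate
    if account is None:
        return ("violation", plate)       # enters the 7-day grace/violation process
    account["balance"] -= toll            # regular toll deducted, no penalty
    return ("charged", account["id"])
```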
In the United States, a growing number of states are sharing information on toll violators, where toll agencies can report out-of-state toll violators to the Department of Motor Vehicles (or similar agency) of the violator's home state. The state motor vehicle agency can then block the renewal of the vehicle's registration until the violator has paid all outstanding tolls, plus penalties and interest in some situations. Toll authorities are also resorting to using collection agencies and litigation for habitual toll violators with large unpaid debts, and some states can pursue criminal prosecution of repeat toll violators, where the violator could serve time in jail, if convicted. Many toll agencies also publicize a list of habitual toll violators through media outlets and newspapers. Some toll agencies offer amnesty periods, where toll violators can settle their outstanding debts without incurring penalties or being subject to litigation or prosecution.
Privacy
Electronic toll collection raises privacy concerns, because the systems record when specific motor vehicles pass toll stations. From this information, one can infer the likely location of the vehicle's owner or primary driver at specific times. Technically speaking, using ecash and other modern cryptographic methods, one could design systems that do not know where individuals are but can still collect and enforce tolls.
From the legal standpoint, a proper privacy framework can place strict bounds on data retention and on rights to access and use the data, especially after the tolls have been successfully paid. For example, images from ANPR cameras may be required to be deleted as soon as possible once a toll is successfully processed.
See also
Tollbooth
Lane control lights
List of electronic toll collection systems
Congestion pricing
FASTag
GNSS Road Pricing
GSM
Open road tolling
Tachograph
Pay-by-plate parking
References
External links
International Bridge, Tunnel and Turnpike Association IBTTA
Overview of international CEN and ISO electronic fee collection standards
Radio-frequency identification
Road congestion charge schemes
Wireless locating
Car costs
Articles containing video clips
Toll (fee) | Electronic toll collection | Technology,Engineering | 5,684 |
6,314,958 | https://en.wikipedia.org/wiki/C/1847%20T1%20%28Mitchell%29 | Miss Mitchell's Comet, formally designated as C/1847 T1, is a non-periodic comet that American astronomer Maria Mitchell discovered in 1847.
The discovery was initially credited to Francesco de Vico. Vico, observing from Rome, was the first to report the comet's discovery in Europe. However, Mitchell observed the comet two days before Vico did, so she became recognized as the comet's discoverer.
The comet had a weakly hyperbolic orbit solution while inside the planetary region of the Solar System. An orbit solution when the comet is outside of the planetary region shows that the comet is bound to the Sun.
References
Non-periodic comets
Hyperbolic comets | C/1847 T1 (Mitchell) | Astronomy | 140 |
99,610 | https://en.wikipedia.org/wiki/Small%20intestine | The small intestine or small bowel is an organ in the gastrointestinal tract where most of the absorption of nutrients from food takes place. It lies between the stomach and large intestine, and receives bile through the bile duct and pancreatic juice through the pancreatic duct to aid in digestion. The small intestine is about long and folds many times to fit in the abdomen. Although it is longer than the large intestine, it is called the small intestine because it is narrower in diameter.
The small intestine has three distinct regions – the duodenum, jejunum, and ileum. The duodenum, the shortest, is where preparation for absorption through small finger-like protrusions called villi begins. The jejunum is specialized for absorption by the enterocytes of its lining of small nutrient particles previously digested by enzymes in the duodenum. The main function of the ileum is to absorb vitamin B12, bile salts, and any products of digestion that were not absorbed by the jejunum.
Structure
Size
The length of the small intestine can vary greatly, from as short as to as long as , also depending on the measuring technique used. The typical length in a living person is . The length depends both on how tall the person is and how the length is measured. Taller people generally have a longer small intestine and measurements are generally longer after death and when the bowel is empty.
It is approximately in diameter in newborns after 35 weeks of gestational age, and in diameter in adults. On abdominal X-rays, the small intestine is considered to be abnormally dilated when the diameter exceeds 3 cm. On CT scans, a diameter of over 2.5 cm is considered abnormally dilated. The surface area of the human small intestinal mucosa, due to enlargement caused by folds, villi and microvilli, averages .
Parts
The small intestine is divided into three structural parts.
The duodenum is a short structure ranging from in length, and shaped like a "C". It surrounds the head of the pancreas. It receives gastric chyme from the stomach, together with digestive juices from the pancreas (digestive enzymes) and the liver (bile). The digestive enzymes break down proteins and bile emulsifies fats into micelles. The duodenum contains Brunner's glands, which produce a mucus-rich alkaline secretion containing bicarbonate. These secretions, in combination with bicarbonate from the pancreas, neutralize the stomach acids contained in gastric chyme.
The jejunum is the midsection of the small intestine, connecting the duodenum to the ileum. It is about long, and contains the circular folds, and intestinal villi that increase its surface area. Products of digestion (sugars, amino acids, and fatty acids) are absorbed into the bloodstream here. The suspensory muscle of duodenum marks the division between the duodenum and the jejunum.
The ileum: The final section of the small intestine. It is about 3 m long, and contains villi similar to the jejunum. It absorbs mainly vitamin B12 and bile acids, as well as any other remaining nutrients. The ileum joins to the cecum of the large intestine at the ileocecal junction.
The jejunum and ileum are suspended in the abdominal cavity by mesentery. The mesentery is part of the peritoneum. Arteries, veins, lymph vessels and nerves travel within the mesentery.
Blood supply
The small intestine receives a blood supply from the celiac trunk and the superior mesenteric artery. These are both branches of the aorta. The duodenum receives blood from the coeliac trunk via the superior pancreaticoduodenal artery and from the superior mesenteric artery via the inferior pancreaticoduodenal artery. These two arteries both have anterior and posterior branches that meet in the midline and anastomose. The jejunum and ileum receive blood from the superior mesenteric artery. Branches of the superior mesenteric artery form a series of arches within the mesentery known as arterial arcades, which may be several layers deep. Straight blood vessels known as vasa recta travel from the arcades closest to the ileum and jejunum to the organs themselves.
Microanatomy
The three sections of the small intestine look similar to each other at a microscopic level, but there are some important differences. The parts of the intestine are as follows:
Gene and protein expression
About 20,000 protein coding genes are expressed in human cells and 70% of these genes are expressed in the normal duodenum. Some 300 of these genes are more specifically expressed in the duodenum with very few genes expressed only in the small intestine. The corresponding specific proteins are expressed in glandular cells of the mucosa, such as fatty acid binding protein FABP6. Most of the more specifically expressed genes in the small intestine are also expressed in the duodenum, for example FABP2 and the DEFA6 protein expressed in secretory granules of Paneth cells.
Development
The small intestine develops from the midgut of the primitive gut tube. By the fifth week of embryological life, the ileum begins to grow longer at a very fast rate, forming a U-shaped fold called the primary intestinal loop. The loop grows so fast in length that it outgrows the abdomen and protrudes through the umbilicus. By week 10, the loop retracts back into the abdomen. Between weeks six and ten the small intestine rotates anticlockwise, as viewed from the front of the embryo. It rotates a further 180 degrees after it has moved back into the abdomen. This process creates the twisted shape of the large intestine.
Function
Food from the stomach is allowed into the duodenum through the pylorus by a muscle called the pyloric sphincter.
Digestion
The small intestine is where most chemical digestion takes place. Many of the digestive enzymes that act in the small intestine are secreted by the pancreas and liver and enter the small intestine via the pancreatic duct. Pancreatic enzymes and bile from the gallbladder enter the small intestine in response to the hormone cholecystokinin, which is produced in the response to the presence of nutrients. Secretin, another hormone produced in the small intestine, causes additional effects on the pancreas, where it promotes the release of bicarbonate into the duodenum in order to neutralize the potentially harmful acid coming from the stomach.
The three major classes of nutrients that undergo digestion are proteins, lipids (fats) and carbohydrates:
Proteins are degraded into small peptides and amino acids before absorption. Chemical breakdown begins in the stomach and continues in the small intestine. Proteolytic enzymes, including trypsin and chymotrypsin, are secreted by the pancreas and cleave proteins into smaller peptides. Carboxypeptidase, a pancreatic enzyme, splits off one amino acid at a time. Aminopeptidase and dipeptidase, which are brush border enzymes, free the end amino acid products.
Lipids (fats) are degraded into fatty acids and glycerol. Pancreatic lipase breaks down triglycerides into free fatty acids and monoglycerides. Pancreatic lipase works with the help of the salts from the bile secreted by the liver and stored in the gall bladder. Bile salts attach to triglycerides to help emulsify them, which aids access by pancreatic lipase. This occurs because the lipase is water-soluble but the fatty triglycerides are hydrophobic and tend to orient towards each other and away from the watery intestinal surroundings. The bile salts emulsify the triglycerides in the watery surroundings until the lipase can break them into the smaller components that are able to enter the villi for absorption.
Some carbohydrates are degraded into simple sugars, or monosaccharides (e.g., glucose). Pancreatic amylase breaks down some carbohydrates (notably starch) into oligosaccharides. Other carbohydrates pass undigested into the large intestine for further handling by intestinal bacteria. Brush border enzymes take over from there. The most important brush border enzymes are dextrinase and glucoamylase, which further break down oligosaccharides. Other brush border enzymes are maltase, sucrase and lactase. Lactase is absent in some adult humans and, for them, lactose (a disaccharide), as well as most polysaccharides, is not digested in the small intestine. Some carbohydrates, such as cellulose, are not digested at all, despite being made of multiple glucose units. This is because cellulose is made of beta-glucose, so the bonds between its monosaccharide units are different from those in starch, which consists of alpha-glucose. Humans lack the enzyme needed to split beta-glucose bonds; that capability is reserved for herbivores and for the bacteria of the large intestine.
Absorption
Digested food is now able to pass into the blood vessels in the wall of the intestine through either diffusion or active transport. The small intestine is the site where most of the nutrients from ingested food are absorbed. The inner wall, or mucosa, of the small intestine, is lined with intestinal epithelium, a simple columnar epithelium. Structurally, the mucosa is covered in wrinkles or flaps called circular folds, which are considered permanent features in the mucosa. They are distinct from rugae which are considered non-permanent or temporary allowing for distention and contraction. From the circular folds project microscopic finger-like pieces of tissue called villi (Latin for "shaggy hair"). The individual epithelial cells also have finger-like projections known as microvilli. The functions of the circular folds, the villi, and the microvilli are to increase the amount of surface area available for the absorption of nutrients, and to limit the loss of said nutrients to intestinal fauna.
Each villus has a network of capillaries and fine lymphatic vessels called lacteals close to its surface. The epithelial cells of the villi transport nutrients from the lumen of the intestine into these capillaries (amino acids and carbohydrates) and lacteals (lipids). The absorbed substances are transported via the blood vessels to different organs of the body where they are used to build complex substances such as the proteins required by our body. The material that remains undigested and unabsorbed passes into the large intestine.
Absorption of the majority of nutrients takes place in the jejunum, with the following notable exceptions:
Iron is absorbed in the duodenum.
Folate (Vitamin B9) is absorbed in the duodenum and jejunum.
Vitamin B12 and bile salts are absorbed in the terminal ileum. Vitamin B12 will only be absorbed by the ileum after binding to a protein known as intrinsic factor.
Water is absorbed by osmosis and lipids by passive diffusion throughout the small intestine.
Sodium bicarbonate is absorbed by active transport and by glucose and amino acid co-transport.
Fructose is absorbed by facilitated diffusion.
Immunological
The small intestine supports the body's immune system. The presence of gut flora appears to contribute positively to the host's immune system.
Peyer's patches, located within the ileum of the small intestine, are an important part of the digestive tract's local immune system. They are part of the lymphatic system, and provide a site for antigens from potentially harmful bacteria or other microorganisms in the digestive tract to be sampled, and subsequently presented to the immune system.
Clinical significance
The small intestine is a complex organ, and as such, there are a very large number of possible conditions that may affect the function of the small bowel. A few of them are listed below, some of which are common, with up to 10% of people being affected at some time in their lives, while others are vanishingly rare.
Small intestine obstruction or obstructive disorders
Meconium ileus
Paralytic ileus
Volvulus
Hernia
Intussusception
Adhesions
Obstruction from external pressure
Obstruction by masses in the lumen (foreign bodies, bezoar, gallstones)
Infectious diseases
Giardiasis
Ascariasis
Tropical sprue
Tapeworm (Diphyllobothrium latum, Taenia solium, Hymenolepis nana)
Hookworm (e.g. Necator americanus, Ancylostoma duodenale)
Nematodes (e.g. Ascaris lumbricoides)
Other Protozoa (e.g. Cryptosporidium parvum, Cyclospora, Microsporidia, Entamoeba histolytica)
Bacterial infections
Enterotoxigenic Escherichia coli
Salmonella enterica
Campylobacter
Shigella
Yersinia
Clostridioides difficile (antibiotic-associated colitis, Pseudomembranous colitis)
Mycobacterium (Mycobacterium avium paratuberculosis, disseminated Mycobacterium tuberculosis)
Whipple's disease
Vibrio (cholera)
Enteric (typhoid) fever (Salmonella enterica var. typhii) and paratyphoid fever
Bacillus cereus
Clostridium perfringens (gas gangrene)
Viral infections
Rotavirus
Norovirus
Astrovirus
Adenovirus
Calicivirus
Neoplasms (cancers)
Adenocarcinoma
Carcinoid
Gastrointestinal stromal tumor (GIST)
Lymphoma
Sarcoma
Leiomyoma
Metastatic tumors, especially SCLC or melanoma
Small intestine cancer
Developmental, congenital or genetic conditions
Duodenal (intestinal) atresia
Hirschsprung's disease
Meckel's diverticulum
Pyloric stenosis
Pancreas divisum
Ectopic pancreas
Enteric duplication cyst
Situs inversus
Cystic fibrosis
Malrotation
Persistent urachus
Omphalocele
Gastroschisis
Disaccharidase (lactase) deficiencies
Primary bile acid malabsorption
Gardner syndrome
Familial adenomatous polyposis syndrome (FAP)
Other conditions
Crohn's disease, and the more general inflammatory bowel disease
Typhlitis (neutropenic colitis in the immunosuppressed)
Coeliac disease (sprue or non-tropical sprue)
Mesenteric ischemia
Embolus or thrombus of the superior mesenteric artery or the superior mesenteric vein
Arteriovenous malformation
Gastric dumping syndrome
Irritable bowel syndrome
Duodenal (peptic) ulcers
Gastrointestinal perforation
Hyperthyroidism
Diverticulitis
Radiation enterocolitis
Mesenteric cysts
Peritoneal infection
Sclerosing retroperitonitis
Small intestinal bacterial overgrowth
Endometriosis
Other animals
The small intestine is found in all tetrapods and also in teleosts, although its form and length vary enormously between species. In teleosts, it is relatively short, typically around one and a half times the length of the fish's body. It commonly has a number of pyloric caeca, small pouch-like structures along its length that help to increase the overall surface area of the organ for digesting food. There is no ileocaecal valve in teleosts, with the boundary between the small intestine and the rectum being marked only by the end of the digestive epithelium.
In tetrapods, the ileocaecal valve is always present, opening into the colon. The length of the small intestine is typically longer in tetrapods than in teleosts, but is especially so in herbivores, as well as in mammals and birds, which have a higher metabolic rate than amphibians or reptiles. The lining of the small intestine includes microscopic folds to increase its surface area in all vertebrates, but only in mammals do these develop into true villi.
The boundaries between the duodenum, jejunum, and ileum are somewhat vague even in humans, and such distinctions are either ignored when discussing the anatomy of other animals, or are essentially arbitrary.
There is no small intestine as such in non-teleost fish, such as sharks, sturgeons, and lungfish. Instead, the digestive part of the gut forms a spiral intestine, connecting the stomach to the rectum. In this type of gut, the intestine itself is relatively straight but has a long fold running along the inner surface in a spiral fashion, sometimes for dozens of turns. This valve greatly increases both the surface area and the effective length of the intestine. The lining of the spiral intestine is similar to that of the small intestine in teleosts and non-mammalian tetrapods.
In lampreys, the spiral valve is extremely small, possibly because their diet requires little digestion. Hagfish have no spiral valve at all, with digestion occurring for almost the entire length of the intestine, which is not subdivided into different regions.
Society and culture
In traditional Chinese medicine, the small intestine is a yang organ.
Additional images
References
Bibliography
Solomon et al. (2002) Biology Sixth Edition, Brooks-Cole/Thomson Learning
Townsend et al. (2004) Sabiston Textbook of Surgery, Elsevier
External links
Small intestine at the Human Protein Atlas
Abdomen
Digestive system
Organs (anatomy) | Small intestine | Biology | 3,853 |
1,227,824 | https://en.wikipedia.org/wiki/Lift%20jet | A lift jet is a lightweight jet engine installed only for upward thrust.
An early experimental program using lift engines was the Rolls-Royce Thrust Measuring Rig (TMR), nicknamed the "Flying Bedstead", first run in 1955.
In the early 1960s both the Soviet Union and Western nations considered lift engines to provide STOL or even VTOL capability to combat aircraft. The Soviet Union did concurrent testing of versions of combat aircraft using variable geometry wings or lift jets but ruled out lift jets. Problems associated with lift engines include high fuel consumption, extra weight (which is simply dead weight when the engines are not needed for lift), and taking up fuselage volume that could be used for fuel or other systems. It was decided that variable-geometry wings provided comparable advantages in take-off performance without as many penalties and the Mikoyan MiG-23 and Sukhoi Su-24 entered service.
An operational military aircraft which used lift engines was the Soviet Yakovlev Yak-38, a VTOL fighter used by the AVMF's small aircraft carriers, which were not large enough to support conventional fixed-wing aircraft.
An alternative to the lift jet for vertical thrust is the lift fan used on the STOVL Lockheed F-35B version of the U.S. Joint Strike Fighter.
See also
Lift fan
VTOL
References
Jet engines
VTOL aircraft | Lift jet | Technology | 277 |
1,082,784 | https://en.wikipedia.org/wiki/Epson | Seiko Epson Corporation, commonly known as Epson, is a Japanese multinational electronics company and one of the world's largest manufacturers of printers and information- and imaging-related equipment. Headquartered in Suwa, Nagano, Japan, the company has numerous subsidiaries worldwide and manufactures inkjet, dot matrix, thermal and laser printers for consumer, business and industrial use, scanners, laptop and desktop computers, video projectors, watches, point of sale systems, robots and industrial automation equipment, semiconductor devices, crystal oscillators, sensing systems and other associated electronic components.
The company developed as one of the manufacturing and research and development arms (formerly known as Seikosha) of the former Seiko Group, a name traditionally known for manufacturing Seiko timepieces. Seiko Epson was one of the major companies in the Seiko Group, but is neither a subsidiary nor an affiliate of Seiko Group Corporation.
History
Origins
The roots of Seiko Epson Corporation go back to a company called Daiwa Kogyo, Ltd. which was founded in May 1942 by Hisao Yamazaki, a local clock shop owner and former employee of K. Hattori, in Suwa, Nagano. Daiwa Kogyo was supported by an investment from the Hattori family (founder of the Seiko Group) and began as a manufacturer of watch parts for Daini Seikosha (currently Seiko Instruments). The company started operation in a renovated miso storehouse with 22 employees.
In 1943, Daini Seikosha established a factory in Suwa for manufacturing Seiko watches with Daiwa Kogyo. In 1959, the Suwa Factory was split up and merged into Daiwa Kogyo to form Suwa Seikosha Co., Ltd: the forerunner of the Seiko Epson Corporation. The company has developed many timepiece technologies, such as the world's first portable quartz timer (Seiko QC-951) in 1963, the world's first quartz watch (Seiko Quartz Astron 35SQ) in 1969, the first automatic power-generating quartz watch (Seiko Auto-Quartz) in 1988, and the Spring Drive watch movement in 1999.
The watch business is the root of the company's ultra-precision machining and micromechatronics technologies and still a major business for Seiko Epson, although it accounts for a low percentage of total revenues. Watches made by the company are sold through the Seiko Watch Corporation, a subsidiary of Seiko Group. The watch brand Orient Watch, also known as Orient Star, has been owned by Epson since 2009 and was fully integrated into the company in 2017.
Printers
In 1961, Suwa Seikosha established a company called Shinshu Seiki Co. as a subsidiary to supply precision parts for Seiko watches. When Seiko was selected to be the official time keeper for the 1964 Summer Olympics in Tokyo, a printing timer was required to time events, and Shinshu Seiki started developing an electronic printer.
In September 1968, Shinshu Seiki launched the world's first mini-printer, the EP-101 ("EP" for Electronic Printer), which was soon incorporated into many calculators. In June 1975, the name Epson was coined for the next generation of printers based on the EP-101, which was released to the public. The Epson name was coined by joining the initials EP (Electronic Printer) and the word son, making "Epson" mean "Electronic Printer's Son". In April of the same year, Epson America Inc. was established to sell printers for Shinshu Seiki Co.
In June 1978, the TX-80 (TP-80), an eighty-column dot matrix printer, was released to the market and was mainly used as a system printer for the Commodore PET computer. After two years of further development, an improved model, the MX-80 (MP-80), was launched in October 1980. It was soon advertised as the best selling printer in the United States. By 1982 Epson reportedly had 75% of the printer market; its products were so beloved that Steve Wozniak joked, "I doubt we'll ever bomb Japan as long as they make Epson printers".
In July 1982, Shinshu Seiki officially named itself the Epson Corporation and launched the world's first handheld computer, the HX-20 (HC-20), and in May 1983, the world's first portable colour LCD TV was developed and launched by the company.
In November 1985, Suwa Seikosha Co., Ltd. and the Epson Corporation merged to form Seiko Epson Corporation.
The company developed the Micro Piezo inkjet technology, which used a piezoelectric crystal in each nozzle and did not heat the ink at the print head while spraying it onto the page, and released the Epson MJ-500 inkjet cartridge for the Epson Stylus 800 printer in March 1993. Shortly after, in 1994, Epson released the first 720 dpi colour inkjet printer, the Epson Stylus Color (P860A), utilizing the Micro Piezo head technology. Newer models of the Stylus series employed Epson's special DURABrite ink.
In 1994, Epson started to outsource sales representatives to help sell their products in retail stores in the United States. The same year, they started the Epson Weekend Warrior sales program. The purpose of the program was to help improve sales, improve retail sales reps' knowledge of Epson products, and to address Epson customer service in a retail environment. Reps were assigned on weekend shifts, typically around 12–20 hours a week. Epson started the Weekend Warrior program with TMG Marketing (now Mosaic Sales Solutions), and later with Keystone Marketing Inc, then returned to Mosaic, and switched again to Campaigners Inc. on June 24, 2007 after the Mosaic contract expired. The sales reps of Campaigners, Inc. are not outsourced; Epson hired rack jobbers to ensure retailers displayed products properly, freeing up its regular sales force to concentrate on profitable sales solutions to value-added resellers and system integrators, leaving "retail" to reps who did not require sales skills.
Personal computers
Epson entered the personal computer market in 1983 with the QX-10, a CP/M-compatible Z80 machine. By 1986, the company had shifted to the growing PC market with the Equity line. Epson also manufactured and sold NEC PC-9801 clones in Japan. Epson withdrew from the international PC market in 1996. The company still produces and sells PCs in Japan as of 2024.
21st century
In June 2003, the company became public following its listing on the 1st section of the Tokyo Stock Exchange. Since 2017, the company has been a constituent of the Nikkei Stock Average index. Although Seiko Group Corporation (formerly known as K. Hattori, Hattori Seiko, and Seiko Holdings) and the key members of the Hattori family still hold approximately 10% of the outstanding shares of Seiko Epson, the company is managed and operated completely independently from Seiko Group.
Seiko Watch Corporation, a division of Seiko Group, produces Seiko timepieces in-house through its subsidiaries and delegates the manufacture of some of its high-end watches (Seiko Astron, Grand Seiko, Credor, etc.) to Epson. The company makes some of Seiko's highest-grade watches at the Micro Artist Studio inside its Shiojiri Plant in Shiojiri, Nagano. Beside Seiko timepieces, Epson develops, designs, manufactures, markets, and sells watches under its own brands such as Trume, Orient, and Orient Star.
In 2004, Epson introduced their R-D1 (the first digital rangefinder camera on the market), which supports the Leica M mount and Leica M39 mount lenses with an adapter ring. Because its sensor is smaller than that of the standard 35 mm film frame, lenses mounted on the R-D1 have a narrower field of view by a factor of 1.53. In 2006, the R-D1 was replaced by the R-D1s, a cheaper version with identical hardware. Epson has released a firmware patch to bring the R-D1 up to the full functionality of its successor, being the first digital camera manufacturer to make such an upgrade available for free.
In September 2012, Epson introduced a printer called the Expression Premium XP-800 Small-in-One, with the ability to print wirelessly. The Expression brand name has since been used on various models of scanners. In the third quarter of 2012, Epson's global market share in the sale of printers, copiers and multifunction devices amounted to 15.20 percent.
In September 2015, Epson debuted the ET-4550 printer, which enables the user to pour ink into separate inkwells from ink bottles instead of cartridges.
Epson is also involved in the smartglasses market. Since 2016, the company has offered three different models: the Moverio BT-100, the Moverio BT-200, and the Moverio Pro BT-2000, the last of which is an enterprise-oriented, upgraded version of the BT-200 with stereoscopic cameras. The company was also the first to release consumer smartglasses with transparent optics, which were popular with drone pilots for providing a first-person view while still being able to see the drone in the sky.
In 2016, Epson presented the large-format SureColor SC-P10000 ink printer; it prints with inks in ten colours on paper up to 44 inches (about 1.1 m) wide.
ESC/P
To control its printers, Epson introduced a printer control language, the Epson Standard Code for Printers (or ESC/P). It became a de facto industry standard for controlling print formatting during the era of dot matrix printers, whose popularity was initially started by the Epson MX-80.
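To make the idea concrete, the following minimal Python sketch composes a raw ESC/P byte stream. The command bytes used here (ESC @ to initialize, ESC E and ESC F to switch bold on and off) are commonly documented ESC/P codes, but exact command support varies by printer model, so treat the byte values as assumptions to be checked against the relevant printer manual.

    # Minimal sketch: compose a raw ESC/P print job as bytes.
    # Assumed command bytes (verify against the printer's ESC/P reference):
    #   ESC @  (0x1B 0x40) - initialize printer
    #   ESC E  (0x1B 0x45) - select bold font
    #   ESC F  (0x1B 0x46) - cancel bold font
    ESC = b"\x1b"

    def escp_job(lines):
        """Return an ESC/P byte stream printing each (text, bold) line."""
        job = ESC + b"@"                      # reset printer state
        for text, bold in lines:
            if bold:
                job += ESC + b"E" + text.encode("ascii") + ESC + b"F"
            else:
                job += text.encode("ascii")
            job += b"\r\n"                    # carriage return + line feed
        return job

    if __name__ == "__main__":
        data = escp_job([("Invoice #42", True), ("Total: 19.99", False)])
        print(data)  # in practice this would be written to the printer device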
Robots
Epson Robots is the robotics design and manufacturing department of Epson, which began producing robots in 1980. Seiko Epson also produces some microcontrollers, such as the S1C63.
Ink cartridge controversies
In July 2003, a Netherlands-based consumer association advised its 640,000 members to boycott Epson inkjet printers. The organisation alleged that Epson customers were unfairly charged for ink they could never use. Later that month, however, the group retracted its call for a nationwide boycott and issued a statement conceding that residual ink left in Epson cartridges was necessary for the printers to function properly.
Epson designed ink to be left in the cartridges (having done so ever since the introduction of piezoelectric print heads) due to the way the capping mechanism worked. If the capping mechanism dries out, then the heads risk getting clogged, necessitating expensive repairs. The reason that the Dutch consumer association retracted their statement was that, as pointed out, Epson had made a statement regarding how many pages (at usually a 5% coverage of an A4 sheet of paper) each cartridge could sustain for printing.
Nonetheless, Epson America, Inc. settled a class action lawsuit brought before the Los Angeles Superior Court. It did not admit guilt, but agreed to refund $45 to anyone who purchased an Epson inkjet printer after April 8, 1999 (at least $20 of which must be used at Epson's e-Store).
According to IDG News Service, Epson filed a complaint with the U.S. International Trade Commission (ITC) in February 2006 against 24 companies that manufactured, imported, or distributed Epson-compatible ink cartridges for resale in the U.S. On March 30, 2007, ITC judge Paul Luckern issued an initial determination that the cartridges in question did infringe upon Epson's patents. He also recommended those companies and others to be barred from manufacturing, importing, or reselling Epson cartridges in the U.S., said Epson.
In 2015, it emerged that Epson printers reported cartridges to be empty when in fact up to 20% of their ink remained. As in 2003, the company responded that the residual ink was needed to keep the printers functioning properly.
See also
Inkjet technology
References
External links
Japanese companies established in 1942
2003 initial public offerings
Manufacturing companies based in Tokyo
Electronics companies established in 1942
Companies listed on the Tokyo Stock Exchange
Computer companies of Japan
Computer hardware companies
Computer peripheral companies
Computer printer companies
Computer systems companies
Display technology companies
Electronics companies of Japan
Japanese brands
Point of sale companies
Robotics companies of Japan
Watch brands
Watch manufacturing companies of Japan | Epson | Technology | 2,609 |
16,065,330 | https://en.wikipedia.org/wiki/Pseudin | Pseudin is a peptide derived from Pseudis paradoxa. Pseudins have some antimicrobial function.
There are several different forms:
pseudin-1
pseudin-2 – has been proposed as a treatment for type 2 diabetes.
pseudin-4
Pseudin-2
Pseudin-2 is the most abundant version of the pseudins found on the skin of the paradoxical frog. The primary sequence reads as GLNALKKVFQGIHEAIKLINNHVQ. Its secondary/tertiary structure consists of one cationic amphipathic α-helix.
Antibacterial activity
Pseudin-2 has been shown to have potent antibacterial activity combined with relatively low cytotoxicity. The cytotoxicity of a peptide can be measured by its effect on human erythrocytes: it takes a lower concentration of Pseudin-2 to kill bacteria or fungi such as E. coli, S. aureus, and C. albicans than to kill human erythrocytes. It is hypothesized that Pseudin-2 binding to the cell membrane of the bacteria results in a conformational change in which the peptide forms an α-helical shape, which allows it to perform cell lysis by inserting itself into the hydrophobic portion of the membrane. This mechanism is applicable to similar amphipathic α-helical peptides created by many frog species, although most of these peptides are not very potent against bacteria. By increasing the cationicity and amphipathic nature of the molecule, it is possible to create analogues of Pseudin-2 that are even more selective towards bacteria. This is done by substituting leucine residues with lysine residues and glycine residues with proline residues, which results in two shorter α-helices (linked by the substituted proline) that are more attuned to penetrating bacterial cell membranes.
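As a rough, illustrative companion to the cationicity argument above, the short Python sketch below estimates the net side-chain charge of the pseudin-2 sequence given earlier and of a hypothetical Lys-for-Leu analogue; the analogue and the simple charge-counting rule are assumptions made for illustration, not data from the literature.

    # Sketch: estimate the net side-chain charge of pseudin-2 from its primary
    # sequence, counting basic (K, R) and acidic (D, E) residues; histidine is
    # treated as neutral at physiological pH. The Lys-for-Leu analogue is a
    # hypothetical illustration of the substitution strategy described above.
    SEQ = "GLNALKKVFQGIHEAIKLINNHVQ"  # pseudin-2, as given in the text

    def net_charge(seq):
        """Crude side-chain charge estimate at pH ~7 (termini ignored)."""
        return sum(seq.count(r) for r in "KR") - sum(seq.count(r) for r in "DE")

    analogue = SEQ.replace("L", "K", 2)  # hypothetical: two Leu -> Lys swaps

    print(SEQ, net_charge(SEQ))            # pseudin-2: +2
    print(analogue, net_charge(analogue))  # analogue: +4, more cationic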
See also
Exenatide
References
Peptides | Pseudin | Chemistry,Biology | 423 |
1,193,525 | https://en.wikipedia.org/wiki/Normal%20order | In quantum field theory a product of quantum fields, or equivalently their creation and annihilation operators, is usually said to be normal ordered (also called Wick order) when all creation operators are to the left of all annihilation operators in the product. The process of putting a product into normal order is called normal ordering (also called Wick ordering). The terms antinormal order and antinormal ordering are analogously defined, where the annihilation operators are placed to the left of the creation operators.
Normal ordering of a product of quantum fields or creation and annihilation operators can also be defined in many other ways. Which definition is most appropriate depends on the expectation values needed for a given calculation. Most of this article uses the most common definition of normal ordering as given above, which is appropriate when taking expectation values using the vacuum state of the creation and annihilation operators.
The process of normal ordering is particularly important for a quantum mechanical Hamiltonian. When quantizing a classical Hamiltonian there is some freedom when choosing the operator order, and these choices lead to differences in the ground state energy. This is why the process can also be used to eliminate the infinite vacuum energy of a quantum field.
Notation
If $\hat{O}$ denotes an arbitrary product of creation and/or annihilation operators (or equivalently, quantum fields), then the normal ordered form of $\hat{O}$ is denoted by $:\hat{O}:$.
An alternative notation is $\mathcal{N}(\hat{O})$.
Note that normal ordering is a concept that only makes sense for products of operators. Attempting to apply normal ordering to a sum of operators is not useful as normal ordering is not a linear operation.
Bosons
Bosons are particles which satisfy Bose–Einstein statistics. We will now examine the normal ordering of bosonic creation and annihilation operator products.
Single bosons
If we start with only one type of boson there are two operators of interest:
$\hat{b}^\dagger$: the boson's creation operator.
$\hat{b}$: the boson's annihilation operator.
These satisfy the commutator relationship
$$[\hat{b},\,\hat{b}^\dagger] = \hat{b}\,\hat{b}^\dagger - \hat{b}^\dagger\,\hat{b} = 1,$$
where $[A,B]$ denotes the commutator. We may rewrite the last one as: $\hat{b}\,\hat{b}^\dagger = \hat{b}^\dagger\,\hat{b} + 1$.
Examples
1. We'll consider the simplest case first. This is the normal ordering of $\hat{b}^\dagger\,\hat{b}$:
$$:\hat{b}^\dagger\,\hat{b}:\; = \hat{b}^\dagger\,\hat{b}.$$
The expression has not been changed because it is already in normal order - the creation operator $\hat{b}^\dagger$ is already to the left of the annihilation operator $\hat{b}$.
2. A more interesting example is the normal ordering of $\hat{b}\,\hat{b}^\dagger$:
$$:\hat{b}\,\hat{b}^\dagger:\; = \hat{b}^\dagger\,\hat{b}.$$
Here the normal ordering operation has reordered the terms by placing $\hat{b}^\dagger$ to the left of $\hat{b}$.
These two results can be combined with the commutation relation obeyed by $\hat{b}$ and $\hat{b}^\dagger$ to get
$$\hat{b}\,\hat{b}^\dagger = \hat{b}^\dagger\,\hat{b} + 1 = \;:\hat{b}\,\hat{b}^\dagger: + 1,$$
or
$$\hat{b}\,\hat{b}^\dagger - :\hat{b}\,\hat{b}^\dagger:\; = 1.$$
This equation is used in defining the contractions used in Wick's theorem.
3. An example with multiple operators is:
$$:\hat{b}^\dagger\,\hat{b}\,\hat{b}\,\hat{b}^\dagger\,\hat{b}\,\hat{b}^\dagger\,\hat{b}:\; = \hat{b}^\dagger\,\hat{b}^\dagger\,\hat{b}^\dagger\,\hat{b}\,\hat{b}\,\hat{b}\,\hat{b} = (\hat{b}^\dagger)^3\,\hat{b}^4.$$
4. A simple example shows that normal ordering cannot be extended by linearity from the monomials to all operators in a self-consistent way. Assume that we can apply the commutation relations to obtain:
$$:\hat{b}\,\hat{b}^\dagger:\; = \;:\hat{b}^\dagger\,\hat{b} + 1:.$$
Then, by linearity,
$$:\hat{b}^\dagger\,\hat{b} + 1:\; = \;:\hat{b}^\dagger\,\hat{b}: + :1:\; = \hat{b}^\dagger\,\hat{b} + 1 \neq \hat{b}^\dagger\,\hat{b} = \;:\hat{b}\,\hat{b}^\dagger:,$$
a contradiction.
The implication is that normal ordering is not a linear function on operators, but on the free algebra generated by the operators, i.e. the operators do not satisfy the canonical commutation relations while inside the normal ordering (or any other ordering operator like time-ordering, etc).
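The bookkeeping in the examples above can be automated. Below is a small, self-contained Python sketch (added here for illustration, not part of the original article) that expands a word in a single bosonic mode into normal-ordered terms by repeatedly applying $\hat{b}\,\hat{b}^\dagger = \hat{b}^\dagger\,\hat{b} + 1$; the normal-ordered product $:w:$ itself is just the fully reordered word, while the expansion exhibits the extra c-number pieces that normal ordering discards.

    # Sketch: expand a product of single-mode boson operators into
    # normal-ordered terms. 'B' stands for the creation operator b-dagger,
    # 'b' for the annihilation operator; the rewrite rule bB -> Bb + 1 is
    # applied until every surviving word has all 'B's left of all 'b's.
    from collections import defaultdict

    def normal_order_expand(word):
        terms = defaultdict(int)
        terms[word] = 1
        done = defaultdict(int)
        while terms:
            new_terms = defaultdict(int)
            for w, c in terms.items():
                i = w.find("bB")          # leftmost out-of-order pair
                if i < 0:
                    done[w] += c          # already normal ordered
                else:
                    new_terms[w[:i] + "Bb" + w[i + 2:]] += c  # commuted term
                    new_terms[w[:i] + w[i + 2:]] += c         # the "+1" term
            terms = new_terms
        return dict(done)

    # Example 2 from the text: b b-dagger = Bb + 1.
    print(normal_order_expand("bB"))    # {'Bb': 1, '': 1}
    print(normal_order_expand("bBbB"))  # {'BBbb': 1, 'Bb': 3, '': 1}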
Multiple bosons
If we now consider $N$ different bosons there are $2N$ operators:
$\hat{b}_i^\dagger$: the $i$-th boson's creation operator.
$\hat{b}_i$: the $i$-th boson's annihilation operator.
Here $i = 1, \ldots, N$.
These satisfy the commutation relations:
$$[\hat{b}_i,\,\hat{b}_j^\dagger] = \delta_{ij}, \qquad [\hat{b}_i^\dagger,\,\hat{b}_j^\dagger] = [\hat{b}_i,\,\hat{b}_j] = 0,$$
where $i, j = 1, \ldots, N$ and $\delta_{ij}$ denotes the Kronecker delta.
These may be rewritten as:
$$\hat{b}_i\,\hat{b}_j^\dagger = \hat{b}_j^\dagger\,\hat{b}_i + \delta_{ij}, \qquad \hat{b}_i^\dagger\,\hat{b}_j^\dagger = \hat{b}_j^\dagger\,\hat{b}_i^\dagger, \qquad \hat{b}_i\,\hat{b}_j = \hat{b}_j\,\hat{b}_i.$$
Examples
1. For two different bosons ($N = 2$) we have
$$:\hat{b}_1\,\hat{b}_2^\dagger:\; = \hat{b}_2^\dagger\,\hat{b}_1, \qquad :\hat{b}_1^\dagger\,\hat{b}_2:\; = \hat{b}_1^\dagger\,\hat{b}_2.$$
2. For three different bosons ($N = 3$) we have
$$:\hat{b}_1\,\hat{b}_2^\dagger\,\hat{b}_3:\; = \hat{b}_2^\dagger\,\hat{b}_1\,\hat{b}_3 = \hat{b}_2^\dagger\,\hat{b}_3\,\hat{b}_1.$$
Notice that since (by the commutation relations) $\hat{b}_1\,\hat{b}_3 = \hat{b}_3\,\hat{b}_1$ the order in which we write the annihilation operators does not matter.
Bosonic operator functions
Normal ordering of bosonic operator functions $F(\hat{n})$, with occupation number operator $\hat{n} = \hat{b}^\dagger\,\hat{b}$, can be accomplished using (falling) factorial powers and Newton series instead of Taylor series.
It is easy to show
that factorial powers $\hat{n}^{\underline{k}} = \hat{n}(\hat{n}-1)\cdots(\hat{n}-k+1)$ are equal to normal-ordered (raw) powers $\hat{n}^k$ and are therefore normal ordered by construction,
$$\hat{n}^{\underline{k}} = (\hat{b}^\dagger)^k\,\hat{b}^k = \;:\hat{n}^k:,$$
such that the Newton series expansion
$$\tilde{F}(\hat{n}) = \sum_{k=0}^{\infty} \Delta^k\tilde{F}(0)\,\frac{\hat{n}^{\underline{k}}}{k!}$$
of an operator function $\tilde{F}(\hat{n})$, with $k$-th forward difference $\Delta^k\tilde{F}(0)$ at $n = 0$, is always normal ordered. Here, the eigenvalue equation $\hat{n}|n\rangle = n|n\rangle$ relates $\hat{n}$ and $n$.
As a consequence, the normal-ordered Taylor series of an arbitrary function $F(\hat{n})$ is equal to the Newton series of an associated function $\tilde{F}(\hat{n})$, fulfilling
$$\tilde{F}(\hat{n}) = \;:F(\hat{n}):,$$
if the series coefficients of the Taylor series of $F(x)$, with continuous $x$, match the coefficients of the Newton series of $\tilde{F}(n)$, with integer $n$:
$$F(x) = \sum_{k=0}^{\infty} F_k\,x^k, \qquad \tilde{F}(n) = \sum_{k=0}^{\infty} F_k\,n^{\underline{k}}, \qquad F_k = \frac{1}{k!}\,\partial_x^k F(0),$$
with $k$-th partial derivative $\partial_x^k F(0)$ at $x = 0$.
The functions $F$ and $\tilde{F}$ are related through the so-called normal-order transform $\tilde{F} = \mathcal{N}[F]$, which can be expressed in terms of the Mellin transform; see the cited literature for details.
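The identity $(\hat{b}^\dagger)^k\,\hat{b}^k = \hat{n}^{\underline{k}}$ can be checked numerically with truncated matrix representations, as in the following sketch (an illustration added here; the truncation dimension is arbitrary):

    # Sketch: verify (b-dagger)^k b^k = n(n-1)...(n-k+1) on a truncated Fock space.
    import numpy as np

    D = 12                                    # truncation dimension
    b = np.diag(np.sqrt(np.arange(1, D)), 1)  # annihilation: b|n> = sqrt(n)|n-1>
    bd = b.T                                  # creation operator
    n = bd @ b                                # number operator, diag(0..D-1)

    k = 3
    lhs = np.linalg.matrix_power(bd, k) @ np.linalg.matrix_power(b, k)
    rhs = n @ (n - np.eye(D)) @ (n - 2 * np.eye(D))  # falling factorial n(n-1)(n-2)

    # agreement is exact here because b^k followed by bd^k maps the truncated
    # space into itself
    print(np.allclose(lhs, rhs))  # True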
Fermions
Fermions are particles which satisfy Fermi–Dirac statistics. We will now examine the normal ordering of fermionic creation and annihilation operator products.
Single fermions
For a single fermion there are two operators of interest:
$\hat{f}^\dagger$: the fermion's creation operator.
$\hat{f}$: the fermion's annihilation operator.
These satisfy the anticommutator relationships
$$\{\hat{f},\,\hat{f}^\dagger\} = 1, \qquad \{\hat{f}^\dagger,\,\hat{f}^\dagger\} = \{\hat{f},\,\hat{f}\} = 0,$$
where $\{A,B\} \equiv AB + BA$ denotes the anticommutator. These may be rewritten as
$$\hat{f}\,\hat{f}^\dagger = 1 - \hat{f}^\dagger\,\hat{f}, \qquad (\hat{f}^\dagger)^2 = \hat{f}^2 = 0.$$
To define the normal ordering of a product of fermionic creation and annihilation operators we must take into account the number of interchanges between neighbouring operators. We get a minus sign for each such interchange.
Examples
1. We again start with the simplest cases:
$$:\hat{f}^\dagger\,\hat{f}:\; = \hat{f}^\dagger\,\hat{f}.$$
This expression is already in normal order so nothing is changed. In the reverse case, we introduce a minus sign because we have to change the order of two operators:
$$:\hat{f}\,\hat{f}^\dagger:\; = -\hat{f}^\dagger\,\hat{f}.$$
These can be combined, along with the anticommutation relations, to show
$$\hat{f}\,\hat{f}^\dagger = 1 - \hat{f}^\dagger\,\hat{f} = 1 + :\hat{f}\,\hat{f}^\dagger:,$$
or
$$\hat{f}\,\hat{f}^\dagger - :\hat{f}\,\hat{f}^\dagger:\; = 1.$$
This equation, which is in the same form as the bosonic case above, is used in defining the contractions used in Wick's theorem.
2. The normal order of any more complicated cases gives zero because there will be at least one creation or annihilation operator appearing twice. For example:
$$:\hat{f}\,\hat{f}^\dagger\,\hat{f}\,\hat{f}^\dagger:\; = -\hat{f}^\dagger\,\hat{f}^\dagger\,\hat{f}\,\hat{f} = 0.$$
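Since the single-fermion algebra has a faithful 2×2 matrix representation, these relations are easy to verify numerically; the sketch below (added for illustration) checks the anticommutator, the nilpotency used in example 2, and the contraction identity.

    # Sketch: check the single-fermion algebra with 2x2 matrices acting on
    # the basis {|0>, |1>}: f|1> = |0>, fd|0> = |1>.
    import numpy as np

    f = np.array([[0.0, 1.0],
                  [0.0, 0.0]])   # annihilation operator
    fd = f.T                     # creation operator

    anticom = f @ fd + fd @ f
    print(np.allclose(anticom, np.eye(2)))                  # {f, fd} = 1 -> True
    print(np.allclose(f @ f, 0), np.allclose(fd @ fd, 0))   # f^2 = fd^2 = 0 -> True

    # The contraction f fd - :f fd: = f fd + fd f = 1 (a c-number), as in the text.
    print(np.allclose(f @ fd - (-(fd @ f)), np.eye(2)))     # True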
Multiple fermions
For $N$ different fermions there are $2N$ operators:
$\hat{f}_i^\dagger$: the $i$-th fermion's creation operator.
$\hat{f}_i$: the $i$-th fermion's annihilation operator.
Here $i = 1, \ldots, N$.
These satisfy the anti-commutation relations:
$$\{\hat{f}_i,\,\hat{f}_j^\dagger\} = \delta_{ij}, \qquad \{\hat{f}_i^\dagger,\,\hat{f}_j^\dagger\} = \{\hat{f}_i,\,\hat{f}_j\} = 0,$$
where $i, j = 1, \ldots, N$ and $\delta_{ij}$ denotes the Kronecker delta.
These may be rewritten as:
$$\hat{f}_i\,\hat{f}_j^\dagger = \delta_{ij} - \hat{f}_j^\dagger\,\hat{f}_i, \qquad \hat{f}_i^\dagger\,\hat{f}_j^\dagger = -\hat{f}_j^\dagger\,\hat{f}_i^\dagger, \qquad \hat{f}_i\,\hat{f}_j = -\hat{f}_j\,\hat{f}_i.$$
When calculating the normal order of products of fermion operators we must take into account the number of interchanges of neighbouring operators required to rearrange the expression. It is as if we pretend the creation and annihilation operators anticommute and then we reorder the expression to ensure the creation operators are on the left and the annihilation operators are on the right - all the time taking account of the anticommutation relations.
Examples
1. For two different fermions ($N = 2$) we have
$$:\hat{f}_1^\dagger\,\hat{f}_2:\; = \hat{f}_1^\dagger\,\hat{f}_2.$$
Here the expression is already normal ordered so nothing changes.
$$:\hat{f}_2\,\hat{f}_1^\dagger:\; = -\hat{f}_1^\dagger\,\hat{f}_2.$$
Here we introduce a minus sign because we have interchanged the order of two operators.
Note that the order in which we write the operators here, unlike in the bosonic case, does matter.
2. For three different fermions ($N = 3$) we have
$$:\hat{f}_1\,\hat{f}_2\,\hat{f}_3^\dagger:\; = \hat{f}_3^\dagger\,\hat{f}_1\,\hat{f}_2 = -\hat{f}_3^\dagger\,\hat{f}_2\,\hat{f}_1.$$
Notice that since (by the anticommutation relations) $\hat{f}_1\,\hat{f}_2 = -\hat{f}_2\,\hat{f}_1$ the order in which we write the operators does matter in this case.
Similarly we have
$$:\hat{f}_2\,\hat{f}_1\,\hat{f}_3^\dagger:\; = \hat{f}_3^\dagger\,\hat{f}_2\,\hat{f}_1 = -\hat{f}_3^\dagger\,\hat{f}_1\,\hat{f}_2.$$
Uses in quantum field theory
The vacuum expectation value of a normal ordered product of creation and annihilation operators is zero. This is because, denoting the vacuum state by $|0\rangle$, the creation and annihilation operators satisfy
$$\hat{a}|0\rangle = 0 \qquad \text{and} \qquad \langle 0|\hat{a}^\dagger = 0$$
(here $\hat{a}^\dagger$ and $\hat{a}$ are creation and annihilation operators (either bosonic or fermionic)).
Let $\hat{O}$ denote a non-empty product of creation and annihilation operators. Although this may satisfy
$$\langle 0|\hat{O}|0\rangle \neq 0,$$
we have
$$\langle 0|:\hat{O}:|0\rangle = 0.$$
Normal ordered operators are particularly useful when defining a quantum mechanical Hamiltonian. If the Hamiltonian of a theory is in normal order then the ground state energy will be zero:
$$\langle 0|\hat{H}|0\rangle = 0.$$
Free fields
With two free fields φ and χ,
$$:\phi(x)\chi(y):\; = \phi(x)\chi(y) - \langle 0|\phi(x)\chi(y)|0\rangle,$$
where $|0\rangle$ is again the vacuum state. Each of the two terms on the right hand side typically blows up in the limit as y approaches x but the difference between them has a well-defined limit. This allows us to define :φ(x)χ(x):.
Wick's theorem
Wick's theorem states the relationship between the time ordered product of fields and a sum of normal ordered products. This may be expressed for even $n$ as
$$\begin{aligned} T[\phi(x_1)\cdots\phi(x_n)] =\; &:\phi(x_1)\cdots\phi(x_n):\\ &+ \sum \langle 0|T[\phi(x_i)\phi(x_j)]|0\rangle\; :\phi(x_1)\cdots\widehat{\phi(x_i)}\cdots\widehat{\phi(x_j)}\cdots\phi(x_n):\\ &+ \cdots\\ &+ \sum \langle 0|T[\phi(x_{i_1})\phi(x_{j_1})]|0\rangle \cdots \langle 0|T[\phi(x_{i_{n/2}})\phi(x_{j_{n/2}})]|0\rangle, \end{aligned}$$
where a caret over a field denotes that it is omitted from the normal-ordered product, and the summation is over all the distinct ways in which one may pair up fields. The result for odd $n$ looks the same except for the last line, which reads
$$\sum \langle 0|T[\phi(x_{i_1})\phi(x_{j_1})]|0\rangle \cdots \langle 0|T[\phi(x_{i_{(n-1)/2}})\phi(x_{j_{(n-1)/2}})]|0\rangle\,\phi(x_k),$$
with one field $\phi(x_k)$ left unpaired in each term.
This theorem provides a simple method for computing vacuum expectation values of time ordered products of operators and was the motivation behind the introduction of normal ordering.
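For concreteness, the simplest nontrivial case is $n = 2$, where the theorem reduces to
$$T[\phi(x_1)\phi(x_2)] = \;:\phi(x_1)\phi(x_2):\; + \langle 0|T[\phi(x_1)\phi(x_2)]|0\rangle,$$
so taking the vacuum expectation value of both sides annihilates the normal-ordered term and leaves only the propagator. For $n = 4$, the fully contracted last line consists of the three distinct pairings $(12)(34)$, $(13)(24)$ and $(14)(23)$.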
Alternative definitions
The most general definition of normal ordering involves splitting all quantum fields into two parts (for example see Evans and Steer 1996)
$$\phi_i(x) = \phi_i^+(x) + \phi_i^-(x).$$
In a product of fields, the fields are split into the two parts and the $\phi^-(x)$ parts are moved so as to be always to the left of all the $\phi^+(x)$ parts. In the usual case considered in the rest of the article, the $\phi^-(x)$ contains only creation operators, while the $\phi^+(x)$ contains only annihilation operators. As this is a mathematical identity, one can split fields in any way one likes. However, for this to be a useful procedure one demands that the normal ordered product of any combination of fields has zero expectation value
$$\langle :\phi_1(x_1)\phi_2(x_2)\cdots\phi_n(x_n): \rangle = 0.$$
It is also important for practical calculations that all the commutators (anti-commutator for fermionic fields) of all $\phi_i^+$ and $\phi_j^-$ are all c-numbers. These two properties mean that we can apply Wick's theorem in the usual way, turning expectation values of time-ordered products of fields into products of c-number pairs, the contractions. In this generalised setting, the contraction is defined to be the difference between the time-ordered product and the normal ordered product of a pair of fields.
The simplest example is found in the context of thermal quantum field theory (Evans and Steer 1996). In this case the expectation values of interest are statistical ensembles, traces over all states weighted by $e^{-\beta\hat{H}}$. For instance, for a single bosonic quantum harmonic oscillator we have that the thermal expectation value of the number operator is simply the Bose–Einstein distribution
$$\langle\hat{b}^\dagger\,\hat{b}\rangle = \frac{\mathrm{Tr}\,(e^{-\beta\omega\hat{b}^\dagger\hat{b}}\,\hat{b}^\dagger\hat{b})}{\mathrm{Tr}\,(e^{-\beta\omega\hat{b}^\dagger\hat{b}})} = \frac{1}{e^{\beta\omega} - 1}.$$
So here the number operator $\hat{b}^\dagger\,\hat{b}$ is normal ordered in the usual sense used in the rest of the article yet its thermal expectation values are non-zero. Applying Wick's theorem and doing calculation with the usual normal ordering in this thermal context is possible but computationally impractical. The solution is to define a different ordering, such that the $\phi_i^+$ and $\phi_i^-$ are linear combinations of the original annihilation and creation operators. The combinations are chosen to ensure that the thermal expectation values of normal ordered products are always zero so the split chosen will depend on the temperature.
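The Bose–Einstein result quoted above is easy to confirm numerically by evaluating the thermal trace over a truncated set of occupation numbers, as in this short illustrative Python sketch (the truncation just needs to be large compared with the mean occupation):

    # Sketch: thermal expectation of the number operator for one bosonic mode,
    # <n> = Tr(e^{-beta*omega*n} n) / Tr(e^{-beta*omega*n}) = 1/(e^{beta*omega}-1).
    import numpy as np

    beta_omega = 0.7                    # beta * omega (dimensionless)
    n = np.arange(200)                  # truncated occupation numbers 0..199
    weights = np.exp(-beta_omega * n)   # Boltzmann weights e^{-beta*omega*n}

    thermal_avg = np.sum(weights * n) / np.sum(weights)
    exact = 1.0 / np.expm1(beta_omega)  # Bose-Einstein distribution

    print(thermal_avg, exact)           # agree to numerical precision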
References
F. Mandl, G. Shaw, Quantum Field Theory, John Wiley & Sons, 1984.
S. Weinberg, The Quantum Theory of Fields (Volume I) Cambridge University Press (1995)
T.S. Evans, D.A. Steer, Wick's theorem at finite temperature, Nucl. Phys B 474, 481-496 (1996) arXiv:hep-ph/9601268
Quantum field theory | Normal order | Physics | 2,256 |
9,756,443 | https://en.wikipedia.org/wiki/David%20Goeddel | David V. Goeddel (born 1951) is an American molecular biologist who, employed at the time by Genentech, successfully used genetic engineering to coax bacteria into creating synthetic human insulin, human growth hormone, and human tissue plasminogen activator (tPA) for use in therapeutic medicine.
Recruited by Bob Swanson in 1978, he was the first non-university scientist to be hired at Genentech, and the company's third employee. Goeddel became legendary in the biotechnology and molecular biology fields by cloning virtually all of Genentech's early products and/or processes, including synthetic insulin, growth hormone, and tPA, often beating out bigger and more established laboratories in the process.
Together with Steve McKnight and Robert Tjian, he founded Tularik in 1991, and was their president and CEO until Tularik was acquired by Amgen for $1.3 billion in 2004.
Goeddel earned his bachelor's degree in chemistry from the University of California, San Diego, and his PhD in biochemistry from the University of Colorado, Boulder. He is a member of the National Academy of Sciences, and is a recipient of the Eli Lilly Award in Biological Chemistry and the Scheele Award from the Swedish Academy of Pharmaceutical Sciences.
Personal life
Goeddel has two sons who have played Major League Baseball, Erik and Tyler.
References
1951 births
History of biotechnology
Scientists from San Diego
Living people
University of California, San Diego alumni
Members of the United States National Academy of Sciences
Genentech people
University of Colorado Boulder alumni
Fellows of the American Academy of Microbiology | David Goeddel | Biology | 328 |
4,157,661 | https://en.wikipedia.org/wiki/Great%20rhombidodecahedron | In geometry, the great rhombidodecahedron is a nonconvex uniform polyhedron, indexed as U73. It has 42 faces (30 squares, 12 decagrams), 120 edges and 60 vertices. Its vertex figure is a crossed quadrilateral.
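From the counts above, the Euler characteristic follows directly:
$$\chi = V - E + F = 60 - 120 + 42 = -18,$$
far from the value $\chi = 2$ of a convex polyhedron, reflecting the self-intersecting structure of this nonconvex solid.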
Related polyhedra
It shares its vertex arrangement with the truncated great dodecahedron and the uniform compounds of 6 or 12 pentagonal prisms. It additionally shares its edge arrangement with the nonconvex great rhombicosidodecahedron (having the square faces in common), and with the great dodecicosidodecahedron (having the decagrammic faces in common).
Gallery
See also
List of uniform polyhedra
References
External links
Uniform polyhedra | Great rhombidodecahedron | Physics | 160 |
25,958,783 | https://en.wikipedia.org/wiki/GetJar | GetJar is an independent mobile phone app store founded in Lithuania in 2004, with offices in Vilnius, Lithuania and San Mateo, California.
History
The company was founded by Ilja Laurs in 2004, who is currently its Executive Chairman, and Chris Dury, who is the CEO. Accel Partners and Tiger Global Management are among the investors.
GetJar was started by developers for developers in 2004 as an app beta testing platform. The platform started making free apps available in early 2005.
In February 2014, GetJar was acquired by Sungy Mobile, a company based in China, which is said to have paid over $5 million in cash plus Sungy stock with a then-market value of $35 million.
As of early 2015, the company provided more than 849,036 mobile apps across major mobile platforms including Java ME, BlackBerry, Symbian, Windows Mobile and Android, and had over 3 million downloads per day. GetJar allows software developers to upload their applications for free through a developer portal. In June 2010, about 300,000 software developers had added apps to GetJar, resulting in over one billion downloads. In July 2011, GetJar had over two billion downloads.
In 2020 it was reported that the decline of the GetJar marketplace was also driven by malware risks on the website and in its apps.
See also
List of digital distribution platforms for mobile devices
References
External links
Official website (Mobile)
Mobile software distribution platforms
Android (operating system) software
Lithuanian brands
Internet properties established in 2004
Companies based in Vilnius
Lithuanian companies established in 2004 | GetJar | Technology | 310 |
3,193,758 | https://en.wikipedia.org/wiki/Pumping%20lemma%20for%20context-free%20languages | In computer science, in particular in formal language theory, the pumping lemma for context-free languages, also known as the Bar-Hillel lemma, is a lemma that gives a property shared by all context-free languages and generalizes the pumping lemma for regular languages.
The pumping lemma can be used to construct a refutation by contradiction that a specific language is not context-free. Conversely, the pumping lemma does not suffice to guarantee that a language is context-free; there are other necessary conditions, such as Ogden's lemma, or the Interchange lemma.
Formal statement
If a language $L$ is context-free, then there exists some integer $p \geq 1$ (called a "pumping length") such that every string $s$ in $L$ that has a length of $p$ or more symbols (i.e. with $|s| \geq p$) can be written as
$$s = uvwxy$$
with substrings $u$, $v$, $w$, $x$ and $y$, such that
1. $|vwx| \leq p$,
2. $|vx| \geq 1$, and
3. $uv^n wx^n y \in L$ for all $n \geq 0$.
Below is a formal expression of the Pumping Lemma.
$$\begin{aligned} (\forall L \subseteq \Sigma^*)\;\Big(\text{context-free}(L) \Rightarrow \big((\exists p \geq 1)\,(\forall s \in L)\,(|s| \geq p \Rightarrow \\ (\exists u,v,w,x,y \in \Sigma^*)\,(s = uvwxy \,\land\, |vwx| \leq p \,\land\, |vx| \geq 1 \,\land\, (\forall n \geq 0)(uv^n w x^n y \in L)))\big)\Big) \end{aligned}$$
Informal statement and explanation
The pumping lemma for context-free languages (called just "the pumping lemma" for the rest of this article) describes a property that all context-free languages are guaranteed to have.
The property is a property of all strings in the language that are of length at least $p$, where $p$ is a constant—called the pumping length—that varies between context-free languages.
Say $s$ is a string of length at least $p$ that is in the language.
The pumping lemma states that $s$ can be split into five substrings, $s = uvwxy$, where $vx$ is non-empty and the length of $vwx$ is at most $p$, such that repeating $v$ and $x$ the same number of times ($n$) in $s$ produces a string that is still in the language. It is often useful to repeat zero times, which removes $v$ and $x$ from the string. This process of "pumping up" $s$ with additional copies of $v$ and $x$ is what gives the pumping lemma its name.
Finite languages (which are regular and hence context-free) obey the pumping lemma trivially by having $p$ equal to the maximum string length in $L$ plus one. As there are no strings of this length the pumping lemma is not violated.
Usage of the lemma
The pumping lemma is often used to prove that a given language $L$ is non-context-free, by showing that arbitrarily long strings $s$ are in $L$ that cannot be "pumped" without producing strings outside $L$.
For example, if the set of lengths of the strings in $L$ is infinite but does not contain an (infinite) arithmetic progression, then $L$ is not context-free. In particular, the languages of all strings of prime length and of all strings of square length are not context-free, since neither the primes nor the squares contain an infinite arithmetic progression.
For example, the language $L = \{a^n b^n c^n : n \geq 1\}$ can be shown to be non-context-free by using the pumping lemma in a proof by contradiction. First, assume that $L$ is context free. By the pumping lemma, there exists an integer $p$ which is the pumping length of language $L$. Consider the string $s = a^p b^p c^p$ in $L$. The pumping lemma tells us that $s$ can be written in the form $s = uvwxy$, where $u$, $v$, $w$, $x$, and $y$ are substrings, such that $|vwx| \leq p$, $|vx| \geq 1$, and $uv^n wx^n y \in L$ for every integer $n \geq 0$. By the choice of $s$ and the fact that $|vwx| \leq p$, it is easily seen that the substring $vwx$ can contain no more than two distinct symbols. That is, we have one of five possibilities for $vwx$:
$vwx = a^j$ for some $j \leq p$.
$vwx = a^j b^k$ for some $j$ and $k$ with $j + k \leq p$.
$vwx = b^j$ for some $j \leq p$.
$vwx = b^j c^k$ for some $j$ and $k$ with $j + k \leq p$.
$vwx = c^j$ for some $j \leq p$.
For each case, it is easily verified that $uv^n wx^n y$ does not contain equal numbers of each letter for any $n \neq 1$. Thus, $uv^2 wx^2 y$ does not have the form $a^i b^i c^i$. This contradicts the definition of $L$. Therefore, our initial assumption that $L$ is context free must be false.
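The case analysis above can also be double-checked mechanically for small pumping lengths. The Python sketch below (illustrative code, not part of the original proof) enumerates every decomposition $s = uvwxy$ of $s = a^p b^p c^p$ that satisfies conditions 1 and 2, and confirms that each one leaves the language when pumped with $n = 0$ or $n = 2$.

    # Sketch: brute-force check that s = a^p b^p c^p cannot be pumped within
    # L = { a^n b^n c^n : n >= 1 }, for a small pumping length p.
    def in_L(t):
        n = len(t) // 3
        return n >= 1 and t == "a" * n + "b" * n + "c" * n

    def pumpable(s, p):
        """True if some split s = uvwxy with |vwx| <= p and |vx| >= 1
        keeps u v^n w x^n y inside L for both n = 0 and n = 2."""
        m = len(s)
        for i in range(m + 1):                     # u = s[:i]
            for l in range(i, min(i + p, m) + 1):  # vwx = s[i:l], so |vwx| <= p
                for j in range(i, l + 1):          # v = s[i:j]
                    for k in range(j, l + 1):      # w = s[j:k], x = s[k:l]
                        v, x = s[i:j], s[k:l]
                        if not v and not x:
                            continue               # condition 2 violated
                        u, w, y = s[:i], s[j:k], s[l:]
                        if all(in_L(u + v * n + w + x * n + y) for n in (0, 2)):
                            return True
        return False

    p = 4
    print(pumpable("a" * p + "b" * p + "c" * p, p))  # False: every split fails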
In 1960, Scheinberg proved that $L = \{a^n b^n a^n : n \geq 1\}$ is not context-free using a precursor of the pumping lemma.
While the pumping lemma is often a useful tool to prove that a given language is not context-free, it does not give a complete characterization of the context-free languages. If a language does not satisfy the condition given by the pumping lemma, we have established that it is not context-free. On the other hand, there are languages that are not context-free, but still satisfy the condition given by the pumping lemma, for example
$$L = \{ a^i b^j c^k d^l : i = 0 \text{ or } j = k = l \}:$$ for $s = b^j c^k d^l$ with e.g. $j \geq 1$ choose $vwx$ to consist only of $b$s; for $s = a^i b^j c^j d^j$ with $i \geq 1$ choose $vwx$ to consist only of $a$s; in both cases all pumped strings are still in $L$.
References
— Reprinted in:
Section 1.4: Nonregular Languages, pp. 77–83. Section 2.3: Non-context-free Languages, pp. 115–119.
Formal languages
Lemmas | Pumping lemma for context-free languages | Mathematics | 909 |
49,638,158 | https://en.wikipedia.org/wiki/Microseminoprotein%2C%20prostate%20associated | Microseminoprotein, prostate associated is a protein that in humans is encoded by the MSMP gene.
Function
This gene encodes a member of the beta-microseminoprotein family. Members of this protein family contain ten conserved cysteine residues that form intra-molecular disulfide bonds. The encoded protein may play a role in prostate cancer tumorigenesis.
References
Further reading
Human proteins | Microseminoprotein, prostate associated | Chemistry | 85 |
27,621,214 | https://en.wikipedia.org/wiki/Highly%20hazardous%20chemical | A highly hazardous chemical, also called a harsh chemical, is a substance classified by the American Occupational Safety and Health Administration as material that is both toxic and reactive and whose potential for human injury is high if released. Highly hazardous chemicals may cause cancer, birth defects, induce genetic damage, cause miscarriage, injury and death from relatively small exposures.
OSHA maintains a list of highly hazardous chemicals; see the external link below.
External links
OSHA list of highly hazardous chemicals
Occupational Safety and Health Administration
Chemical substances
Chemistry-related lists | Highly hazardous chemical | Physics,Chemistry | 96 |
232,345 | https://en.wikipedia.org/wiki/Butyric%20acid | Butyric acid (; from , meaning "butter"), also known under the systematic name butanoic acid, is a straight-chain alkyl carboxylic acid with the chemical formula . It is an oily, colorless liquid with an unpleasant odor. Isobutyric acid (2-methylpropanoic acid) is an isomer. Salts and esters of butyric acid are known as butyrates or butanoates. The acid does not occur widely in nature, but its esters are widespread. It is a common industrial chemical and an important component in the mammalian gut.
History
Butyric acid was first observed in an impure form in 1814 by the French chemist Michel Eugène Chevreul. By 1818, he had purified it sufficiently to characterize it. However, Chevreul did not publish his early research on butyric acid; instead, he deposited his findings in manuscript form with the secretary of the Academy of Sciences in Paris, France. Henri Braconnot, a French chemist, was also researching the composition of butter and was publishing his findings and this led to disputes about priority. As early as 1815, Chevreul claimed that he had found the substance responsible for the smell of butter. By 1817, he published some of his findings regarding the properties of butyric acid and named it. However, it was not until 1823 that he presented the properties of butyric acid in detail. The name butyric acid comes from , meaning "butter", the substance in which it was first found. The Latin name butyrum (or buturum) is similar.
Occurrence
Triglycerides of butyric acid compose 3–4% of butter. When butter goes rancid, butyric acid is liberated from the glyceride by hydrolysis. It is one of the fatty acid subgroup called short-chain fatty acids. Butyric acid is a typical carboxylic acid that reacts with bases and affects many metals.
It is found in animal fat and plant oils, bovine milk, breast milk, butter, parmesan cheese, body odor, vomit and as a product of anaerobic fermentation (including in the colon). It has a taste somewhat like butter and an unpleasant odor. Mammals with good scent detection abilities, such as dogs, can detect it at 10 parts per billion, whereas humans can detect it only in concentrations above 10 parts per million. In food manufacturing, it is used as a flavoring agent.
In humans, butyric acid is one of two primary endogenous agonists of human hydroxycarboxylic acid receptor 2 (HCA2), a G protein-coupled receptor.
Butyric acid is present as its octyl ester in parsnip (Pastinaca sativa) and in the seed of the ginkgo tree.
Production
Industrial
In industry, butyric acid is produced by hydroformylation from propene and syngas, forming butyraldehyde, which is oxidised to the final product:
CH₃CH=CH₂ + H₂ + CO → CH₃CH₂CH₂CHO
2 CH₃CH₂CH₂CHO + O₂ → 2 CH₃CH₂CH₂COOH (butyric acid)
It can be separated from aqueous solutions by saturation with salts such as calcium chloride. The calcium salt, Ca(CH₃CH₂CH₂COO)₂, is less soluble in hot water than in cold.
Microbial biosynthesis
Butyrate is produced by several fermentation processes performed by obligate anaerobic bacteria. This fermentation pathway was discovered by Louis Pasteur in 1861. Examples of butyrate-producing species of bacteria:
Clostridium butyricum
Clostridium kluyveri
Clostridium pasteurianum
Faecalibacterium prausnitzii
Fusobacterium nucleatum
Butyrivibrio fibrisolvens
Eubacterium limosum
The pathway starts with the glycolytic cleavage of glucose to two molecules of pyruvate, as happens in most organisms. Pyruvate is oxidized into acetyl coenzyme A catalyzed by pyruvate:ferredoxin oxidoreductase. Two molecules of carbon dioxide (CO₂) and two molecules of hydrogen (H₂) are formed as waste products. Subsequently, ATP is produced in the last step of the fermentation. Three molecules of ATP are produced for each glucose molecule, a relatively high yield. The balanced equation for this fermentation is
C₆H₁₂O₆ → CH₃CH₂CH₂COOH + 2 CO₂ + 2 H₂
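As a quick sanity check on the reconstructed equation, the following illustrative Python snippet verifies that the atom counts balance:

    # Sketch: verify the atom balance of C6H12O6 -> C4H8O2 + 2 CO2 + 2 H2
    # (butyric acid written as C4H8O2).
    from collections import Counter

    def atoms(formula_counts):
        """Sum element counts over (multiplier, {element: count}) pairs."""
        total = Counter()
        for mult, counts in formula_counts:
            for elem, num in counts.items():
                total[elem] += mult * num
        return total

    glucose = {"C": 6, "H": 12, "O": 6}
    butyric = {"C": 4, "H": 8, "O": 2}
    co2 = {"C": 1, "O": 2}
    h2 = {"H": 2}

    lhs = atoms([(1, glucose)])
    rhs = atoms([(1, butyric), (2, co2), (2, h2)])
    print(lhs == rhs)  # True: the fermentation equation is balanced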
Other pathways to butyrate include succinate reduction and crotonate disproportionation.
Several species form acetone and n-butanol in an alternative pathway, which starts as butyrate fermentation. Some of these species are:
Clostridium acetobutylicum, the most prominent acetone and butanol producer, used also in industry
Clostridium beijerinckii
Clostridium tetanomorphum
Clostridium aurantibutyricum
These bacteria begin with butyrate fermentation, as described above, but, when the pH drops below 5, they switch into butanol and acetone production to prevent further lowering of the pH. Two molecules of butanol are formed for each molecule of acetone.
The change in the pathway occurs after acetoacetyl CoA formation. This intermediate then takes two possible pathways:
acetoacetyl CoA → acetoacetate → acetone
acetoacetyl CoA → butyryl CoA → butyraldehyde → butanol
For commercial purposes Clostridium species are used preferably for butyric acid or butanol production.
The most common species used as a probiotic is Clostridium butyricum.
Fermentable fiber sources
Highly-fermentable fiber residues, such as those from resistant starch, oat bran, pectin, and guar are transformed by colonic bacteria into short-chain fatty acids (SCFA) including butyrate, producing more SCFA than less fermentable fibers such as celluloses. One study found that resistant starch consistently produces more butyrate than other types of dietary fiber. The production of SCFA from fibers in ruminant animals such as cattle is responsible for the butyrate content of milk and butter.
Fructans are another source of prebiotic soluble dietary fibers which can be digested to produce butyrate. They are often found in the soluble fibers of foods which are high in sulfur, such as the allium and cruciferous vegetables. Sources of fructans include wheat (although some wheat strains such as spelt contain lower amounts), rye, barley, onion, garlic, Jerusalem and globe artichoke, asparagus, beetroot, chicory, dandelion leaves, leek, radicchio, the white part of spring onion, broccoli, brussels sprouts, cabbage, fennel, and prebiotics, such as fructooligosaccharides (FOS), oligofructose, and inulin.
Reactions
Butyric acid reacts as a typical carboxylic acid: it can form amide, ester, anhydride, and chloride derivatives. The latter, butyryl chloride, is commonly used as the intermediate to obtain the others.
Uses
Butyric acid is used in the preparation of various butyrate esters. It is used to produce cellulose acetate butyrate (CAB), which is used in a wide variety of tools, paints, and coatings, and is more resistant to degradation than cellulose acetate. CAB can degrade with exposure to heat and moisture, releasing butyric acid.
Low-molecular-weight esters of butyric acid, such as methyl butyrate, have mostly pleasant aromas or tastes. As a consequence, they are used as food and perfume additives. It is an approved food flavoring in the EU FLAVIS database (number 08.005).
Due to its powerful odor, it has also been used as a fishing bait additive. Many of the commercially available flavors used in carp (Cyprinus carpio) baits use butyric acid as their ester base. It is not clear whether fish are attracted by the butyric acid itself or the substances added to it. Butyric acid was one of the few organic acids shown to be palatable for both tench and bitterling. The substance has been used as a stink bomb by the Sea Shepherd Conservation Society to disrupt Japanese whaling crews.
Pharmacology
Pharmacodynamics
Butyric acid (pKa 4.82) is fully ionized at physiological pH, so its anion is the material that is mainly relevant in biological systems.
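This follows from the Henderson–Hasselbalch equation: at a physiological pH of about 7.4,
$$\frac{[\mathrm{butyrate}^-]}{[\mathrm{butyric\ acid}]} = 10^{\,\mathrm{pH} - \mathrm{p}K_\mathrm{a}} = 10^{\,7.4 - 4.82} \approx 380,$$
so roughly 99.7% of the compound is present as the butyrate anion.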
Butyrate is one of two primary endogenous agonists of human hydroxycarboxylic acid receptor 2 (HCA2, also known as GPR109A), a G protein-coupled receptor (GPCR).
Like other short-chain fatty acids (SCFAs), butyrate is an agonist at the free fatty acid receptors FFAR2 and FFAR3, which function as nutrient sensors that facilitate the homeostatic control of energy balance; however, among the group of SCFAs, only butyrate is an agonist of HCA2. It is also an HDAC inhibitor (specifically, HDAC1, HDAC2, HDAC3, and HDAC8), a drug that inhibits the function of histone deacetylase enzymes, thereby favoring an acetylated state of histones in cells. Histone acetylation loosens the structure of chromatin by reducing the electrostatic attraction between histones and DNA. In general, it is thought that transcription factors will be unable to access regions where histones are tightly associated with DNA (i.e., non-acetylated, e.g., heterochromatin). Therefore, butyric acid is thought to enhance the transcriptional activity at promoters, which are typically silenced or downregulated due to histone deacetylase activity.
Pharmacokinetics
Butyrate that is produced in the colon through microbial fermentation of dietary fiber is primarily absorbed and metabolized by colonocytes and the liver for the generation of ATP during energy metabolism; however, some butyrate is absorbed in the distal colon, which is not connected to the portal vein, thereby allowing for the systemic distribution of butyrate to multiple organ systems through the circulatory system. Butyrate that has reached systemic circulation can readily cross the blood–brain barrier via monocarboxylate transporters (i.e., certain members of the SLC16A group of transporters). Other transporters that mediate the passage of butyrate across lipid membranes include SLC5A8 (SMCT1), SLC27A1 (FATP1), and SLC27A4 (FATP4).
Metabolism
Butyric acid is metabolized by various human XM-ligases (ACSM1, ACSM2B, ASCM3, ACSM4, ACSM5, and ACSM6), also known as butyrate–CoA ligase. The metabolite produced by this reaction is butyryl–CoA, and is produced as follows:
Adenosine triphosphate + butyric acid + coenzyme A → adenosine monophosphate + pyrophosphate + butyryl-CoA
As a short-chain fatty acid, butyrate is metabolized by mitochondria as an energy (i.e., adenosine triphosphate or ATP) source through fatty acid metabolism. In particular, it is an important energy source for cells lining the mammalian colon (colonocytes). Without butyrates, colon cells undergo autophagy (i.e., self-digestion) and die.
In humans, the butyrate precursor tributyrin, which is naturally present in butter, is metabolized by triacylglycerol lipase into dibutyrin and butyrate through the reaction:
Tributyrin + H₂O → dibutyrin + butyric acid
Biochemistry
Butyrate has numerous effects on energy homeostasis and related diseases (diabetes and obesity), inflammation, and immune function (e.g., it has pronounced antimicrobial and anticarcinogenic effects) in humans. These effects occur through its metabolism by mitochondria to generate ATP during fatty acid metabolism or through one or more of its histone-modifying enzyme targets (i.e., the class I histone deacetylases) and G-protein coupled receptor targets (i.e., FFAR2, FFAR3, and HCA2).
In the mammalian gut
Butyrate is essential to host immune homeostasis. Although the role and importance of butyrate in the gut is not fully understood, many researchers argue that a depletion of butyrate-producing bacteria in patients with several vasculitic conditions is essential to the pathogenesis of these disorders. A depletion of butyrate in the gut is typically caused by an absence or depletion of butyrate-producing bacteria (BPB). This depletion in BPB leads to microbial dysbiosis, characterized by an overall low biodiversity and a depletion of key butyrate-producing members. Butyrate is an essential microbial metabolite with a vital role as a modulator of proper immune function in the host. It has been shown that children lacking in BPB are more susceptible to allergic disease and type 1 diabetes. Butyrate is also reduced in a diet low in dietary fiber, which can induce inflammation and have other adverse effects, insofar as short-chain fatty acids such as butyrate activate PPAR-γ.
Butyrate exerts a key role in the maintenance of immune homeostasis both locally (in the gut) and systemically (via circulating butyrate). It has been shown to promote the differentiation of regulatory T cells. In particular, circulating butyrate prompts the generation of extrathymic regulatory T cells. Low levels of butyrate in human subjects could favor reduced regulatory T cell-mediated control, thus promoting a powerful immuno-pathological T-cell response. On the other hand, gut butyrate has been reported to inhibit local pro-inflammatory cytokines. The absence or depletion of BPB in the gut could therefore contribute to an overly active inflammatory response. Butyrate in the gut also protects the integrity of the intestinal epithelial barrier, so decreased butyrate levels lead to a damaged or dysfunctional intestinal epithelial barrier. Butyrate reduction has also been associated with Clostridioides difficile proliferation. Conversely, a high-fiber diet results in higher butyric acid concentration and inhibition of C. difficile growth.
In a 2013 research study conducted by Furusawa et al., microbe-derived butyrate was found to be essential in inducing the differentiation of colonic regulatory T cells in mice. This is of great importance, and possibly relevant to the pathogenesis and vasculitis associated with many inflammatory diseases, because regulatory T cells have a central role in the suppression of inflammatory and allergic responses. In several research studies, it has been demonstrated that butyrate induces the differentiation of regulatory T cells in vitro and in vivo. The anti-inflammatory capacity of butyrate has been extensively analyzed and supported by many studies. It has been found that microorganism-produced butyrate expedites the production of regulatory T cells, although the specific mechanism by which it does so is unclear. More recently, it has been shown that butyrate plays an essential and direct role in modulating gene expression of cytotoxic T-cells. Butyrate also has an anti-inflammatory effect on neutrophils, reducing their migration to wounds. This effect is mediated via the receptor HCA2.
In the gut microbiomes found in the class Mammalia, omnivores and herbivores have butyrate-producing bacterial communities dominated by the butyryl-CoA:acetate CoA-transferase pathway, whereas carnivores have butyrate-producing bacterial communities dominated by the butyrate kinase pathway.
The odor of butyric acid, which emanates from the sebaceous follicles of all mammals, acts as a signal for the tick.
Immunomodulation and inflammation
Butyrate's effects on the immune system are mediated through the inhibition of class I histone deacetylases and activation of its G-protein coupled receptor targets: HCA2 (GPR109A), FFAR2 (GPR43), and FFAR3 (GPR41). Among the short-chain fatty acids, butyrate is the most potent promoter of intestinal regulatory T cells in vitro and the only one among the group that is an HCA2 ligand. It has been shown to be a critical mediator of the colonic inflammatory response. It possesses both preventive and therapeutic potential to counteract inflammation-mediated ulcerative colitis and colorectal cancer.
Butyrate has established antimicrobial properties in humans that are mediated through the antimicrobial peptide LL-37, which it induces via HDAC inhibition on histone H3. In vitro, butyrate increases gene expression of FOXP3 (the transcription regulator for Tregs) and promotes colonic regulatory T cells (Tregs) through the inhibition of class I histone deacetylases; through these actions, it increases the expression of interleukin 10, an anti-inflammatory cytokine. Butyrate also suppresses colonic inflammation by inhibiting the IFN-γ–STAT1 signaling pathways, which is mediated partially through histone deacetylase inhibition. While transient IFN-γ signaling is generally associated with normal host immune response, chronic IFN-γ signaling is often associated with chronic inflammation. It has been shown that butyrate inhibits the activity of HDAC1 bound to the Fas gene promoter in T cells, resulting in hyperacetylation of the Fas promoter and up-regulation of the Fas receptor on the T-cell surface.
Similar to other HCA2 agonists studied, butyrate also produces marked anti-inflammatory effects in a variety of tissues, including the brain, gastrointestinal tract, skin, and vascular tissue. Butyrate binding at FFAR3 induces neuropeptide Y release and promotes the functional homeostasis of colonic mucosa and the enteric immune system.
Cancer
Butyrate has been shown to be a critical mediator of the colonic inflammatory response. It supplies about 70% of the energy used by colonocytes, making it a critical SCFA in colon homeostasis. Butyrate possesses both preventive and therapeutic potential to counteract inflammation-mediated ulcerative colitis (UC) and colorectal cancer. It produces different effects in healthy and cancerous cells: this is known as the "butyrate paradox". In particular, butyrate inhibits colonic tumor cells and stimulates proliferation of healthy colonic epithelial cells. The explanation for why butyrate is an energy source for normal colonocytes yet induces apoptosis in colon cancer cells is the Warburg effect in cancer cells, which leads to butyrate not being properly metabolized. This phenomenon causes butyrate to accumulate in the nucleus, where it acts as a histone deacetylase (HDAC) inhibitor. One mechanism underlying butyrate's function in the suppression of colonic inflammation is inhibition of the IFN-γ/STAT1 signalling pathways. It has been shown that butyrate inhibits the activity of HDAC1 bound to the Fas gene promoter in T cells, resulting in hyperacetylation of the Fas promoter and upregulation of the Fas receptor on the T cell surface. It is thus suggested that butyrate enhances apoptosis of T cells in the colonic tissue and thereby eliminates the source of inflammation (IFN-γ production). Butyrate inhibits angiogenesis by inactivating Sp1 transcription factor activity and downregulating vascular endothelial growth factor gene expression.
In summary, the production of volatile fatty acids such as butyrate from fermentable fibers may contribute to the role of dietary fiber in colon cancer. Short-chain fatty acids, which include butyric acid, are produced by beneficial colonic bacteria (probiotics) that feed on, or ferment prebiotics, which are plant products that contain dietary fiber. These short-chain fatty acids benefit the colonocytes by increasing energy production, and may protect against colon cancer by inhibiting cell proliferation.
Conversely, some researchers have sought to eliminate butyrate, considering it a potential cancer driver: studies in mice indicate it drives transformation of MSH2-deficient colon epithelial cells.
Potential treatments from butyrate restoration
Owing to the importance of butyrate as an inflammatory regulator and immune system contributor, butyrate depletions could be a key factor influencing the pathogenesis of many vasculitic conditions. It is thus essential to maintain healthy levels of butyrate in the gut. Fecal microbiota transplants (to restore BPB and symbiosis in the gut) could be effective by replenishing butyrate levels. In this treatment, a healthy individual donates their stool to be transplanted into an individual with dysbiosis. A less-invasive treatment option is the administration of butyrate—as oral supplements or enemas—which has been shown to be very effective in terminating symptoms of inflammation with minimal-to-no side-effects. In a study where patients with ulcerative colitis were treated with butyrate enemas, inflammation decreased significantly, and bleeding ceased completely after butyrate provision.
Addiction
Butyric acid is an inhibitor that is selective for class I HDACs in humans. HDACs are histone-modifying enzymes that can cause histone deacetylation and repression of gene expression. HDACs are important regulators of synaptic formation, synaptic plasticity, and long-term memory formation. Class I HDACs are known to be involved in mediating the development of an addiction. Butyric acid and other HDAC inhibitors have been used in preclinical research to assess the transcriptional, neural, and behavioral effects of HDAC inhibition in animals addicted to drugs.
Butyrate salts and esters
The butyrate or butanoate ion, CH3CH2CH2CO2−, is the conjugate base of butyric acid. It is the form found in biological systems at physiological pH. A butyric (or butanoic) compound is a carboxylate salt or ester of butyric acid.
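Because the pKa of butyric acid (about 4.82, the commonly cited value) sits well below physiological pH, the Henderson–Hasselbalch equation shows the acid is almost entirely deprotonated in vivo. A minimal sketch of that arithmetic, assuming those two values:

```python
def ionized_fraction(pH: float, pKa: float) -> float:
    """Fraction of a weak acid present as its conjugate base,
    from Henderson-Hasselbalch: pH = pKa + log10([A-]/[HA])."""
    ratio = 10 ** (pH - pKa)      # [A-]/[HA]
    return ratio / (1.0 + ratio)

# Butyric acid at physiological pH: ~99.7% exists as the butyrate anion.
print(f"{ionized_fraction(7.4, 4.82):.2%}")
```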
Examples
Salts
Sodium butyrate
Esters
Butyl butyrate
Butyryl-CoA
Cellulose acetate butyrate (aircraft dope)
Estradiol benzoate butyrate
Ethyl butyrate
Methyl butyrate
Pentyl butyrate
Tributyrin
See also
List of saturated fatty acids
Histone
Histone-modifying enzyme
Histone acetylase
Histone deacetylase
Hydroxybutyric acids
α-Hydroxybutyric acid
β-Hydroxybutyric acid
γ-Hydroxybutyric acid
Oxobutyric acids
2-Oxobutyric acid (α-ketobutyric acid)
3-Oxobutyric acid (acetoacetic acid)
4-Oxobutyric acid (succinic semialdehyde)
β-Methylbutyric acid
β-Hydroxy β-methylbutyric acid
Notes
References
External links
NIST Standard Reference Data for butanoic acid
GABA analogues
Flavors
Alkanoic acids
Fatty acids
Foul-smelling chemicals
Biomolecules
Histone deacetylase inhibitors | Butyric acid | Chemistry,Biology | 5,007 |
56,079 | https://en.wikipedia.org/wiki/Krull%20dimension | In commutative algebra, the Krull dimension of a commutative ring R, named after Wolfgang Krull, is the supremum of the lengths of all chains of prime ideals. The Krull dimension need not be finite even for a Noetherian ring. More generally the Krull dimension can be defined for modules over possibly non-commutative rings as the deviation of the poset of submodules.
The Krull dimension was introduced to provide an algebraic definition of the dimension of an algebraic variety: the dimension of the affine variety defined by an ideal I in a polynomial ring R is the Krull dimension of R/I.
A field k has Krull dimension 0; more generally, k[x1, ..., xn] has Krull dimension n. A principal ideal domain that is not a field has Krull dimension 1. A local ring has Krull dimension 0 if and only if every element of its maximal ideal is nilpotent.
There are several other ways that have been used to define the dimension of a ring. Most of them coincide with the Krull dimension for Noetherian rings, but can differ for non-Noetherian rings.
Explanation
We say that a chain of prime ideals of the form

$$\mathfrak{p}_0 \subsetneq \mathfrak{p}_1 \subsetneq \cdots \subsetneq \mathfrak{p}_n$$

has length n. That is, the length is the number of strict inclusions, not the number of primes; these differ by 1. We define the Krull dimension of $R$ to be the supremum of the lengths of all chains of prime ideals in $R$.
Given a prime ideal $\mathfrak{p}$ in R, we define the height of $\mathfrak{p}$, written $\operatorname{ht}(\mathfrak{p})$, to be the supremum of the lengths of all chains of prime ideals contained in $\mathfrak{p}$, meaning that $\mathfrak{p}_0 \subsetneq \mathfrak{p}_1 \subsetneq \cdots \subsetneq \mathfrak{p}_n = \mathfrak{p}$. In other words, the height of $\mathfrak{p}$ is the Krull dimension of the localization of R at $\mathfrak{p}$. A prime ideal has height zero if and only if it is a minimal prime ideal. The Krull dimension of a ring is the supremum of the heights of all maximal ideals, or equivalently of all prime ideals. The height is also sometimes called the codimension, rank, or altitude of a prime ideal.
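A small worked instance of these definitions, anticipating the ring-of-integers example below:

```latex
% In R = \mathbb{Z}, every nonzero prime ideal has the form (p)
% for a prime number p, and the longest chain ending at (p) is
\[
  (0) \subsetneq (p),
\]
% so ht((0)) = 0 and ht((p)) = 1. Every (p) is maximal, hence
% dim Z = sup of the heights of the maximal ideals = 1.
```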
In a Noetherian ring, every prime ideal has finite height. Nonetheless, Nagata gave an example of a Noetherian ring of infinite Krull dimension. A ring is called catenary if any inclusion $\mathfrak{p} \subset \mathfrak{q}$ of prime ideals can be extended to a maximal chain of prime ideals between $\mathfrak{p}$ and $\mathfrak{q}$, and any two maximal chains between $\mathfrak{p}$ and $\mathfrak{q}$ have the same length. A ring is called universally catenary if any finitely generated algebra over it is catenary. Nagata gave an example of a Noetherian ring which is not catenary.
In a Noetherian ring, a prime ideal has height at most n if and only if it is a minimal prime ideal over an ideal generated by n elements (Krull's height theorem and its converse). This implies that the descending chain condition holds for prime ideals, in such a way that the lengths of the chains descending from a prime ideal are bounded by the number of generators of the prime.
More generally, the height of an ideal I is the infimum of the heights of all prime ideals containing I. In the language of algebraic geometry, this is the codimension of the subvariety of Spec(R) corresponding to I.
Schemes
It follows readily from the definition of the spectrum of a ring Spec(R), the space of prime ideals of R equipped with the Zariski topology, that the Krull dimension of R is equal to the dimension of its spectrum as a topological space, meaning the supremum of the lengths of all chains of irreducible closed subsets. This follows immediately from the Galois connection between ideals of R and closed subsets of Spec(R) and the observation that, by the definition of Spec(R), each prime ideal $\mathfrak{p}$ of R corresponds to a generic point of the closed subset associated to $\mathfrak{p}$ by the Galois connection.
Examples
The dimension of a polynomial ring over a field k[x1, ..., xn] is the number of variables n. In the language of algebraic geometry, this says that the affine space of dimension n over a field has dimension n, as expected. In general, if R is a Noetherian ring of dimension n, then the dimension of R[x] is n + 1. If the Noetherian hypothesis is dropped, then R[x] can have dimension anywhere between n + 1 and 2n + 1.
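The lower bound $\dim k[x_1, \ldots, x_n] \ge n$ can be seen from an explicit chain; a sketch (the reverse inequality requires more work, e.g. Krull's height theorem or Noether normalization):

```latex
% A chain of prime ideals of length n in k[x_1, ..., x_n]; each
% ideal is prime because the corresponding quotient is again a
% polynomial ring over k, hence an integral domain.
\[
  (0) \subsetneq (x_1) \subsetneq (x_1, x_2) \subsetneq \cdots
      \subsetneq (x_1, \ldots, x_n)
\]
```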
For example, the ideal $\mathfrak{p} = (x, y) \subset \mathbb{C}[x, y, z]$ has height 2, since we can form the maximal ascending chain of prime ideals $(0) \subsetneq (x) \subsetneq (x, y) = \mathfrak{p}$.
Given an irreducible polynomial $f \in \mathbb{C}[x, y, z]$, the ideal $I = (f^3)$ is not prime (since $f \cdot f^2 \in I$, but neither of the factors is), but we can easily compute the height, since the smallest prime ideal containing $I$ is just $(f)$.
The ring of integers Z has dimension 1. More generally, any principal ideal domain that is not a field has dimension 1.
An integral domain is a field if and only if its Krull dimension is zero. Dedekind domains that are not fields (for example, discrete valuation rings) have dimension one.
The Krull dimension of the zero ring is typically defined to be either $-\infty$ or $-1$. The zero ring is the only ring with a negative dimension.
A ring is Artinian if and only if it is Noetherian and its Krull dimension is ≤0.
An integral extension of a ring has the same dimension as the ring does.
Let R be an algebra over a field k that is an integral domain. Then the Krull dimension of R is less than or equal to the transcendence degree of the field of fractions of R over k. The equality holds if R is finitely generated as an algebra (for instance by the Noether normalization lemma).
Let R be a Noetherian ring, I an ideal, and $\operatorname{gr}_I(R) = \bigoplus_{n \ge 0} I^n/I^{n+1}$ be the associated graded ring (geometers call it the ring of the normal cone of I). Then $\dim \operatorname{gr}_I(R)$ is the supremum of the heights of maximal ideals of R containing I.
A commutative Noetherian ring of Krull dimension zero is a direct product of a finite number (possibly one) of local rings of Krull dimension zero.
A Noetherian local ring is called a Cohen–Macaulay ring if its dimension is equal to its depth. A regular local ring is an example of such a ring.
A Noetherian integral domain is a unique factorization domain if and only if every height 1 prime ideal is principal.
For a commutative Noetherian ring the three following conditions are equivalent: being a reduced ring of Krull dimension zero, being a field or a direct product of fields, being von Neumann regular.
Of a module
If R is a commutative ring, and M is an R-module, we define the Krull dimension of M to be the Krull dimension of the quotient of R making M a faithful module. That is, we define it by the formula:

$$\dim_R M := \dim\bigl(R/\operatorname{Ann}_R(M)\bigr),$$

where AnnR(M), the annihilator, is the kernel of the natural map R → EndR(M) of R into the ring of R-linear endomorphisms of M.
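A worked example using this formula:

```latex
% M = Z/6Z as a Z-module: Ann_Z(M) = (6), so
\[
  \dim_{\mathbb{Z}} \mathbb{Z}/6\mathbb{Z}
    = \dim\bigl(\mathbb{Z}/(6)\bigr) = 0,
\]
% since the only primes of Z/(6) come from (2) and (3), both of
% which are maximal, so no chain of primes has positive length.
```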
In the language of schemes, finitely generated modules are interpreted as coherent sheaves, or generalized finite rank vector bundles.
For non-commutative rings
The Krull dimension of a module over a possibly non-commutative ring is defined as the deviation of the poset of submodules ordered by inclusion. For commutative Noetherian rings, this is the same as the definition using chains of prime ideals. The two definitions can be different for commutative rings which are not Noetherian.
See also
Analytic spread
Dimension theory (algebra)
Gelfand–Kirillov dimension
Hilbert function
Homological conjectures in commutative algebra
Krull's principal ideal theorem
Notes
Bibliography
Irving Kaplansky, Commutative rings (revised ed.), University of Chicago Press, 1974, . Page 32.
Sect.4.7.
Commutative algebra
Dimension | Krull dimension | Physics,Mathematics | 1,660 |
1,157,842 | https://en.wikipedia.org/wiki/Hasse%E2%80%93Weil%20zeta%20function | In mathematics, the Hasse–Weil zeta function attached to an algebraic variety V defined over an algebraic number field K is a meromorphic function on the complex plane defined in terms of the number of points on the variety after reducing modulo each prime number p. It is a global L-function defined as an Euler product of local zeta functions.
Hasse–Weil L-functions form one of the two major classes of global L-functions, alongside the L-functions associated to automorphic representations. Conjecturally, these two types of global L-functions are actually two descriptions of the same type of global L-function; this would be a vast generalisation of the Taniyama-Weil conjecture, itself an important result in number theory.
For an elliptic curve over a number field K, the Hasse–Weil zeta function is conjecturally related to the group of rational points of the elliptic curve over K by the Birch and Swinnerton-Dyer conjecture.
Definition
The description of the Hasse–Weil zeta function up to finitely many factors of its Euler product is relatively simple. This follows the initial suggestions of Helmut Hasse and André Weil, motivated by the Riemann zeta function, which results from the case when V is a single point.
Taking the case of K the rational number field $\mathbb{Q}$, and V a non-singular projective variety, we can for almost all prime numbers p consider the reduction of V modulo p, an algebraic variety Vp over the finite field $\mathbb{F}_p$ with p elements, just by reducing equations for V. Scheme-theoretically, this reduction is just the pullback of the Néron model of V along the canonical map Spec $\mathbb{F}_p$ → Spec $\mathbb{Z}$. Again for almost all p it will be non-singular. We define a Dirichlet series of the complex variable s,

$$Z(s) = \prod_p Z_p(s),$$

which is the infinite product of the local zeta functions

$$Z_p(s) = \exp\left(\sum_{k=1}^{\infty} \frac{N_k}{k}\, p^{-ks}\right),$$

where $N_k$ is the number of points of V defined over the finite field extension $\mathbb{F}_{p^k}$ of $\mathbb{F}_p$.
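As a consistency check, the single-point case mentioned above can be worked out directly from these formulas:

```latex
% V = a single point: N_k = 1 for all k, so
\[
  Z_p(s) = \exp\Bigl(\sum_{k \ge 1} \tfrac{1}{k}\, p^{-ks}\Bigr)
         = \exp\bigl(-\log(1 - p^{-s})\bigr)
         = \frac{1}{1 - p^{-s}},
\]
% and the Euler product over all primes recovers the Riemann
% zeta function:
\[
  Z(s) = \prod_p \frac{1}{1 - p^{-s}} = \zeta(s).
\]
```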
This is well-defined only up to multiplication by rational functions in $p^{-s}$ for finitely many primes p.
Since the indeterminacy is relatively harmless, and has meromorphic continuation everywhere, there is a sense in which the properties of Z(s) do not essentially depend on it. In particular, while the exact form of the functional equation for Z(s), reflecting in a vertical line in the complex plane, will definitely depend on the 'missing' factors, the existence of some such functional equation does not.
A more refined definition became possible with the development of étale cohomology; this neatly explains what to do about the missing, 'bad reduction' factors. According to general principles visible in ramification theory, 'bad' primes carry good information (theory of the conductor). This manifests itself in the étale theory in the Ogg–Néron–Shafarevich criterion for good reduction; namely that there is good reduction, in a definite sense, at all primes p for which the Galois representation ρ on the étale cohomology groups of V is unramified. For those, the definition of the local zeta function can be recovered in terms of the characteristic polynomial of $\rho(\operatorname{Frob}(p))$, Frob(p) being a Frobenius element for p. What happens at the ramified p is that ρ is non-trivial on the inertia group I(p) for p. At those primes the definition must be 'corrected', taking the largest quotient of the representation ρ on which the inertia group acts by the trivial representation. With this refinement, the definition of Z(s) can be upgraded successfully from 'almost all' p to all p participating in the Euler product. The consequences for the functional equation were worked out by Serre and Deligne in the later 1960s; the functional equation itself has not been proved in general.
Hasse–Weil conjecture
The Hasse–Weil conjecture states that the Hasse–Weil zeta function should extend to a meromorphic function for all complex s, and should satisfy a functional equation similar to that of the Riemann zeta function. For elliptic curves over the rational numbers, the Hasse–Weil conjecture follows from the modularity theorem: each elliptic curve over $\mathbb{Q}$ is modular.
Birch and Swinnerton-Dyer conjecture
The Birch and Swinnerton-Dyer conjecture states that the rank of the abelian group E(K) of points of an elliptic curve E is the order of the zero of the Hasse–Weil L-function L(E, s) at s = 1, and that the first non-zero coefficient in the Taylor expansion of L(E, s) at s = 1 is given by more refined arithmetic data attached to E over K. The conjecture is one of the seven Millennium Prize Problems listed by the Clay Mathematics Institute, which has offered a $1,000,000 prize for the first correct proof.
Elliptic curves over Q
An elliptic curve is a specific type of variety. Let E be an elliptic curve over $\mathbb{Q}$ of conductor N. Then, E has good reduction at all primes p not dividing N, it has multiplicative reduction at the primes p that exactly divide N (i.e. such that p divides N, but $p^2$ does not; this is written p || N), and it has additive reduction elsewhere (i.e. at the primes where $p^2$ divides N). The Hasse–Weil zeta function of E then takes the form

$$Z_{E/\mathbb{Q}}(s) = \frac{\zeta(s)\,\zeta(s-1)}{L(E, s)}.$$

Here, ζ(s) is the usual Riemann zeta function and L(E, s) is called the L-function of E/Q, which takes the form

$$L(E, s) = \prod_p L_p(E, s)^{-1},$$
where, for a given prime p,

$$L_p(E, s) = \begin{cases} 1 - a_p p^{-s} + p^{1-2s}, & \text{if } p \nmid N, \\ 1 - a_p p^{-s}, & \text{if } p \parallel N, \\ 1, & \text{if } p^2 \mid N, \end{cases}$$

where in the case of good reduction $a_p$ is p + 1 − (number of points of E mod p), and in the case of multiplicative reduction $a_p$ is ±1 depending on whether E has split (plus sign) or non-split (minus sign) multiplicative reduction at p. A multiplicative reduction of the curve E at the prime p is said to be split if $-c_6$ is a square in the finite field with p elements.
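For primes of good reduction, $a_p$ can be computed by brute-force point counting. A minimal sketch (the curve $y^2 = x^3 + x + 1$ and the primes below are illustrative choices, not taken from the text):

```python
def count_points(a: int, b: int, p: int) -> int:
    """Number of points on y^2 = x^3 + a*x + b over F_p, including
    the point at infinity (assumes good reduction at p)."""
    sols = [0] * p                    # sols[t] = #{y : y^2 = t mod p}
    for y in range(p):
        sols[(y * y) % p] += 1
    n = 1                             # the point at infinity
    for x in range(p):
        n += sols[(x * x * x + a * x + b) % p]
    return n

# a_p = p + 1 - #E(F_p); the local factor is then 1 - a_p p^-s + p^(1-2s).
for p in [5, 7, 11, 13]:
    ap = p + 1 - count_points(1, 1, p)
    print(f"p = {p:2d}: a_p = {ap:+d}")
```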
There is a useful relation not using the conductor (a small classifier implementing it is sketched after the list):
1. If p doesn't divide $\Delta$ (where $\Delta$ is the discriminant of the elliptic curve), then E has good reduction at p.
2. If p divides $\Delta$ but not $c_4$, then E has multiplicative bad reduction at p.
3. If p divides both $\Delta$ and $c_4$, then E has additive bad reduction at p.
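A minimal sketch of this classification for a short Weierstrass equation $y^2 = x^3 + ax + b$, assuming the equation is minimal at p and p > 3 (for such a model, $\Delta = -16(4a^3 + 27b^2)$ and $c_4 = -48a$):

```python
def reduction_type(a: int, b: int, p: int) -> str:
    """Classify the reduction of y^2 = x^3 + a*x + b at a prime
    p > 3, assuming the equation is minimal at p."""
    disc = -16 * (4 * a**3 + 27 * b**2)   # discriminant Delta
    c4 = -48 * a                          # invariant c_4
    if disc % p != 0:
        return "good"
    if c4 % p != 0:
        return "multiplicative"
    return "additive"

# y^2 = x^3 + x + 1 has Delta = -496 = -16 * 31:
print(reduction_type(1, 1, 31))   # multiplicative: 31 | Delta, 31 does not divide c4
print(reduction_type(1, 1, 5))    # good
```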
See also
Arithmetic zeta function
References
Bibliography
J.-P. Serre, Facteurs locaux des fonctions zêta des variétés algébriques (définitions et conjectures), 1969/1970, Sém. Delange–Pisot–Poitou, exposé 19
Zeta and L-functions
Algebraic geometry | Hasse–Weil zeta function | Mathematics | 1,392 |
796,412 | https://en.wikipedia.org/wiki/Experimental%20evolution | Experimental evolution is the use of laboratory experiments or controlled field manipulations to explore evolutionary dynamics. Evolution may be observed in the laboratory as individuals/populations adapt to new environmental conditions by natural selection.
There are two different ways in which adaptation can arise in experimental evolution. One is via an individual organism gaining a novel beneficial mutation. The other is from allele frequency change in standing genetic variation already present in a population of organisms. Other evolutionary forces outside of mutation and natural selection can also play a role or be incorporated into experimental evolution studies, such as genetic drift and gene flow.
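Both routes can be illustrated with a simple Wright–Fisher simulation combining selection and drift; a minimal sketch (all parameter values here are illustrative, not drawn from any study):

```python
import random

def wright_fisher(p0: float, s: float, N: int, generations: int) -> float:
    """Frequency of a beneficial allele (selection coefficient s)
    after simulating selection plus genetic drift in a population
    of N diploid individuals."""
    p = p0
    for _ in range(generations):
        # selection: reweight the allele by relative fitness 1 + s
        p_sel = p * (1 + s) / (p * (1 + s) + (1 - p))
        # drift: binomial resampling of 2N gametes
        p = sum(random.random() < p_sel for _ in range(2 * N)) / (2 * N)
        if p in (0.0, 1.0):           # allele lost or fixed
            break
    return p

random.seed(1)
# standing variation (p0 = 0.05) versus a single new mutation (p0 = 1/2N)
for p0 in (0.05, 1 / 2000):
    print(f"p0 = {p0:.4f} -> frequency after 500 generations:",
          f"{wright_fisher(p0, 0.05, 1000, 500):.3f}")
```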
The organism used is decided by the experimenter, based on the hypothesis to be tested. Many generations are required for adaptive mutation to occur, so experimental evolution via mutation is carried out in viruses or unicellular organisms with rapid generation times, such as bacteria and asexual clonal yeast. Polymorphic populations of asexual or sexual yeast, and multicellular eukaryotes like Drosophila, can adapt to new environments through allele frequency change in standing genetic variation. Organisms with longer generation times, although costly, can also be used in experimental evolution. Laboratory studies with foxes and with rodents (see below) have shown that notable adaptations can occur within as few as 10–20 generations, and experiments with wild guppies have observed adaptations within comparable numbers of generations.
More recently, experimentally evolved individuals or populations are often analyzed using whole genome sequencing, an approach known as Evolve and Resequence (E&R). E&R can identify mutations that lead to adaptation in clonal individuals or identify alleles that changed in frequency in polymorphic populations, by comparing the sequences of individuals/populations before and after adaptation. The sequence data makes it possible to pinpoint the site in a DNA sequence that a mutation/allele frequency change occurred to bring about adaptation. The nature of the adaptation and functional follow up studies can shed insight into what effect the mutation/allele has on phenotype.
History
Domestication and breeding
Unwittingly, humans have carried out evolution experiments for as long as they have been domesticating plants and animals. Selective breeding of plants and animals has led to varieties that differ dramatically from their original wild-type ancestors. Examples are the cabbage varieties, maize, or the large number of different dog breeds. The power of human breeding to create varieties with extreme differences from a single species was already recognized by Charles Darwin. In fact, he started out his book The Origin of Species with a chapter on variation in domestic animals. In this chapter, Darwin discussed in particular the pigeon.
Early
One of the first to carry out a controlled evolution experiment was William Dallinger. In the late 19th century, he cultivated small unicellular organisms in a custom-built incubator over a time period of seven years (1880–1886). Dallinger slowly increased the temperature of the incubator from an initial 60 °F up to 158 °F. The early cultures had shown clear signs of distress at a temperature of 73 °F, and were certainly not capable of surviving at 158 °F. The organisms Dallinger had in his incubator at the end of the experiment, on the other hand, were perfectly fine at 158 °F. However, these organisms would no longer grow at the initial 60 °F. Dallinger concluded that he had found evidence for Darwinian adaptation in his incubator, and that the organisms had adapted to live in a high-temperature environment. Dallinger's incubator was accidentally destroyed in 1886, and Dallinger could not continue this line of research.
From the 1880s to 1980, experimental evolution was intermittently practiced by a variety of evolutionary biologists, including the highly influential Theodosius Dobzhansky. Like other experimental research in evolutionary biology during this period, much of this work lacked extensive replication and was carried out only for relatively short periods of evolutionary time.
Modern
Experimental evolution has been used in various formats to understand underlying evolutionary processes in a controlled system. Experimental evolution has been performed on multicellular and unicellular eukaryotes, prokaryotes, and viruses. Similar works have also been performed by directed evolution of individual enzyme, ribozyme and replicator genes.
Aphids
In the 1950s, the Soviet biologist Georgy Shaposhnikov conducted experiments on aphids of the Dysaphis genus. By transferring them to plants normally nearly or completely unsuitable for them, he had forced populations of parthenogenetic descendants to adapt to the new food source to the point of reproductive isolation from the regular populations of the same species.
Fruit flies
One of the first of a new wave of experiments using this strategy was the laboratory "evolutionary radiation" of Drosophila melanogaster populations that Michael R. Rose started in February, 1980. This system started with ten populations, five cultured at later ages, and five cultured at early ages. Since then more than 200 different populations have been created in this laboratory radiation, with selection targeting multiple characters. Some of these highly differentiated populations have also been selected "backward" or "in reverse," by returning experimental populations to their ancestral culture regime. Hundreds of people have worked with these populations over the better part of three decades. Much of this work is summarized in the papers collected in the book Methuselah Flies.
The early experiments in flies were limited to studying phenotypes; the molecular mechanisms, i.e., the changes in DNA that facilitated such changes, could not be identified. This changed with genomics technology. Subsequently, Thomas Turner coined the term Evolve and Resequence (E&R), and several studies have used the E&R approach with mixed success. One of the more interesting experimental evolution studies was conducted by Gabriel Haddad's group at UC San Diego, where Haddad and colleagues evolved flies to adapt to low-oxygen environments, also known as hypoxia. After 200 generations, they used the E&R approach to identify genomic regions that were selected by natural selection in the hypoxia-adapted flies. More recent experiments are following up E&R predictions with RNAseq and genetic crosses. Such efforts in combining E&R with experimental validation should be powerful in identifying genes that regulate adaptation in flies.
More recently, experimental evolution in flies has turned toward addressing molecular mechanisms; in doing so, it may pave the way to a better understanding of organismal physiology and thus help redefine disease therapeutics.
Microbes
Many microbial species have short generation times, easily sequenced genomes, and well-understood biology. They are therefore commonly used for experimental evolution studies. The bacterial species most commonly used for experimental evolution include P. fluorescens, Pseudomonas aeruginosa, Enterococcus faecalis and E. coli (see below), while the Yeast S. cerevisiae has been used as a model for the study of eukaryotic evolution.
Lenski's E. coli experiment
One of the most widely known examples of laboratory bacterial evolution is the long-term E. coli experiment of Richard Lenski. On February 24, 1988, Lenski started growing twelve lineages of E. coli under identical growth conditions. When one of the populations evolved the ability to aerobically metabolize citrate from the growth medium and showed greatly increased growth, this provided a dramatic observation of evolution in action. The experiment continues to this day, and is now the longest-running (in terms of generations) controlled evolution experiment ever undertaken. Since the inception of the experiment, the bacteria have grown for more than 60,000 generations. Lenski and colleagues regularly publish updates on the status of the experiments.
Leishmania donovani
Bussotti and collaborators isolated amastigotes from Leishmania donovani and cultured them in vitro for 3800 generations (36 weeks). The culture of these parasites showed how they adapted to in vitro conditions by compensating for the loss of a NIMA-related kinase, important for the correct progression of mitosis, through increased expression of an orthologous kinase as the culture generations progressed. Furthermore, it was observed that L. donovani adapted to in vitro culture by reducing the expression of 23 transcripts related to flagellar biogenesis and increasing the expression of ribosomal protein clusters and non-coding RNAs such as small nucleolar RNAs. Flagella are less necessary for the parasite in in vitro culture, so the progression of generations leads to their elimination, saving energy through lower motility, and the proliferation and growth rate in culture is therefore higher. The amplified snoRNAs also lead to increased ribosome biosynthesis, increased protein biosynthesis, and thus an increased growth rate of the culture. These adaptations observed over generations of parasites are governed by copy number variations (CNVs) and epistatic interactions between the affected genes, and help explain Leishmania's genomic instability through its post-transcriptional regulation of gene expression.
Laboratory house mice
In 1993, Theodore Garland, Jr. and colleagues started a long-term experiment that involves selective breeding of mice for high voluntary activity levels on running wheels. This experiment also continues to this day (> 105 generations). Mice from the four replicate "High Runner" lines evolved to run almost three times as many running-wheel revolutions per day compared with the four unselected control lines of mice, mainly by running faster than the control mice rather than running for more minutes/day. However, the High Runner lines have evolved in somewhat different ways, with some emphasizing running speed versus duration or vice versa, thus demonstrating "multiple solutions" that seem to be based partly in evolved muscle characteristics. The HR mice have an elevated endurance running ability and maximal aerobic capacity when tested on a motorized treadmill. They also exhibit alterations in motivation and the reward system of the brain. Pharmacological studies point to alterations in dopamine function and the endocannabinoid system. The High Runner lines have been proposed as a model to study human attention-deficit hyperactivity disorder (ADHD), and administration of Ritalin reduces their wheel running approximately to the levels of control mice.
Multidirectional selection on bank voles
In 2005 Paweł Koteja with Edyta Sadowska and colleagues from the Jagiellonian University (Poland) started a multidirectional selection on a non-laboratory rodent, the bank vole Myodes (= Clethrionomys) glareolus. The voles are selected for three distinct traits, which played important roles in the adaptive radiation of terrestrial vertebrates: high maximum rate of aerobic metabolism, predatory propensity, and herbivorous capability. Aerobic lines are selected for the maximum rate of oxygen consumption achieved during swimming at 38°C; Predatory lines – for a short time to catch live crickets; Herbivorous lines – for capability to maintain body mass when fed a low-quality diet “diluted” with dried, powdered grass. Four replicate lines are maintained for each of the three selection directions and another four as unselected Controls.
After approximately 20 generations of selective breeding, voles from the Aerobic lines evolved a 60% higher swim-induced metabolic rate than voles from the unselected Control lines. Although the selection protocol does not impose a thermoregulatory burden, both the basal metabolic rate and thermogenic capacity increased in the Aerobic lines. Thus, the results have provided some support for the “aerobic capacity model” for the evolution of endothermy in mammals.
More than 85% of the Predatory voles capture the crickets, compared to only about 15% of unselected Control voles, and they catch the crickets faster. The increased predatory behavior is associated with a more proactive coping style (“personality”).
During the test with low-quality diet, the Herbivorous voles lose approximately 2 grams less mass (approximately 10% of the original body mass) than the Control ones. The Herbivorous voles have an altered composition of the bacterial microbiome in their caecum. Thus, the selection has resulted in evolution of the entire holobiome, and the experiment may offer a laboratory model of hologenome evolution.
Synthetic biology
Synthetic biology offers unique opportunities for experimental evolution, facilitating the interpretation of evolutionary changes by inserting genetic modules into host genomes and applying selection specifically targeting such modules. Synthetic biological circuits inserted into the genome of Escherichia coli or the budding yeast Saccharomyces cerevisiae degrade (lose function) during laboratory evolution. With appropriate selection, mechanisms underlying the evolutionary regain of lost biological function can be studied. Experimental evolution of mammalian cells harboring synthetic gene circuits reveals the role of cellular heterogeneity in the evolution of drug resistance, with implications for chemotherapy resistance of cancer cells.
Other examples
Stickleback fish have both marine and freshwater species, the freshwater species evolving since the last ice age. Freshwater species can survive colder temperatures. Scientists tested to see if they could reproduce this evolution of cold-tolerance by keeping marine sticklebacks in cold freshwater. It took the marine sticklebacks only three generations to evolve to match the 2.5 degree Celsius improvement in cold-tolerance found in wild freshwater sticklebacks.
Microbial cells, and more recently mammalian cells, are evolved under nutrient-limiting conditions to study their metabolic response and to engineer cells for useful characteristics.
For teaching
Because of their rapid generation times microbes offer an opportunity to study microevolution in the classroom. A number of exercises involving bacteria and yeast teach concepts ranging from the evolution of resistance to the evolution of multicellularity. With the advent of next-generation sequencing technology it has become possible for students to conduct an evolutionary experiment, sequence the evolved genomes, and to analyze and interpret the results.
See also
Artificial selection
Bacteriophage experimental evolution
Directed evolution
Domestication
Evolutionary biology
Evolutionary physiology
Genetics
Genomics of domestication
Laboratory experiments of speciation
Quantitative genetics
Selection limits
Selective breeding
Tame Silver Fox
References
Further reading
External links
E. coli Long-term Experimental Evolution Project Site , Lenski lab, Michigan State University
A movie illustrating the dramatic differences in wheel-running behavior.
Experimental Evolution Publications by Ted Garland: Artificial Selection for High Voluntary Wheel-Running Behavior in House Mice — a detailed list of publications.
Experimental Evolution — a list of laboratories that study experimental evolution.
Network for Experimental Research on Evolution, University of California.
Inquiry-based middle school lesson plan: "Born to Run: Artificial Selection Lab"
Digital Evolution for Education software
Evolutionary biology
Biology experiments | Experimental evolution | Biology | 2,975 |
65,954,033 | https://en.wikipedia.org/wiki/The%20Blockhouse%20of%20Boston | The Blockhouse of Boston was a pioneering art and design cooperative of alumni from the Massachusetts College of Art in Boston, Massachusetts that opened its doors in 1947. Blockhouse artisans, primarily the then-recent art school graduate Janet Doub Erickson, designed and produced original textiles including draperies, wall hangings, table linens, costume treatments and other art. The co-op specialized in linoleum blockprints — also known as linocuts — and screen printing. Blockhouse was known for original use of New England themes and motifs intermingled with bold ethnic designs at times inspired by pre-Columbian art and sometimes with modernist motifs. As a journalist described some of Blockhouse principal designer Janet Doub Erickson's inspirations in a 1952 profile, "she goes to New Guinea for her motif, 'Checkerboard,' to China for her "Quan-Yin" design, to Guatemala for "Mayan Stele," and to a Northwest Indian reservation for "Totemotif."
Quite often, however, she just stayed home, looking for inspiration in the architecture and history of Boston and surrounding towns in New England.
Origins, organization, impact, and legacy
Origins
Founded in 1947 by twelve students and alumni of the Massachusetts College of Art, Blockhouse sought to “provide artists the opportunity to establish a dignified and mutually profitable relationship with the buying public.”
The founders described Blockhouse's mission as follows: "Blockhouse hand-printed fabrics are the product of a group of artists searching for a new and socially useful outlet for the expression of their talents. We hope that our designs conceived in freshness of vision and executed with technical skill, will contribute to and stimulate interest in contemporary design as it develops toward a universal idiom."
Organization
Originally located at a gallery in the Oceanside Hotel and Casino on Lexington Avenue in Magnolia, Massachusetts, and then on Cambridge Street in Boston, the cooperative moved, as Blockhouse became more successful, to occupy a floor of 10 Arlington Avenue overlooking Boston Common.
In the beginning, the Blockhouse had two small apartments, one for male members and another for females, where the artists could live dormitory-style at virtually no cost and work in the studio on the premises. The original members paid five dollars each to self-fund the cooperative's initial expense renting a space.
As reported in the Boston Globe, "all chores were shared. No one drew a salary. To earn money a member had to design and print. When an article was sold 70 percent of the proceeds went to the designer, the rest to the Block-house fund. Prices were set low for handiwork - as little as $5 a yard for drape material - in order to reach as wide a market as possible."
Blockhouse artists were responsible for every step in production of their designs. This traditional handicraft method, while slowing and limiting production, assured them control to carry their ideas undistorted into the final pieces. In addition to acting as a center for artists, the Blockhouse also taught classes in silk screen and block printing, ceramics, sketching and painting in watercolor and oil.
Over time, the Blockhouse evolved away from its utopian beginnings to become a more commercially focused enterprise. Of the founders, only partners Janet Doub Erickson and Paul Coombs remained active until Blockhouse's closing in 1955 and Janet Doub Erickson's subsequent departure for Mexico to pursue other artistic projects.
Impact
In addition to its innovative designs, which repeatedly won its designers awards and national recognition, Blockhouse's significance was bolstered by its use of post-war marketing techniques to move artistically innovative work into the broader New England and national marketplace through the synthesis of traditional techniques, diverse designs, and modern guerrilla marketing tactics. From 1947 to 1955, when it closed its doors, the work of Blockhouse was featured in Life, Vogue, The New Yorker, The New York Times, Harper's Bazaar, The Christian Science Monitor, Women's Wear Daily, the Boston Globe, and numerous other regional publications.
Designs from the Blockhouse collection were reproduced in commercial volumes by Wesley Simpson, Inc., Stoffel and Company, Strauss & Mueller, J.H. Thorp, Arundell Clarke, M. Lowenstein Sons, Century Sportswear and The Boka Company. Blockhouse textiles penetrated the larger culture through their popularity with commercial advertisers.
The Blockhouse also sought to penetrate the citadels of high culture. Blockhouse works were featured in exhibitions at Harvard's Fogg Museum, Institute of Contemporary Art, Boston, and the Boston Museum of Fine Arts. Blockhouse work also appeared at the Addison Gallery of American Art, the Wadsworth Atheneum, the Farnsworth Art Museum, the Dallas Museum of Fine Arts, and other galleries across the country.
The United States State Department included Blockhouse textiles in international exhibitions that toured in Europe and Israel during the nineteen-fifties.
Legacy
After Blockhouse disbanded members scattered about New England and other areas of the United States, producing art and teaching Blockhouse-style textile and artistic design through the country. Surviving Blockhouse textiles are mainly in the hands of private collectors and galleries.
Notable members
Blockhouse was founded and led by Paul Coombs and Janet Doub Erickson, both recent graduates of the Massachusetts College of Art. Coombs was a veteran of the two world wars who became interested in art while recovering in the hospital from an injury sustained in the Pacific Rim. Considerably older than his partners, he focused on the commercialization of Blockhouse designs and managed the business side of Blockhouse, although he also contributed original designs.
Other founding members included Elaine Biganess and David Berger.
Janet Doub Erickson was a founding partner, chief designer, and head of production. She was credited with producing ninety percent of the Blockhouse's designs. Among honors, awards, and recognitions over her professional life, at Blockhouse she was the second young Boston artist chosen for recognition by the Institute of Contemporary Art and was profiled in a 1951 issue of Life. She would go on to author popular books on blockprinting, including Printmaking Without A Press (Reinhold 1966) and Block Printing on Textiles (Watson-Guptill 1961). She taught block printing in Massachusetts, Connecticut, New York, California, and elsewhere over her long career after Blockhouse. Her enthusiastic promotion of block printing was influential in its post-war artistic renaissance. Later in life she wrote on textile design and vernacular architecture and published another book of her line drawings of Boston during the Blockhouse period.
Eight other artists joined Blockhouse but were less active in design, production, and commercialization.
References
Mass Art
Visual arts education
Art movements
American printmakers
Graphic design
Design companies established in 1947
Design companies disestablished in 1955
Design companies of the United States
Design history
Designing Women
American graphic designers
Massachusetts College of Art and Design alumni
Textile design
American textile designers | The Blockhouse of Boston | Engineering | 1,405 |
41,100,981 | https://en.wikipedia.org/wiki/Miriam%20Kastner | Miriam Kastner (born January 22, 1935) is a Bratislavan born, (former Czechoslovakia) Israeli raised, American oceanographer and geochemist. Kastner is currently a distinguished professor at Scripps Institution of Oceanography at the University of California, San Diego. She is still recognized by her fundamental contributions to science and is well spoken of amongst colleagues.
Education
Miriam Kastner enjoyed the sciences from childhood and originally wanted to be a mathematician; however, she later decided that mathematics was not the career for her, as it offered far fewer careers to pursue. Early on, Miriam noticed that not many women were scientists, which inspired her to research different sciences.
Kastner attended the Hebrew University of Jerusalem in 1964, where she received a minor in chemistry and a master's degree in geology. After graduation, she wrote her first formal paper, about the hydrothermal systems of the Guaymas Basin in the Gulf of California. Kastner attended Harvard University, Boston, in 1970, where she was exposed to oceanography and later received her doctorate in geoscience. For three years, Kastner was the only woman in her department while studying at Harvard. Women were not taken very seriously by other faculty members, resulting in a discouraging environment. Faculty members also expected less from their female students and counterparts, although there were some who supported Kastner and fellow female academics, such as her Ph.D. committee.
Career
Over the course of her career, Kastner progressed from associate professor to professor, and is now a distinguished professor at the Scripps Institution of Oceanography, where she has participated in writing and publishing 174 journal articles. Kastner has worked at the Scripps Institution from 1972 to the present. Kastner became the second female professor at the Scripps Institution, only two months after the first, a geophysicist, had joined the faculty; this paved the way for many female scientists at the time and in the future. Prior to teaching at the Scripps Institution, Kastner worked as a research associate at Harvard University in the department of geological sciences until 1970. In 1971 she worked at the University of Chicago as a research associate in the Department of Geophysical Sciences. Some believe she has accomplished more work than anyone else in the marine geology community, and her publications contain high-quality data and ideas that show consistency in addressing the big issues in the Earth sciences. Miriam Kastner's research is primarily based in mineralogy and petrology, though the most important issue pursued in her career is fluid flow at subducting plate boundaries.
Kastner, of the SIO (Scripps Institution of Oceanography) in La Jolla, California, has noted that subsea vents were unknown to science until the 1980s. Since their discovery, it has been observed that the ocean cycles through these vents once every five million years, and that the subduction zones change once every 200 million years.
Throughout her long and successful career, Miriam Kastner produced dozens of publications highlighting her key research. Her first publication, dating back to 1965, examined the mineral glauconite and documented its properties. Over the next 15 or so years, her research focused more on the analysis of deep-sea sedimentation. For the next 20 years of her career she continued her research on deep-sea sedimentation, but her focus shifted more toward hydrogeology, fluid dynamics, and the effects of this sedimentation and of mineral deposits. In recent years, she has examined isotopes and their concentrations in the oceans. Most recently, she has compiled a brief synopsis of her ocean drilling work over the past 50 years.
Academic roles
Along with being a professor, Kastner has served many roles at the Scripps Institution of Oceanography, including chair and vice chair of the faculty, associate director and director of the geosciences research division, chair of the Academic Senate Committee on Research, and curricular group coordinator of geological sciences. From 2003 to 2005, she served on the National Research Council's Ocean Studies Board. As a woman in a once male-dominated profession, Kastner found it difficult to garner support from science-related funding agencies. She was glad to see recent improvement in the number of women pursuing science-related degrees; however, she believes there is still room for improvement, since although roughly 50% of students in science programs are women, only approximately 20% of field researchers at the institutes are. In her view, young women should have more confidence when applying for field research positions, as support for women in the sciences has improved drastically in comparison to her earlier years.
Early career achievements
Much of what Kastner achieved came from the earlier part of her career, when she put her talents to work and directed her focus to the origin of authigenic feldspars; she also focused on zeolites in oceanic sediments during that time. Delving deeper into the significance of Kastner's work, her first publication, "Notes on the mineralogy and origin of glauconite", documents her findings on the properties and classification of glauconite. Although others had documented observations of glauconite, the results varied greatly, and Kastner was the first to point out that these studies were largely flawed due to the failure to take into account the large deposits of non-structural iron oxides, which would ultimately skew the results. With the oceanic sediments, she determined that the diagenetic transformations of opal-A to opal-CT and quartz are important to the formation of siliceous marine deposits. Kastner also found that dolomite formation is ultimately controlled by its associated pore-fluid geochemistry. The discovery solved an outstanding problem in carbonate mineral science. Kastner's measurements of the Sr distribution coefficient were critical for establishing strontium concentrations in calcite, which were ultimately used for paleoclimate studies that depend on carbonate Sr proxies; the discovery was also used for indicating carbonate recrystallization. Kastner also worked vigorously on phosphate deposits; her work included a revision of the stability of P-O bonds in apatite and phosphate ions, after which the ocean residence time of phosphorus was recalculated. Her research focuses on the geochemistry of fluid-rock interactions, mostly with ocean chemistry. This encompasses their significance to marine minerals, with the aim of gaining knowledge and understanding of how the Earth works. Gas hydrate research has interested Kastner and many fellow geoscientists due both to its possible contribution to global warming and to its potential as an energy source, given the amount of methane found in these oceanic hydrates. By studying these marine events, Kastner has stated, people can be better prepared to predict global warming and may be able to avoid sudden climatic responses to anthropogenic perturbations.
Key research
Kastner's area of research is "mostly geochemistry on fluid work interactions", specifically with seawater. Her research expertise is in the fluctuation of fluids at plate boundaries, specifically where two plates meet to cause earthquakes, and at ridge-crests where hydrothermal deposits are found. She has authored over 80 scientific articles. Kastner's work is based on numerous studies, including the following:
Long-term monitoring in observatories of marine gas hydrates and implications for climate change, slope stability, and ocean chemistry
On the oceanic contribution of methane to the atmosphere
Chemical paleoceanography: establishing new marine phases based on the ocean's geological history.
Sediment, geochemical and diagenetic processes with emphasis on marine authigenic minerals like phosphates, silicates, carbonates
One of Kastner's most important publications is one of her most recent, "50 years of scientific drilling". This article is particularly significant as it highlights some of her major findings over the last 50 years. The paper reviews drilling projects and highlights the major scientific achievements of the work. The major drilling projects mentioned in the article are the Mohole Project, JOIDES and the Deep Sea Drilling Project, the Ocean Drilling Program, and the Integrated Ocean Drilling Program and International Ocean Discovery Program. Each of these projects has made significant contributions to the field of geology. The Mohole Project was famous for recovering large deposits of subseafloor basalt; JOIDES and the Deep Sea Drilling Project were known for being among the first to identify and record the sedimentary rock layering of the ocean floors; and the Integrated Ocean Drilling Program and International Ocean Discovery Program made findings that helped shape the education of undergraduates as well as students in grades K-12. Some of the major achievements of scientific ocean drilling are listed below:
Helped set the standard for the geological time scale by refining the geomagnetic time scale and how it relates to astronomical chronologies
Helped link long term climate changes to Earth's orbital variability
Proved that Antarctica was largely iceless approximately 40 million years ago
Discovered the most complete marine records of the Cretaceous/Paleogene mass extinction and potential evidence linking the extinction event to a large asteroid
Provided early evidence of the theory of plate tectonics
Provided the first evidence of the age dependent growth of the oceanic lithosphere
Accurately narrowed down the age of the Earth based on sedimentary record
This is not a complete list.
Getty kouros test
In the early 1990s, Kastner produced an experimental result which cast doubt on a thesis about dolomite leaching used in dating the Getty kouros, the statue at the centre of a forgery claim. By artificially inducing de-dolomitization in the laboratory, she produced a result since confirmed by Stanley Margolis, a geology professor at the University of California at Davis, who had previously determined that this process could occur only over the course of many centuries, making forgery unlikely.
Publications
Kastner has published many articles, here is a partial list:
1965- Y. K. Bentor, Miriam Kastner. Notes on the Mineralogy and Origin of Glauconite. SEPM Journal of Sedimentary Research (1965), Vol. 35
1972- Siever R, Kastner M. Shale petrology by electron microprobe; pyrite-chloride relations. Journal of Sedimentary Research (1972), 42(2):350-355
1977- Kastner M, Keene JB, Gieskes JM. Diagenesis of siliceous oozes-I. Chemical controls on the rate of opal-A to opal-CT transformation-an experimental study Geochimica Et Cosmochimica Acta. 41: 1041-1051,1053-1059.
1980- Lonsdale PF, Bischoff JL, Burns VM, Kastner M, Sweeney RE. A high-temperature hydrothermal deposit on the seabed at a gulf of California spreading center Earth and Planetary Science Letters. 49: 8-20
1980- Einsele G, Gieskes JM, Curray J, Moore DM, Aguayo E, Aubry MP, Fornari D, Guerrero J, Kastner M, Kelts K, Lyle M, Matoba Y, Molina-Cruz A, Niemitz J, Rueda J, et al. Intrusion of basaltic sills into highly porous sediments, and resulting hydrothermal activity Nature. 283: 441-445.
1980- Spiess FN, Macdonald KC, Atwater T, Ballard R, Carranza A, Cordoba D, Cox C, Garcia VM, Francheteau J, Guerrero J, Hawkins J, Haymon R, Hessler R, Juteau T, Kastner M, et al. East pacific rise: hot springs and geophysical experiments. Science. 207: 1421-33.
1979- Kastner M, Siever R. Low temperature feldspars in sedimentary rocks American Journal of Science. 279: 435-479.
1981- Haymon RM, Kastner M. Hot spring deposits on the East Pacific Rise at 21°N: preliminary description of mineralogy and genesis Earth and Planetary Science Letters. 53: 363-381.
1981- Baker PA, Kastner M. Constraints on the formation of sedimentary dolomite. Science. 213: 214-6.
1983- Kastner M, Siever R. Siliceous sediments of the Guaymas Basin: the effect of high thermal gradients on diagenesis. Journal of Geology.
1986- Kastner M, Gieskes JM, Hu JY. Carbonate recrystallization in basal sediments: Evidence for convective fluid flow on a ridge flank Nature. 321: 158-161.
1990- Garrison RE, Kastner M. Phosphatic sediments and rocks recovered from the Peru margin during ODP Leg 112 Proc., Scientific Reports, Odp, Leg 112, Peru Continental Margin.
1991- Martin JB, Kastner M, Elderfield H. Lithium: sources in pore fluids of Peru slope sediments and implications for oceanic fluxes Marine Geology. 102: 281-292.
1993- Martin JB, Gieskes JM, Torres M, Kastner M. Bromine and iodine in Peru margin sediments and pore fluids: Implications for fluid origins Geochimica Et Cosmochimica Acta. 57: 4377-4389.
1995- Martin EE, Macdougall JD, Herbert TD, Paytan A, Kastner M. Strontium and neodymium isotopic analyses of marine barite separates Geochimica Et Cosmochimica Acta. 59: 1353-1361.
1996- Paytan A, Kastner M, Chavez FP. Glacial to Interglacial Fluctuations in Productivity in the Equatorial Pacific as Indicated by Marine Barite Science. 274: 1355-7.
1998- Ransom B, Kim D, Kastner M, Wainwright S. Organic matter preservation on continental slopes: importance of mineralogy and surface area Geochimica Et Cosmochimica Acta. 62: 1329-1345.
1998- Paytan A, Kastner M, Campbell D, Thiemens MH. Sulfur isotopic composition of cenozoic seawater sulfate Science. 282: 1459-62.
2001- Valentine DL, Blanton DC, Reeburgh WS, Kastner M. Water column methane oxidation adjacent to an area of active hydrate dissociation, Eel River Basin Geochimica Et Cosmochimica Acta. 65: 2633-2640.
2008- Newman KR, Cormier MH, Weissel JK, Driscoll NW, Kastner M, Solomon EA, Robertson G, Hill JC, Singh H, Camilli R, Eustice R. Active methane venting observed at giant pockmarks along the U.S. mid-Atlantic shelf break Earth and Planetary Science Letters. 267: 341-352. DOI: 10.1016/J.Epsl.2007.11.053
2009- Solomon EA, Kastner M, Wheat CG, Jannasch H, Robertson G, Davis EE, Morris JD. Long-term hydrogeochemical records in the oceanic basement and forearc prism at the Costa Rica subduction zone Earth and Planetary Science Letters. 282: 240-251. DOI: 10.1016/J.Epsl.2009.03.022
2011- Joye SB, Leifer I, MacDonald IR, Chanton JP, Meile CD, Teske AP, Kostka JE, Chistoserdova L, Coffin R, Hollander D, Kastner M, Montoya JP, Rehder G, Solomon E, Treude T, et al. Comment on "A persistent oxygen anomaly reveals the fate of spilled methane in the deep Gulf of Mexico". Science. 332: 1033; author reply 1.
2012- Solomon EA, Kastner M. Progressive barite dissolution in the Costa Rica forearc - Implications for global fluxes of Ba to the volcanic arc and mantle Geochimica Et Cosmochimica Acta. 83: 110-124. DOI: 10.1016/J.Gca.2011.12.021
2019- Martinez-Ruiz F, Paytan A, Gonzalez-Muñoz M, Jroundi F, Abad M, Lam P, Bishop J, Horner T, Morton P, Kastner M. Barite formation in the ocean: Origin of amorphous and crystalline precipitates Chemical Geology. 511: 441-451.
Awards and honours
Newcomb Cleveland prize, American Association for the Advancement of Science (1980)
Guggenheim Fellow (1982)
Charles R. Bennett Service through Chemistry Award, American Chemical Society (1984)
Ocean Science Education Award, Office of Naval Research (1991)
Fellow of John Simon Guggenheim Memorial Foundation (1992)
Fellow, American Association for the Advancement of Science (1993)
Fellow, American Geophysical Union (1997)
Fellow, Geochemical Society and European Association of Geochemistry (1998)
Hans Pettersson Medal, Royal Swedish Academy of Sciences (1999)
Fellow, Geological Society of America (2004)
Maurice Ewing Medal, American Geophysical Union (2008)
Fellow, International Association of GeoChemistry (2010)
Francis Shepard Medal for Excellence in Marine Geology, Society for Sedimentary Geology (2011)
V. M. Goldschmidt Award, Geochemical Society (2015)
Leopold-von-Buch-Plakette, German Geological Society (2018)
References
American geochemists
American oceanographers
1935 births
Living people
American women geologists
Fellows of the Geological Society of America
Fellows of the American Geophysical Union
21st-century American scientists
20th-century American women scientists
21st-century American women scientists
Harvard University alumni
Scripps Institution of Oceanography alumni
Recipients of the V. M. Goldschmidt Award | Miriam Kastner | Chemistry | 3,707 |
15,182,568 | https://en.wikipedia.org/wiki/Cihuacalli | Cihuacalli was an Aztec word referring to buildings known as "women-houses", which were places for women's work. This likely included prostitution, but was not limited to it; it likely also included kitchens or other areas for the drudgery of women's work. Under Montezuma I these buildings were also used as part of the ceremonies for fallen warriors, in which an effigy was placed for four days for women to mourn over and then moved in front of the temple to be burned. The Aztecs considered the prostitution in the Cihuacalli to be sacred, and it catered to religious and political authorities.
The Cihuacalli was an enclosed compound with rooms that faced centrally towards a patio, in which stood a statue of Tlazolteotl. Tlazolteotl was the goddess of sexual impurity and sinful behavior, and it was to her that the women prayed for absolution. Aztec religious leaders believed that if a woman chose to practice prostitution she should do so under the protection of Tlazolteotl, who incited sexual activity while performing spiritual cleansing for sexual acts.
There are stories that also refer to certain places, either inside the Cihuacalli or outside, where women would perform erotic dances in front of men. The poet Tlaltecatzin of Cuauhchinanco noted that special "Joyful Women" would perform erotic dances at certain homes outside of the compound.
References
Aztec
Prostitution | Cihuacalli | Biology | 299 |
1,126,592 | https://en.wikipedia.org/wiki/Temporally%20ordered%20routing%20algorithm | The Temporally Ordered Routing Algorithm (TORA) is an algorithm for routing data across Wireless Mesh Networks or Mobile ad hoc networks.
It was developed by Vincent Park and Scott Corson at the University of Maryland and the Naval Research Laboratory. Park has patented his work, and it was licensed by Nova Engineering, who are marketing a wireless router product based on Park's algorithm.
Operation
TORA attempts to achieve a high degree of scalability using a "flat", non-hierarchical routing algorithm. In its operation the algorithm attempts to suppress, to the greatest extent possible, the generation of far-reaching control message propagation. To achieve this, TORA does not use a shortest-path solution, an approach which is unusual for routing algorithms of this type.
TORA builds and maintains a Directed Acyclic Graph (DAG) rooted at a destination. No two nodes may have the same height.
Information may flow from nodes with higher heights to nodes with lower heights. Information can therefore be thought of as a fluid that may only flow downhill. By maintaining a set of totally ordered heights at all times, TORA achieves loop-free multipath routing, as information cannot 'flow uphill' and so cross back on itself.
The key design concept of TORA is the localization of control messages to a very small set of nodes near the occurrence of a topological change. To accomplish this, nodes maintain routing information about adjacent (one-hop) nodes. The protocol performs three basic functions:
Route creation
Route maintenance
Route erasure
During the route creation and maintenance phases, nodes use a height metric to establish a directed acyclic graph (DAG) rooted at the destination. Thereafter, links are assigned a direction based on the relative heights of neighboring nodes. When mobility breaks the DAG, the route maintenance function re-establishes a DAG rooted at the destination.
Timing is an important factor for TORA because the height metric depends on the logical time of a link failure. TORA's route erasure phase essentially involves flooding a broadcast clear (CLR) packet throughout the network to erase invalid routes.
Route creation
A node which requires a link to a destination, because it has no downstream neighbours for it, broadcasts a QRY (query) packet and sets its (formerly unset) route-required flag. A QRY packet contains the id of the destination to which a route is sought. The reply to a query is an update (UPD) packet, which contains the height quintuple of the neighbour answering the query and a destination field indicating which destination the update refers to.
A node receiving a QRY packet does one of the following:
If its route-required flag is set, the node has itself already issued a QRY for the destination, so it discards the packet rather than forwarding it, to prevent message overhead.
If the node has no downstream links and the route-required flag was not set, it sets its route-required flag and rebroadcasts the QRY message.
A node receiving an update packet updates the height value of its neighbour in the table and takes one of the following actions:
If the reflection bit of the neighbour's height is not set and its own route-required flag is set, the node sets its height for the destination to that of its neighbour but increments the offset d by one. It then clears the RR flag and sends an UPD message to its neighbours, so they may route through it.
If the neighbour's route is not valid (which is indicated by the reflection bit) or the RR flag was unset, the node only updates the neighbour's entry in its table.
Each node maintains a neighbour table containing the heights of its neighbour nodes. Initially the height of every node is NULL (this is not zero "0" but NULL "-"), so its quintuple is (-,-,-,-,i). The height of a destination neighbour is (0,0,0,0,dest).
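The bookkeeping above can be made concrete with a short Python sketch. It is illustrative only: the Node and Network classes, their method names, and the flat one-shot broadcast model are inventions for this example, not part of any TORA specification.

# Toy model of TORA route creation. Heights are quintuples
# (tau, oid, r, delta, i): a reference level (tau, oid, r), an offset delta,
# and the node's own id i. None stands for the NULL height (-,-,-,-,i).

class Node:
    def __init__(self, node_id, height=None):
        self.id = node_id
        self.height = height          # e.g. (0, 0, 0, 0, 'D') at the destination
        self.route_required = False   # the RR flag
        self.neighbor_heights = {}    # neighbor id -> last height heard

    def send_qry(self, net, dest):
        """No downstream neighbour for dest: set RR and broadcast a QRY."""
        self.route_required = True
        net.broadcast(self, ("QRY", dest))

    def on_qry(self, net, dest):
        if self.route_required:
            return                    # already queried: discard to limit overhead
        if self.height is not None:   # we have a route: answer with an UPD
            net.broadcast(self, ("UPD", dest, self.height))
        else:                         # otherwise keep propagating the query
            self.route_required = True
            net.broadcast(self, ("QRY", dest))

    def on_upd(self, net, dest, nbr_id, nbr_height):
        self.neighbor_heights[nbr_id] = nbr_height
        tau, oid, r, delta, _ = nbr_height
        if r == 0 and self.route_required:   # valid, non-reflected height
            self.height = (tau, oid, r, delta + 1, self.id)  # one hop higher
            self.route_required = False
            net.broadcast(self, ("UPD", dest, self.height))

class Network:
    """Flat toy medium: every broadcast reaches every other node."""
    def __init__(self, nodes):
        self.nodes = nodes

    def broadcast(self, sender, pkt):
        for n in [n for n in self.nodes if n is not sender]:
            if pkt[0] == "QRY":
                n.on_qry(self, pkt[1])
            else:
                n.on_upd(self, pkt[1], sender.id, pkt[2])

# D is the destination; C needs a route and issues a QRY, as in the
# example that follows.
d = Node('D', height=(0, 0, 0, 0, 'D'))
c, e = Node('C'), Node('E')
net = Network([c, e, d])
c.send_qry(net, 'D')
print(c.height, e.height)
# prints (0, 0, 0, 1, 'C') (0, 0, 0, 2, 'E'); E's delta of 2 is an
# artifact of the toy delivery order, but both heights sit above D's.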
Node C requires a route, so it broadcasts a QRY.
The QRY propagates until it hits a node which has a route to the destination; this node then sends an UPD message.
The UPD is also propagated, while node E sends a new UPD.
Route maintenance
Route maintenance in TORA distinguishes five different cases; the following example illustrates partition detection and route erasure:
Partition detection and route erasure
The links between Nodes D-F and E-F reverse.
Node D propagates the reference level.
Node E "reflects" the reference level. The reference heights of the neighbors are equal, with the reflection bit not set. E sets the reflection bit to indicate the reflection and sets its offset to 0.
Node C propagates the new reference level.
Node A propagates the reference level.
Route erasure
When a node has detected a partition, it sets its own height and the heights of all its neighbours for the destination to NULL in its table, and it issues a CLR (clear) packet. The CLR packet consists of the reflected reference level (t,oid,1) and the destination id.
If a node receives a CLR packet and the reference level matches its own, it sets its own height and all the neighbours' heights for the destination to NULL and rebroadcasts the CLR packet. If the reference level does not match its own, it only sets to NULL the heights of those neighbours in its table that match the reflected reference level, and updates their link status.
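Continuing the sketch above, CLR handling can be written as a single function. Again this is illustrative: it reuses the invented Node class, and the Network stub shown earlier would need a matching "CLR" branch to deliver these packets.

# Toy model of TORA route erasure, reusing the Node sketch above.
def on_clr(node, net, dest, ref_level):
    """ref_level is the reflected reference level (tau, oid, 1)."""
    own_ref = node.height[:3] if node.height is not None else None
    if own_ref == ref_level:
        # Matching reference level: wipe our own height and all neighbour
        # heights for dest, then propagate the CLR deeper into the partition.
        node.height = None
        for nbr in node.neighbor_heights:
            node.neighbor_heights[nbr] = None
        net.broadcast(node, ("CLR", dest, ref_level))
    else:
        # Otherwise only invalidate the neighbours whose heights carry the
        # reflected reference level, updating their link status.
        for nbr, h in list(node.neighbor_heights.items()):
            if h is not None and h[:3] == ref_level:
                node.neighbor_heights[nbr] = None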
References
External links
TORA Specification (Internet Draft 2001, expired)
MODIS Group Management of Data and Information Systems
Routing algorithms
Wireless networking
Ad hoc routing protocols | Temporally ordered routing algorithm | Technology,Engineering | 1,171 |
36,985,868 | https://en.wikipedia.org/wiki/Frobenioid | In arithmetic geometry, a Frobenioid is a category with some extra structure that generalizes the theory of line bundles on models of finite extensions of global fields. Frobenioids were introduced by . The word "Frobenioid" is a portmanteau of Frobenius and monoid, as certain Frobenius morphisms between Frobenioids are analogues of the usual Frobenius morphism, and some of the simplest examples of Frobenioids are essentially monoids.
The Frobenioid of a monoid
If M is a commutative monoid, it is acted on naturally by the monoid N of positive integers under multiplication, with an element n of N multiplying an element of M by n. The Frobenioid of M is the semidirect product of M and N. The underlying category of this Frobenioid is the category of the monoid, with one object and a morphism for each element of the monoid. The standard Frobenioid is the special case of this construction when M is the additive monoid of non-negative integers.
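Concretely, writing M additively and letting n in N act by m ↦ nm, the morphisms of this Frobenioid can be described as pairs (m, n) in M × N, composed by the semidirect-product rule (a sketch, under one common convention):

\[
(m_1, n_1) \circ (m_2, n_2) = (m_1 + n_1 m_2,\; n_1 n_2)
\]

For the standard Frobenioid, where M is the additive monoid of non-negative integers, composition is simply this arithmetic on pairs of non-negative integers.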
Elementary Frobenioids
An elementary Frobenioid is a generalization of the Frobenioid of a commutative monoid, given by a sort of semidirect product of the monoid of positive integers with a family Φ of commutative monoids over a base category D. In applications the category D is sometimes the category of models of finite separable extensions of a global field, Φ corresponds to the line bundles on these models, and the action of a positive integer n in N is given by taking the nth power of a line bundle.
Frobenioids and poly-Frobenioids
A Frobenioid consists of a category C together with a functor to an elementary Frobenioid, satisfying some complicated conditions related to the behavior of line bundles and divisors on models of global fields. One of Mochizuki's fundamental theorems states that under various conditions a Frobenioid can be reconstructed from the category C. A poly-Frobenioid is an extension of a Frobenioid.
See also
Category theory
Anabelian geometry
Inter-universal Teichmüller theory
References
External links
What is an étale theta function?
Algebraic geometry
Number theory | Frobenioid | Mathematics | 483 |
30,117,312 | https://en.wikipedia.org/wiki/Wilk%20Elektronik | Wilk Elektronik is a Polish manufacturer of computer memory under the brand name "GOODRAM" based in Łaziska Górne. After the bankruptcy of Qimonda it remains the only European producer of RAM modules.
History
The company was established in Tychy in 1991 as a RAM distributor. In 1996 it became the leader in the Polish memory distribution market. In 2003 the company moved to Łaziska Górne, where it has been manufacturing its own products under the brand name "GOODRAM" ever since. Another brand, "Gooddrive", under which flash drives and SSDs had been sold, was replaced in 2011 with the unified "GOODRAM" brand.
Since 2008, WE has been the official distributor of Toshiba flash products for Central and Eastern Europe as well as Middle East and Africa. It also cooperates with Elpida, Micron and Samsung.
In 2009, Wilk Elektronik's revenues exceeded US$100 million.
Wilk Elektronik was ranked 43rd in a 2009 ranking of 200 Polish IT companies compiled by Computerworld.
Products
RAM modules
Memory cards
USB flash drives
Solid-state drives
References
Computer hardware companies
Electronics companies established in 1991
Computer memory companies
Electronics companies of Poland
Information technology companies of Poland
Polish brands
Polish companies established in 1991 | Wilk Elektronik | Technology | 261 |
51,141,372 | https://en.wikipedia.org/wiki/Metalliferous%20Mines%20Regulations%201961 | The Metalliferous Mines Regulations 1961 replaces both the Metalliferous Mines Regulations, 1926 and the Mysore Gold Mines Regulations, 1953 to prevent possible dangers, accidents and deaths from mining in India.
Important regulations
9: Notice of accident
10: Notice of disease
60, 61, 63 & 64: Mine plans and sections
106 to 118: Methods of working in mines
119 to 130: Danger from fire, dust, gas and water
146, 148: Standards of lighting in the mines
153 to 170: Use of explosives in mines
See also
Coal Mines Regulation Act 1908
References
Mining law and governance
Mine safety
Occupational safety and health organizations
Safety engineering
Indian legislation
1961 in Indian law
Mining in India | Metalliferous Mines Regulations 1961 | Engineering | 136 |
43,135,272 | https://en.wikipedia.org/wiki/International%20Society%20of%20Biomechanics | The International Society of Biomechanics, commonly known as the ISB, is a society dedicated to promoting biomechanics in its various forms. It promotes the study of all areas of biomechanics at the international level, although special emphasis is given to the biomechanics of human movement. The Society encourages international contacts amongst scientists, promotes the dissemination of knowledge, and forms liaisons with national organizations. The Society's membership includes scientists from a variety of disciplines including anatomy, physiology, engineering (mechanical, industrial, aerospace, etc.), orthopedics, rehabilitation medicine, sport science and medicine, ergonomics, electro-physiological kinesiology and others.
History
The decision to establish the society was made at the 3rd International Seminar on Biomechanics held in Rome in 1971. This meeting was organized by the “Working Group on Biomechanics” which was part of the International Council of Sport and Physical Education, which itself was part of the United Nations Educational, Scientific, and Cultural Organization (UNESCO). At this meeting on September 29 it was voted to form the ISB at the next meeting. The 4th International Seminar on Biomechanics was held at Penn State University from August 26 until August 31, 1973. The constitution was voted on and approved on August 29. Two hundred and fifty of those present became charter members of the society.
Executive Council
The ISB is governed by its Executive Council. This council is elected every two years, by ballot, and is composed of officers and council members who represent countries from throughout the world and scientific areas that span all facets of biomechanics. The council, which meets annually, provides leadership for the continued development of the Society. Many ongoing activities are performed by Council-appointed sub-committees. The council also publishes a quarterly newsletter, known as ISB NOW, to inform members of Society developments and future events.
Congresses
The ISB was formed in 1973 and has held a conference every other year since then. The numbering of the congresses began with the 1st International Seminar on Biomechanics, held in Zurich in 1967. The list of conferences and their geographical locations is given below.
Wartenweiler Memorial Lecture
At each ISB Congress the Wartenweiler Memorial Lecture is presented. This lecture is named in honor of Jurg Wartenweiler (1915-1976), the first president of the ISB. He organized the First International Seminar on Biomechanics in Zürich, Switzerland, in 1967, a meeting which eventually evolved into the biennial ISB Congresses. He was a faculty member at ETH Zürich. Typically this lecture has been the first academic presentation of the conference. The list of Wartenweiler Memorial Lecturers and their topics follows.
Muybridge Medal
At the ISB Congress every two years, the ISB presents the Muybridge Award. This award is the most prestigious award of the Society and is given for career achievements in biomechanics. The award is named after Eadweard Muybridge (1830-1904), who was one of the first to use cinematography for the study of human and animal movement. The list of Muybridge Award winners and their lecture topics follows.
Honorary Member
The ISB has a number of categories of membership, including student, charter, full, and emeritus. The remaining category is that of honorary member, which is restricted to a few individuals whose work has made outstanding contributions to the development of biomechanics. The honorary membership currently consists of 16 individuals, some of whom are deceased (Levan Chkhaidze, James Hay, Ernst Jokl, Chauncey Morehouse, John Paul, Jacquelin Perry, David Winter). The other honorary members and their current academic affiliations are:
Peter R. Cavanagh, University of Washington, USA
Paavo Komi, University of Jyvaskyla, Finland
Hideji Matsui, University of Nagoya, Japan
Doris Miller, University of Western Ontario, Canada
Mitsumasa Miyashita, University of Tokyo, Japan
Richard C. Nelson, Penn State University, USA
Benno Nigg, University of Calgary, Canada
Robert Norman, University of Waterloo, Canada
Fred Yeadon, Loughborough University, UK
Affiliated Groups
Many other biomechanics groups and societies are affiliate members of the ISB. These groups include:
American Society of Biomechanics
Australian and New Zealand Society of Biomechanics
Brazilian Society of Biomechanics
British Association of Sport and Exercise Sciences
Bulgarian Society of Biomechanics
Canadian Society of Biomechanics
Chinese Society of Sports Biomechanics
Czech Society of Biomechanics
Danish Society of Biomechanics
German Society of Biomechanics
Hellenic Society of Biomechanics (Greece)
International Society of Biomechanics in Sports
Japanese Society of Biomechanics
Korean Society for Orthopaedic Research, Biomechanics, and Basic Science
Polish Society of Biomechanics
Portuguese Society of Biomechanics
Russian Society of Biomechanics
Societe de Biomecanique (France)
Taiwanese Society of Biomechanics
International Women in Biomechanics
Technical and Working Groups
The Society also supports technical and working groups, which are groups of individuals dedicated to enhancing knowledge of specialized areas within biomechanics. Currently active technical sections include:
Computer Simulation
Shoulder Biomechanics
Footwear Biomechanics
3-D Motion Analysis
References
Biomechanics
International learned societies | International Society of Biomechanics | Physics | 1,110 |
8,562,026 | https://en.wikipedia.org/wiki/Electroluminescent%20wire | Electroluminescent wire (often abbreviated as EL wire) is a thin copper wire coated in a phosphor that produces light through electroluminescence when an alternating current is applied to it. It can be used in a wide variety of applications—vehicle and structure decoration, safety and emergency lighting, toys, clothing etc.—much as rope light or Christmas lights are often used. Unlike these types of strand lights, EL wire is not a series of points, but produces a continuous unbroken line of visible light. Its thin diameter makes it flexible and ideal for use in a variety of applications such as clothing or costumes.
Structure
EL wire's construction consists of five major components. First is a solid-copper wire core coated with phosphor. A very fine wire or pair of wires is spiral-wound around the phosphor-coated copper core and then the outer Indium tin oxide (ITO) conductive coating is evaporated on. This fine wire is electrically isolated from the copper core. Surrounding this "sandwich" of copper core, phosphor and fine copper wire is a clear PVC sleeve. Finally, surrounding this thin and clear PVC sleeve is another clear, colored translucent or fluorescent PVC sleeve.
An alternating current electric potential of approximately 90 to 120 volts at about 1000 Hz is applied between the copper core wire and the fine wire that surrounds the copper core. The wire can be modeled as a coaxial capacitor with about 1 nF of capacitance per 30 cm, and the rapid charging and discharging of this capacitor excites the phosphor to emit light. The colors of light that can be produced efficiently by phosphors are limited, so many types of wire use an additional fluorescent organic dye in the clear PVC sleeve to produce the final result. These organic dyes produce colors like red and purple when excited by the blue-green light of the core.
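Modelling the wire as a pure capacitance, the drive current follows from I = 2πfCV. A short Python sketch using the rough figures quoted above (the function name and defaults are illustrative, and the result is an order-of-magnitude estimate only):

import math

def el_wire_current_ma(length_m, volts=110.0, freq_hz=1000.0,
                       farads_per_30cm=1e-9):
    """Estimate the RMS drive current (in mA) for EL wire modelled as a
    coaxial capacitor, using I = 2*pi*f*C*V."""
    capacitance = farads_per_30cm * (length_m / 0.30)
    return 2 * math.pi * freq_hz * capacitance * volts * 1000

# 10 m of wire at ~110 V and 1 kHz draws on the order of 20 mA:
print(f"{el_wire_current_ma(10.0):.1f} mA")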
A resonant oscillator is typically used to generate the high voltage drive signal. Because of the capacitance load of the EL wire, using an inductive (coiled) transformer makes the driver a very efficient tuned LC oscillator. The efficiency of EL wire is very high, and thus up to a hundred meters of EL wire can be driven by AA batteries for several hours.
In recent years, the LC circuit has been replaced for some applications with a single chip switched capacitor inverter IC such as the Supertex HV850; this can run 30 cm of angel hair wire at high efficiency, and is suitable for solar lanterns and safety applications. The other advantage of these chips is that the control signals can be derived from a microcontroller, so brightness and colour can be varied programmatically; this can be controlled by using external sensors that sense, for example, battery state, ambient temperature, or ambient light etc.
EL wire - in common with other types of EL devices - does have limitations: at high frequency it dissipates a lot of heat, and that can lead to breakdown and loss of brightness over time. Because the wire is unshielded and typically operates at a relatively high voltage, EL wire can produce high-frequency interference (corresponding to the frequency of the oscillator) that can be picked up by sensitive audio equipment, such as guitar pickups.
There is also a voltage limit: typical EL wire breaks down at around 180 volts peak-to-peak, so if using an unregulated transformer, back-to-back zener diodes and series current-limiting resistors are essential.
In addition, EL sheet and wire can sometimes be used as a touch sensor, since compressing the capacitor will change its value.
Sequencers
EL wire sequencers can flash electroluminescent wire, or EL wire, in sequential patterns. EL wire requires a low-power, high-frequency driver to cause the wire to illuminate. Most EL wire drivers simply light up one strand of EL wire in a constant-on mode, and some drivers may additionally have a blink or strobe mode. A sound-activated driver will light EL wire in synchronization to music, speech, or other ambient sound, but an EL wire sequencer will allow multiple lengths of EL wire to be flashed in a desired sequence. The lengths of EL wire can all be the same color, or a variety of colors.
The images above show a sign that displays a telephone number, where the numbers were formed using different colors of EL wire. There are ten numbers, each of which is connected to a different channel of the EL wire sequencer.
Like EL wire drivers, sequencers are rated to drive (or power) a range or specific length of EL wire. For example, using a sequencer rated for 1.5 to 14 meters (5 to 45 feet), if less than 1.5m is used, there is a risk of burning out the sequencer, and if more than 14m is used, the EL wire will not light as brightly as intended.
There are commercially available EL wire sequencers capable of lighting three, four, five, or ten lengths of EL wire. There are professional and experimental sequencers with many more than ten channels, but for most applications, ten channels is enough. Sequencers usually have options for changing the speed, reversing, changing the order of the sequence, and sometimes for changing whether the first wires remain lit or go off as the rest of the wires in the sequence are lit. EL wire sequencers tend to be smaller than a pack of cigarettes and most are powered by batteries. This versatility lends to the sequencers' use at nighttime events where mains electricity is not available.
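The sequencing options described above (speed, reversal, keeping earlier wires lit) amount to a simple loop over channels. A Python sketch with a stand-in channel driver follows; the set_channel callback is hypothetical, since real sequencers do this in hardware:

import time

def run_sequence(set_channel, n_channels=10, step_s=0.2,
                 reverse=False, keep_lit=False):
    """Step through EL wire channels in order; set_channel(i, on) stands in
    for whatever actually switches the high-voltage drive to each strand."""
    order = range(n_channels - 1, -1, -1) if reverse else range(n_channels)
    for i in order:
        set_channel(i, True)
        time.sleep(step_s)
        if not keep_lit:          # otherwise earlier wires stay illuminated
            set_channel(i, False)

# Dry run with a dummy driver that just prints the channel state:
run_sequence(lambda i, on: print(f"channel {i}: {'on' if on else 'off'}"),
             n_channels=3, step_s=0.0)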
Applications
By arranging each strand of EL wire into a shape slightly different from the previous one, it is possible to create animations using EL wire sequencers. EL wire sequencers are also used for costumes and have been used to create animations on various items such as kimono, purses, neckties, and motorcycle tanks. They are increasingly popular among artists, dancers, maker culture, and similar creative communities, such as exhibited in the annual Burning Man alt-culture festival.
References
5,753,381 US Patent, Electroluminescent Filament
Notes
External links
How Electroluminescent (EL) Wire Works, by Joanna Burgess // How Stuff Works
Display technology
Lighting
Luminescence
Wire | Electroluminescent wire | Chemistry,Engineering | 1,328 |
5,480,019 | https://en.wikipedia.org/wiki/Immunochemistry | Immunochemistry is the study of the chemistry of the immune system. This involves the study of the properties, functions, interactions and production of the chemical components of the immune system (antibodies/immunoglobulins, toxins, epitopes of proteins like CD4, antitoxins, cytokines/chemokines, antigens). It also includes immune responses and the determination of immune materials/products by immunochemical assays.
In addition, immunochemistry is the study of the identities and functions of the components of the immune system. The term is also used to describe the application of immune system components, in particular antibodies chemically labelled for visualization, to antigen molecules.
Various methods in immunochemistry have been developed and refined, and used in scientific study, from virology to molecular evolution. Immunochemical techniques include: enzyme-linked immunosorbent assay, immunoblotting (e.g., the Western blot assay), precipitation and agglutination reactions, immunoelectrophoresis, immunophenotyping, immunochromatographic assay and flow cytometry.
One of the earliest examples of immunochemistry is the Wassermann test to detect syphilis. Svante Arrhenius was also one of the pioneers in the field; he published Immunochemistry in 1907, which described the application of the methods of physical chemistry to the study of the theory of toxins and antitoxins.
Immunochemistry is also studied from the aspect of using antibodies to label epitopes of interest in cells (immunocytochemistry) or tissues (immunohistochemistry).
References
Branches of immunology | Immunochemistry | Biology | 380 |
41,716,790 | https://en.wikipedia.org/wiki/NdhF | The chloroplast NADH dehydrogenase F (ndhF) gene is found in all vascular plant divisions and is highly conserved. Its DNA fragment resides in the small single-copy region of the chloroplast genome, and is thought to encode a hydrophobic protein containing 664 amino acids and to have a mass of 72.9 kDa.
Application
The ndhF fragment has been a very useful tool in phylogenetic reconstruction at a number of taxonomic levels.
See also
Chloroplast
Chloroplast DNA
RuBisCO
NADPH dehydrogenase (quinone)
References
Photosynthesis
EC 4.1.1
EC 1.6.5
Oxidoreductases
NADPH-dependent enzymes | NdhF | Chemistry,Biology | 156 |
7,247,692 | https://en.wikipedia.org/wiki/Security%20and%20safety%20features%20new%20to%20Windows%20Vista | There are a number of security and safety features new to Windows Vista, most of which are not available in any prior Microsoft Windows operating system release.
Beginning in early 2002 with Microsoft's announcement of its Trustworthy Computing initiative, a great deal of work has gone into making Windows Vista a more secure operating system than its predecessors. Internally, Microsoft adopted a "Security Development Lifecycle" with the underlying ethos of "Secure by design, secure by default, secure in deployment". New code for Windows Vista was developed with the SDL methodology, and all existing code was reviewed and refactored to improve security.
Some specific areas where Windows Vista introduces new security and safety mechanisms include User Account Control, parental controls, Network Access Protection, a built-in anti-malware tool, and new digital content protection mechanisms.
User Account Control
User Account Control is a new infrastructure that requires user consent before allowing any action that requires administrative privileges. With this feature, all users, including users with administrative privileges, run in a standard user mode by default, since most applications do not require higher privileges. When some action is attempted that needs administrative privileges, such as installing new software or changing system or security settings, Windows will prompt the user whether to allow the action or not. If the user chooses to allow, the process initiating the action is elevated to a higher privilege context to continue. While standard users need to enter a username and password of an administrative account to get a process elevated (Over-the-shoulder Credentials), an administrator can choose to be prompted just for consent or ask for credentials. If the user doesn't click Yes, after 30 seconds the prompt is denied.
UAC asks for credentials in a Secure Desktop mode, where the entire screen is faded out and temporarily disabled, to present only the elevation UI. This is to prevent spoofing of the UI or the mouse by the application requesting elevation. If the application requesting elevation does not have focus before the switch to Secure Desktop occurs, then its taskbar icon blinks, and when focussed, the elevation UI is presented (however, it is not possible to prevent a malicious application from silently obtaining the focus).
Since the Secure Desktop allows only highest privilege System applications to run, no user mode application can present its dialog boxes on that desktop, so any prompt for elevation consent can be safely assumed to be genuine. Additionally, this can also help protect against shatter attacks, which intercept Windows inter-process messages to run malicious code or spoof the user interface, by preventing unauthorized processes from sending messages to high privilege processes. Any process that wants to send a message to a high privilege process must get itself elevated to the higher privilege context, via UAC.
Applications written with the assumption that the user will be running with administrator privileges experienced problems in earlier versions of Windows when run from limited user accounts, often because they attempted to write to machine-wide or system directories (such as Program Files) or registry keys (notably HKLM). UAC attempts to alleviate this using File and Registry Virtualization, which redirects writes (and subsequent reads) to a per-user location within the user's profile. For example, if an application attempts to write to "C:\program files\appname\settings.ini" and the user doesn't have permissions to write to that directory, the write will get redirected to "C:\Users\username\AppData\Local\VirtualStore\Program Files\appname\."
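A rough Python sketch of the redirection rule in the example above follows. The protected-path list and function name are inventions for illustration; the real logic lives in the kernel and covers registry keys as well:

from pathlib import PureWindowsPath

PROTECTED_ROOTS = [PureWindowsPath(r"C:\Program Files"),
                   PureWindowsPath(r"C:\Windows")]

def virtualize_write(path, user):
    """Return the per-user VirtualStore location a blocked write lands in."""
    p = PureWindowsPath(path)
    for root in PROTECTED_ROOTS:
        if str(p).lower().startswith(str(root).lower()):
            rel = p.relative_to(root.anchor)   # drop the drive, keep the rest
            base = PureWindowsPath(rf"C:\Users\{user}\AppData\Local\VirtualStore")
            return base / rel
    return p   # unprotected location: the write goes through unchanged

print(virtualize_write(r"C:\Program Files\appname\settings.ini", "username"))
# C:\Users\username\AppData\Local\VirtualStore\Program Files\appname\settings.ini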
Encryption
BitLocker, formerly known as "Secure Startup", offers full disk encryption for the system volume. Using the command-line utility, it is possible to encrypt additional volumes. BitLocker utilizes a USB key or a Trusted Platform Module (TPM) compliant with version 1.2 of the TCG specifications to store its encryption key. It ensures that the computer running Windows Vista starts in a known-good state, and it also protects data from unauthorized access. Data on the volume is encrypted with a Full Volume Encryption Key (FVEK), which is further encrypted with a Volume Master Key (VMK) and stored on the disk itself.
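The FVEK/VMK layering can be sketched with the third-party `cryptography` package, using AES-GCM as a stand-in key wrap; this illustrates the hierarchy only and does not reproduce BitLocker's actual on-disk key formats:

import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

vmk = AESGCM.generate_key(bit_length=256)    # Volume Master Key
fvek = AESGCM.generate_key(bit_length=256)   # Full Volume Encryption Key

# Only the wrapped (encrypted) FVEK is stored on the volume itself:
nonce = os.urandom(12)
wrapped_fvek = AESGCM(vmk).encrypt(nonce, fvek, None)

# At boot, the TPM or USB key releases the VMK, which unwraps the FVEK
# actually used to encrypt sector data:
assert AESGCM(vmk).decrypt(nonce, wrapped_fvek, None) == fvek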
Windows Vista is the first Microsoft Windows operating system to offer native support for the TPM 1.2 by providing a set of APIs, commands, classes, and services for the use and management of the TPM. A new system service, referred to as TPM Base Services, enables the access to and sharing of TPM resources for developers who wish to build applications with support for the device.
Encrypting File System (EFS) in Windows Vista can be used to encrypt the system page file and the per-user Offline Files cache. EFS is also more tightly integrated with enterprise Public Key Infrastructure (PKI), and supports using PKI-based key recovery, data recovery through EFS recovery certificates, or a combination of the two. There are also new Group Policies to require smart cards for EFS, enforce page file encryption, stipulate minimum key lengths for EFS, enforce encryption of the user's Documents folder, and prohibit self-signed certificates. The EFS encryption key cache can be cleared when a user locks his workstation or after a certain time limit.
The EFS rekeying wizard allows the user to choose a certificate for EFS and to select and migrate existing files that will use the newly chosen certificate. Certificate Manager also allows users to export their EFS recovery certificates and private keys. Users are reminded to back up their EFS keys upon first use through a balloon notification. The rekeying wizard can also be used to migrate users in existing installations from software certificates to smart cards. The wizard can also be used by an administrator or users themselves in recovery situations. This method is more efficient than decrypting and reencrypting files.
Windows Firewall
Windows Vista significantly improves the firewall to address a number of concerns around the flexibility of Windows Firewall in a corporate environment:
IPv6 connection filtering
Outbound packet filtering, reflecting increasing concerns about spyware and viruses that attempt to "phone home".
With the advanced packet filter, rules can also be specified for source and destination IP addresses and port ranges (see the sketch after this list).
Rules can be configured for services by its service name chosen by a list, without needing to specify the full path file name.
IPsec is fully integrated, allowing connections to be allowed or denied based on security certificates, Kerberos authentication, etc. Encryption can also be required for any kind of connection. A connection security rule can be created using a wizard that handles the complex configuration of IPsec policies on the machine. Windows Firewall can allow traffic based on whether the traffic is secured by IPsec.
A new management console snap-in named Windows Firewall with Advanced Security which provides access to many advanced options, including IPsec configuration, and enables remote administration.
Ability to have separate firewall profiles for when computers are domain-joined or connected to a private or public network. Support for the creation of rules for enforcing server and domain isolation policies.
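A toy Python model of first-match rule evaluation over addresses and port ranges, as referenced in the list above; the rule fields and defaults are invented for illustration:

from ipaddress import ip_address, ip_network

RULES = [  # first matching rule wins, as in most packet filters
    {"action": "allow", "src": ip_network("192.168.1.0/24"),
     "dst_ports": range(80, 81)},
    {"action": "block", "src": ip_network("0.0.0.0/0"),
     "dst_ports": range(0, 65536)},   # default deny for everything else
]

def evaluate(src_ip, dst_port):
    for rule in RULES:
        if ip_address(src_ip) in rule["src"] and dst_port in rule["dst_ports"]:
            return rule["action"]
    return "block"

print(evaluate("192.168.1.10", 80))   # allow
print(evaluate("10.0.0.5", 80))       # block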
Windows Defender
Windows Vista includes Windows Defender, Microsoft's anti-spyware utility. According to Microsoft, it was renamed from 'Microsoft AntiSpyware' because it not only features scanning of the system for spyware, similar to other free products on the market, but also includes Real Time Security agents that monitor several common areas of Windows for changes which may be caused by spyware. These areas include Internet Explorer configuration and downloads, auto-start applications, system configuration settings, and add-ons to Windows such as Windows Shell extensions.
Windows Defender also includes the ability to remove ActiveX applications that are installed and block startup programs. It also incorporates the SpyNet network, which allows users to communicate with Microsoft, send what they consider is spyware, and check which applications are acceptable.
Device Installation Control
Windows Vista allows administrators to enforce hardware restrictions via Group Policy to prevent users from installing devices, to restrict device installation to a predefined white list, or to restrict access to removable media and classes of devices.
Parental Controls
Windows Vista includes a range of parental controls for administrators to monitor and restrict computer activity of standard user accounts that are not part of a domain; User Account Control enforces administrative restrictions. Features include:
Windows Vista Web Filter, implemented as a Winsock LSP filter to function across all Web browsers, which prohibits access to websites based on categories of content or specific addresses (with an option to block all file downloads)
Time Limits, which prevents standard users from logging in during a date or time specified by an administrator (and which locks restricted accounts that are already logged in during such times)
Game Restrictions, which allows administrators to block games based on names, contents, or ratings defined by a video game content rating system such as the Entertainment Software Rating Board (ESRB), with content restrictions taking precedence over rating restrictions (e.g., Everyone 10+ (E10+) games may be permitted to run in general, but E10+ games with mild language will still be blocked if mild language itself is blocked)
Application Restrictions, which uses application whitelists for specific applications
Activity Reports, which monitors and records activities of restricted standard user accounts
Windows Parental Controls includes an extensible set of options, with application programming interfaces (APIs) for developers to replace bundled features with their own.
Exploit protection functionality
Windows Vista uses Address Space Layout Randomization (ASLR) to load system files at random addresses in memory. By default, all system files are loaded randomly at any of 256 possible locations. Other executables have to specifically set a bit in the header of the Portable Executable (PE) file, which is the file format for Windows executables, to use ASLR. For such executables, the locations of the stack and heap are also randomly decided. By loading system files at random addresses, it becomes harder for malicious code to know where privileged system functions are located, and thus harder to use them predictably. This helps prevent most remote execution attacks by thwarting return-to-LIBC buffer overflow attacks.
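The 256 possible locations mentioned above correspond to 8 bits of randomness. A toy Python model of the effect, with addresses and granularity invented for illustration:

import random

SLOTS = 256            # the 256 possible load locations mentioned above
GRANULARITY = 0x10000  # spacing between candidate bases (illustrative)

def random_image_base(preferred=0x78000000):
    """Pick one of SLOTS load addresses for a system image at boot.

    The security effect: an exploit hard-coding one address now lands on
    the right one only 1 time in 256."""
    return preferred + random.randrange(SLOTS) * GRANULARITY

print(hex(random_image_base()))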
The Portable Executable format has been updated to support embedding of exception handler address in the header. Whenever an exception is thrown, the address of the handler is verified with the one stored in the executable header. If they match, the exception is handled, otherwise it indicates that the run-time stack has been compromised, and hence the process is terminated.
Function pointers are obfuscated by XOR-ing them with a random number, so that the actual address pointed to is hard to retrieve. Manually changing a pointer is similarly difficult, as the obfuscation key used for the pointer would be very hard to retrieve; this makes it hard for any unauthorized user of the function pointer to actually use it. Heap block metadata is also XOR-ed with random numbers. In addition, checksums are maintained for heap blocks, which are used to detect unauthorized changes and heap corruption. Whenever heap corruption is detected, the application is killed to prevent successful completion of the exploit.
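A toy Python model of the XOR scheme follows; in real Windows code this corresponds to the idea behind the EncodePointer/DecodePointer APIs, but the cookie and addresses here are invented:

import secrets

POINTER_COOKIE = secrets.randbits(64)   # per-process secret, fixed at start-up

def encode_pointer(addr):
    """Store pointers XOR-ed with the cookie so their raw value is useless."""
    return addr ^ POINTER_COOKIE

def decode_pointer(stored):
    """XOR is self-inverse, so decoding applies the same cookie again."""
    return stored ^ POINTER_COOKIE

handler = 0x7FFE0308                    # pretend function address
stored = encode_pointer(handler)
assert decode_pointer(stored) == handler
# Overwriting `stored` without knowing the cookie decodes to a garbage
# address rather than an attacker-chosen target.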
Windows Vista binaries include intrinsic support for detecting stack overflows; when one is detected, the process is killed so that it cannot be used to carry on the exploit. Windows Vista binaries also place buffers higher in memory and non-buffers, such as pointers and supplied parameters, in the lower memory area, so an attacker needs a buffer underrun to gain access to those locations. However, buffer underruns are much less common than buffer overruns.
Application isolation
Windows Vista introduces Mandatory Integrity Control to set integrity levels for processes. A low-integrity process cannot access the resources of a higher-integrity process. This feature is used to enforce application isolation: applications at a medium integrity level, such as all applications running in the standard user context, cannot hook into system-level processes running at a high integrity level, such as administrator-mode applications, but can hook into lower-integrity processes such as Windows Internet Explorer 7 or 8. A lower-privilege process cannot perform a window handle validation of a higher-privilege process, cannot use SendMessage or PostMessage to higher-privilege application windows, cannot use thread hooks to attach to a higher-privilege process, cannot use journal hooks to monitor a higher-privilege process, and cannot perform DLL injection into a higher-privilege process.
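A minimal Python sketch of the integrity-level rule; the levels and the single check are simplified, and real Mandatory Integrity Control policies are richer:

from enum import IntEnum

class Integrity(IntEnum):
    LOW = 1       # e.g. Protected Mode Internet Explorer
    MEDIUM = 2    # standard user applications
    HIGH = 3      # elevated (administrator) applications
    SYSTEM = 4    # system services

def can_interfere(sender: Integrity, target: Integrity) -> bool:
    """A process may hook or message only processes at its level or below."""
    return sender >= target

assert can_interfere(Integrity.HIGH, Integrity.MEDIUM)
assert not can_interfere(Integrity.MEDIUM, Integrity.SYSTEM)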
Data Execution Prevention
Windows Vista offers full support for the NX (No-Execute) feature of modern processors. DEP was introduced in Windows XP Service Pack 2 and Windows Server 2003 Service Pack 1. This feature, present as NX (EVP) in AMD's AMD64 processors and as XD (EDB) in Intel's processors, can flag certain parts of memory as containing data instead of executable code, which prevents overflow errors from resulting in arbitrary code execution.
If the processor supports the NX-bit, Windows Vista automatically enforces hardware-based Data Execution Prevention on all processes to mark some memory pages as non-executable data segments (like the heap and stack), and subsequently any data is prevented from being interpreted and executed as code. This prevents exploit code from being injected as data and then executed.
If DEP is enabled for all applications, users gain additional resistance against zero-day exploits. But not all applications are DEP-compliant and some will generate DEP exceptions. Therefore, DEP is not enforced for all applications by default in 32-bit versions of Windows and is only turned on for critical system components. However, Windows Vista introduces additional NX policy controls that allow software developers to enable NX hardware protection for their code, independent of system-wide compatibility enforcement settings. Developers can mark their applications as NX-compliant when built, which allows protection to be enforced when that application is installed and runs. This enables a higher percentage of NX-protected code in the software ecosystem on 32-bit platforms, where the default system compatibility policy for NX is configured to protect only operating system components. For x86-64 applications, backward compatibility is not an issue and therefore DEP is enforced by default for all 64-bit programs. Also, only processor-enforced DEP is used in x86-64 versions of Windows Vista for greater security.
Digital rights management
New digital rights management and content-protection features have been introduced in Windows Vista to help digital content providers and corporations protect their data from being copied.
PUMA: Protected User Mode Audio (PUMA) is the new User Mode Audio (UMA) audio stack. Its aim is to provide an environment for audio playback that restricts the copying of copyrighted audio, and restricts the enabled audio outputs to those allowed by the publisher of the protected content.
Protected Video Path - Output Protection Management (PVP-OPM) is a technology that prevents copying of protected digital video streams, or their display on video devices that lack equivalent copy protection (typically HDCP). Microsoft claims that without these restrictions the content industry may prevent PCs from playing copyrighted content by refusing to issue license keys for the encryption used by HD DVD, Blu-ray Disc, or other copy-protected systems.
Protected Video Path - User-Accessible Bus (PVP-UAB) is similar to PVP-OPM, except that it applies encryption of protected content over the PCI Express bus.
Rights Management Services (RMS) support, a technology that will allow corporations to apply DRM-like restrictions to corporate documents, email, and intranets to protect them from being copied, printed, or even opened by people not authorized to do so.
Windows Vista introduces a Protected Process, which differs from usual processes in the sense that other processes cannot manipulate the state of such a process, nor can threads from other processes be introduced in it. A Protected Process has enhanced access to DRM-functions of Windows Vista. However, currently, only the applications using Protected Video Path can create Protected Processes.
The inclusion of new digital rights management features has been a source of criticism of Windows Vista.
Windows Service Hardening
Windows Service Hardening compartmentalizes the services such that if one service is compromised, it cannot easily attack other services on the system. It prevents Windows services from doing operations on file systems, registry or networks which they are not supposed to, thereby reducing the overall attack surface on the system and preventing entry of malware by exploiting system services. Services are now assigned a per-service Security identifier (SID), which allows controlling access to the service as per the access specified by the security identifier. A per-service SID may be assigned during the service installation via the ChangeServiceConfig2 API or by using the SC.EXE command with the sidtype verb. Services can also use access control lists (ACL) to prevent external access to resources private to itself.
Services in Windows Vista also run in a less privileged account such as Local Service or Network Service, instead of the System account. Previous versions of Windows ran system services in the same login session as the locally logged-in user (Session 0). In Windows Vista, Session 0 is now reserved for these services, and all interactive logins are done in other sessions. This is intended to help mitigate a class of exploits of the Windows message-passing system, known as Shatter attacks. The process hosting a service has only the privileges specified in the RequiredPrivileges registry value under HKLM\System\CurrentControlSet\Services.
Services also need explicit write permissions to write to resources, on a per-service basis. By using a write-restricted access token, only those resources which have to be modified by a service are given write access, so trying to modify any other resource fails. Services will also have pre-configured firewall policy, which gives it only as much privilege as is needed for it to function properly. Independent software vendors can also use Windows Service Hardening to harden their own services. Windows Vista also hardens the named pipes used by RPC servers to prevent other processes from being able to hijack them.
Authentication and logon
Graphical identification and authentication (GINA), used for secure authentication and interactive logon has been replaced by Credential Providers. Combined with supporting hardware, Credential Providers can extend the operating system to enable users to log on through biometric devices (fingerprint, retinal, or voice recognition), passwords, PINs and smart card certificates, or any custom authentication package and schema third-party developers wish to create. Smart card authentication is flexible as certificate requirements are relaxed. Enterprises may develop, deploy, and optionally enforce custom authentication mechanisms for all domain users. Credential Providers may be designed to support Single sign-on (SSO), authenticating users to a secure network access point (leveraging RADIUS and other technologies) as well as machine logon. Credential Providers are also designed to support application-specific credential gathering, and may be used for authentication to network resources, joining machines to a domain, or to provide administrator consent for User Account Control. Authentication is also supported using IPv6 or Web services. A new Security Service Provider, CredSSP is available through Security Support Provider Interface that enables an application to delegate the user's credentials from the client (by using the client-side SSP) to the target server (through the server-side SSP). The CredSSP is also used by Terminal Services to provide single sign-on.
Windows Vista can authenticate user accounts using Smart Cards or a combination of passwords and Smart Cards (Two-factor authentication). Windows Vista can also use smart cards to store EFS keys. This makes sure that encrypted files are accessible only as long as the smart card is physically available. If smart cards are used for logon, EFS operates in a single sign-on mode, where it uses the logon smart card for file encryption without further prompting for the PIN.
Fast User Switching which was limited to workgroup computers on Windows XP, can now also be enabled for computers joined to a domain, starting with Windows Vista. Windows Vista also includes authentication support for the Read-Only Domain Controllers introduced in Windows Server 2008.
Cryptography
Windows Vista features an update to the crypto API known as Cryptography API: Next Generation (CNG). The CNG API is a user mode and kernel mode API that includes support for elliptic curve cryptography (ECC) and a number of newer algorithms that are part of the National Security Agency (NSA) Suite B. It is extensible, featuring support for plugging in custom cryptographic APIs into the CNG runtime. It also integrates with the smart card subsystem by including a Base CSP module which implements all the standard backend cryptographic functions that developers and smart card manufacturers need, so that they do not have to write complex CSPs. The Microsoft certificate authority can issue ECC certificates and the certificate client can enroll and validate ECC and SHA-2 based certificates.
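The Suite B elliptic-curve support can be illustrated with the third-party `cryptography` package, for example a 384-bit ECDH key agreement; this shows the algorithm in general, not the CNG API itself:

from cryptography.hazmat.primitives.asymmetric import ec

# Each side generates a P-384 key pair (384-bit ECDH, per NSA Suite B).
alice = ec.generate_private_key(ec.SECP384R1())
bob = ec.generate_private_key(ec.SECP384R1())

# Exchanging public keys yields the same shared secret on both sides.
shared_a = alice.exchange(ec.ECDH(), bob.public_key())
shared_b = bob.exchange(ec.ECDH(), alice.public_key())
assert shared_a == shared_b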
Revocation improvements include native support for the Online Certificate Status Protocol (OCSP) providing real-time certificate validity checking, CRL prefetching and CAPI2 Diagnostics. Certificate enrollment is wizard-based, allows users to input data during enrollment and provides clear information on failed enrollments and expired certificates. CertEnroll, a new COM-based enrollment API replaces the XEnroll library for flexible programmability. Credential roaming capabilities replicate Active Directory key pairs, certificates and credentials stored in Stored user names and passwords within the network.
Network Access Protection
Windows Vista introduces Network Access Protection (NAP), which ensures that computers connecting to or communicating with a network conform to a required level of system health as set by the administrator of a network. Depending on the policy set by the administrator, the computers which do not meet the requirements will either be warned and granted access, allowed access to limited network resources, or denied access completely. NAP can also optionally provide software updates to a non-compliant computer to upgrade itself to the level as required to access the network, using a Remediation Server. A conforming client is given a Health Certificate, which it then uses to access protected resources on the network.
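The three possible outcomes described above can be sketched as a small policy function; the health requirements and field names are invented for illustration:

from enum import Enum

class Access(Enum):
    FULL = "full access"
    RESTRICTED = "restricted to remediation servers"
    DENIED = "denied"

REQUIRED = {"firewall_on": True, "antivirus_current": True,
            "patches_current": True}

def nap_decision(client_health, policy="restrict"):
    """Toy health check: compliant clients get full access; non-compliant
    clients are warned, restricted, or denied depending on the policy."""
    if all(client_health.get(k) == v for k, v in REQUIRED.items()):
        return Access.FULL
    return {"warn": Access.FULL,       # warned but still granted access
            "restrict": Access.RESTRICTED,
            "deny": Access.DENIED}[policy]

print(nap_decision({"firewall_on": True, "antivirus_current": False,
                    "patches_current": True}))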
A Network Policy Server, running Windows Server 2008 acts as health policy server and clients need to use Windows XP SP3 or later. A VPN server, RADIUS server or DHCP server can also act as the health policy server.
Other networking-related security features
The interfaces for TCP/IP security (filtering for local host traffic), the firewall hook, the filter hook, and the storage of packet filter information have been replaced with a new framework known as the Windows Filtering Platform (WFP). WFP provides filtering capability at all layers of the TCP/IP protocol stack. WFP is integrated into the stack, and makes it easier for developers to build drivers, services, and applications that must filter, analyze, or modify TCP/IP traffic.
In order to provide better security when transferring data over a network, Windows Vista provides enhancements to the cryptographic algorithms used to obfuscate data. Support for 256-bit and 384-bit Elliptic curve Diffie–Hellman (DH) algorithms, as well as for 128-bit, 192-bit and 256-bit Advanced Encryption Standard (AES) is included in the network stack itself and in the Kerberos protocol and GSS messages. Direct support for SSL and TLS connections in new Winsock API allows socket applications to directly control security of their traffic over a network (such as providing security policy and requirements for traffic, querying security settings) rather than having to add extra code to support a secure connection. Computers running Windows Vista can be a part of logically isolated networks within an Active Directory domain. Only the computers which are in the same logical network partition will be able to access the resources in the domain. Even though other systems may be physically on the same network, unless they are in the same logical partition, they won't be able to access partitioned resources. A system may be part of multiple network partitions. The Schannel SSP includes new cipher suites that support Elliptic curve cryptography, so ECC cipher suites can be negotiated as part of the standard TLS handshake. The Schannel interface is pluggable so advanced combinations of cipher suites can substitute a higher level of functionality.
IPsec is now fully integrated with Windows Firewall and offers simplified configuration and improved authentication. IPsec supports IPv6, including support for Internet key exchange (IKE), AuthIP and data encryption, client-to-DC protection, integration with Network Access Protection and Network Diagnostics Framework support. To increase security and deployability of IPsec VPNs, Windows Vista includes AuthIP which extends the IKE cryptographic protocol to add features like authentication with multiple credentials, alternate method negotiation and asymmetric authentication.
Security for wireless networks has been improved, with better support for newer wireless standards such as 802.11i (WPA2). EAP Transport Layer Security (EAP-TLS) is the default authentication mode. Connections are made at the most secure connection level supported by the wireless access point. WPA2 can be used even in ad hoc mode. Windows Vista also enhances security when joining a domain over a wireless network: it can use Single Sign On to present the same credentials both to join the wireless network and to log on to the domain housed within it. In this case, the same RADIUS server is used for both PEAP authentication for joining the network and MS-CHAP v2 authentication to log into the domain. A bootstrap wireless profile can also be created on the wireless client, which first authenticates the computer to the wireless network and joins the network; at this stage, the machine still does not have any access to the domain resources. The machine then runs a script, stored either on the system or on a USB thumb drive, which authenticates it to the domain. Authentication can be done either with a username and password combination or with security certificates from a Public Key Infrastructure (PKI) vendor such as VeriSign.
Windows Vista also includes an Extensible Authentication Protocol Host (EAPHost) framework that provides extensibility for authentication methods for commonly used protected network access technologies such as 802.1X and PPP. It allows networking vendors to develop and easily install new authentication methods known as EAP methods.
Windows Vista supports the use of PEAP with PPTP. The authentication mechanisms supported are PEAPv0/EAP-MSCHAPv2 (passwords) and PEAP-TLS (smartcards and certificates).
Windows Vista Service Pack 1 includes Secure Socket Tunneling Protocol, a new Microsoft proprietary VPN protocol which provides a mechanism to transport Point-to-Point Protocol (PPP) traffic (including IPv6 traffic) through an SSL channel.
x86-64-specific features
64-bit versions of Windows Vista enforce hardware-based Data Execution Prevention (DEP), with no fallback software emulation. This ensures that the less effective software-enforced DEP (which is only safe exception handling and unrelated to the NX bit) is not used. Also, DEP, by default, is enforced for all 64-bit applications and services on x86-64 versions and those 32-bit applications that opt in. In contrast, in 32-bit versions, software-enforced DEP is an available option and by default is enabled only for essential system components.
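For 32-bit processes, the opt-in can also be made programmatically. A minimal sketch using SetProcessDEPPolicy, which was added in Windows Vista SP1 (on 64-bit processes DEP is always on, so the call is unnecessary there):

#include <windows.h>

int main() {
    // System-wide policy: AlwaysOff, AlwaysOn, OptIn, or OptOut.
    DEP_SYSTEM_POLICY_TYPE systemPolicy = GetSystemDEPPolicy();
    // A 32-bit process may permanently opt in and disable ATL thunk emulation.
    BOOL optedIn = SetProcessDEPPolicy(PROCESS_DEP_ENABLE |
                                       PROCESS_DEP_DISABLE_ATL_THUNK_EMULATION);
    return (optedIn || systemPolicy == AlwaysOn) ? 0 : 1;
}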
An upgraded Kernel Patch Protection, also referred to as PatchGuard, prevents third-party software, including kernel-mode drivers, from modifying the kernel, or any data structure used by the kernel, in any way; if any modification is detected, the system is shut down. This mitigates a common tactic used by rootkits to hide themselves from user-mode applications. PatchGuard was first introduced in the x64 edition of Windows Server 2003 Service Pack 1 and was also included in Windows XP Professional x64 Edition.
Kernel-mode drivers on 64-bit versions of Windows Vista must be digitally signed; even administrators will not be able to install unsigned kernel-mode drivers. A boot-time option is available to disable this check for a single session of Windows. 64-bit user-mode drivers are not required to be digitally signed.
Code Integrity verifies the check-sums of signed code: before a system binary is loaded, it is verified against its check-sum to ensure it has not been modified, with binaries verified by looking up their signatures in the system catalogs. The Windows Vista boot loader checks the integrity of the kernel, the Hardware Abstraction Layer (HAL), and the boot-start drivers. Aside from the kernel memory space, Code Integrity also verifies binaries loaded into a protected process and system-installed dynamic-link libraries that implement core cryptographic functions.
Other features and changes
A number of specific security and reliability changes have been made:
Stronger encryption is used for storing LSA secrets (cached domain records, passwords, EFS encryption keys, local security policy, auditing etc.)
Support for the IEEE 1667 authentication standard for USB flash drives with a hotfix for Windows Vista Service Pack 2.
The Kerberos SSP has been updated to support AES encryption. The SChannel SSP also has stronger AES encryption and ECC support.
Software Restriction Policies, introduced in Windows XP, have been improved in Windows Vista. The Basic user security level is exposed by default instead of being hidden. The default hash rule algorithm has been upgraded from MD5 to the stronger SHA256 (a brief hashing sketch follows this list). Certificate rules can now be enabled through the Enforcement Property dialog box from within the Software Restriction Policies snap-in extension.
To prevent accidental deletion of Windows, Vista does not allow formatting the boot partition while it is active: right-clicking the C: drive and choosing "Format", or typing "format C:" (without quotes) at the Command Prompt, yields a message saying that formatting this volume is not allowed. To format the main hard drive (the drive containing Windows), the user must boot the computer from a Windows installation disc or choose the menu item "Repair Your Computer" from the Advanced System Recovery Options by pressing F8 upon turning on the computer.
Additional EFS settings allow configuring when encryption policies are updated, whether files moved to encrypted folders are encrypted, encryption of the Offline Files cache, and whether encrypted items can be indexed by Windows Search.
The Stored User Names and Passwords (Credentials Manager) feature includes a new wizard to back up user names and passwords to a file and restore them on systems running Windows Vista or later operating systems.
A new policy setting in Group Policy enables the display of the date and time of the last successful interactive logon, and the number of failed logon attempts since the last successful logon with the same user name. This will enable a user to determine if the account was used without his or her knowledge. The policy can be enabled for local users as well as computers joined to a functional-level domain.
Windows Resource Protection prevents potentially damaging system configuration changes by blocking changes to system files and settings by any process other than Windows Installer. Changes to the registry by unauthorized software are also blocked.
Protected-Mode Internet Explorer: Internet Explorer 7 and later introduce several security changes such as phishing filter, ActiveX opt-in, URL handling protection, protection against cross-domain scripting attacks and status-bar spoofing. They run as a low integrity process on Windows Vista, can write only to the Temporary Internet Files folder, and cannot gain write access to files and registry keys in a user's profile, protecting the user from malicious content and security vulnerabilities, even in ActiveX controls. Also, Internet Explorer 7 and later use the more secure Data Protection API (DPAPI) to store their credentials such as passwords instead of the less secure Protected Storage (PStore).
Network Location Awareness integration with the Windows Firewall: all newly connected networks default to the "Public Location" profile, which locks down listening ports and services. If a network is marked as trusted, Windows remembers that setting for future connections to that network.
User-Mode Driver Framework prevents drivers from directly accessing the kernel; instead, they access it through a dedicated API. This feature is important because a majority of system crashes can be traced to improperly installed third-party device drivers.
Windows Security Center has been upgraded to detect and report the presence of anti-malware software as well as monitor and restore several Internet Explorer security settings and User Account Control. For anti-virus software that integrates with the Security Center, it presents the solution to fix any problems in its own user interface. Also, some Windows API calls have been added to let applications retrieve the aggregate health status from the Windows Security Center, and to receive notifications when the health status changes (a sketch of such a query follows this list).
Protected Storage (PStore) has been deprecated and therefore made read-only in Windows Vista. Microsoft recommends using DPAPI to add new PStore data items or manage existing ones (a minimal DPAPI example follows this list). Internet Explorer 7 and later also use DPAPI instead of PStore to store their credentials.
The built-in administrator account is disabled by default on a clean installation of Windows Vista. It also cannot be accessed from safe mode as long as there is at least one additional local administrator account.
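The hashing sketch referenced in the Software Restriction Policies item above: computing a SHA256 digest with the CNG API (bcrypt.h) that shipped with Vista. This only illustrates the algorithm family; how the SRP snap-in itself hashes files is not specified here.

#include <windows.h>
#include <bcrypt.h>
#pragma comment(lib, "bcrypt.lib")

// Computes the SHA-256 digest of a buffer; returns 0 on success.
int sha256(const BYTE* data, ULONG len, BYTE digest[32]) {
    BCRYPT_ALG_HANDLE alg = NULL;
    BCRYPT_HASH_HANDLE hash = NULL;
    DWORD objLen = 0, cb = 0;
    PUCHAR obj = NULL;
    int rc = -1;
    if (BCryptOpenAlgorithmProvider(&alg, BCRYPT_SHA256_ALGORITHM, NULL, 0) < 0)
        return -1;
    // On Vista the caller allocates the provider's hash-object buffer.
    if (BCryptGetProperty(alg, BCRYPT_OBJECT_LENGTH, (PUCHAR)&objLen,
                          sizeof(objLen), &cb, 0) >= 0 &&
        (obj = (PUCHAR)HeapAlloc(GetProcessHeap(), 0, objLen)) != NULL &&
        BCryptCreateHash(alg, &hash, obj, objLen, NULL, 0, 0) >= 0 &&
        BCryptHashData(hash, (PUCHAR)data, len, 0) >= 0 &&
        BCryptFinishHash(hash, digest, 32, 0) >= 0)
        rc = 0;
    if (hash) BCryptDestroyHash(hash);
    if (obj) HeapFree(GetProcessHeap(), 0, obj);
    BCryptCloseAlgorithmProvider(alg, 0);
    return rc;
}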
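The Security Center query referenced above: a hedged sketch using the wscapi.h function WscGetSecurityProviderHealth. The function and enumeration names follow the public SDK documentation; whether this exact call was available on Vista RTM, as opposed to only the underlying COM/WMI interfaces, is an assumption to verify.

#include <windows.h>
#include <wscapi.h>
#pragma comment(lib, "wscapi.lib")

int main() {
    WSC_SECURITY_PROVIDER_HEALTH health;
    // Ask for the aggregate health of the firewall and anti-virus providers.
    HRESULT hr = WscGetSecurityProviderHealth(
        WSC_SECURITY_PROVIDER_FIREWALL | WSC_SECURITY_PROVIDER_ANTIVIRUS,
        &health);
    if (FAILED(hr)) return 1;
    return (health == WSC_SECURITY_PROVIDER_HEALTH_GOOD) ? 0 : 2;
}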
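The DPAPI example referenced above: a minimal round trip that protects a secret under the current user's logon credentials. The sample secret and description string are placeholders.

#include <windows.h>
#include <dpapi.h>
#pragma comment(lib, "crypt32.lib")

int main() {
    BYTE secret[] = "example password";
    DATA_BLOB in = { sizeof(secret), secret };
    DATA_BLOB out = {0}, back = {0};
    // Encrypt under the user's DPAPI master key (no prompt, no extra entropy).
    if (!CryptProtectData(&in, L"sample credential", NULL, NULL, NULL, 0, &out))
        return 1;
    BOOL ok = CryptUnprotectData(&out, NULL, NULL, NULL, NULL, 0, &back);
    LocalFree(out.pbData); // DPAPI allocates output blobs with LocalAlloc
    if (ok) LocalFree(back.pbData);
    return ok ? 0 : 1;
}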
See also
Computer security
References
External links
Vista vulnerabilities from SecurityFocus
Windows Vista
Microsoft Windows security technology
Windows Vista
Microsoft lists | Security and safety features new to Windows Vista | Technology | 6,892 |
5,658 | https://en.wikipedia.org/wiki/Human%20cannibalism | Human cannibalism is the act or practice of humans eating the flesh or internal organs of other human beings. A person who practices cannibalism is called a cannibal. The meaning of "cannibalism" has been extended into zoology to describe animals consuming parts of individuals of the same species as food.
Anatomically modern humans, Neanderthals, and Homo antecessor are known to have practised cannibalism to some extent in the Pleistocene. Cannibalism was occasionally practised in Egypt during ancient and Roman times, as well as later during severe famines. The Island Caribs of the Lesser Antilles, whose name is the origin of the word cannibal, acquired a long-standing reputation as eaters of human flesh, reconfirmed when their legends were recorded in the 17th century. Some controversy exists over the accuracy of these legends and the prevalence of actual cannibalism in the culture. Depicting indigenous peoples as cannibals was a common fantasy and rationale for European colonialism and 'civilising missions'.
Cannibalism has been well documented in much of the world, including Fiji (once nicknamed the "Cannibal Isles"), the Amazon Basin, the Congo, and the Māori people of New Zealand. Cannibalism was also practised in New Guinea and in parts of the Solomon Islands, and human flesh was sold at markets in some parts of Melanesia and of the Congo Basin. A form of cannibalism popular in early modern Europe was the consumption of body parts or blood for medical purposes. Reaching its height during the 17th century, this practice continued in some cases into the second half of the 19th century.
Cannibalism has occasionally been practised as a last resort by people suffering from famine. Well-known examples include the ill-fated Donner Party (1846–1847), the Holodomor (1932–1933), and the crash of Uruguayan Air Force Flight 571 (1972), after which the survivors ate the bodies of the dead. Additionally, there are cases of people engaging in cannibalism for sexual pleasure, such as Albert Fish, Issei Sagawa, Jeffrey Dahmer, and Armin Meiwes. Cannibalism has been both practised and fiercely condemned in several recent wars, especially in Liberia and the Democratic Republic of the Congo. It was still practised in Papua New Guinea as of 2012, for cultural reasons.
Cannibalism has been said to test the bounds of cultural relativism because it challenges anthropologists "to define what is or is not beyond the pale of acceptable human behavior". A few scholars argue that no firm evidence exists that cannibalism has ever been a socially acceptable practice anywhere in the world, but such views have been largely rejected as irreconcilable with the actual evidence.
Etymology
The word "cannibal" is derived from Spanish caníbal or caríbal, originally used as a name variant for the Kalinago (Island Caribs), a people from the West Indies said to have eaten human flesh. The older term anthropophagy, meaning "eating humans", is also used for human cannibalism.
Reasons and types
Cannibalism has been practised under a variety of circumstances and for various motives. To adequately express this diversity, Shirley Lindenbaum suggests that "it might be better to talk about 'cannibalisms' in the plural".
Institutionalized, survival, and pathological cannibalism
One major distinction is whether cannibal acts are accepted by the culture in which they occur ("institutionalized cannibalism"), or whether they are merely practised under starvation conditions to ensure one's immediate survival ("survival cannibalism"), or by isolated individuals considered criminal and often pathological by society at large ("cannibalism as psychopathology" or as "aberrant behavior").
Institutionalized cannibalism, sometimes also called "learned cannibalism", is the consumption of human body parts as "an institutionalized practice" generally accepted in the culture where it occurs.
By contrast, survival cannibalism means "the consumption of others under conditions of starvation such as shipwreck, military siege, and famine, in which persons normally averse to the idea are driven [to it] by the will to live". Also known as famine cannibalism, such forms of cannibalism resorted to only in situations of extreme necessity have occurred in many cultures where cannibalism is otherwise clearly rejected. The survivors of the shipwrecks of the Essex and Méduse in the 19th century are said to have engaged in cannibalism, as did the members of Franklin's lost expedition and the Donner Party.
Such cases often involve only necro-cannibalism (eating the corpse of someone already dead) as opposed to homicidal cannibalism (killing someone for food). In modern English law, the latter is always considered a crime, even in the most trying circumstances. The case of R v Dudley and Stephens, in which two men were found guilty of murder for killing and eating a cabin boy while adrift at sea in a lifeboat, set the precedent that necessity is no defence to a charge of murder. This decision outlawed and effectively ended the practice of shipwrecked sailors drawing lots in order to determine who would be killed and eaten to prevent the others from starving, a time-honoured practice formerly known as a "custom of the sea".
In other cases, cannibalism is an expression of a psychopathology or mental disorder, condemned by the society in which it occurs and "considered to be an indicator of [a] severe personality disorder or psychosis". Well-known cases include Albert Fish, Issei Sagawa, and Armin Meiwes. Fantasies of cannibalism, whether acted out or not, are not specifically mentioned in manuals of mental disorders such as the DSM, presumably because at least serious cases (that lead to murder) are very rare.
Exo-, endo-, and autocannibalism
Within institutionalized cannibalism, exocannibalism is often distinguished from endocannibalism. Endocannibalism refers to the consumption of a person from the same community. Often it is a part of a funerary ceremony, similar to burial or cremation in other cultures. The consumption of the recently deceased in such rites can be considered "an act of affection" and a major part of the grieving process. It has also been explained as a way of guiding the souls of the dead into the bodies of living descendants.
In contrast, exocannibalism is the consumption of a person from outside the community. It is frequently "an act of aggression, often in the context of warfare", where the flesh of killed or captured enemies may be eaten to celebrate one's victory over them.
Some scholars explain both types of cannibalism as due to a belief that eating a person's flesh or internal organs will endow the cannibal with some of the positive characteristics of the deceased. However, several authors investigating exocannibalism in New Zealand, New Guinea, and the Congo Basin observe that such beliefs were absent in these regions.
A further type, different from both exo- and endocannibalism, is autocannibalism (also called autophagy or self-cannibalism), "the act of eating parts of oneself". It does not ever seem to have been an institutionalized practice, but occasionally occurs as pathological behaviour, or due to other reasons such as curiosity. Also on record are instances of forced autocannibalism committed as acts of aggression, where individuals are forced to eat parts of their own bodies as a form of torture.
Exocannibalism is thus often associated with the consumption of enemies as an act of aggression, a practice also known as war cannibalism, while endocannibalism is often associated with the consumption of deceased relatives in funerary rites, a practice known as funerary or mortuary cannibalism.
Additional motives
Medicinal cannibalism (also called medical cannibalism) means "the ingestion of human tissue ... as a supposed medicine or tonic". In contrast to other forms of cannibalism, which Europeans generally frowned upon, the "medicinal ingestion" of various "human body parts was widely practiced throughout Europe from the sixteenth to the eighteenth centuries", with early records of the practice going back to the first century CE. It was also frequently practised in China.
Sacrificial cannibalism refers to the consumption of the flesh of victims of human sacrifice, for example among the Aztecs. Human and animal remains excavated in Knossos, Crete, have been interpreted as evidence of a ritual in which children and sheep were sacrificed and eaten together during the Bronze Age. According to Ancient Roman reports, the Celts in Britain practised sacrificial cannibalism, and archaeological evidence backing these claims has by now been found.
Infanticidal cannibalism or cannibalistic infanticide refers to cases where newborns or infants are killed because they are "considered unwanted or unfit to live" and then "consumed by the mother, father, both parents or close relatives".
Infanticide followed by cannibalism was practised in various regions, but is particularly well documented among Aboriginal Australians. Among animals, such behaviour is called filial cannibalism, and it is common in many species, especially among fish.
Human predation is the hunting of people from unrelated and possibly hostile groups in order to eat them. In parts of the Southern New Guinea lowland rain forests, hunting people "was an opportunistic extension of seasonal foraging or pillaging strategies", with human bodies just as welcome as those of animals as sources of protein, according to the anthropologist Bruce M. Knauft. As populations living near coasts and rivers were usually better nourished and hence often physically larger and stronger than those living inland, they "raided inland 'bush' peoples with impunity and often with little fear of retaliation". Cases of human predation are also on record for the neighbouring Bismarck Archipelago and for Australia. In the Congo Basin, there lived groups such as the Bankutu who hunted humans for food even when game was plentiful.
The term innocent cannibalism has been used for cases of people eating human flesh without knowing what they are eating. It is a subject of myths, such as the myth of Thyestes who unknowingly ate the flesh of his own sons. There are also actual cases on record, for example from the Congo Basin, where cannibalism had been quite widespread and where even in the 1950s travellers were sometimes served a meat dish, learning only afterwards that the meat had been of human origin.
Gastronomic and functionalist explanations
The term gastronomic cannibalism has been suggested for cases where human flesh is eaten to "provide a supplement to the regular diet", thus essentially for its nutritional value, or, in an alternative definition, for cases where it is "eaten without ceremony (other than culinary), in the same manner as the flesh of any other animal". While the term has been criticized as being too vague to clearly identify a specific type of cannibalism, various records indicate that nutritional or culinary concerns could indeed play a role in such acts even outside of periods of starvation. Referring to the Congo Basin, where many of the eaten were butchered slaves rather than enemies killed in war, the anthropologist Emil Torday notes that "the most common [reason for cannibalism] was simply gastronomic: the natives loved 'the flesh that speaks' [as human flesh was commonly called] and paid for it". The historian Key Ray Chong observes that, throughout Chinese history, "learned cannibalism was often practiced ... for culinary appreciation".
In his popular book Guns, Germs and Steel, Jared Diamond suggests that "protein starvation is probably also the ultimate reason why cannibalism was widespread in traditional New Guinea highland societies", and both in New Zealand and Fiji, cannibals explained their acts as due to a lack of animal meat. In Liberia, a former cannibal argued that it would have been wasteful to let the flesh of killed enemies spoil, and eaters of human flesh in New Guinea and the neighbouring Bismarck Archipelago expressed the same sentiment.
In many cases, human flesh was also described as particularly delicious, especially when it came from women, children, or both. Such statements are on record for various regions and peoples, including the Aztecs, today's Liberia and Nigeria, the Fang people in west-central Africa, the Congo Basin, China up to the 14th century, Sumatra, Borneo, Australia, New Guinea, New Zealand, Vanuatu, and Fiji.
Some Europeans and Americans who ate human flesh accidentally, out of curiosity, or to comply with local customs likewise tended to describe it as very good.
There is a debate among anthropologists on how important functionalist reasons are for the understanding of institutionalized cannibalism. Diamond is not alone in suggesting "that the consumption of human flesh was of nutritional benefit for some populations in New Guinea" and the same case has been made for other "tropical peoples ... exploiting a diverse range of animal foods", including human flesh. The materialist anthropologist Marvin Harris argued that a "shortage of animal protein" was also the underlying reason for Aztec cannibalism. The cultural anthropologist Marshall Sahlins, on the other hand, rejected such explanations as overly simplistic, stressing that cannibal customs must be regarded as "complex phenomen[a]" with "myriad attributes" which can only be understood if one considers "symbolism, ritual, and cosmology" in addition to their "practical function".
In pre-modern medicine, an explanation given by the now-discredited theory of humorism for cannibalism was that it was caused by a black acrimonious humor, which, being lodged in the linings of the ventricles of the heart, produced a voracity for human flesh. On the other hand, the French philosopher Michel de Montaigne understood war cannibalism as a way of expressing vengeance and hatred towards one's enemies and celebrating one's victory over them, thus giving an interpretation that is close to modern explanations. He also pointed out that some acts of Europeans in his own time could be considered as equally barbarous, making his essay "Of Cannibals" () a precursor to later ideas of cultural relativism.
Body parts and culinary practices
Nutritional value of the human body
Archaeologist James Cole investigated the nutritional value of the human body and found it to be similar to that of animals of similar size.
He notes that, according to ethnographic and archaeological records, nearly all edible parts of humans were sometimes eaten – not only skeletal muscle tissue ("flesh" or "meat" in a narrow sense), but also "lungs, liver, brain, heart, nervous tissue, bone marrow, genitalia and skin", as well as kidneys. For a typical adult man, the combined nutritional value of all these edible parts is about 126,000 kilocalories (kcal). The nutritional value of women and younger individuals is lower because of their lower body weight – for example, around 86% of a male adult for an adult woman and 30% for a boy aged around 5 or 6.
As the daily energy need of an adult man is about 2,400 kilocalories, a dead male body could thus have fed a group of 25 men for a bit more than two days, provided they ate nothing but human flesh, and longer if it formed part of a mixed diet. The nutritional value of the human body is thus not insubstantial, though Cole notes that for prehistoric hunters, large megafauna such as mammoths, rhinoceroses, and bison would have been an even better deal as long as they were available and could be caught, because of their much higher body weight.
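As a worked check of these figures:

\frac{126\,000\ \mathrm{kcal}}{25\ \text{men} \times 2\,400\ \mathrm{kcal/day}} = \frac{126\,000}{60\,000}\ \mathrm{days} \approx 2.1\ \mathrm{days}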
Hearts and livers
Cases of people eating human livers and hearts, especially those of enemies, have been reported from across the world. After the Battle of Uhud (625), Hind bint Utba ate (or at least attempted to eat) the liver of Hamza ibn Abd al-Muttalib, an uncle of Muhammad. At that time, the liver was considered "the seat of life".
French Catholics ate livers and hearts of Huguenots at the St. Bartholomew's Day massacre in 1572, in some cases also offering them for sale.
In China, medical cannibalism was practised over centuries. People voluntarily cut their own body parts, including parts of their livers, and boiled them to cure ailing relatives. Children were sometimes killed because eating their boiled hearts was considered a good way of extending one's life. Emperor Wuzong of Tang supposedly ordered provincial officials to send him "the hearts and livers of fifteen-year-old boys and girls" when he had become seriously ill, hoping in vain this medicine would cure him. Later private individuals sometimes followed his example, paying soldiers who kidnapped preteen children for their kitchen.
When "human flesh and organs were sold openly at the marketplace" during the Taiping Rebellion in 1850–1864, human hearts became a popular dish, according to some who afterwards freely admitted having consumed them.
According to a missionary's report from the brutal suppression of the Dungan Revolt of 1895–1896 in northwestern China, "thousands of men, women and children were ruthlessly massacred by the imperial soldiers" and "many a meal of human hearts and livers was partaken of by soldiers", supposedly out of a belief that this would give them "the courage their enemies had displayed".
In World War II, Japanese soldiers ate the livers of killed Americans in the Chichijima incident.
Many Japanese soldiers who died during the occupation of Jolo Island in the Philippines had their livers eaten by local Moro fighters, according to Japanese soldier Fujioka Akiyoshi.
During the Cultural Revolution (1966–1976), hundreds of incidents of cannibalism occurred, mostly motivated by hatred against supposed "class enemies", but sometimes also by health concerns. In a case recorded by the local authorities, a school teacher in Mengshan County "heard that consuming a 'beauty's heart' could cure disease". He then chose a 13- or 14-year-old student of his and publicly denounced her as a member of the enemy faction, which was enough to get her killed by an angry mob. After the others had left, he "cut open the girl's chest ..., dug out her heart, and took it home to enjoy".
In a further case that took place in Wuxuan County, likewise in the Guangxi region, three brothers were beaten to death as supposed enemies; afterwards their livers were cut out, baked, and consumed "as medicine".
According to the Chinese writer Zheng Yi, who researched these events, "the consumption of human liver was mentioned at least fifty or sixty times" in just a small number of archival documents. He talked with a man who had eaten human liver and told him that "barbecued liver is delicious".
During a massacre of the Madurese minority in the Indonesian part of Borneo in 1999, reporter Richard Lloyd Parry met a young cannibal who had just participated in a "human barbecue" and told him without hesitation: "It tastes just like chicken. Especially the liver – just the same as chicken." In 2013, during the Syrian civil war, Syrian rebel Abu Sakkar was filmed eating parts of the lung or liver of a government soldier while declaring that "We will eat your hearts and your livers you soldiers of Bashar the dog".
Breasts, palms, and soles
Various accounts from around the world mention women's breasts as a favourite body part. Also frequently mentioned are the palms of the hands and sometimes the soles of the feet, regardless of the victim's gender.
Jerome, in his treatise Against Jovinianus, claimed that the British Attacotti were cannibals who regarded the buttocks of men and the breasts of women as delicacies.
During the Mongol invasion of Europe in the 13th century and their subsequent rule over China during the Yuan dynasty (1271–1368), some Mongol fighters practised cannibalism and both European and Chinese observers record a preference for women's breasts, which were considered "delicacies" and, if there were many corpses, sometimes the only part of a female body that was eaten (of men, only the thighs were said to be eaten in such circumstances).
After meeting a group of cannibals in West Africa in the 14th century, the Moroccan explorer Ibn Battuta recorded that, according to their preferences, "the tastiest part of women's flesh is the palms and the breast."
Centuries later, one anthropologist wrote that, in southern Nigeria, "the parts in greatest favour are the palms of the hands, the fingers and toes, and, of a woman, the breast."
Regarding the north of the country, his colleague Charles Kingsley Meek added: "Among all the cannibal tribes the palms of the hands and the soles of the feet were considered the tit-bits of the body."
Among the Apambia, a cannibalistic clan of the Azande people in Central Africa, palms and soles were considered the best parts of the human body, while their favourite dish was prepared with "fat from a woman's breast", according to the missionary and ethnographer F. Gero.
Similar preferences are on record throughout Melanesia. According to the anthropologists Bernard Deacon and Camilla Wedgwood, women were "specially fattened for eating" in Vanuatu, "the breasts being the great delicacy". A missionary confirmed that "a body of a female usually formed the principal part of the repast" at feasts for chiefs and warriors.
One ethnologist writes: "Apart from the breasts of women and the genitals of men, palms of hands and soles of feet were the most coveted morsels." He knew a chief on Ambae, one of the islands of Vanuatu, who, "according to fairly reliable sources", dined on a young girl's breasts every few days.
When visiting the Solomon Islands in the 1980s, anthropologist Michael Krieger met a former cannibal who told him that women's breasts had been considered the best part of the human body because they were so fatty, with fat being a rare and sought-after delicacy.
They were also considered among the best parts in New Guinea and the Bismarck Archipelago.
Modes of preparation
Based on theoretical considerations, the structuralist anthropologist Claude Lévi-Strauss suggested that human flesh was most typically boiled, with roasting also used to prepare the bodies of enemies and other outsiders in exocannibalism, but rarely in funerary endocannibalism (when eating deceased relatives).
But an analysis of 60 sufficiently detailed and credible descriptions of institutionalized cannibalism by anthropologist Paul Shankman failed to confirm this hypothesis. Shankman found that roasting and boiling together accounted for only about half of the cases, with roasting being slightly more common. In contrast to Lévi-Strauss's predictions, boiling was more often used in exocannibalism, while roasting was about equally common for both.
Shankman observed that various other "ways of preparing people" were repeatedly employed as well; in one third of all cases, two or more modes were used together (e.g. some bodies or body parts were boiled or baked, while others were roasted). Human flesh was baked in steam on preheated rocks or in earth ovens (a technique widely used in the Pacific), smoked (which allowed it to be preserved for later consumption), or eaten raw. While these modes were used in both exo- and endocannibalism, another method that was only used in the latter and only in the Americas was to burn the bones or bodies of deceased relatives and then to consume the bone ash.
After analysing numerous accounts from China, Key Ray Chong similarly concludes that "a variety of methods for cooking human flesh" were used in this country. Most popular were "broiling, roasting, boiling and steaming", followed by "pickling in salt, wine, sauce and the like". Human flesh was also often "cooked into soup" or stewed in cauldrons. Eating human flesh raw was the "least popular" method, but a few cases are on record too. Chong notes that human flesh was typically cooked in the same way as "ordinary foodstuffs for daily consumption" – no principal distinction from the treatment of animal meat is detectable, and nearly any mode of preparation used for animals could also be used for people.
Whole-body roasting and baking
Though human corpses, like those of animals, were usually cut into pieces for further processing, reports of people being roasted or baked whole are on record throughout the world.
At the archaeological site of Herxheim, Germany, more than a thousand people were killed and eaten about 7000 years ago, and the evidence indicates that many of them were spit-roasted whole over open fires.
During severe famines in China and Egypt during the 12th and early 13th centuries, there was a black-market trade in corpses of little children that were roasted or boiled whole.
In China, human-flesh sellers advertised such corpses as good for being boiled or steamed whole, "including their bones", and praised their particular tenderness.
In Cairo, Egypt, the Arab physician Abd al-Latif al-Baghdadi repeatedly saw "little children, roasted or boiled", offered for sale in baskets on street corners during a heavy famine that started in 1200 CE.
Older children and possibly adults were sometimes prepared in the same way.
Once he saw "a child nearing the age of puberty, who had been found roasted"; two young people confessed to having killed and cooked the child.
Another time, remains were found of a person who had apparently been roasted and served whole, the legs tied like those of "a sheep trussed for cooking".
Only the skeleton was found, still undivided and in the trussed position, but "with all the flesh stripped off for food".
In some cases children were roasted and offered for sale by their own parents; other victims were street children, who had become very numerous and were often kidnapped and cooked by people looking for food or extra income.
The victims were so numerous that sometimes "two or three children, even more, would be found in a single cooking pot."
Al-Latif notes that, while initially people were shocked by such acts, they "eventually ... grew accustomed, and some conceived such a taste for these detestable meats that they made them their ordinary provender ... The horror people had felt at first vanished entirely".
After the end of the Mongol-led Yuan dynasty (1271–1368), a Chinese writer criticized in his recollections of the period that some Mongol soldiers ate human flesh because of its taste rather than (as had also occurred in other times) merely in cases of necessity. He added that they enjoyed torturing their victims (often children or women, whose flesh was preferred over that of men) by roasting them alive, in "large jars whose outside touched the fire [or] on an iron grate".
Other victims were placed "inside a double bag ... which was put into a large pot" and so boiled alive.
While not mentioning live roasting or boiling, European authors also complained about cannibalism and cruelty during the Mongol invasion of Europe, and a drawing in the Chronica Majora (compiled by Matthew Paris) shows Mongol fighters spit-roasting a human victim.
A man who accompanied Christopher Columbus during his second voyage afterwards stated "that he saw there with his own eyes several Indians skewered on spits being roasted over burning coals as a treat for the gluttonous."
Jean de Léry, who lived for several months among the Tupinambá in Brazil, writes that several of his companions reported "that they had seen not only a number of men and women cut in pieces and grilled on the boucans, but also little unweaned children roasted whole" after a successful attack on an enemy village.
According to German ethnologist Leo Frobenius, children captured by Songye slave raiders in the Central African Kasaï region that were too young to be sold with a profit were instead "skewered on long spears like rats and roasted over a quickly kindled large fire" for consumption by the raiders.
In the Solomon Islands in the 1870s, a British captain saw a "dead body, dressed and cooked whole" offered for sale in a canoe. A settler treated the scene as "an every-day occurrence" and told him "that he had seen as many as twenty bodies lying on the beach, dressed and cooked". Decades later, a missionary reported that whole bodies were still offered "up and down the coast in canoes for sale" after battles, since human flesh was eaten "for pleasure".
In Fiji, whole human bodies cooked in earth ovens were served in carefully pre-arranged postures, according to anthropologist Lorimer Fison and several other sources.
Within this archipelago, it was especially the Gau Islanders who "were famous for cooking bodies whole".
In New Caledonia, a missionary named Ta'unga from the Cook Islands repeatedly saw how whole human bodies were cooked in earth ovens: "They tie the hands together and bundle them up together with the intestines. The legs are bent up and bound with hibiscus bark. When it is completed they lay the body out flat on its back in the earth oven, then when it is baked ready they cut it up and eat it." Ta'unga commented: "One curious thing is that when a man is alive he has a human appearance, but after he is baked he looks more like a dog, as the lips are shriveled back and his teeth are bared."
Among the Māori in New Zealand, children captured in war campaigns were sometimes spit-roasted whole (after slitting open their bellies to remove the intestines), as various sources report. Enslaved children, including teenagers, could meet the same fate, and whole babies were sometimes served at the tables of chiefs.
In the Marquesas Islands, captives (preferably women) killed for consumption "were spitted on long poles that entered between their legs and emerged from their mouths" and then roasted whole. Similar customs had a long history: In Nuku Hiva, the largest of these islands, archaeologists found the partially consumed "remains of a young child" that had been roasted whole in an oven during the 14th century or earlier.
Medical aspects
A well-known case of mortuary cannibalism is that of the Fore tribe in New Guinea, which resulted in the spread of the prion disease kuru. Although the Fore's mortuary cannibalism was well-documented, the practice had ceased before the cause of the disease was recognized. However, some scholars argue that although post-mortem dismemberment was the practice during funeral rites, cannibalism was not. Marvin Harris theorizes that it happened during a famine period coincident with the arrival of Europeans and was rationalized as a religious rite.
In 2003, a publication in Science received a large amount of press attention when it suggested that early humans may have practised extensive cannibalism. According to this research, genetic markers commonly found in modern humans worldwide suggest that today many people carry a gene that evolved as protection against the brain diseases that can be spread by consuming human brain tissue. A 2006 reanalysis of the data questioned this hypothesis, because it claimed to have found a data collection bias, which led to an erroneous conclusion. This claimed bias came from incidents of cannibalism used in the analysis not being due to local cultures, but having been carried out by explorers, stranded seafarers or escaped convicts. The original authors published a subsequent paper in 2008 defending their conclusions.
Myths, legends and folklore
Cannibalism features in the folklore and legends of many cultures and is most often attributed to evil characters or as extreme retribution for some wrongdoing. Examples include the witch in "Hansel and Gretel", Lamia of Greek mythology, the witch Baba Yaga of Slavic folklore, and the Yama-uba in Japanese folklore.
A number of stories in Greek mythology involve cannibalism, in particular the eating of close family members, e.g., the stories of Thyestes, Tereus and especially Cronus, who became Saturn in the Roman pantheon. The story of Tantalus is another example, though here a family member is prepared for consumption by others.
The wendigo is a creature appearing in the legends of the Algonquian people. It is thought of variously as a malevolent cannibalistic spirit that could possess humans or a monster that humans could physically transform into. Those who indulged in cannibalism were at particular risk, and the legend appears to have reinforced this practice as taboo. The Zuni people tell the story of the Átahsaia – a giant who cannibalizes his fellow demons and seeks out human flesh.
The wechuge is a demonic cannibalistic creature that seeks out human flesh appearing in the mythology of the Athabaskan people. It is said to be half monster and half human-like; however, it has many shapes and forms.
In literature and popular culture
Cannibalism is depicted in literary and other imaginative works across history. Homer's Odyssey, Beowulf, Shakespeare's Titus Andronicus, Daniel Defoe's Robinson Crusoe, Herman Melville's Moby-Dick, and Gustave Flaubert's Salammbô are prominent examples. It also features in several classic Chinese novels, such as Romance of the Three Kingdoms and Water Margin.
One of the most famous satirical essays in the English language concerns cannibalism. A Modest Proposal for Preventing the Children of Poor People from Being a Burthen to Their Parents or Country, and for Making Them Beneficial to the Publick, commonly referred to as A Modest Proposal, is a Juvenalian satire published by Anglo-Irish writer and clergyman Jonathan Swift in 1729. It suggests that poor people in Ireland could ease their economic troubles by selling their young children as food to the elite, and describes in detail the various advantages this would ostensibly have. Among other satirical works depicting cannibalism are Mark Twain's short story "Cannibalism in the Cars" (1868) and Mo Yan's novel The Republic of Wine (1992).
Cannibalism is also a recurring theme in popular culture, especially within the horror genre, with cannibal films being a notable subgenre. One of the best known fictional serial killers is a cannibal: Hannibal Lecter, created by Thomas Harris. Survival cannibalism is a topic of films such as Society of the Snow (2023) and TV series such as Yellowjackets (2021–). Other works mention cannibalism in post-apocalyptic settings, among them Cormac McCarthy's novel The Road (2006) and its 2009 film adaptation. People who consume human flesh without knowing it are depicted in various films, among them the science fiction classic Soylent Green (1973) and the horror comedy The Rocky Horror Picture Show (1975).
Scepticism
William Arens, author of The Man-Eating Myth: Anthropology and Anthropophagy, questions the credibility of reports of cannibalism and argues that the description by one group of people of another people as cannibals is a consistent and demonstrable ideological and rhetorical device to establish perceived cultural superiority. Arens bases his thesis on a detailed analysis of various "classic" cases of cannibalism reported by explorers, missionaries, and anthropologists. He claims that all of them were steeped in racism, unsubstantiated, or based on second-hand or hearsay evidence. Though widely discussed, Arens's book generally failed to convince the academic community. Claude Lévi-Strauss observes that, in spite of his "brilliant but superficial book ... [n]o serious ethnologist disputes the reality of cannibalism". Shirley Lindenbaum notes that, while after "Arens['s] ... provocative suggestion ... many anthropologists ... reevaluated their data", the outcome was an improved and "more nuanced" understanding of where, why and under which circumstances cannibalism took place rather than a confirmation of his claims: "Anthropologists working in the Americas, Africa, and Melanesia now acknowledge that institutionalized cannibalism occurred in some places at some times. Archaeologists and evolutionary biologists are taking cannibalism seriously."
Lindenbaum and others point out that Arens displays a "strong ethnocentrism". His refusal to admit that institutionalized cannibalism ever existed seems to be motivated by the implied idea "that cannibalism is the worst thing of all" – worse than any other behaviour people engaged in, and therefore uniquely suited to vilifying others. Kajsa Ekholm Friedman calls this "a remarkable opinion in a culture [the European/American one] that has been capable of the most extreme cruelty and destructive behavior, both at home and in other parts of the world."
She observes that, contrary to European values and expectations, "in many parts of the Congo region there was no negative evaluation of cannibalism. On the contrary, people expressed their strong appreciation of this very special meat and could not understand the hysterical reactions from the white man's side." And why indeed, she goes on to ask, should they have had the same negative reactions to cannibalism as Arens and his contemporaries? Implicitly he assumes that everybody throughout human history must have shared the strong taboo placed by his own culture on cannibalism, but he never attempts to explain why this should be so, and "neither logic nor historical evidence justifies" this viewpoint, as Christian Siefkes commented.
Some have argued that it is the taboo against cannibalism, rather than its practice, that needs to be explained. Hubert Murray, the Lieutenant-Governor of Papua in the early 20th century, admitted that "I have never been able to give a convincing answer to a native who says to me, 'Why should I not eat human flesh?'" After observing that the Orokaiva people in New Guinea explained their cannibal customs as due to "a simple desire for good food", the Australian anthropologist F. E. Williams commented: "Anthropologically speaking the fact that we ourselves should persist in a superstitious, or at least sentimental, prejudice against human flesh is more puzzling than the fact that the Orokaiva, a born hunter, should see fit to enjoy perfectly good meat when he gets it."
Accusations of cannibalism could be used to characterize indigenous peoples as "uncivilized", "primitive", or even "inhuman." While this means that the reliability of reports of cannibal practices must be carefully evaluated especially if their wording suggests such a context, many actual accounts do not fit this pattern. The earliest firsthand account of cannibal customs in the Caribbean comes from Diego Álvarez Chanca, who accompanied Christopher Columbus on his second voyage. His description of the customs of the Caribs of Guadeloupe includes their cannibalism (men killed or captured in war were eaten, while captured boys were "castrated [and used as] servants until they gr[e]w up, when they [were] slaughtered" for consumption), but he nevertheless notes "that these people are more civilized than the other islanders" (who did not practice cannibalism). Nor was he an exception. Among the earliest reports of cannibalism in the Caribbean and the Americas, there are some (like those of Amerigo Vespucci) that seem to mostly consist of hearsay and "gross exaggerations", but others (by Chanca, Columbus himself, and other early travellers) show "genuine interest and respect for the natives" and include "numerous cases of sincere praise".
Reports of cannibalism from other continents follow similar patterns. Condescending remarks can be found, but many Europeans who described cannibal customs in Central Africa wrote about those who practised them in quite positive terms, calling them "splendid" and "the finest people" and not rarely, like Chanca, actually considering them as "far in advance of" and "intellectually and morally superior" to the non-cannibals around them. Writing from Melanesia, the missionary George Brown explicitly rejects the European prejudice of picturing cannibals as "particularly ferocious and repulsive", noting instead that many cannibals he met were "no more ferocious than" others and "indeed ... very nice people".
Reports or assertions of cannibal practices could nevertheless be used to promote the use of military force as a means of "civilizing" and "pacifying" the "savages". During the Spanish conquest of the Aztec Empire and its earlier conquests in the Caribbean there were widespread reports of cannibalism, and cannibals became exempted from Queen Isabella's prohibition on enslaving the indigenous. Another example of the sensationalism of cannibalism and its connection to imperialism occurred during Japan's 1874 expedition to Taiwan. As Robert Eskildsen describes, Japan's popular media "exaggerated the aborigines' violent nature", in some cases by wrongly accusing them of cannibalism.
This Horrid Practice: The Myth and Reality of Traditional Maori Cannibalism (2008) by New Zealand historian Paul Moon received a hostile reception by some Māori, who felt the book tarnished their whole people. However, the factual accuracy of the book was not seriously disputed and even critics such as Margaret Mutu grant that cannibalism was "definitely" practised and that it was "part of our [Māori] culture."
History
There is archaeological evidence that cannibalism has been practised for at least hundreds of thousands of years by early Homo sapiens and archaic hominins.
Among modern humans, cannibalism has been practised by various groups. An incomplete list of cases where it is documented to have occurred in institutionalized form includes prehistoric and early modern Europe, South America, Mesoamerica, Iroquoian peoples in North America, parts of Western and Central Africa, China and Sumatra, among pre-contact Aboriginal Australians, among Māori in New Zealand, on some other Polynesian islands as well as in New Guinea, the Solomon Islands, and Fiji. Evidence of cannibalism has also been found in ruins associated with the Ancestral Puebloans, at Cowboy Wash in the Southwestern United States.
After World War I, institutionalized cannibalism has become very rare, but cases were still reported during times of famine. Occasional cannibal acts committed by individual criminals also are documented throughout the 20th and 21st centuries.
The Americas
Africa
Europe
Asia
Oceania
See also
Autocannibalism, the practice of eating oneself (also called self-cannibalism)
Cannibal film
Cannibalism in Africa
Cannibalism in Asia
Cannibalism in Europe
Cannibalism in literature
Cannibalism in Oceania
Cannibalism in popular culture
Cannibalism in poultry
Cannibalism in the Americas
Cannibalization (marketing), a business strategy
Child cannibalism for children as victims of cannibalism (in myth and reality)
Custom of the sea, the practice of shipwrecked survivors drawing lots to see who would be killed and eaten so that the others might survive
Endocannibalism, the consumption of persons from the same community, often as a funerary rite
Exocannibalism, the consumption of persons from outside the community, often enemies killed or captured in war
Filial cannibalism, the consumption of one's own offspring
Homo antecessor, an extinct human species providing some of the earliest known evidence for human cannibalism
Human placentophagy, the consumption of the placenta (afterbirth)
Issei Sagawa, a Japanese man who became a minor celebrity after killing and eating another student
List of incidents of cannibalism
Medical cannibalism, the consumption of human body parts to treat or prevent diseases
Placentophagy, the act of mammals eating the placenta of their young after childbirth
Pleistocene human diet, the eating habits of human ancestors in the Pleistocene
Sexual cannibalism, behaviour of (usually female) animals that eat their mates during or after copulation
Transmissible spongiform encephalopathy, an incurable disease that can damage the brain and nervous system of many animals, including humans
Vorarephilia, a sexual fetish and paraphilia where arousal results from the idea of devouring others or being devoured
References
Bibliography
Further reading
Sahlins, Marshall. "Cannibalism: An Exchange." New York Review of Books 26, no. 4 (March 22, 1979).
Schutt, Bill. Cannibalism: A Perfectly Natural History. Chapel Hill: Algonquin Books 2017.
External links
The Straight Dope columns:
Víctor Montoya, Cannibalism (2007, translated by Elizabeth Gamble Miller) – a look at representations of cannibalism in art and myth, and why we tend to be so horrified by it
Rachael Bell, Cannibalism: The Ancient Taboo in Modern Times (2015) – from Crime Library
Alisa G. Woods, Cannibalism and the Resistant Brain (2015) – on how studies of kuru might lead to a better understanding of other diseases
Shirley Lindenbaum, Cannibalism (2021) – article from the Open Encyclopedia of Anthropology
Terry Madenholm, A Brief History of Cannibalism: Not Just a Matter of Taste (2022) – from Haaretz
Human activities | Human cannibalism | Biology | 9,629 |
22,076,861 | https://en.wikipedia.org/wiki/Choke%20pear%20%28plant%29 | A choke pear or chocky-pear is an astringent fruit. The term is used for the fruit of any variety of pear which has an astringent taste and is difficult to swallow.
Varieties
One variety of choke pear is the poire d'Angoisse, a variety of pear that was grown in Angoisse, a commune in the Arrondissement of Nontron in Dordogne, France, in the Middle Ages, which was hard, bad-tasting, and almost impossible to eat raw. In the words of L'Académie française, the pear is "si âpre et si revèche au goût qu'on a de la peine à l'avaler" ("so harsh and crabbed of taste that one can only with difficulty swallow it"). These qualities, and the common meaning of angoisse in French ("anguish"), apparently gave rise to the French idiom avaler des poires d'angoisse ("to swallow pears of Angoisse/anguish"), meaning "to suffer great displeasures". Possibly because of this idiom, the names "choke pear" and "pear of anguish" have been used for a gagging device allegedly used in Europe sometime before the 17th century.
Dalechamps identified this with the species of pear that Pliny the Elder listed as "ampullaceum" in his Naturalis Historia. Like most sour-tasting pear cultivars, it was most likely used to make perry.
Similar fruits
Similarly named trees with astringent fruits include the choke cherry (the common name for several species of cherry tree that grow in North America and whose fruits are small and bitter tasting: Prunus virginiana, Prunus demissa, and Prunus serotina) and the choke plum.
References
External links
Pears
Roman cuisine
Plant common names | Choke pear (plant) | Biology | 382 |
24,144,656 | https://en.wikipedia.org/wiki/C21H30N2O |
The molecular formula C21H30N2O (molar mass: 326.484 g/mol, exact mass: 326.2358 u) may refer to:
Bunaftine
FT-104
Hydroxystenozole, also known as 17α-methylandrost-4-eno[3,2-c]pyrazol-17β-ol
Molecular formulas | C21H30N2O | Physics,Chemistry | 99 |
67,218,332 | https://en.wikipedia.org/wiki/Flying%20Laptop | The German Flying Laptop satellite, launched on 14 July 2017 on a Soyuz-2.1a launch vehicle from Baikonur Cosmodrome in Kazakhstan, hosts the OSIRISv1 laser communications experiment. The satellite has a total mass of 110 kg. It operates at a Sun-synchronous orbit with an inclination of 97.6 degrees.
The satellite is part of the Stuttgart Small Satellite Program, a program led by the German Space Agency.
Optical communications tests have been carried out with ground stations in Japan, Europe, and Canada, with a data rate of up to 200 Mbit/s, from orbit to ground only.
The two fixed lasers of OSIRISv1 are aimed at ground stations by 'body pointing', attitude control of the entire satellite, using four reaction wheels. The reaction wheels can be desaturated using three internal magnetorquers.
Flying Laptop carries a de-orbit mechanism called DOM2500 developed by Tohoku University and manufactured by Nakashimada Engineering Works, Ltd., which upon activation will unfurl a sail to increase atmospheric drag. The device will be used at the end of the satellite mission.
Further reading
OSIRISv1 on Flying Laptop: Measurement Results and Outlook Fuchs 2019
See also
Laser communication in space
References
Satellites of Germany
Spacecraft launched by Soyuz-2 rockets | Flying Laptop | Astronomy | 272 |
10,031,328 | https://en.wikipedia.org/wiki/Electric%20sail | An electric sail (also known as an electric solar wind sail or an E-sail) is a proposed form of spacecraft propulsion using the dynamic pressure of the solar wind as a source of thrust. It creates a "virtual" sail by using small wires to form an electric field that deflects solar wind protons and extracts their momentum. The idea was first conceptualised by Pekka Janhunen in 2006 at the Finnish Meteorological Institute.
Principles of operation and design
The electric sail consists of a number of thin, long and conducting tethers which are kept in a high positive potential by an onboard electron gun. The positively charged tethers deflect solar wind protons, thus extracting momentum from them. Simultaneously they attract electrons from the solar wind plasma, producing an electron current. The electron gun compensates for the arriving electric current.
One way to deploy the tethers is to rotate the spacecraft, using centrifugal force to keep them stretched. By fine-tuning the potentials of individual tethers and thus the solar wind force individually, the spacecraft's attitude can be controlled.
E-sail missions can be launched at almost any time with only minor variations in travel time. By contrast, conventional slingshot missions must wait for the planets to reach a particular alignment.
The electric solar wind sail has little in common with the traditional solar sail. The E-sail gets its momentum from the solar wind ions, whilst a photonic sail is propelled by photons. The available pressure is only about 1% of photon pressure; however, this may be compensated by the simplicity of scale-up. In the E-sail, the part of the sail is played by straightened conducting tethers (made of wires) which are placed radially around the host ship. The wires are electrically charged and thus an electric field is created around the wires. The electric field of the wires extends a few dozen metres into the surrounding solar wind plasma. The penetration distance depends on the solar wind plasma density and it scales as the plasma Debye length. Because the solar wind electrons affect the electric field (similarly to the photons on a traditional solar sail), the effective electric radius of the tethers is based on the electric field that is generated around the tether rather than the actual tether itself. This fact also makes it possible to manoeuvre by regulating the tethers' electric charge.
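For a sense of the length scale, the Debye length for representative solar-wind conditions at 1 AU can be computed directly; the density and temperature below are illustrative assumptions, not values taken from the sources above:

```python
import math

# Hedged sketch: Debye length for typical solar-wind plasma near 1 AU.
# Assumed values (n_e ~ 7 cm^-3, T_e ~ 1.2e5 K) are illustrative only.
EPS0 = 8.854e-12  # vacuum permittivity, F/m
K_B = 1.381e-23   # Boltzmann constant, J/K
E = 1.602e-19     # elementary charge, C

def debye_length(n_e, t_e):
    """n_e: electron density in m^-3; t_e: electron temperature in K."""
    return math.sqrt(EPS0 * K_B * t_e / (n_e * E ** 2))

print(debye_length(7e6, 1.2e5))  # ~9 m; the tether sheath spans several
                                 # of these, i.e. a few dozen metres
```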
A full-sized sail would have 50–100 straightened tethers with a length of about each.
Compared to a reflective solar light sail, another propellantless deep space propulsion system, the electric solar wind sail could continue to accelerate at greater distances from the Sun, still developing thrust as it cruises toward the outer planets. By the time it reaches the ice giants, it may have accumulated as much as velocity, which is on par with the New Horizons probe, but without gravity assists.
In order to minimise damage to the thin tethers from micrometeoroids, the tethers would be formed from multiple strands, 25–50 micrometers in diameter, welded together at regular intervals. Thus, even if one wire were severed, a conducting path along the full length of the braided wire would remain in place. The feasibility of using ultrasonic welding was demonstrated at the University of Helsinki in January 2013.
Development history
The Academy of Finland has been funding electric sail development since 2007.
To test the technology, a new European Union-backed electric sail study project was announced by the FMI in December 2010, with an EU funding contribution of 1.7 million euros. Its goal was to build laboratory prototypes of the key components; it involved five European countries and ended in November 2013. In the EU evaluation, the project got the highest marks in its category. An attempt was made to test the working principles of the electric sail in low Earth orbit on the Estonian nanosatellite ESTCube-1 (2013–2015), but a technical failure made the attempt unsuccessful: the piezoelectric motor used to unfurl the sail failed to turn the reel. In subsequent ground-based testing, the likely cause was traced to a slip-ring contact that had probably been physically damaged by launch vibration.
An international research team that includes Janhunen received funding through a 2015 NIAC Phase II solicitation for further development at NASA's Marshall Space Flight Center. Their research project, the Heliopause Electrostatic Rapid Transit System (HERTS), is currently being tested; it might take only 10 to 15 years to make a trip of over 100 astronomical units (15 billion kilometers). In the HERTS concept, multiple positively charged wires, each roughly 20 kilometers long and 1 millimeter thick, would be extended from a rotating spacecraft.
The Finnish Aalto-1 nanosatellite, launched in June 2017 and currently in orbit, will test the electric sail for deorbiting in 2019.
In 2017, the Academy of Finland granted Centre of Excellence funding for 2018–2025 to a team that includes Janhunen and university researchers, to establish a Finnish Centre of Excellence in Research of Sustainable Space.
Intrinsic limitations
Almost all Earth-orbiting satellites are inside Earth's magnetosphere. However, the electric sail cannot be used inside planetary magnetospheres, because the solar wind does not penetrate them; only slower plasma flows and magnetic fields are present there. Instead, inside a planetary magnetosphere, the electric sail may function as a brake, allowing deorbiting of satellites.
As with other solar sail technologies, while modest variation of the thrust direction can be achieved by inclining the sail, the thrust vector always points more or less radially outward from the Sun. It has been estimated that the maximum operational inclination would be 60°, resulting in a thrusting angle of 30° from the outward radial direction. However, as with the sails of a ship, tacking could be used to change the trajectory. Interstellar ships approaching a sun might use the solar wind flow for braking.
Applications
Fast missions (> or ) out of the Solar System and heliosphere with small or modest payload
As a brake for a small interstellar probe which has been accelerated to high speed by some other means such as laser lightsail
Inward-spiralling missions to study the Sun at a closer distance
Two-way missions to inner Solar System objects such as asteroids
Off-Lagrange point solar wind monitoring spacecraft for predicting space weather with a longer warning time than 1 hour
Fast missions to planet Uranus
Janhunen et al. have proposed a mission to Uranus powered by an electric sail. The mission could reach its destination in about the same time that the earlier Galileo space probe required to arrive at Jupiter, just over one fourth as far away. Galileo took 6 years to reach Jupiter at a cost of $1.6 billion, while Cassini-Huygens took 7 years to get to Saturn and cost almost as much. The sail is expected to consume 540 watts, producing about 0.5 newtons of thrust and accelerating the craft by about 1 mm/s². The craft would reach a velocity of about by the time it reaches Uranus, 6 years after launch.
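As a quick sanity check on the quoted figures (an illustrative sketch, not data from the proposal), the thrust and acceleration together imply the spacecraft's total mass:

```python
# Consistency check on the figures quoted above: 0.5 N of thrust at an
# acceleration of about 1 mm/s^2 implies a craft mass of F / a = 500 kg.
thrust = 0.5           # N
accel = 1.0e-3         # m/s^2
print(thrust / accel)  # -> 500.0 kg implied total mass
```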
The downside is that the electric sail cannot be used as a brake, so the craft arrives at a speed of , limiting the missions to flybys or atmospheric entry missions. Braking would require a conventional chemical rocket.
The proposed craft has three parts: the E-sail module with solar panels and reels to hold the wires; the main body, including chemical thrusters for adjusting trajectory en route and at destination and communications equipment; and a research module to enter Uranus's atmosphere and make measurements for relay to Earth via the main body.
See also
Electrodynamic tether
Magnetic sail
References
External links
Heliopause Electrostatic Rapid Transit System
FMI's official E-sail page
List of original scientific publications
Finnish Meteorological Institute/Space Research
CubeSats
Finnish inventions
Spacecraft propulsion
Solar sailing
Interstellar travel | Electric sail | Astronomy | 1,639 |
60,478 | https://en.wikipedia.org/wiki/Abort%20%28computing%29 | In a computer or data transmission system, to abort means to terminate, usually in a controlled manner, a processing activity because it is impossible or undesirable for the activity to proceed, often in conjunction with an error. Such an action may be accompanied by diagnostic information on the aborted process.
In addition to being a verb, abort also has two noun senses. In the most general case, the event of aborting can be referred to as an abort. Sometimes the event of aborting can be given a special name, as in the case of an abort involving a Unix kernel where it is known as a kernel panic. Specifically in the context of data transmission, an abort is a function invoked by a sending station to cause the recipient to discard or ignore all bit sequences transmitted by the sender since the preceding flag sequence.
In the C programming language, abort() is a standard library function that terminates the current application abnormally, by raising the SIGABRT signal, and returns an unsuccessful termination status to the host environment.
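By way of illustration (a minimal sketch, not a recommended error-handling pattern), Python's os.abort() behaves analogously, raising SIGABRT and terminating the process immediately:

```python
import os

def process(data):
    if data is None:   # unrecoverable state: bail out hard
        os.abort()     # raises SIGABRT; no cleanup handlers run
    return len(data)

process(None)
print("never reached")  # the process has already terminated abnormally
```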
Types of aborts
User-Initiated Aborts: Users can often abort tasks using keyboard shortcuts (like Ctrl + C in terminal applications) or commands to terminate processes. This is especially useful for stopping unresponsive programs or those taking longer than expected to execute.
Programmatic Aborts: Developers can implement abort logic in their code. For instance, when a program encounters an error or invalid input, it may call functions like abort() in C or C++ to terminate execution. This approach helps prevent further errors or potential data corruption.
System-Level Aborts: Operating systems might automatically abort processes under certain conditions, such as resource exhaustion or unresponsiveness. For example, a watchdog timer can terminate a process that remains idle beyond a specified time limit.
Database Transactions: In database management, aborting (often termed ‘rolling back’) a transaction is crucial for maintaining data integrity. If a transaction cannot be completed successfully, aborting it returns the database to its previous state, which ensures that incomplete transactions don’t leave the data inconsistent.
Aborts are typically logged, especially in critical systems, to facilitate troubleshooting and improve future runs.
See also
Abort, Retry, Fail?
Abnormal end
Crash
Hang
Reset
Reboot
References
Computing terminology | Abort (computing) | Technology | 474 |
60,936,591 | https://en.wikipedia.org/wiki/Mohammed%20Nasser%20Al%20Ahbabi | Mohammed Nasser Al Ahbabi is the Director General of the United Arab Emirates Space Agency. Before joining the UAE Space Agency, Ahbabi was part of a UAE Armed Forces think tank project, where he worked alongside military and government stakeholders on concepts and technologies in Smart Defense and Cyber Warfare, amongst others. He has an active role in ITU-R and has served as the head of the YAHSAT MilSatCom Project.
Education
In 1998, Mohammed Nasser Al Ahbabi obtained a degree in electronic engineering from the University of California, United States of America. He obtained a master's degree in communications from the University of Southampton, United Kingdom, in 2001, and a Ph.D. in Laser and Fibre Optics from the same university in 2005.
Career
Mohammed Nasser Al Ahbabi initially served as a telecommunications officer for the UAE Armed Forces. Concurrently, he worked as a coordinator for Dubai Internet City. From 2005 to 2012, he was a telecommunications officer at Sharyan Al Doea Network, and a project manager in the military division at Al Yah Satellite Communications. He is a part of the Hope Mars Mission team, which plans to send the Hope Space exploration probe into Mars' orbit by 2020.
Recognition
Mohammed Nasser Al Ahbabi has been ranked 43 in the Top 100 Most Powerful Arabs 2018 list compiled by Gulf Business. He was ranked 13 in Richtopia's list of the world’s 100 most influential figures in the space exploration sector.
References
Living people
Year of birth missing (living people) | Mohammed Nasser Al Ahbabi | Astronomy | 304 |
17,782,532 | https://en.wikipedia.org/wiki/Susceptibility%20weighted%20imaging | Susceptibility weighted imaging (SWI), originally called BOLD venographic imaging, is an MRI sequence that is exquisitely sensitive to venous blood, hemorrhage and iron storage. SWI uses a fully flow compensated, long echo, gradient recalled echo (GRE) pulse sequence to acquire images. This method exploits the susceptibility differences between tissues and uses the phase image to detect these differences. The magnitude and phase data are combined to produce an enhanced contrast magnitude image. The imaging of venous blood with SWI is a blood-oxygen-level dependent (BOLD) technique which is why it was (and is sometimes still) referred to as BOLD venography. Due to its sensitivity to venous blood SWI is commonly used in traumatic brain injuries (TBI) and for high resolution brain venographies but has many other clinical applications. SWI is offered as a clinical package by Philips and Siemens but can be run on any manufacturer's machine at field strengths of 1.0 T, 1.5 T, 3.0 T and higher.
Acquisition and image processing
SWI uses a fully velocity compensated, RF spoiled, high-resolution, 3D gradient recalled echo (GRE) scan. Both the magnitude and phase images are saved, and the phase image is high pass (HP) filtered to remove unwanted artifacts. The magnitude image is then combined with the phase image to create an enhanced contrast magnitude image referred to as the susceptibility weighted (SW) image. It is also common to create minimum intensity projections (mIP) over 8 to 10 mm to better visualize vein connectivity. In this way four sets of images are generated, the original magnitude, HP filtered phase, susceptibility weighted, and mIPs over the susceptibility weighted images.
Phase filtering
The values in the phase images are constrained from -π to π, so if the value goes above π it wraps to -π. Inhomogeneities in the magnetic field cause low-frequency background gradients, which make the phase values slowly increase across the image; this creates phase wrapping and obscures the image. This type of artifact can be removed by phase unwrapping or by high-pass filtering the original complex data to remove the low-frequency variations in the phase image.
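One common implementation of the high-pass filter divides the complex image by a strongly low-pass-filtered copy of itself (homodyne filtering), so slow background phase variations cancel while fine structure remains. The sketch below assumes a 2D NumPy array and an illustrative central k-space window; the window size and filter shape are assumptions, not a prescribed SWI protocol:

```python
import numpy as np

# Hedged sketch of homodyne high-pass phase filtering: divide the complex
# image by a low-pass-filtered copy so background phase cancels.
def highpass_phase(complex_img, win=64):
    k = np.fft.fftshift(np.fft.fft2(complex_img))    # to k-space
    mask = np.zeros_like(k)
    cy, cx = np.array(k.shape) // 2
    mask[cy - win // 2:cy + win // 2, cx - win // 2:cx + win // 2] = 1.0
    low = np.fft.ifft2(np.fft.ifftshift(k * mask))   # low-frequency image
    return np.angle(complex_img / (low + 1e-12))     # HP-filtered phase
```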
Susceptibility weighted image creation
The susceptibility weighted image is created by combining the magnitude and filtered phase images. A mask is created from the phase image by mapping all values above 0 radians to 1 and linearly mapping values from -π to 0 radians onto the range 0 to 1. The mask is typically raised to a power (commonly the 4th degree) to increase its effect. The magnitude image is then multiplied by this mask. In this way phase values above 0 radians have no effect, and phase values below 0 radians darken the magnitude image. This increases the contrast in the magnitude image for objects with low phase values such as veins, iron, and hemorrhage.
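The mask construction described above can be summarized in a short sketch (NumPy is assumed; the 4th-power exponent follows the typical choice mentioned in the text):

```python
import numpy as np

# Sketch of the SWI phase mask: phase >= 0 maps to 1, phase in [-pi, 0)
# maps linearly onto [0, 1); the mask, raised to the 4th power, then
# multiplies the magnitude image.
def swi_image(magnitude, phase, power=4):
    mask = np.where(phase >= 0.0, 1.0, (phase + np.pi) / np.pi)
    return magnitude * mask ** power
```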
Clinical applications
SWI is most commonly used to detect small amounts of hemorrhage or calcium. Clinical applications are under research in different fields of medicine.
Traumatic brain injury (TBI)
The detection of micro-hemorrhages, shearing, and diffuse axonal injury (DAI) in trauma patients is often difficult, as the injuries tend to be relatively small in size and can be easily missed by low-resolution scans. SWI is usually run at relatively high resolution (1 mm³) and is extremely sensitive to bleeding at the gray matter/white matter boundaries, making it possible to see very small lesions and increasing the ability to detect more subtle injuries.
Stroke and hemorrhage
Diffusion weighted imaging offers a powerful means to detect acute stroke. Although it is well known that gradient echo imaging can detect hemorrhage, it is best detected with SWI. In the example shown here, the gradient echo image shows the region of likely cytotoxic edema whereas the SW image shows the likely localization of the stroke and the vascular territory affected (data acquired at 1.5 T).
The bright region in the gradient echo weighted image shows the area affected in this acute stroke example. The arrows in the SWI image may show the tissue at risk that has been affected by the stroke (A, B, C) and the location of the stroke itself (D). The reason that we are able to see the affected vascular territory could be because there is a reduced level of oxygen saturation in this tissue, suggesting that the flow to this region of the brain could be reduced post stroke. Another possible explanation is that there is an increase in local venous blood volume. In either case, this image suggests that the tissue associated with this vascular territory could be tissue at risk. Future stroke research will involve comparisons of perfusion weighted imaging and SWI to learn more about local flow and oxygen saturation.
Sturge–Weber disease
An SWI venogram of a neonate with Sturge–Weber syndrome who did not display neurological symptoms is shown to the right. The initial conventional MR imaging methods did not demonstrate any abnormality. The abnormal venous vasculature in the left occipital lobe extending between the posterior horn of the ventricle and the cortical surface is clearly visible in the venogram. Due to the high resolution even collaterals can be resolved.
Tumors
Part of the characterization of tumors lies in understanding the angiographic behavior of lesions both from the perspective of angiogenesis and micro-hemorrhages. Aggressive tumors tend to have rapidly growing vasculature and many micro-hemorrhages. Hence, the ability to detect these changes in the tumor could lead to a better determination of the tumor status. The enhanced sensitivity of SWI to venous blood and blood products due to their differences in susceptibility compared to normal tissue leads to better contrast in detecting tumor boundaries and tumor hemorrhage.
Multiple sclerosis
Multiple sclerosis (MS) is usually studied with FLAIR and contrast enhanced T1 imaging. SWI adds to this by revealing the venous connectivity in some lesions and presents evidence of iron in some lesions. This key new information may help understand the physiology of MS.
The magnetic resonance frequency measured with an SWI scan was shown to be sensitive to MS lesion formation. The frequency increases months before a new lesion appears on a contrast enhanced scan. At the time of contrast enhancement the frequency increases rapidly and remains elevated for at least six months.
Vascular dementia and cerebral amyloid angiopathy (CAA)
Gradient recalled echo (GRE) imaging is the conventional way to detect hemorrhage in CAA, however SWI is a much more sensitive technique that can reveal many micro-hemorrhages that are missed on GRE images. A conventional gradient echo T2*-weighted image (left, TE=20 ms) shows some low-signal foci associated with CAA. On the other hand, an SWI image (center, with a resolution of 0.5 mm x 0.5 mm x 2.0 mm, projected over 8mm) shows many more associated low-signal foci. Phase images were used to enhance the effect of the local hemosiderin build-up. An example phase image (right) with yet higher resolution of 0.25 mm x 0.25 mm x 2.0 mm shows a clear ability to localize multiple CAA-associated foci.
Pneumocephalus
Recent studies suggest that SWI might be suitable for monitoring neurosurgical patients recovering from pneumocephalus, as air can be easily detected with SWI.
High field SWI
SWI is uniquely suited to take advantage of higher field systems, as the contrast in the phase image is linearly proportional to echo time (TE) and field strength. Higher fields thus allow shorter echo times without a loss of contrast which can reduce scan time and motion related artifacts. The high signal-to-noise available at higher fields also increases scan quality and allows for higher resolution scans.
See also
Magnetic resonance angiography
Quantitative susceptibility mapping
Footnotes
References
External links
SWI information brochures, including SWI software
MRI-CCSVI Pilot Study with MRA and SWI
NICE MRI
MRI institute for biomedical research
Magnetic resonance imaging
Neuroimaging | Susceptibility weighted imaging | Chemistry | 1,725 |
31,258,710 | https://en.wikipedia.org/wiki/Mycetophagites | Mycetophagites is an extinct genus of mycoparasitic fungi in the order Hypocreales. A monotypic genus, it contains the single species Mycetophagites atrebora.
The genus is solely known from the Lower Cretaceous, Upper Albian stage (about 100 Ma), Burmese amber deposits in Myanmar. Mycetophagites is one of only two known instances of fungal mycoparasitism in the fossil record, and is geologically the oldest to be described.
History and classification
The genus is known only from the single holotype, number "AB-368", comprising hyphae parasitizing a single partial fruiting body. When described, the specimen was part of the private collection of Ron Buckley of Florence, Kentucky, USA. The collection has since been sold and is now owned by Deniz Erin of Istanbul, Turkey. AB-368 was collected from one of the amber mines in the Hukawng Valley area southwest of Maingkhwan, Kachin State, Northern Myanmar. It was first studied by George Poinar of Oregon State University, working with Ron Buckley. Poinar and Buckley published their 2007 type description in Mycological Research, the journal of The British Mycological Society. The genus has been assigned the MycoBank number MB510323, with the species being assigned number MB510324.
The generic epithet Mycetophagites is Greek in derivation, a combination of the words meaning "fungus" and "to eat", referencing the mycoparasitic nature of the species.
When published, Mycetophagites atrebora was the first known instance of hyperparasitism on mycoparasitism to be described in the fossil record, and the oldest. The fossil shows that this type of fungal parasitic relationship had been established by the Albian, 100 million years ago. An earlier instance of mycoparasitism is known from the extinct species Palaeoserenomyces allenbyensis and Cryptodidymosphaerites princetonensis described in 1998 from cherts found in British Columbia, Canada.
[Image: Palaeoagaricites antiquus host cap]
Description
The holotype of Mycetophagites consists of mycelium in a lone, partly decomposed fruiting body without any associated structures. The fungi are preserved in a rectangular piece of yellow amber approximately by by . The pileus is in diameter and possesses a convex shape, with the flesh a bluish gray color and hairy. The mycelium is composed of thick, septate hyphae 4–6 μm in diameter. The hyphae sport dark, septate conidiophores that are simple or sparsely branched. The conidiophores are borne singly or in sparse clusters and are upright or almost upright. The 8–10 μm long conidia are arranged in short chains or singly on the ends of the conidiophores. The conidia are generally simple and ovoid. Mycetophagites presents the oldest evidence of fungal parasitism by other fungi in the fossil record. The fossil displays a complex interrelationship between three different fungal genera. The preserved Palaeoagaracites antiquus cap is host to both the mycoparasitic fungus and a hypermycoparasitic fungus. The surface of the gilled fungus Palaeoagaracites specimen hosts the Mycetophagites atrebora mycelia. The mycelia of Mycetophagites are found across the surface of the P. antiquus pileus, and the hyphae penetrate into the P. antiquus tissues themselves, forming necrotic areas. Mycetophagites is in turn host to a hypermycoparasitic necrotrophic fungus species, Entropezites patricii. Hyphae of Entropezites are preserved penetrating the Mycetophagites hyphae, forming areas of decomposing tissues. Entropezites also displays a range of growth stages for probable zygospores.
The combined distinguishable characters of Mycetophagites were not enough for Poinar and Buckley to place the genus further than Hypocreales incertae sedis. Where the hyphae of Mycetophagites penetrate into the Palaeoagaracites cap, distinct areas of cell lysis appear to be present. The necrotrophic nature of the interaction is similar to the modern genus Sepedonium. However, the details of the conidia and mycelium are distinct from those found in Sepedonium.
References
†Palaeoagaracites
Parasitic fungi
Prehistoric fungi
Fossil taxa described in 2007
Cretaceous fungi
Natural history of Myanmar
Burmese amber
Fossils of Myanmar
Taxa named by George Poinar Jr. | Mycetophagites | Biology | 1,055 |
37,869,238 | https://en.wikipedia.org/wiki/C24H18O12 | {{DISPLAYTITLE:C24H18O12}}
The molecular formula C24H18O12 (molar mass: 498.39 g/mol, exact mass: 498.07982598 u) may refer to:
Tetrafucol A, a fucol-type phlorotannin
Tetraphlorethol C, a phlorethol-type phlorotannin
Molecular formulas | C24H18O12 | Physics,Chemistry | 95 |
66,980,580 | https://en.wikipedia.org/wiki/National%20Security%20Commission%20on%20Artificial%20Intelligence | The National Security Commission on Artificial Intelligence (NSCAI) was an independent commission of the United States of America established in 2018 to make recommendations to the President and Congress to "advance the development of artificial intelligence, machine learning, and associated technologies to comprehensively address the national security and defense needs of the United States". The commission's 15 members were nominated by Congress.
The NSCAI was dissolved on 1 October 2021.
History and reporting
The NSCAI began working in March 2019, and by November 2019 it had received more than 200 classified and unclassified briefings to help with the creation of its final report due in 2021. On 4 November 2019, the NSCAI shared its interim report with Congress, in which it explained the 27 initial judgements on which it would base its ongoing work.
In the interim report the commission also agreed on seven principles:
Global leadership in AI technology is a national security priority
AI adoption is an urgent imperative for national security
Government officials and private sector leaders must create a shared sense of responsibility for the American people's security
The United States needs to cultivate domestic AI talent and use it to attract the world’s best minds
Actions taken to protect America’s AI leadership against foreign threats need to follow the principles of free enterprise, free inquiry and free flow of ideas.
The technical limitations of AI are widely recognized; however, a strong desire remains for powerful, dependable, and secure AI systems.
AI used by the United States must follow American values, including the rule of law
Fundamental areas of effort for the preservation of U.S. advantages were also agreed upon in the interim report of 2019.
The NSCAI released its first report of recommendations in March 2020, most of which were included in the 2021 National Defense Authorization Act. In July 2020, the commission published the second report to Congress. It identified 35 actions for both Executive and Legislative branches, which were focused on six fundamental areas. This report was available to the public. In January 2021, a draft of the final report was presented at a panel led by Schmidt. The report recommended the US to use AI technology for military use and development.
It issued its final report in March 2021, saying that the U.S. is not sufficiently prepared to defend or compete against China in the AI era. It was broken up into two parts, the first titled “Defending America in the AI Era” and the second “Winning the Technology Competition”. The report discussed China’s efforts and investments in AI integration and warned that China could very well take the lead in AI in the next few years. It further suggested concentrating on AI across government activities and implementing it in US national security on multiple levels, as well as focusing on bringing in new talent to develop AI and introducing it to the workforce on both civilian and military levels. Another recommendation of the NSCAI report was to develop and provide China and Russia with alternative models that are based on norms and democratic values. The final report also included a proposed $40 billion budget for government spending. On 14 April 2021, NSCAI executive director Ylli Bajraktari and director of Research and Analysis Justin Lynch participated in an event held by the Center for Security and Emerging Technology (CSET) to discuss the final report findings.
In October 2021, NSCAI chair Eric Schmidt founded the bipartisan, non-profit Special Competitive Studies Project (SCSP) through his family-led non-profit Eric & Wendy Schmidt Fund for Strategic Innovation, in order to carry on the NSCAI’s efforts and expand beyond national security.
The Foundation for Defense of Democracies held an event in June 2023, called “Thinking Forward After the NSCAI and CSC: A Discussion on AI and Cyber Policy”, with former members of NSCAI on the moderation panel, including Eric Schmidt and Ylli Bajraktari.
Members
Here is a list of members from the National Security Commission on Artificial Intelligence:
Eric Schmidt (chair), former CEO of Google
Robert Work (Vice Chair), former Deputy Secretary of Defense
Mignon Clyburn, former Commissioner of the Federal Communications Commission
Chris Darby, CEO of In-Q-Tel
Kenneth M. Ford, CEO of the Florida Institute for Human and Machine Cognition
Jose-Marie Griffiths, President of Dakota State University
Eric Horvitz, Technical Fellow at Microsoft
Katrina G. McFarland, former Assistant Secretary of Defense for Acquisition
Jason Matheny, Director of the Center for Security and Emerging Technology at Georgetown University
Gilman Louie, partner at Alsop Louie Partners
William Mark, vice president at SRI International
Andy Jassy, CEO of Amazon Web Services (AWS)
Safra Catz, CEO of Oracle
Steve Chien, Technical Fellow at Jet Propulsion Laboratory (JPL)
Andrew Moore, Google/Alphabet
Recommendations
The report's recommendations include:
dramatically increasing non-defense federal spending on AI research and development, doubling every year from $2 billion in 2022 to $32 billion in 2026. That would bring it up to a level similar to spending on biomedical research
a dramatic increase in undergraduate scholarship and graduate studies fellowships in AI
creation of a Digital Corps to bring skilled tech workers into government
founding of a Digital Service Academy: an accredited university providing subsidized education in exchange for a commitment to work for a time in government
include civil rights and civil liberties reports for new AI systems or major updates to existing systems
expanding allocations of employment-based green cards, and giving them to every AI PhD graduate from an accredited U.S. university
reforming the acquisition management system of the Department of Defense to make it faster and easier to introduce new technologies.
Transparency
In December 2019, a ruling was made under the Freedom of Information Act (FOIA) that the NSCAI must also provide historical documents upon request. The Electronic Privacy Information Center (EPIC) had filed the lawsuit against the NSCAI in September 2019 after being refused information about upcoming meetings and prepared records of the commission under FOIA and the Federal Advisory Committee Act (FACA). The U.S. District Court for the District of Columbia ruled in June 2020 that the NSCAI must comply with FACA and therefore hold open meetings and provide records to the public; that lawsuit, too, was filed by EPIC.
References
External links
United States national commissions
Scientific organizations established in 2018
Regulation of artificial intelligence
Political organizations established in 2018 | National Security Commission on Artificial Intelligence | Technology | 1,281 |
21,727,808 | https://en.wikipedia.org/wiki/TeLQAS | TeLQAS (Telecommunication Literature Question Answering System) is an experimental question answering system developed for answering English questions in the telecommunications domain.
Architecture
TeLQAS includes three main subsystems: an online subsystem, an offline subsystem, and an ontology. The online subsystem answers questions submitted by users in real time. During the online process, TeLQAS processes the question using a natural language processing component that implements part-of-speech tagging and simple syntactic parsing. The online subsystem also utilizes an inference engine in order to carry out necessary inference on small elements of knowledge. The offline subsystem automatically indexes documents collected by a focused web crawler from the web. An ontology server along with its API is used for knowledge representation. The main concepts and classes of the ontology are created by domain experts. Some of these classes, however, can be instantiated automatically by the offline components.
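TeLQAS's actual implementation is not published; purely as an illustrative sketch of the part-of-speech tagging step, the NLTK library (an assumption, not the system's documented toolkit) could be used as follows:

```python
# Illustrative sketch only: POS tagging of a telecommunications question
# with NLTK. TeLQAS's real NLP component is not publicly specified.
import nltk
nltk.download("punkt", quiet=True)
nltk.download("averaged_perceptron_tagger", quiet=True)

question = "What modulation scheme does GSM use?"
tokens = nltk.word_tokenize(question)
print(nltk.pos_tag(tokens))
# e.g. [('What', 'WDT'), ('modulation', 'NN'), ('scheme', 'NN'), ...]
```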
References
Computational linguistics
Information retrieval systems
Natural language processing software | TeLQAS | Technology | 210 |
230,944 | https://en.wikipedia.org/wiki/Unsprung%20mass | The unsprung mass (colloquially unsprung weight) of a vehicle is the mass of the suspension, wheels or tracks (as applicable), and other components directly connected to them. This contrasts with the sprung mass (or weight) supported by the suspension, which includes the body and other components within or attached to it. Components of the unsprung mass include the wheel axles, wheel bearings, wheel hubs, tires, and a portion of the weight of driveshafts, springs, shock absorbers, and suspension links. Brakes that are mounted inboard (i.e. as on the drive shaft, and not part of the wheel or its hub) are part of a vehicle's sprung mass.
Effects
The unsprung mass of a typical wheel/tire combination represents a trade-off between the pair's bump-absorbing/road-tracking ability and vibration isolation. Bumps and surface imperfections in the road cause tire compression, inducing a force on the unsprung mass. The unsprung mass then reacts to this force with movement of its own. For bumps of small duration and amplitude, the motion amplitude is inversely proportional to the mass. A lighter wheel which readily rebounds from road bumps will have more grip, and more constant grip, when tracking over an imperfect road. For this reason, lighter wheels are sought especially for high-performance applications. However, the lighter wheel will soak up less vibration; the irregularities of the road surface will transfer to the cabin through the suspension, and hence ride quality and road noise are worse. For bumps of longer duration that the wheels follow, greater unsprung mass causes more energy to be absorbed by the wheels and makes the ride worse.
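A toy calculation (all numbers assumed purely for illustration) shows this inverse relationship: a short, sharp bump delivers roughly the same impulse regardless of the wheel's mass, so the velocity change imparted to the unsprung mass scales as 1/m:

```python
# Toy illustration with assumed numbers: a short bump delivers the same
# impulse J to the unsprung mass, so the induced velocity change
# dv = J / m is inversely proportional to that mass.
impulse = 30.0                 # N*s from a sharp bump (illustrative)
for m in (30.0, 60.0):         # light vs heavy wheel assembly, kg
    print(f"{m:.0f} kg unsprung mass -> dv = {impulse / m:.2f} m/s")
# The lighter assembly is disturbed more quickly but, with the same tire
# spring, also rebounds and re-seats on the road sooner.
```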
Pneumatic or elastic tires help by restoring some spring to the (otherwise) unsprung mass, but the damping possible from tire flexibility is limited by considerations of fuel economy and overheating. The shock absorbers, if any, also damp the spring motion and must be less stiff than would optimally damp the wheel bounce. So the wheels still vibrate after each bump before coming to rest. On dirt roads and on some softly paved roads, the induced motion generates small bumps, known as corrugations, washboarding or "corduroy" because they resemble smaller versions of the bumps in roads made of logs. These cause sustained wheel bounce in subsequent axles, enlarging the bumps.
High unsprung mass also exacerbates wheel control issues under hard acceleration or braking. If the vehicle does not have adequate wheel location in the vertical plane (such as a rear-wheel drive car with Hotchkiss drive, a live axle supported by simple leaf springs), vertical forces exerted by acceleration or hard braking combined with high unsprung mass can lead to severe wheel hop, compromising traction and steering control.
A beneficial effect of unsprung mass is that high frequency road irregularities, such as the gravel in an asphalt or concrete road surface, are isolated from the body more completely because the tires and springs act as separate filter stages, with the unsprung mass tending to uncouple them.
Likewise, sound and vibration isolation is improved (at the expense of handling), in production automobiles, by the use of rubber bushings between the frame and suspension, by any flexibility in the frame or body work, and by the flexibility of the seats.
Unsprung mass and vehicle design
Unsprung mass is a consideration in the design of a vehicle's suspension and the materials chosen for its components. Beam axle suspensions, in which wheels on opposite sides are connected as a rigid unit, generally have greater unsprung mass than independent suspension systems, in which the wheels are suspended and allowed to move separately. Heavy components such as the differential can be made part of the sprung mass by connecting them directly to the body (as in a de Dion tube rear suspension). Lightweight materials, such as aluminium, plastic, carbon fiber, and/or hollow components can provide further weight reductions at the expense of greater cost and/or fragility.
The term "unsprung mass" was coined by the mathematician Albert Healey of the Dunlop tyre company. He presented one of the first lectures taking a rigid analytical approach to suspension design, "The Tyre as a part of the Suspension System", to the Institution of Automobile Engineers in November 1924. This lecture was published as a 100-page paper.
Inboard brakes can significantly reduce unsprung mass, but put more load on half axles and (constant velocity) universal joints, and require space that may not be easily accommodated. If located next to a differential or transaxle, waste heat from the brakes may overheat the differential or vice versa, particularly in hard use, such as racing. They also make anti-dive suspension characteristics harder to achieve because the moment created by braking does not act on the suspension arms.
The Chapman strut used the driveshafts as suspension arms, thus requiring only the weight of one component rather than two. Jaguar independent rear suspension (IRS) similarly reduced unsprung mass by replacing the upper wishbone arms of the suspension with the drive shafts, as well as mounting the brakes inboard in some versions.
Scooter-type motorcycles use an integrated engine-gearbox-final drive system that pivots as part of the rear suspension and hence is partly unsprung. This arrangement is linked to the use of quite small wheels, further affecting their poor reputation for road-holding.
See also
Sprung mass
Notes
External links
Mass | Unsprung mass | Physics,Mathematics | 1,142 |
63,686,596 | https://en.wikipedia.org/wiki/Pipe%20plug | A pipe plug is a tool or material for the temporary sealing of pipelines in sewerage and other liquid and gas transportation systems; typically for maintenance or non-pressurized line testing. A pipe plug is also known as an inflatable plug, mechanical pipe plug, pipe test plug, pipeline isolation plug, expandable plug, pipe bung, pipe stopper, pipe packer, pneumatic pipe plug or pipe balloon depending on the region where it is used.
History
The origin is debated, but the earliest patents related to plugging pipes date back to the 1890s. The first patent for a pipe plug as we know it today, by Oscar F. Anderson, was published in 1952, and the first patent for inflatable plugs was published in 1965.
Usage
Pipe plugs are often confused with relatively small plumbing accessories. However, as an industrial tool, pipe plugs are used in larger infrastructure pipelines. Pipe plugs provide a trenchless method for the maintenance of drains and sewers, and for the construction and testing of non-pressurized gravity pipelines.
There are three main purposes of pipe plugs. These are temporary sealing or stopping the fluid flow in a pipeline, leak testing and by-passing the flow. They are also used for blocking the ends of pipes to prevent the entry of dirt and other contaminants during construction, maintenance or repair of pipelines.
Leak tests of gravity pipelines using pipe plugs are performed in accordance with the requirements of the European Standard EN 1610 for both water and air tests.
Inflatable pipe plugs come in a wide variety of types, each for a different purpose:
Pipe plug
Pipe test plug
Conical plug
Pipe packer
High pressure pipe plug
Oil and gas pipe plug
Steam process plug
Pipe joint tester
Back pressure
Back pressure is a major issue for the users of pipe plugs on site. It refers to the force that a pipe plug must hold during the process. Pipe plugs are usually subject to huge amounts of back pressure in the pipeline, so the back pressure must be calculated accurately in order to prevent the pipe plug from slipping inside the pipe. Slipping of the pipe plug may lead to hazardous results. Though mechanical and inflatable pipe plugs can rely upon their seals for restraint, secondary mechanical restraint is typically required to prevent slippage – in the form of friction screw dogs that engage the pipe, strong back supports, anchors, or other user-added blocking methods.
Formula of back pressure calculation
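The formula itself is not reproduced in the source text. As a hedged sketch of the usual calculation, the axial force the plug must resist is taken to be the line pressure acting over the pipe's cross-sectional area, F = P × πD²/4:

```python
import math

# Hedged sketch of the usual back-pressure calculation: the axial force on
# the plug is the line pressure times the pipe cross-section,
# F = P * pi * D^2 / 4. The values below are illustrative only.
def back_pressure_force(pressure_pa, diameter_m):
    return pressure_pa * math.pi * diameter_m ** 2 / 4.0

print(back_pressure_force(0.5e5, 0.6))  # 0.5 bar in a 600 mm pipe -> ~14.1 kN
```

This is why even modest line pressures in large-diameter pipes produce forces that demand secondary mechanical restraint.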
Accessories
Pipe plugs are used with supplementary accessories such as air and water hoses, air and pressure control devices, gauges, adapters and chains depending on the type of the pipe plug and the process.
Auxiliary equipment must also be used, such as compressors for inflating the pipe plugs, water tanks for filling the pipeline, and pumps in some cases.
Maintenance
For a longer life cycle, pipe plugs should be cleaned with soap and water before and after each use. Chemical solvents, hydrocarbons, petroleum fluids or other aggressive substances should not be used for cleaning, since they may damage or destroy the rubber of the pipe plug. After cleaning, pipe plugs should be flushed with clean water and left to dry at room temperature before being used in pipelines.
Storage conditions are specified by the ISO 2230 standard. Pipe plugs are to be stored in a dry space at 15–25 °C, away from direct sunlight and circulating air. Long-term contact with liquids, metals and other rubber materials should be avoided.
References
External links
BS EN 1610:2015 Construction and testing of drains and sewers
Oscar F Anderson
Pneumatic
Plumbing | Pipe plug | Engineering | 730 |
2,976,291 | https://en.wikipedia.org/wiki/M1%20mortar | The M1 mortar is an American 81 millimeter caliber mortar. It was based on the French Brandt mortar. The M1 mortar was used from before World War II until the 1950s when it was replaced by the lighter and longer ranged M29 mortar.
General data
Weight:
Tube 44.5 lb (20 kg)
Mount 46.5 lb (21 kg)
Base plate 45 lb (20 kg)
Total 136 lb (61.7 kg)
Ammunition
M43A1 light HE: 7.05 lb (3.20 kg); HE filling 1.22 lb (0.55 kg); range min 100 yd (91 m); range max 3300 yd (zone 7, 3018 m); 80% frag radius 25 yd (23 m) (compared favorably with the 75 mm howitzer). M52 superquick fuze (explode on surface).
M43A1 light training: an empty version of the M43A1 light HE with an inert fuze. It was used as a training shell until it was replaced by the M68 training practice shell.
M45 heavy HE: 15.10 lb (6.85 kg); HE filling 4.48 lb (2.03 kg); range max 1275 yd (zone 5, 1166 m); bursting radius comparable to the 105 mm howitzer. Equipped with M45 (super quick/delay action selective) or M53 (delay action only) P.D. fuze.
M56 heavy HE: 10.77 lb (4.86 kg); HE filling 4.31 lb (1.96 kg); range max 2655 yd (zone 5, 2428 m); the standard shell for issue and manufacture, replacing the M45. It used the M53 fuze in 1944, but this was at some point replaced by the M77 time and super quick (TSQ) fuze.
M57 WP (white phosphorus) "bursting smoke": 10.74 lb (4.87 kg); range max 2470 yd (2260 m); designed to lay down screening smoke, but had definite anti-personnel and incendiary applications.
M57 FS (a solution of sulfur trioxide in chlorosulfonic acid) chemical smoke: 10.74 lb (4.87 kg), range max 2470 yd (2260 m); laid down dense white fog consisting of small droplets of hydrochloric and sulfuric acids. In moderate concentrations, it is highly irritating to the eyes, nose, and skin.
M68 training practice: 9.50 lb to 10.10 lb. An inert teardrop-shaped cast iron shell without provision for a fuze well that was used to simulate the M43 light HE shell. The casing on early models was painted black but post-World War 2 versions are painted blue. It came in 9 different weights (engraved on the shell) to allow it to simulate shell firing with and without booster charges. Weight zone one (9.5 lbs.) simulated a shell with the maximum of 8 booster charges and weight zone nine (10.10 lbs.) simulated the shell being fired without booster charges.
M301 illuminating shell: range max 2200 yd (2012 m); attached to parachute; burned brightly (275,000 candelas) for about 60 seconds, illuminating an area of about 150 yards (137 m) diameter. It used the M84 time fuze, which was adjustable from 5 to 25 seconds before priming charge detonated, releasing the illuminator and chute.
Fuzes
The M1 mortar's shells sometimes used the same fuzes as the shells for the M2 60 mm mortar. An adapter collar was added to the smaller fuzes to allow them to fit the larger shells.
M43 mechanical timing (MT) fuze: clockwork timed delay fuze. Models M43A5.
M45 point detonating (PD) fuze: selective fuze that could be set for time delay or super-quick (less than a second) detonation on impact. Replaced by the M52 and M53 fuzes.
M48 point detonating (PD) fuze: selective powder train burning fuze that can be set to super quick or delay ignition on impact. The factory pre-set delay time was stamped on the shell body. If the super-quick flash ignition failed, the delay fuse kicked in. If set on delay, the super-quick flash igniter mechanism was immobilized to prevent premature ignition. Models: M48, M48A1, M48A2 (either 0.05 or 0.15 second Delay), & M48A3 (0.05 second delay).
M51 point detonating (PD) fuze: selective powder train burning fuze that can be set to super quick or delay ignition after impact. It is a modification of the M48 fuze with the addition of a booster charge. Models: M51A4, M51A5 (M48A3 Fuze with M21A4 booster).
M52 point detonating super-quick (PDSQ) fuze: super-quick fuze that activates less than a second after impact. The pre-war M52 was made of aluminum, the M52B1 model was made of Bakelite, and the M52B2 model had a Bakelite body and an aluminum head; the suffix would be added to the shell designation.
M53 point detonating delay (PDD) fuze: delay fuze that activates after impact.
M54 time and super-quick (TSQ) fuze: powder train burning fuze that can be set for time delay (slow burn) or super-quick (flash ignition) detonation on impact.
M77 time and super quick (TSQ) fuze: powder train burning fuze that can be set for time delay (slow burn) or super-quick (flash ignition) detonation on impact.
M78 concrete penetrating (CP) fuze: delay fuze that was set off after the shell had impacted and buried itself to increase the damage done.
M84 mechanical timing (MT) fuze: clockwork fuze that can be set from 0 to 25 seconds in 1-second intervals; seconds were indicated by vertical lines and 5-second intervals were indicated by metal bosses to allow it to be set in low-light or night-time conditions.
M84A1 mechanical timing (MT) fuze: clockwork fuze that can be set from 0 to 50 seconds in 2-second intervals.
Users
It may be found in nearly all the non-Communist countries, including:
: used on M21 mortar motor carriage
: made under license
:M-43
: The Armed Forces was equipped with 386 M1s before the Korean War, and 822 were in service with the Army by the end of the war. Began replacing with M29A1 or KM29A1 in 1970s.
See also
M2 Mortar
List of U.S. Army weapons by supply catalog designation SNL A-33
M3 Half-track
Weapons of comparable role, performance and era
Ordnance ML 3 inch Mortar British equivalent
8 cm Granatwerfer 34 German equivalent
References
FM 23-90
TM 9-1260
SNL A-33
External links
90th Infantry Division Preservation Group - page on 81 mm mortars and equipment
Popular Science, August 1943, Pill Boxes Destroyer article on M1 81mm mortar
Infantry mortars
World War II infantry weapons of the United States
World War II mortars
Mortars of the United States
Chemical weapons of the United States
Chemical weapon delivery systems
81mm mortars
Military equipment introduced in the 1930s | M1 mortar | Chemistry | 1,573 |
43,974,356 | https://en.wikipedia.org/wiki/Manual%20of%20the%20Higher%20Plants%20of%20Oregon | Manual of the Higher Plants of Oregon was an early flora of plants of Oregon written by Morton Peck.
It was praised for its format, portability, phylogenetic keys, its recording of new species, and the completeness of both its descriptions of plants and of physiographic provinces. The second edition (1961) includes many changes to the first edition (1941) which, according to Peck, is "now decidedly out of date."
See also
Flora of Oregon
Flora (publication)
References
Florae (publication)
Botany in North America | Manual of the Higher Plants of Oregon | Biology | 106 |
1,561,771 | https://en.wikipedia.org/wiki/79%20Ceti | 79 Ceti, also known as HD 16141, is a binary star system located 123 light-years from the Sun in the southern constellation of Cetus. It has an apparent visual magnitude of +6.83, which puts it below the normal limit for visibility with the average naked eye. The star is drifting closer to the Earth with a heliocentric radial velocity of −51 km/s.
Harlan (1974) assigned this star a stellar classification of G2V, matching an ordinary G-type main-sequence star that is undergoing core hydrogen fusion. However, Houk and Swift (1999) found a class of G8IV, which suggests it has exhausted the supply of hydrogen at its core and begun to evolve off the main sequence. Eventually the outer layers of the star will expand and cool, and the star will become a red giant. Estimates of the star's age range from 6.0 to 9.4 billion years. It has an estimated 1.06 times the mass of the Sun and 1.48 times the Sun's radius. The star is radiating twice the luminosity of the Sun from its photosphere at an effective temperature of 5,806 K. The classification discrepancy was later found to be due to an additional red dwarf star in the system, at a projected separation of 220 AU.
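The quoted parameters can be cross-checked against the Stefan–Boltzmann law, L = 4πR²σT⁴; the sketch below, using the radius and temperature given above, recovers the stated luminosity:

```python
import math

# Hedged cross-check of the quoted parameters via the Stefan-Boltzmann law,
# L = 4*pi*R^2*sigma*T^4, with R = 1.48 R_sun and T_eff = 5806 K.
SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
R_SUN = 6.957e8    # solar radius, m
L_SUN = 3.828e26   # solar luminosity, W

r = 1.48 * R_SUN
lum = 4 * math.pi * r ** 2 * SIGMA * 5806 ** 4
print(lum / L_SUN)  # ~2.2, consistent with "twice the Sun's luminosity"
```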
Planetary system
On March 29, 2000, a planet orbiting the primary star was announced; it was discovered using the radial velocity method. This object has a minimum of 0.26 times the mass of Jupiter and orbits its host star every 75.5 days.
See also
81 Ceti
94 Ceti
Lists of exoplanets
References
External links
SIMBAD: HD 16141 -- High proper-motion Star
SolStation: 79 Ceti
Extrasolar Planets Encyclopaedia: HD 16141
G-type subgiants
G-type main-sequence stars
Planetary systems with one confirmed planet
Cetus
Durchmusterung objects
Ceti, 79
9085
016141
012048
J02351994-0333376 | 79 Ceti | Astronomy | 422 |
5,065,696 | https://en.wikipedia.org/wiki/HD%2084810 | HD 84810, also known as l Carinae (l Car), is a star in the southern constellation of Carina. Its apparent magnitude varies from about 3.4 to 4.1, making it readily visible to the naked eye and one of the brightest members of Carina. Based upon parallax measurements, it is approximately from Earth.
From the characteristics of its spectrum, l Carinae has a stellar classification of G5 Iab/Ib. This indicates the star has reached a stage in its evolution where it has expanded to become a supergiant with 169 times the radius of the Sun. As this is a massive star with 8.7 times the mass of the Sun, it rapidly burns through its supply of nuclear fuel and has become a supergiant in roughly , after spending as a main sequence star.
l Carinae is classified as a Cepheid variable star and its brightness varies over an amplitude range of 0.725 in magnitude with a long period of 35.560 days. This unusually long period makes it essential for calibrating the period-luminosity relation for Cepheid variables; additionally, it is one of the nearest Cepheid variables, making it relatively easy to observe. The radial velocity of the star likewise varies by 39 km/s during each pulsation cycle. Its radius varies by about as it pulsates, reaching maximum size as its brightness is decreasing towards minimum.
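A toy Baade–Wesselink-style estimate of the radius variation can be made from the numbers above, assuming (purely for illustration) a sinusoidal radial-velocity curve and a projection factor p of about 1.3; integrating the pulsation velocity over half a cycle gives a peak-to-peak radius change of ΔR = pKP/π, with K the radial-velocity semi-amplitude:

```python
import math

# Toy Baade-Wesselink estimate (illustrative assumptions: sinusoidal RV
# curve, projection factor p ~ 1.3). Peak-to-peak radius change is
# delta_R = p * K * P / pi, with K the RV semi-amplitude.
P = 35.560 * 86400   # pulsation period, s
K = 39e3 / 2         # RV semi-amplitude from the 39 km/s range, m/s
p = 1.3              # assumed projection factor
R_SUN = 6.957e8      # solar radius, m

delta_r = p * K * P / math.pi
print(delta_r / R_SUN)  # ~36 solar radii, of order 20% of the mean radius
```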
It has a compact circumstellar envelope that can be discerned using interferometry. The envelope has been resolved at an infrared wavelength of 10μm, showing a radius of 10–100 AU at a mean temperature of 100 K. The material for this envelope was supplied by mass ejected from the central star.
The period of l Carinae is calculated to be slowly increasing and it is thought to be crossing the instability strip for the third time, cooling as it evolves towards a red supergiant after a blue loop.
References
Carinae, l
Carina (constellation)
Classical Cepheid variables
084810
G-type supergiants
3884
047854
Durchmusterung objects
F-type supergiants
K-type supergiants | HD 84810 | Astronomy | 456 |
7,666,034 | https://en.wikipedia.org/wiki/Prosorba%20column | The Prosorba Column is a plasma filtering device used to treat severe cases of rheumatoid arthritis or psoriatic arthritis. Its active element is Protein A bonded to a diatomaceous earth/clay bead. The effect of the Protein A is to remove circulating immune complexes responsible for the autoimmune joint deterioration process.
The device was originally manufactured by Imre Corp and approved by the FDA in 1987. The Prosorba Column went out of production at the end of 2006.
References
External links
http://arthritis.about.com/od/prosorba/a/prosorbafda.htm
Medical equipment | Prosorba column | Biology | 135 |
3,792,917 | https://en.wikipedia.org/wiki/Philosophy%20of%20technology | The philosophy of technology is a sub-field of philosophy that studies the nature of technology and its social effects.
Philosophical discussion of questions relating to technology (or its Greek ancestor techne) dates back to the very dawn of Western philosophy. The phrase "philosophy of technology" was first used in the late 19th century by German-born philosopher and geographer Ernst Kapp, who published a book titled Elements of a Philosophy of Technology (German title: Grundlinien einer Philosophie der Technik).
History
Greek philosophy
The western term 'technology' comes from the Greek term techne (τέχνη) (art, or craft knowledge), and philosophical views on technology can be traced to the very roots of Western philosophy. A common theme in the Greek view of techne is that it arises as an imitation of nature (for example, weaving developed out of watching spiders). Greek philosophers such as Heraclitus and Democritus endorsed this view. In his Physics, Aristotle agreed that this imitation was often the case, but also argued that techne can go beyond nature and complete "what nature cannot bring to a finish." Aristotle also argued that nature (physis) and techne are ontologically distinct, because natural things have an inner principle of generation and motion, as well as an inner teleological final cause, whereas techne is shaped by an outside cause and directed toward an outside telos (goal or end). Natural things strive for some end and reproduce themselves, while techne does not. In Plato's Timaeus, the world is depicted as being the work of a divine craftsman (Demiurge) who created the world in accordance with eternal forms as an artisan makes things using blueprints. Moreover, Plato argues in the Laws that what a craftsman does is imitate this divine craftsman.
Middle ages to 19th century
During the period of the Roman empire and late antiquity, authors produced practical works such as Vitruvius' De architectura (1st century BC) and Agricola's De Re Metallica (1556). Medieval Scholastic philosophy generally upheld the traditional view of technology as imitation of nature. During the Renaissance, Francis Bacon became one of the first modern authors to reflect on the impact of technology on society. In his utopian work New Atlantis (1627), Bacon put forth an optimistic worldview in which a fictional institution (Salomon's House) uses natural philosophy and technology to extend man's power over nature – for the betterment of society, through works which improve living conditions. The goal of this fictional foundation is "...the knowledge of causes, and secret motions of things; and the enlarging of the bounds of human empire, to the effecting of all things possible".
19th century
The German-born philosopher and geographer Ernst Kapp, who was based in Texas, published the foundational book "Grundlinien einer Philosophie der Technik" in 1877. Kapp was deeply inspired by the philosophy of Hegel and regarded technique as a projection of human organs. In the European context, Kapp is referred to as the founder of the philosophy of technology.
Another, more materialistic position on technology which became very influential in the 20th-century philosophy of technology was centered on the ideas of Benjamin Franklin and Karl Marx.
20th century to present
Five early and prominent 20th-century philosophers to directly address the effects of modern technology on humanity include John Dewey, Martin Heidegger, Herbert Marcuse, Günther Anders and Hannah Arendt. They all saw technology as central to modern life, although Heidegger, Anders, Arendt and Marcuse were more ambivalent and critical than Dewey. The problem for Heidegger was the hidden nature of technology's essence, Gestell or Enframing which posed for humans what he called its greatest danger and thus its greatest possibility. Heidegger's major work on technology is found in The Question Concerning Technology.
Technological determinists such as Jacques Ellul have argued that modern technology constitutes a unified, monolithic and deterministic force, and that the notion of technology being simply a tool is a serious error. Ellul views the modern technological world-system as being motivated by the needs of its own efficiency and power, not the welfare of the human race or the integrity of the biosphere.
While a number of important individual works were published in the second half of the twentieth century, Paul Durbin has identified two books published at the turn of the century as marking the development of the philosophy of technology as an academic subdiscipline with canonical texts. Those were Technology and the Good Life (2000), edited by Eric Higgs, Andrew Light, and David Strong, and American Philosophy of Technology (2001) by Hans Achterhuis. Several collected volumes with topics in philosophy of technology have come out over the past decade, and the journals Techné: Research in Philosophy and Technology (the journal of the Society for Philosophy and Technology, published by the Philosophy Documentation Center) and Philosophy & Technology (Springer) publish exclusively works in philosophy of technology. Philosophers of technology reflect broadly on and work in the area, with interests in topics as diverse as geoengineering, internet data and privacy, our understandings of internet cats, technological function and the epistemology of technology, computer ethics, biotechnology and its implications, transcendence in space, and technological ethics more broadly.
Bernard Stiegler argued in his Technics and Time, as well as in his other works, that the question of technology has been repressed (in the sense of Freud) by the history of philosophy. Instead, Stiegler showed how the question of technology constitutes the fundamental question of philosophy. Stiegler shows, for example in Plato's Meno, that technology is that which makes anamnesis, namely the access to truth, possible. Stiegler's deconstruction of the history of philosophy through technology as the supplement opens a different path to understand the place of technology in philosophy than the established field of philosophy of technology. In the same vein, philosophers – such as Alexander Galloway, Eugene Thacker, and McKenzie Wark in their book Excommunication – argue that advances in and the pervasiveness of digital technologies transform the philosophy of technology into a new 'first philosophy'. Citing examples such as the analysis of writing and speech in Plato's dialogue The Phaedrus, Galloway et al. suggest that instead of considering technology as secondary to ontology, technology be understood as prior to the very possibility of philosophy: "Does everything that exists, exist to be presented and represented, to be mediated and remediated, to be communicated and translated? There are mediative situations in which heresy, exile, or banishment carry the day, not repetition, communion, or integration. There are certain kinds of messages that state 'there will be no more messages'. Hence for every communication there is a correlative excommunication."
There has been additional reflection focusing on the philosophy of engineering, as a sub-field within philosophy of technology. Ibo van de Poel and David E. Goldberg edited a volume, Philosophy and Engineering: An Emerging Agenda (2010) which contains a number of research articles focused on design, epistemology, ontology and ethics in engineering.
Technology and neutrality
Technological determinism is the idea that "features of technology [determine] its use and the role of a progressive society was to adapt to [and benefit from] technological change." The alternative perspective would be social determinism, which holds society responsible for the "development and deployment" of technologies. Lelia Green used gun massacres such as the Port Arthur massacre and the Dunblane massacre to illustrate both technological determinism and social determinism. According to Green, a technology can be thought of as a neutral entity only when the sociocultural context and the issues surrounding the specific technology are removed. Only then does the relationship between social groups and the power conferred through the possession of technologies become visible. A compatibilist position between these two is the interactional stance on technology proposed by Batya Friedman, which states that social forces and technology co-construct and co-vary with one another.
References
External links
Journals
Philosophy & Technology
Ethics and Information Technology
Techné: Research in Philosophy and Technology
International Journal of Technoethics
Technology in Society
Science and Engineering Ethics
Websites
Institute of Philosophy and Technology
Society for Philosophy and Technology
Essays on the Philosophy of Technology compiled by Frank Edler
Filozofia techniki: problematyka, nurty, trudności Rafal Lizut
Study programmes
MA programme Philosophy of Science, Technology, and Society at the University of Twente in the Netherlands
Science and technology studies
Media studies | Philosophy of technology | Technology | 1,798 |
2,620,525 | https://en.wikipedia.org/wiki/Brocard%27s%20conjecture | In number theory, Brocard's conjecture is the conjecture that there are at least four prime numbers between (pn)2 and (pn+1)2, where pn is the nth prime number, for every n ≥ 2. The conjecture is named after Henri Brocard. It is widely believed that this conjecture is true. However, it remains unproven as of 2024.
The numbers of primes between consecutive prime squares are 2, 5, 6, 15, 9, 22, 11, 27, ... .
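These counts are easy to reproduce. A minimal sketch, assuming plain trial division is fast enough for the first few primes:

```python
# Reproduce the counts of primes between consecutive prime squares:
# 2, 5, 6, 15, 9, 22, 11, 27, ...
def is_prime(n):
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def primes():
    n = 2
    while True:
        if is_prime(n):
            yield n
        n += 1

gen = primes()
p = next(gen)
for _ in range(8):
    q = next(gen)
    count = sum(1 for m in range(p * p + 1, q * q) if is_prime(m))
    print(f"{count} primes between {p}^2 and {q}^2")
    p = q
```

Every count after the first pair (which corresponds to n = 1) is at least 4, which is exactly what the conjecture asserts for n ≥ 2.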
Legendre's conjecture that there is a prime between consecutive integer squares directly implies that there are at least two primes between prime squares for pn ≥ 3 since pn+1 − pn ≥ 2.
See also
Prime-counting function
Notes
Conjectures about prime numbers
Unsolved problems in number theory
Squares in number theory | Brocard's conjecture | Mathematics | 179 |
10,282,619 | https://en.wikipedia.org/wiki/List%20of%20electronics%20brands | This list of electronics brands is specialized as the list of brands of companies that provide electronics equipment.
Categories
Electronics equipment includes the following categories (abbreviations used in parentheses):
audio system (AS) (includes home audio)
avionics (AV)
car audio (CA)
car navigation (CN)
copy machine (CM)
computer (CP) (except personal computer (PC))
digital camera (DC)
display device (DD)
digital video camera (DVC)
digital video player (DVP)
digital video recorder (DVR)
fax (FAX)
global positioning system (GPS)
hard disk drive (HDD)
multifunction printer (MFP)
mechatronics (MN)
mobile phone (MP)
list of video game companies (VG/Electronics)
network device (NW)
personal computer (PC)
portable media player (PMP)
printer (PR)
semiconductor (SC)
video cassette recorder (VHS)
video game (VG)
video game developer (VGD)
video game publisher (VGP)
indie game developer (IGD)
transportation electronics system (TES)
television (TV)
wireless devices (WD)
other electronics equipment (OEE)
Other indications:
( )company name
(( ))parent company name
< >previous company name
<< >>company name in local language
Asia
Bangladesh
BMTF
Doel
Jamuna
Walton
Minister
Marcel
Rangs
Transcom
Butterfly Group
Fair Group
China
Aigo (Beijing Huaqi Information Digital Technology Co. Ltd.)
Amoi
BYD Electronic
Changhong
Gionee
Haier
Hasee
Hisense
Huawei
Konka Group
Meizu
Ningbo Bird
Oppo
Panda
Proscenic
Realme
Skyworth
TCL
TP-Link/intex
Vivo Electronics
Zopo Mobile
ZTE
Xiaomi
OnePlus
Hong Kong
Lenovo
India
Amkette
Beetel
Bharat Electronics
BPL
Celkon
Electronics Corporation of India
Godrej
HCL
Havells
IBALL
Intex
Karbonn
Micromax
Moser Baer
Notion Ink
Onida
Surya Roshni Limited
Simmtronics
Sterlite Technologies
Voltas
Videocon
Videotex
Wipro
Indonesia
Axioo
Maspion
Nexian
Zyrex
Iran
Maadiran Group
Snowa
Japan
Allied Telesis – NW, OEE
Alpine – CA, CN
Atari
Brother Industries – CM, CP, FAX, MFP, PR, OEE
Buffalo (Melco) – HDD, NW, OEE
Canon – CM, DC, DVC, FAX, MFP, PR, OEE
Casio – DC, MP, OEE
Clarion – CA, CN
Eclipse (Fujitsu Ten) ((Fujitsu)) – CA, CN, OEE
Eizo (Eizo Nanao Co.) – DD
Epson – CM, FAX, MFP, PR, OEE
Fuji Electric – MN, TES, OEE
Fuji Xerox – CM, MFP, OEE
Fujifilm – DC
Fujitsu – CA, CN, CP, MP, NW, PC, SC, OEE
Funai – DVP, DVR, TV, OEE
Hitachi – CP, HDD, SC, TV, TES
Iiyama – DD
JVC (Victor Company of Japan, Ltd) ((JVC Kenwood Holdings)) – AS, CA, CN, DVC, DVP, DVR
Kenwood ((JVC Kenwood Holdings)) – AS, CA, CN, WD
Konica Minolta – MFP, OEE
Kyocera – SC, OEE
Marantz ((D&M Holdings)) – AS, WD, OEE
Mitsubishi (Mitsubishi Electric) ((Mitsubishi Group)) – DD, DVP, DVR, TES, OEE
NEC – CP, MP, NW, PC, SC
Nikon – DC, DVC
Nintendo – VG
Oki – CP, TES
Olympus – DC, DVC
Orion (Orion Electric Co.) – DVP, DVR, TV
Panasonic – CA, CN, DC, DD, DVC, DVP, DVR, FAX, MP, NW, PC, PMP, SC, TV, WD, OEE
Pentax ((Hoya)) – DC
Pioneer – CA, CN, WD, OEE
Renesas – SC
Ricoh – DC, CP, MFP
Sansui Electric
Sega Corporation – VG
Sharp – DD, DVC, DVP, DVR, FAX, MP, PC, SC, TV
SII (Seiko Instruments Inc.) – OEE
SNK Corporation – VG
Sony – CA, CN, DC, DD, DVC, DVP, DVR, GPS, PC, PMP, SC, TV, VG, WD, OEE
TDK – SC, OEE
Toshiba – CP, DD, DVC, DVP, DVR, PC, SC, TES, TV, OEE
Victor (Victor Company of Japan, Ltd) ((JVC Kenwood Holdings)) – same as JVC
Yaesu (Vertex Standard) – same as Vertex Standard
Currently not providing electronics products
Akai (repair service)
Denon ((D&M Holdings)) (repair service)
Defunct
Aiwa (acquired by Sony)
Sanyo (merged into Panasonic)
National (merged into Panasonic)
Korea, South
Cowon
Daewoo Electronics
Hansol
Iriver
LG
Pantech
Samsung
Hyundai
Malaysia
Pensonic
Philippines
Cherry Mobile
Fukuda Inc.
Starmobile
Pakistan
PEL
Pakistan Aeronautical Complex
QMobile
Dawlance
Wi-Tribe
Singapore
Creative
Taiwan
Acer
AOC (AOC International) ((TPV Technology Limited))
Aopen
Asus
BenQ
D-Link
ECS
Elsa
EPoX
Foxconn
Gigabyte
HTC
Lite-On
MediaTek
MSI (Micro-Star International)
Realtek
Silicon Power
Soyo
Surya
Transcend (Transcend Information)
TSMC
VIA Technologies
Thailand
Samart
True
Turkey
Arçelik
ASELSAN
Beko
Canovate
Geliyoo
Vestel
Europe
Croatia
KONČAR Group
Finland
Nokia
France
Alcatel-Lucent
Thomson Broadcast
Germany
Blaupunkt
Bosch
Braun (company)
Gigaset
Grundig
Loewe AG
Medion
Metz (company)
Miele
Siemens
Sennheiser
Severin Elektro
TechniSat
Telefunken
Wortmann
Hungary
Orion (Orion Electronics Ltd)
Videoton
Italy
Brionvega
Bolva
Brondi
Cinemeccanica
Eurotech (company)
Hidis (owner of Q.Bell and miia brands)
Olivetti S.p.A.
Radio Marconi
Termozeta
Netherlands
Philips
Trust
Norway
Kongsberg Gruppen
Nordic Semiconductor
Russia
Almaz-Antey
Angstrem (company)
General Satellite
MCST
NPO “Digital Television Systems”
Rovercomputers
Sitronics
Sozvezdie
Yota
Slovenia
Gorenje
Sweden
Electrolux
Ericsson
Husqvarna
Paradox Interactive
Switzerland
Revox
Ukraine
EKTA
United Kingdom
Alba
Amstrad
BAE Systems
Binatone
BT
Bush
Cello Electronics
Dyson
EMI
Ferranti
KEF
Marconi
Marshall
Mitchell & Brown
Morphy Richards
Pace
Pure
Pye
Sinclair Research
Russell Hobbs
Texet
Thorn
Uniross
Vax
North America
Canada
Mexico
Alfa
Kyoto (Kyoto Electronics)
Lanix
Mabe
Meebox
Satmex
Zonda (Zonda Telecom)
United States
3M
Alienware
Amazon
AMD
Analog Devices
Apple
Audiovox
Avaya
Averatec
Bose
Cisco Systems
Crucial Technology
Dell
eMachines
Emerson Electric
Emerson Radio
Fitbit
Gateway
Google
Hewlett-Packard
HP
IBM
Intel
JBL
Kingston
Koss
Magnavox
Micron Technology
Microsoft
Motorola Mobility
Nvidia
Packard Bell
Plantronics
Polycom
Qualcomm
RCA
Sandisk
Seagate
SGI
Summit Electric Supply
Sun Microsystems
Sonos
Texas Instruments
Unisonic Products Corporation
Unisys
Vizio
Viewsonic
Western Digital
Westinghouse Electric Corporation
Xerox
Zenith
Oceania
Australia
A.G. Healing
ADInstruments
Amalgamated Wireless (Australasia)
Blackmagic Design
CEA Technologies
Clarinox Technologies Pty Ltd
Codan
Dog & Bone
Dynalite
Fairlight (company)
PowerLab
Q-MAC Electronics
Radio Rentals
Redarc Electronics
Røde Microphones
Telectronics
Vix Technology
Winradio
South America
Argentina
AeroDreams
Cicaré
CITEFA
FAdeA
INVAP
Nostromo
Brazil
Avibras
Embraer
Gradiente
Itautec
Mectron
Positivo Informatica
WEG Industries
Colombia
Indumil
Venezuela
Siragon
VIT
See also
Electronics companies by country (category)
Electronics industry
List of compact disc player manufacturers
List of microphone manufacturers
Market share of personal computer vendors
References
Brands
Electronics | List of electronics brands | Engineering | 1,755 |
25,208,580 | https://en.wikipedia.org/wiki/Solar%20eclipse%20of%20May%2011%2C%202078 | A total solar eclipse will occur at the Moon's ascending node of orbit on Wednesday, May 11, 2078, with a magnitude of 1.0701. A solar eclipse occurs when the Moon passes between Earth and the Sun, thereby totally or partly obscuring the image of the Sun for a viewer on Earth. A total solar eclipse occurs when the Moon's apparent diameter is larger than the Sun's, blocking all direct sunlight, turning day into darkness. Totality occurs in a narrow path across Earth's surface, with the partial solar eclipse visible over a surrounding region thousands of kilometres wide. Because the eclipse will occur about 16 hours after perigee (reached on May 11, 2078, at 2:10 UTC), the Moon's apparent diameter will be larger than average.
The path of totality will be visible from parts of Kiribati, Mexico, Texas, Louisiana, Mississippi, Alabama, the western Florida panhandle, Georgia, South Carolina, North Carolina, and Virginia, in the United States, and the eastern Canary Islands. A partial solar eclipse will also be visible for parts of Oceania, North America, Central America, the Caribbean, northern South America, Western Europe, and Northwest Africa.
Path description
The path of totality will begin over the Pacific Ocean near Caroline Island, Kiribati. From there, it will track northeast towards North America, making landfall on the Mexican coast. In Mexico, totality will be visible in the cities of Manzanillo, Guadalajara, Aguascalientes, Zacatecas, San Luis Potosí, Ciudad Victoria, and Matamoros, Tamaulipas. The path then briefly crosses into the United States in southern Texas, including McAllen and Brownsville before crossing the Gulf of Mexico. It then re-enters the United States, passing through Louisiana (including New Orleans and Baton Rouge), Mississippi (including Biloxi), Alabama (including Mobile and Montgomery), far northwestern Florida, Georgia (including Atlanta, Athens, and Augusta), South Carolina (including Columbia and Greenville), North Carolina (including Charlotte and Raleigh), and Virginia (including Virginia Beach). It then passes over the Atlantic Ocean and ends near the Canary Islands.
Eclipse details
Shown below are two tables displaying details about this particular solar eclipse. The first table outlines times at which the Moon's penumbra or umbra attains specific parameters, and the second table describes various other parameters pertaining to this eclipse.
Eclipse season
This eclipse is part of an eclipse season, a period, roughly every six months, when eclipses occur. Each season lasts about 35 days and repeats just short of six months (173 days) later; thus two full eclipse seasons always occur each year, and either two or three eclipses happen in each season. In the sequence below, each eclipse is separated by a fortnight.
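That spacing is easy to sanity-check. A minimal sketch (the 173.31-day figure, half an eclipse year, is an assumption of this illustration rather than a value stated in the article):

```python
# Rough check of the ~173-day spacing between successive eclipse seasons.
from datetime import date, timedelta

SEASON_SPACING = timedelta(days=173.31)  # about half an eclipse year

this_eclipse = date(2078, 5, 11)
print(this_eclipse + SEASON_SPACING)     # 2078-10-31 (sub-day part is dropped)
# The October 21, November 4 and November 19, 2078 eclipses listed below
# all fall inside the ~35-day season centred near that date.
```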
Related eclipses
Eclipses in 2078
A penumbral lunar eclipse on April 27.
A total solar eclipse on May 11.
A penumbral lunar eclipse on October 21.
An annular solar eclipse on November 4.
A penumbral lunar eclipse on November 19.
Metonic
Preceded by: Solar eclipse of July 24, 2074
Followed by: Solar eclipse of February 27, 2082
Tzolkinex
Preceded by: Solar eclipse of March 31, 2071
Followed by: Solar eclipse of June 22, 2085
Half-Saros
Preceded by: Lunar eclipse of May 6, 2069
Followed by: Lunar eclipse of May 17, 2087
Tritos
Preceded by: Solar eclipse of June 11, 2067
Followed by: Solar eclipse of April 10, 2089
Solar Saros 139
Preceded by: Solar eclipse of April 30, 2060
Followed by: Solar eclipse of May 22, 2096
Inex
Preceded by: Solar eclipse of May 31, 2049
Followed by: Solar eclipse of April 23, 2107
Triad
Preceded by: Solar eclipse of July 11, 1991
Followed by: Solar eclipse of March 12, 2165
Solar eclipses of 2076–2079
Saros 139
Metonic series
Tritos series
Inex series
Notes
References
2078 05 11
2078 in science
2078 05 11
2078 05 11 | Solar eclipse of May 11, 2078 | Astronomy | 860 |
50,670,667 | https://en.wikipedia.org/wiki/USBKill | USBKill is anti-forensic software distributed via GitHub, written in Python for the BSD, Linux, and OS X operating systems. It is designed to serve as a kill switch if the computer on which it is installed should fall under the control of individuals or entities against the desires of the owner. It is free software, available under the GNU General Public License.
The program's developer, who goes by the online name Hephaest0s, created it in response to the circumstances of the arrest of Silk Road founder Ross Ulbricht, during which U.S. federal agents were able to get access to incriminating evidence on his laptop without needing his cooperation, by distracting him and then copying data from the machine using a flash drive. It maintains a whitelist of devices allowed to connect to the computer's USB ports; if a device not on that whitelist connects, it can take actions ranging from merely returning to the lock screen to encrypting the hard drive, or wiping all data on the computer. However, it can also be used as part of a computer security regimen to prevent the surreptitious installation of malware or spyware or the clandestine duplication of files, according to its creator.
Background
When law enforcement agencies began making computer crime arrests in the 1990s, they would often ask judges for no knock search warrants, to deny their targets time to delete incriminating evidence from computers or storage media. In more extreme circumstances where it was likely that the targets could get advance notice of arriving police, judges would grant "power-off" warrants, allowing utilities to turn off the electricity to the location of the raid shortly beforehand, further forestalling any efforts to destroy evidence before it could be seized. These methods were effective against criminals who produced and distributed pirated software and movies, which was the primary large-scale computer crime of the era.
By the 2010s, the circumstances of computer crime had changed along with legitimate computer use. Criminals were more likely to use the Internet to facilitate their crimes, so they needed to remain online most of the time. To do so, and still keep their activities discreet, they used computer security features like lock screens and password protection.
For those reasons, law enforcement now attempts to apprehend suspected cybercriminals with their computers on and in use, all accounts both on the computer and online open and logged in, and thus easily searchable. If they fail to seize the computer in that condition, there are some methods available to bypass password protection, but these may take more time than police have available. It might be legally impossible to compel the suspect to relinquish their password; in the United States, where many computer-crime investigations take place, courts have distinguished between forcing a suspect to use material means of protecting data such as a thumbprint, retinal scan, or key, as opposed to a password or passcode, which is purely the product of the suspect's mental processes and is thus protected from compelled disclosure by the Fifth Amendment.
The usual technique for authorities—either public entities such as law enforcement or private organizations like companies—seizing a computer (usually a laptop) that they believe is being used improperly is first to physically separate the suspect user from the computer enough that they cannot touch it, to prevent them from closing its lid, unplugging it, or typing a command. Once they have done so, they often install a device in the USB port that spoofs minor actions of a mouse, touchpad, or keyboard, preventing the computer from going into sleep mode, from which it would usually return to a lock screen which would require a password.
Agents with the U.S. Federal Bureau of Investigation (FBI) investigating Ross Ulbricht, founder of the online black market Silk Road, learned that he often ran the site from his laptop, using the wireless networks available at branches of the San Francisco Public Library. When they had enough evidence to arrest him, they planned to catch him in the act of running Silk Road, with his computer on and logged in. They needed to ensure he was unable to trigger encryption or delete evidence when they did.
In October 2013, a male and female agent pretended to have a lovers' quarrel near where Ulbricht was working at the Glen Park branch. According to Business Insider, Ulbricht was distracted and got up to see what the problem was, whereupon the female agent grabbed his laptop while the male agent restrained Ulbricht. The female agent was then able to insert a flash drive into one of the laptop's USB ports, with software that copied key files. According to Joshuah Bearman of Wired, a third agent grabbed the laptop while Ulbricht was distracted by the apparent lovers' fight and handed it to agent Tom Kiernan.
Use
In response to the circumstances of Ulbricht's arrest, a programmer known as Hephaest0s developed the USBKill code in Python and uploaded it to GitHub in 2014. It is available as free software under the GNU General Public License and currently runs under both Linux and OS X.
The program, when installed, prompts the user to create a whitelist of devices that are allowed to connect to the computer via its USB ports, which it checks at an adjustable sample rate. The user may also choose what actions the computer will take if it detects a USB device not on the whitelist (by default, it shuts down and erases data from the RAM and swap file). Users need to be logged in as root. Hephaest0s cautions users that they must be using at least partial disk encryption along with USBKill to fully prevent attackers from gaining access; Gizmodo suggests using a virtual machine that will not be present when the computer reboots.
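The behaviour just described (poll the bus at a sample rate, compare against a whitelist, react) is simple to sketch. The following is a minimal illustration, not the actual USBKill source; it assumes a Linux system with the lsusb utility available, and it only prints a warning where the real tool would take its configured actions:

```python
# Minimal whitelist-watchdog sketch (illustrative, not USBKill itself).
import re
import subprocess
import time

WHITELIST = {"1d6b:0002", "1d6b:0003"}  # hypothetical allowed vendor:product IDs
POLL_SECONDS = 0.25                     # the adjustable sample rate

def attached_ids():
    """Set of vendor:product IDs currently visible on the USB bus."""
    out = subprocess.check_output(["lsusb"], text=True)
    return set(re.findall(r"ID ([0-9a-f]{4}:[0-9a-f]{4})", out))

while True:
    if attached_ids() - WHITELIST:       # a non-whitelisted device appeared
        print("Unrecognised USB device detected!")
        # The real tool would now lock, shut down, or wipe, per its config.
        break
    time.sleep(POLL_SECONDS)
```

The reverse "lanyard" mode described below inverts the test: instead of reacting when an unknown ID appears, it reacts when a whitelisted ID disappears from the bus.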
It can also be used in reverse, with a whitelisted flash drive in the USB port attached to the user's wrist via a lanyard serving as a key. In this instance, if the flash drive is forcibly removed, the program will initiate the desired routines. "[It] is designed to do one thing," wrote Aaron Grothe in a short article on USBKill in 2600, "and it does it pretty well." As a further precaution, he suggests users rename it to something innocuous once they have loaded it on their computers, in case someone might be looking for it on a seized computer to disable it.
In addition to its designed purpose, Hephaest0s suggests other uses unconnected to a user's desire to frustrate police and prosecutors. As part of a general security regimen, it could be used to prevent the surreptitious installation of malware or spyware on, or copying of files from, a protected computer. It is also recommended for general use as part of a robust security practice, even when there are no threats to be feared.
Variations and modifications
With his 2600 article, Grothe shared a patch that included a feature allowing the program to shut down a network when a non-whitelisted USB device is inserted into any terminal. Nate Brune, another programmer, created Silk Guardian, a version of USBKill that takes the form of a loadable kernel module; he "remade this project as a Linux kernel driver for fun and to learn." In the issue of 2600 following Grothe's article, another writer, going by the name Jack D. Ripper, explained how Ninja OS, an operating system designed for live flash drives, handles the issue. It uses a memory-resident bash script as a watchdog timer, which checks three times a second whether the boot device (i.e., the flash drive) is still mounted and reboots the computer if it is not.
See also
BusKill
List of data-erasing software
List of free and open-source software packages
References
External links
Anti-forensic software
Software using the GNU General Public License
Computer security software
USB
2014 establishments | USBKill | Engineering | 1,633 |
35,656,106 | https://en.wikipedia.org/wiki/Erwin%20Gabathuler | Erwin Gabathuler (16 November 1933 – 29 August 2016) was a particle physicist from Northern Ireland.
Early life
Erwin Gabathuler was born in Maghera, County Londonderry, Northern Ireland on 16 November 1933, a son of the manager of the Swiss embroidery factory. He attended Rainey Endowed School, Magherafelt, and then Queen's University Belfast. There he studied physics, and was awarded BSc in 1956, and MSc in 1957 for a thesis on "Electron Collision Cross-sections of Atmospheric Gases". He then moved to the University of Glasgow to work at the 300 MeV synchrotron there, and was awarded a PhD in 1961.
Career
He carried out research at Cornell University, USA, from 1961 to 1964, and then at the Daresbury Laboratory, Cheshire, from 1965 to 1974. He began work at CERN in 1974 as a scientific attaché from the Rutherford Laboratory, Didcot, eventually becoming a direct employee at CERN in 1978 for a four-year appointment as Head of the Experimental Physics Division, taking over from Emilio Picasso. He spent 1983 to 2002 at the University of Liverpool as professor of Physics and head of the particle physics group, maintaining his connections with CERN. When he retired, the University of Liverpool organised an "ErwinFest" to celebrate his career.
Awards and decorations
He was elected to the Royal Society on 15 March 1990 and received the Rutherford Medal and Prize in 1992 (with Terry Sloan) from the Institute of Physics. He was made an Officer of the Order of the British Empire in 2001 for services to physics.
He received two honorary degrees: an honorary doctorate from the Faculty of Mathematics and Science at Uppsala University, Sweden, in 1982, and a D.Sc. from Queen's University, Belfast in 1997.
Research and achievements
According to INSPIRE-HEP, Gabathuler co-authored more than 1200 published papers.
He was one of the founding fathers of the European Muon Collaboration at CERN.
Academic Papers published 1958-1977
Photoproduction of strange particles, Proceedings of International Conference on High Energy Physics at CERN, p. 266-269, 1962.
Photoproduction of K+Σ° in hydrogen, Bull. Am. Phys. Soc., 9, 22, 1964 - complete results given here only.
Photoproduction of K+ meson from hydrogen, Proceedings of International Conference on Electron and Photon Interactions at High Energies, p. 203-206, Hamburg, 1965.
Evidence for the ω-2π Decay by ρ-ω Interference in Vector Meson Photoproduction. Invited Paper at International Conference on Experimental Meson Spectroscopy. - Experimental Meson Spectroscopy, Columbia University Press p. 645-655, 1970.
Interference Effects in High Energy Vector Meson Photoproduction. Review Paper in Vector Meson Production and Omega-Rho Interference, p. 115-138, DNPL/R7, June 1970.
Experimental Programme at the Daresbury Laboratory. Invited Talk Photon and Lepton Physics in Europe — DNPL R.11, 1972.
Experimental Utilisation at the NINA Booster - DNPL R13 Vol. II, 1972.
A High Intensity Muon Beam at the S.P.S., Vol. I. Proceedings of the Tirrenia Study Week, p. 208, CERN/ECFA/72/4, 1972.
Total Cross—Sections — Rapporteur Talk at the Proceedings of the 6th International Symposium on Electron and Photon Interactions at High Energies, Bonn 1973.
References
External links
Scientific publications of Erwin Gabathuler on INSPIRE-HEP
1933 births
People associated with CERN
People from Maghera
Officers of the Order of the British Empire
Fellows of the Royal Society
Alumni of Queen's University Belfast
Alumni of the University of Glasgow
Cornell University people
2016 deaths
British people of Swiss descent
Scientists from County Londonderry
20th-century physicists from Northern Ireland
Particle physicists
21st-century physicists from Northern Ireland | Erwin Gabathuler | Physics | 799 |
42,783,108 | https://en.wikipedia.org/wiki/Weighted%20projective%20space | In algebraic geometry, a weighted projective space P(a0,...,an) is the projective variety Proj(k[x0,...,xn]) associated to the graded ring k[x0,...,xn] where the variable xk has degree ak.
Properties
If d is a positive integer then P(a0,a1,...,an) is isomorphic to P(da0,da1,...,dan). This is a property of the Proj construction; geometrically it corresponds to the d-tuple Veronese embedding. So without loss of generality one may assume that the degrees ai have no common factor.
Suppose that a0,a1,...,an have no common factor, and that d is a common factor of all the ai with i≠j, then P(a0,a1,...,an) is isomorphic to P(a0/d,...,aj-1/d,aj,aj+1/d,...,an/d) (note that d is coprime to aj; otherwise the isomorphism does not hold). So one may further assume that any set of n variables ai have no common factor. In this case the weighted projective space is called well-formed.
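As a worked illustration of the two reductions above (the particular weights are chosen here for illustration and are not taken from the text):

```latex
% Illustrative instances of the two reduction properties:
\[
\mathbf{P}(2,4,6) \;\cong\; \mathbf{P}(1,2,3)
\qquad \text{(all weights share the common factor } d = 2\text{)},
\]
\[
\mathbf{P}(1,2,2) \;\cong\; \mathbf{P}(1,1,1) \;=\; \mathbf{P}^{2}
\qquad \text{(here } d = 2 \text{ divides } a_i \text{ for every } i \neq 0\text{)}.
\]
```

The second kind of reduction is why weighted projective spaces are usually taken to be well-formed before their geometry is studied.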
The only singularities of weighted projective space are cyclic quotient singularities.
A weighted projective space is a Q-Fano variety and a toric variety.
The weighted projective space P(a0,a1,...,an) is isomorphic to the quotient of projective space by the group that is the product of the groups of roots of unity of orders a0,a1,...,an acting diagonally.
References
Algebraic geometry | Weighted projective space | Mathematics | 380 |
54,513,789 | https://en.wikipedia.org/wiki/J%C3%BCrgen%20Gehrels | Jürgen Carlos Gehrels FIET (born 24 July 1935) is a German businessman, and a former Chief Executive and Chairman of Siemens UK (Siemens Holdings plc).
Early life
He was the son of Dr Hans Gehrels and Ursula da Rocha. He attended the Technical University of Munich and Technische Universität Berlin.
Career
Siemens
From 1965 to 1979 he worked for Siemens AG in Germany.
Siemens UK
He became Chief Executive of Siemens UK in 1986. In 1995 he was responsible for opening the Siemens Semiconductors plant on North Tyneside, a £1.1bn inward investment, then the largest ever in the UK. The plant was opened by the Queen in May 1997. The site is now the Cobalt Business Park, off the A19.
He left as Chairman of Siemens UK in September 2007. At the time, Siemens employed around 20,000 people in the UK, turning over around £3.5bn.
Personal life
He lives at Porlezza in Italy. He married Sigrid Kausch in 1963, and they had a son and a daughter. He is an Anglophile.
References
External links
2005 photograph
Siemens UK
1935 births
Fellows of the Institution of Engineering and Technology
German chief executives
German electronics engineers
Siemens people
Technische Universität Berlin alumni
Technical University of Munich alumni
Living people
Honorary Knights Commander of the Order of the British Empire | Jürgen Gehrels | Engineering | 279 |
546,635 | https://en.wikipedia.org/wiki/Uncapping | Uncapping, in the context of cable modems, refers to a number of activities performed to alter an Internet service provider's modem settings. It is sometimes done for the sake of bandwidth (e.g. by buying a 512 kbit/s access modem and then altering it to 10 Mbit/s), pluggable interfaces (as by using more than one public ID), or any configurable options a DOCSIS modem can offer. However, uncapping is generally considered an illegal activity, amounting to theft of service.
Methods
There are several methods used to uncap a cable modem, by hardware, software, tricks, alterations, and modifications.
One of the most popular modifications is used on Motorola modems (such as the SB3100, SB4100, and SB4200 models); by spoofing the Internet service provider's TFTP server, the modem is made to accept a different configuration file than the one provided by the TFTP server. This configuration file tells the modem the download and upload caps it should enforce. An example of spoofing would be to edit the configuration file, which requires a DOCSIS editor, or replacing the configuration file with one obtained from a faster modem (e.g. through a Gnutella network).
An alternate method employs DHCPforce. By flooding a modem with faked DHCP packets (which contain configuration filename, TFTP, IP, etc.), one can convince the modem to accept any desired configuration file, even one from one's own server (provided the server is routed, of course).
Another more advanced method is to attach a TTL serial adapter to the modem's RS-232 header and get access to the modem's console directly to make it download new firmware, which can then be configured via a simple web interface. Examples include SIGMA, a firmware add-on that expands the features of the underlying firmware, and others.
See also
Bandwidth cap
References
External links
Fibercoax Group
TCNiSO Embedded Development Assembly Experts
Broadband
Internet terminology | Uncapping | Technology | 441 |
47,330,621 | https://en.wikipedia.org/wiki/Suillus%20caerulescens | Suillus caerulescens, commonly known as the Douglas-fir suillus, is an edible species of bolete fungus in the family Suillaceae. It was first described scientifically by American mycologists Alexander H. Smith and Harry D. Thiers in 1964. It can be found growing with Douglas fir trees. Its stem bruises blue, though the color change sometimes takes a few minutes.
The cap is yellowish to reddish brown, sometimes with streaks from its darker center. It is convex to flat in shape, and viscid when wet, sometimes with veil remnants on the edge. The flesh is yellowish, as are the pores. The stalk is yellowish to brown, darkening with age, 2–8 cm tall and 1–3 cm wide, and bruises bluish at the base; it sometimes has a faint ring.
While edible, it is considered of poor quality.
Suillus lakei is fairly similar.
See also
List of North American boletes
References
External links
caerulescens
Edible fungi
Fungi described in 1964
Fungi of North America
Taxa named by Harry Delbert Thiers
Taxa named by Alexander H. Smith
Fungus species | Suillus caerulescens | Biology | 232 |
22,842,718 | https://en.wikipedia.org/wiki/Urban%20theory | Urban theory describes the economic, political, and social processes which affect the formation and development of cities.
Overview
Theoretical discourse has often polarized between economic determinism and cultural determinism, with scientific or technological determinism adding another contentious issue of reification. Studies across eastern and western nations have suggested that certain cultural values promote economic development and that the economy in turn changes cultural values. Urban historians were among the first to acknowledge the importance of technology in city development. Technology embodies one of the most dominant characteristics of a city, and the networked character of the city is perpetuated by information technology. Regardless of the deterministic stance (economic, cultural or technological), in the context of globalization there is a mandate to mold the city to complement the global economic structure.
Political processes
Lewis Mumford described monumental architecture as an "expression of power" seeking to produce "respectful terror". Gigantism, geometry, and order are characteristic of cities such as Washington, D.C., New Delhi, Beijing and Brasília.
Economic capital and globalization
The Industrial Revolution was accompanied by urbanization in Europe and the United States in the 19th century. Friedrich Engels studied Manchester, which was being transformed by the cotton industry. He noted how the city was divided between wealthy areas and working class areas, which were physically separated from one another (and the people living in those areas could not see each other easily). The city was therefore a function of capital.
Georg Simmel studied the effect of the urban environment on the individuals living in cities, arguing in The Metropolis and Mental Life that the increase in human interaction affected relationships. The activity and anonymity of the city led to a 'blasé attitude' with reservations and aloofness by urban denizens. This was also driven by the market economy of the city, which corroded traditional norms. However, people in cities were also more tolerant and sophisticated.
Henri Lefebvre argued in the 1960s and 1970s that urban space is dominated by the interests of corporations and capitalism. Private places such as shopping centres and office buildings dominated the public space. The economic relations could be seen in the city itself, with wealthy areas being far more opulent than the run-down areas inhabited by poor people. To fix this, a right to the city needed to be asserted to give everyone a say on urban space.
Economic sustainability
Urbanomics can spill over beyond a city's boundaries. The process of globalization extends its territories into global city regions. Essentially, these are territorial platforms (metropolitan extensions from key cities, chains of cities linked within a state territory or across inter-state boundaries and, arguably, networked cities and/or regions cutting across national boundaries) interconnected in the globalized economy. Some see global city-regions, rather than global cities, as the nodes of a global network.
The rules of engagement are built on economic sustainability – the ability to continuously generate wealth. The cornerstones of this economic framework are the following '4C' attributes: (1) currency flow for trading, (2) commoditization of products and services in supply chain management, (3) command centre function in orchestrating interdependency and monitoring executions, and (4) consumerization. Unless decoupling the economy from these attributes can be demonstrated, symbolic capital expressions, legitimate as they may be, must accept the domineering status of urbanomics.
Revisiting economic measurements
Arguably, the culprit of this economic entrapment is the high-consumption lifestyle synonymous with wealth. The resolution may well be that 'less is more' and that true welfare lies not in a rise in production and income. As such, Gross Domestic Product (GDP) is increasingly being questioned and considered inaccurate and inadequate: it includes things that do not contribute to sustainable growth, and excludes non-monetary benefits that improve the welfare of the people. In response, alternative measures have been proposed, including the Genuine Progress Indicator (GPI) and the Index of Sustainable Economic Welfare (ISEW).
See also
MONU (magazine) - publication about urbanism
Rural economics
Urban economics
Urban decay
Urban development
Urban planning
Urban studies
Urban vitality
References
External links
MA Theories of Urban Practice program in New York City
Notes
Papageorgiou, Y. & Pines, D. (1999)An Essay on Urban Economic Theory, London: Kluwer Academic Publishers
Steingart, G. (2008) The War for Wealth. The True Story of Globalization or Why the Flat World is Broken, New York: McGraw Hill
Aseem Inam, Designing Urban Transformation New York and London: Routledge, 2013. .
Urban planning
Urban economics | Urban theory | Engineering | 944 |
36,734,773 | https://en.wikipedia.org/wiki/Miner%27s%20habit | A miner's habit ( or Bergmannshabit) is the traditional dress of miners in Europe. The actual form varies depending on the region, the actual mining function, and whether it is used for work or for ceremonial occasions.
Elements
At work, the miner of the Middle Ages in Europe wore the normal costume for his local region – pit trousers (Grubenhose), shoes and miner's jacket (Bergkittel).
Only gradually was the typical miner's uniform created by the addition of unmistakable elements of miner's apparel: the miner's apron (Arschleder); knee pads (Kniebügel); the miner's cap (Fahrhaube or Fahrkappe), later the pit hat (Schachthut); the mining tools needed for work in the pit, such as hammers (Fäustel), chisels (Eisen), wedges, picks (Keilhauen), hoes (Kratze), shovels, crowbars, pikes (Brechstangen) or miner's chisels (Bergeisen), mallets (Schlägel) or carpenter's hatchets; the miners' safety lamps (often a Froschlampe); and the Tzscherper bag (for the miner's knife (Tzscherpermesser) and lamp accessories like rape oil, flint and tinder).
There were specific accoutrements for the individual trade groups. The mining foreman or Steiger, for example, carried the Steigerhäckel, a simple hewer (Häuer) bore a miner's hatchet (Grubenbeil). Able miners (Doppelhäuer) carried a miner's axe (Bergbarte or Bergparte), which was simultaneously a tool and a weapon. The smelters (Hüttenleute) wore the leather apron as a pinafore (Schürze) in front of them (i.e. "back to front") and carried various implements: the Firke or Furkel, the rake (Rechen) and the tapping bar (Stecheisen or Abstichlanze).
In 1769 in Saxony, the Marienberg Bergmeister, von Trebra, introduced the wearing of the black mining habit.
The variety of mining habits may still be seen in the mining processions typical of the old mining regions even today.
Gallery
See also
Miner's apron
Miner's cap
Mooskappe – miner's cap from the Harz
Literature
References
External links
German mining terminology
Mining and Ore Mountain terms
History of mining in Germany
Mining culture and traditions
Uniforms
Miners' clothing | Miner's habit | Engineering | 560 |
530,691 | https://en.wikipedia.org/wiki/Glucocorticoid | Glucocorticoids (or, less commonly, glucocorticosteroids) are a class of corticosteroids, which are a class of steroid hormones. Glucocorticoids are corticosteroids that bind to the glucocorticoid receptor, which is present in almost every vertebrate animal cell. The name "glucocorticoid" is a portmanteau (glucose + cortex + steroid) reflecting its role in the regulation of glucose metabolism, its synthesis in the adrenal cortex, and its steroidal structure.
Glucocorticoids are part of the feedback mechanism in the immune system, which reduces certain aspects of immune function, such as inflammation. They are therefore used in medicine to treat diseases caused by an overactive immune system, such as allergies, asthma, autoimmune diseases, and sepsis. Glucocorticoids have many diverse, pleiotropic effects, including potentially harmful side effects. They also interfere with some of the abnormal mechanisms in cancer cells, so they are used in high doses to treat cancer. This includes inhibitory effects on lymphocyte proliferation, as in the treatment of lymphomas and leukemias, and the mitigation of side effects of anticancer drugs.
Glucocorticoids affect cells by binding to the glucocorticoid receptor. The activated glucocorticoid receptor-glucocorticoid complex up-regulates the expression of anti-inflammatory proteins in the nucleus (a process known as transactivation) and represses the expression of pro-inflammatory proteins in the cytosol by preventing the translocation of other transcription factors from the cytosol into the nucleus (transrepression).
Glucocorticoids are distinguished from mineralocorticoids and sex steroids by their specific receptors, target cells, and effects. In technical terms, "corticosteroid" refers to both glucocorticoids and mineralocorticoids (as both are mimics of hormones produced by the adrenal cortex), but is often used as a synonym for "glucocorticoid". Glucocorticoids are chiefly produced in the zona fasciculata of the adrenal cortex, whereas mineralocorticoids are synthesized in the zona glomerulosa.
Cortisol (or hydrocortisone) is the most important human glucocorticoid. It is essential for life, and it regulates or supports a variety of important cardiovascular, metabolic, immunologic, and homeostatic functions. Increases in glucocorticoid concentrations are an integral part of stress response and are the most commonly used biomarkers to measure stress. Glucocorticoids have numerous non-stress-related functions as well, and glucocorticoid concentrations can increase in response to pleasure or excitement. Various synthetic glucocorticoids are available; these are widely utilized in general medical practice and numerous specialties, either as replacement therapy in glucocorticoid deficiency or to suppress the body's immune system.
Effects
Glucocorticoid effects may be broadly classified into two major categories: immunological and metabolic. In addition, glucocorticoids play important roles in fetal development and body fluid homeostasis.
Immune
Glucocorticoids function via interaction with the glucocorticoid receptor:
Upregulate the expression of anti-inflammatory proteins.
Downregulate the expression of proinflammatory proteins.
Glucocorticoids are also shown to play a role in the development and homeostasis of T lymphocytes. This has been shown in transgenic mice with either increased or decreased sensitivity of T cell lineage to glucocorticoids.
Metabolic
The name "glucocorticoid" derives from early observations that these hormones were involved in glucose metabolism. In the fasted state, cortisol stimulates several processes that collectively serve to increase and maintain normal concentrations of glucose in the blood.
Metabolic effects:
Stimulation of gluconeogenesis, in particular, in the liver: This pathway results in the synthesis of glucose from non-hexose substrates, such as amino acids and glycerol from triglyceride breakdown, and is particularly important in carnivores and certain herbivores. Enhancing the expression of enzymes involved in gluconeogenesis is probably the best-known metabolic function of glucocorticoids.
Mobilization of amino acids from extrahepatic tissues: These serve as substrates for gluconeogenesis.
Inhibition of glucose uptake in muscle and adipose tissue: A mechanism to conserve glucose
Stimulation of fat breakdown in adipose tissue: The fatty acids released by lipolysis are used for production of energy in tissues like muscle, and the released glycerol provides another substrate for gluconeogenesis.
Increase in sodium retention and potassium excretion leads to hypernatremia and hypokalemia
Increase in hemoglobin concentration, likely due to reduced ingestion of red blood cells by macrophages and other phagocytes
Increased urinary uric acid
Increased urinary calcium and hypocalcemia
Alkalosis
Leukocytosis
Excessive glucocorticoid levels resulting from administration as a drug or hyperadrenocorticism have effects on many systems. Some examples include inhibition of bone formation, suppression of calcium absorption (both of which can lead to osteoporosis), delayed wound healing, muscle weakness, and increased risk of infection. These observations suggest a multitude of less-dramatic physiologic roles for glucocorticoids.
Developmental
Glucocorticoids have multiple effects on fetal development. An important example is their role in promoting maturation of the lung and production of the surfactant necessary for extrauterine lung function. Mice with homozygous disruptions in the corticotropin-releasing hormone gene (see below) die at birth due to pulmonary immaturity. In addition, glucocorticoids are necessary for normal brain development, by initiating terminal maturation, remodeling axons and dendrites, and affecting cell survival and may also play a role in hippocampal development. Glucocorticoids stimulate the maturation of the Na+/K+/ATPase, nutrient transporters, and digestion enzymes, promoting the development of a functioning gastro-intestinal system. Glucocorticoids also support the development of the neonate's renal system by increasing glomerular filtration.
Arousal and cognition
Glucocorticoids act on the hippocampus, amygdala, and frontal lobes. Along with adrenaline, these enhance the formation of flashbulb memories of events associated with strong emotions, both positive and negative. This has been confirmed in studies in which blockade of either glucocorticoid or noradrenaline activity impaired the recall of emotionally relevant information. Additional sources have shown that subjects whose fear learning was accompanied by high cortisol levels had better consolidation of this memory (an effect that was more pronounced in men). The effect that glucocorticoids have on memory may be due to damage specifically to the CA1 area of the hippocampal formation.
In multiple animal studies, prolonged stress (causing prolonged increases in glucocorticoid levels) has been shown to destroy neurons in the hippocampus area of the brain, which has been connected to lower memory performance.
Glucocorticoids have also been shown to have a significant impact on vigilance (attention deficit disorder) and cognition (memory). This appears to follow the Yerkes-Dodson curve, as studies have shown circulating levels of glucocorticoids vs. memory performance follow an upside-down U pattern, much like the Yerkes-Dodson curve. For example, long-term potentiation (LTP; the process of forming long-term memories) is optimal when glucocorticoid levels are mildly elevated, whereas significant decreases of LTP are observed after adrenalectomy (low-glucocorticoid state) or after exogenous glucocorticoid administration (high-glucocorticoid state). Elevated levels of glucocorticoids enhance memory for emotionally arousing events, but lead more often than not to poor memory for material unrelated to the source of stress/emotional arousal. In contrast to the dose-dependent enhancing effects of glucocorticoids on memory consolidation, these stress hormones have been shown to inhibit the retrieval of already stored information. Long-term exposure to glucocorticoid medications, such as asthma and anti-inflammatory medication, has been shown to create deficits in memory and attention both during and, to a lesser extent, after treatment, a condition known as "steroid dementia".
Body fluid homeostasis
Glucocorticoids could act centrally, as well as peripherally, to assist in the normalization of extracellular fluid volume by regulating the body's response to atrial natriuretic peptide (ANP). Centrally, glucocorticoids could inhibit dehydration-induced water intake; peripherally, glucocorticoids could induce a potent diuresis.
Mechanism of action
Transactivation
Glucocorticoids bind to the cytosolic glucocorticoid receptor, a type of nuclear receptor that is activated by ligand binding. After a hormone binds to the corresponding receptor, the newly formed complex translocates itself into the cell nucleus, where it binds to glucocorticoid response elements in the promoter region of the target genes resulting in the regulation of gene expression. This process is commonly referred to as transcriptional activation, or transactivation.
The proteins encoded by these up-regulated genes have a wide range of effects, including, for example:
Anti-inflammatory – lipocortin I, p11/calpactin binding protein, secretory leukocyte protease inhibitor 1 (SLPI), and Mitogen-activated protein kinase phosphatase (MAPK phosphatase)
Increased gluconeogenesis – glucose 6-phosphatase and tyrosine aminotransferase
Transrepression
The opposite mechanism is called transcriptional repression, or transrepression. The classical understanding of this mechanism is that activated glucocorticoid receptor binds to DNA in the same site where another transcription factor would bind, which prevents the transcription of genes that are transcribed via the activity of that factor. While this does occur, the results are not consistent for all cell types and conditions; there is no generally accepted, general mechanism for transrepression.
New mechanisms are being discovered in which transcription is repressed without the activated glucocorticoid receptor binding DNA: instead, it interacts directly with another transcription factor, interfering with it, or with other proteins that interfere with the function of other transcription factors. The latter mechanism appears to be the most likely way that the activated glucocorticoid receptor interferes with NF-κB – namely, by recruiting histone deacetylases, which deacetylate histones in the promoter region, leading to closing of the chromatin structure where NF-κB needs to bind.
Nongenomic effects
Activated glucocorticoid receptor has effects that have been experimentally shown to be independent of any effects on transcription and can only be due to direct binding of activated glucocorticoid receptor with other proteins or with mRNA.
For example, Src kinase, which binds to the inactive glucocorticoid receptor, is released when a glucocorticoid binds to the glucocorticoid receptor, and phosphorylates a protein that in turn displaces an adaptor protein from the epidermal growth factor receptor, a receptor important in inflammation, reducing its activity, which in turn results in reduced creation of arachidonic acid – a key proinflammatory molecule. This is one mechanism by which glucocorticoids have an anti-inflammatory effect.
Pharmacology
A variety of synthetic glucocorticoids, some far more potent than cortisol, have been created for therapeutic use. They differ in both pharmacokinetics (absorption factor, half-life, volume of distribution, clearance) and pharmacodynamics (for example the capacity of mineralocorticoid activity: retention of sodium (Na) and water; renal physiology). Because they permeate the intestines easily, they are administered primarily per os (by mouth), but also by other methods, such as topically on skin. More than 90% of them bind different plasma proteins, though with a different binding specificity. Endogenous glucocorticoids and some synthetic corticoids have high affinity to the protein transcortin (also called corticosteroid-binding globulin), whereas all of them bind albumin. In the liver, they quickly metabolize by conjugation with a sulfate or glucuronic acid, and are secreted in the urine.
Glucocorticoid potency, duration of effect, and the overlapping mineralocorticoid potency vary. Cortisol is the standard of comparison for glucocorticoid potency. Hydrocortisone is the name used for pharmaceutical preparations of cortisol.
Published potency comparisons typically refer to oral administration. Oral potency may be less than parenteral potency because significant amounts (up to 50% in some cases) may not reach the circulation. Fludrocortisone acetate and deoxycorticosterone acetate are, by definition, mineralocorticoids rather than glucocorticoids, but they do have minor glucocorticoid potency and are conventionally included in such comparisons to provide perspective on mineralocorticoid potency.
Therapeutic use
Glucocorticoids may be used in low doses in adrenal insufficiency. In much higher doses, oral or inhaled glucocorticoids are used to suppress various allergic, inflammatory, and autoimmune disorders. Inhaled glucocorticoids are the second-line treatment for asthma. They are also administered as post-transplant immunosuppressants to prevent acute transplant rejection and graft-versus-host disease. Nevertheless, they do not prevent infection and also inhibit later reparative processes. Newly emerging evidence has shown that glucocorticoids could be used in the treatment of heart failure to increase renal responsiveness to diuretics and natriuretic peptides. Glucocorticoids have historically been used for pain relief in inflammatory conditions; however, corticosteroids show limited efficacy in pain relief, and their use in tendinopathies carries a risk of adverse events.
Replacement
Any glucocorticoid can be given in a dose that provides approximately the same glucocorticoid effects as normal cortisol production; this is referred to as physiologic, replacement, or maintenance dosing. This is approximately 6–12 mg/m2/day of hydrocortisone (m2 refers to body surface area (BSA), and is a measure of body size; an average man's BSA is 1.9 m2).
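A worked example of this arithmetic is sketched below. The Mosteller formula used to estimate BSA from height and weight is an assumption (the text above does not specify how BSA is measured), and the function is purely illustrative, not a dosing tool.

    from math import sqrt

    def replacement_dose_mg(height_cm: float, weight_kg: float,
                            mg_per_m2_per_day: float = 9.0) -> float:
        """Estimate daily hydrocortisone replacement from body size.

        BSA via the Mosteller formula (an assumption); the dose rate
        defaults to the midpoint of the 6-12 mg/m2/day range quoted above.
        """
        bsa_m2 = sqrt(height_cm * weight_kg / 3600.0)
        return mg_per_m2_per_day * bsa_m2

    # A person of roughly average male body size (BSA close to 1.9 m2):
    print(round(replacement_dose_mg(175, 75), 1))  # ~17.2 mg/day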
Therapeutic immunosuppression
Glucocorticoids cause immunosuppression, and the therapeutic component of this effect is mainly a decrease in the function and numbers of lymphocytes, including both B cells and T cells.
The major mechanism for this immunosuppression is through inhibition of nuclear factor kappa-light-chain-enhancer of activated B cells (NF-κB). NF-κB is a critical transcription factor involved in the synthesis of many mediators (i.e., cytokines) and proteins (i.e., adhesion proteins) that promote the immune response. Inhibition of this transcription factor, therefore, blunts the capacity of the immune system to mount a response.
Glucocorticoids suppress cell-mediated immunity by inhibiting genes that code for the cytokines IL-1, IL-2, IL-3, IL-4, IL-5, IL-6, IL-8 and IFN-γ, the most important of which is IL-2. Reduced cytokine production limits T cell proliferation.
Glucocorticoids, however, not only reduce T cell proliferation, but also lead to another well-known effect: glucocorticoid-induced apoptosis. The effect is more prominent in immature T cells still inside the thymus, but peripheral T cells are also affected. The exact mechanism regulating this glucocorticoid sensitivity lies in the Bcl-2 gene.
Glucocorticoids also suppress humoral immunity, thereby causing a humoral immune deficiency. Glucocorticoids cause B cells to express smaller amounts of IL-2 and of IL-2 receptors, diminishing both B cell clone expansion and antibody synthesis. The diminished amount of IL-2 also causes fewer T lymphocytes to be activated.
The effect of glucocorticoids on Fc receptor expression in immune cells is complicated. Dexamethasone decreases IFN-γ-stimulated Fc gamma RI expression in neutrophils while conversely causing an increase in monocytes. Glucocorticoids may also decrease the expression of Fc receptors in macrophages, but the evidence supporting this regulation in earlier studies has been questioned. The effect on Fc receptor expression in macrophages is important, since Fc receptors are necessary for the phagocytosis of opsonised cells: they bind antibodies attached to cells targeted for destruction by macrophages.
Anti-inflammatory
Glucocorticoids are potent anti-inflammatories, regardless of the inflammation's cause; their primary anti-inflammatory mechanism is lipocortin-1 (annexin-1) synthesis. Lipocortin-1 both suppresses phospholipase A2, thereby blocking eicosanoid production, and inhibits various leukocyte inflammatory events (epithelial adhesion, emigration, chemotaxis, phagocytosis, respiratory burst, etc.). In other words, glucocorticoids not only suppress immune response, but also inhibit the two main products of inflammation, prostaglandins and leukotrienes. They inhibit prostaglandin synthesis at the level of phospholipase A2 as well as at the level of cyclooxygenase/PGE isomerase (COX-1 and COX-2), the latter effect being much like that of NSAIDs, thus potentiating the anti-inflammatory effect.
In addition, glucocorticoids also suppress cyclooxygenase expression.
Glucocorticoids marketed as anti-inflammatories are often topical formulations, such as nasal sprays for rhinitis or inhalers for asthma. These preparations have the advantage of only affecting the targeted area, thereby reducing side effects and potential interactions. In this case, the main compounds used are beclometasone, budesonide, fluticasone, mometasone and ciclesonide. In rhinitis, sprays are used. For asthma, glucocorticoids are administered as inhalants with a metered-dose or dry powder inhaler. In rare cases, symptoms of radiation-induced thyroiditis have been treated with oral glucocorticoids.
Hyperaldosteronism
Glucocorticoids can be used in the management of familial hyperaldosteronism type 1. They are not effective, however, for use in the type 2 condition.
Heart failure
Glucocorticoids could be used in the treatment of decompensated heart failure to potentiate renal responsiveness to diuretics, especially in heart failure patients with refractory diuretic resistance with large doses of loop diuretics.
Resistance
Resistance to the therapeutic uses of glucocorticoids can present difficulty; for instance, 25% of cases of severe asthma may be unresponsive to steroids. This may be the result of genetic predisposition, ongoing exposure to the cause of the inflammation (such as allergens), immunological phenomena that bypass glucocorticoids, pharmacokinetic disturbances (incomplete absorption or accelerated excretion or metabolism) and viral and/or bacterial respiratory infections.
Side effects
Glucocorticoid drugs currently being used act nonselectively, so in the long run they may impair many healthy anabolic processes. To prevent this, much research has been focused recently on the elaboration of selectively acting glucocorticoid drugs. Side effects include:
Immunodeficiency (see section below)
Hyperglycemia due to increased gluconeogenesis, insulin resistance, and impaired glucose tolerance ("steroid diabetes"); caution in those with diabetes mellitus
Increased skin fragility, easy bruising
Negative calcium balance due to reduced intestinal calcium absorption
Steroid-induced osteoporosis: reduced bone density (osteoporosis, osteonecrosis, higher fracture risk, slower fracture repair)
Weight gain due to increased visceral and truncal fat deposition (central obesity) and appetite stimulation; see corticosteroid-induced lipodystrophy
Hypercortisolemia with prolonged or excessive use (also known as exogenous Cushing's syndrome)
Impaired memory and attention deficits (see steroid dementia syndrome)
Adrenal insufficiency (if used for long time and stopped suddenly without a taper)
Muscle and tendon breakdown (proteolysis), weakness, reduced muscle mass and repair
Expansion of malar fat pads and dilation of small blood vessels in skin
Lipomatosis within the epidural space
Excitatory effect on central nervous system (euphoria, psychosis)
Anovulation, irregularity of menstrual periods
Growth failure, delayed puberty
Increased plasma amino acids, increased urea formation, negative nitrogen balance
Glaucoma due to increased ocular pressure
Cataracts
Topical steroid withdrawal
In high doses, hydrocortisone (cortisol) and those glucocorticoids with appreciable mineralocorticoid potency can exert a mineralocorticoid effect as well, although in physiologic doses this is prevented by rapid degradation of cortisol by 11β-hydroxysteroid dehydrogenase isoenzyme 2 (11β-HSD2) in mineralocorticoid target tissues. Mineralocorticoid effects can include salt and water retention, extracellular fluid volume expansion, hypertension, potassium depletion, and metabolic alkalosis.
Immunodeficiency
Glucocorticoids cause immunosuppression, decreasing the function and/or numbers of neutrophils, lymphocytes (including both B cells and T cells), monocytes, macrophages, and the anatomical barrier function of the skin. This suppression, if large enough, can cause manifestations of immunodeficiency, including T cell deficiency, humoral immune deficiency and neutropenia.
Withdrawal
In addition to the effects listed above, use of high-dose glucocorticoids for only a few days begins to produce suppression of the patient's adrenal glands, because the exogenous glucocorticoids suppress hypothalamic corticotropin-releasing hormone (CRH), leading to suppressed production of adrenocorticotropic hormone (ACTH) by the anterior pituitary. With prolonged suppression, the adrenal glands atrophy (physically shrink), and can take months to recover full function after discontinuation of the exogenous glucocorticoid.
During this recovery time, the patient is vulnerable to adrenal insufficiency during times of stress, such as illness. While suppressive dose and time for adrenal recovery vary widely, clinical guidelines have been devised to estimate potential adrenal suppression and recovery, to reduce risk to the patient. The following is one example (a code sketch of this schedule follows the list):
If patients have been receiving daily high doses for five days or less, they can be abruptly stopped (or reduced to physiologic replacement if patients are adrenal-deficient). Full adrenal recovery can be assumed to occur by a week afterward.
If high doses were used for six to 10 days, reduce to replacement dose immediately and taper over four more days. Adrenal recovery can be assumed to occur within two to four weeks of completion of steroids.
If high doses were used for 11–30 days, cut immediately to twice replacement, and then by 25% every four days. Stop entirely when dose is less than half of replacement. Full adrenal recovery should occur within one to three months of completion of withdrawal.
If high doses were used more than 30 days, cut dose immediately to twice replacement, and reduce by 25% each week until replacement is reached. Then change to oral hydrocortisone or cortisone as a single morning dose, and gradually decrease by 2.5 mg each week. When the morning dose is less than replacement, the return of normal basal adrenal function may be documented by checking 0800 cortisol levels prior to the morning dose; stop drugs when 0800 cortisol is at least 10 μg/dl. Predicting the time to full adrenal recovery after prolonged suppressive exogenous steroids is difficult; some people may take nearly a year.
Flare-up of the underlying condition for which steroids are given may require a more gradual taper than outlined above.
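The guideline above is effectively a decision rule keyed to the duration of high-dose therapy. The sketch below encodes that example schedule directly; it is an educational illustration only, not medical advice, and real tapers are individualized by a clinician.

    def taper_guidance(days_on_high_dose: int) -> str:
        """Return the example withdrawal guideline matching the duration
        of high-dose glucocorticoid therapy described above."""
        if days_on_high_dose <= 5:
            return ("Stop abruptly (or drop to physiologic replacement if "
                    "adrenal-deficient); assume full recovery in about a week.")
        if days_on_high_dose <= 10:
            return ("Reduce to replacement dose immediately and taper over "
                    "four more days; recovery within two to four weeks.")
        if days_on_high_dose <= 30:
            return ("Cut to twice replacement, then reduce 25% every four "
                    "days; stop below half replacement; recovery in one to "
                    "three months.")
        return ("Cut to twice replacement, reduce 25% per week until "
                "replacement, switch to a single morning hydrocortisone or "
                "cortisone dose, then decrease 2.5 mg per week, checking "
                "0800 cortisol before stopping.")

    print(taper_guidance(14))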
See also
List of corticosteroids
List of corticosteroid cyclic ketals
List of corticosteroid esters
Aminoglutethimide blocks glucocorticoid secretion
GITR (glucocorticoid-induced TNF receptor)
Glucocorticoid receptor
Immunosuppressive drug
Membrane glucocorticoid receptor
Metyrapone blocks glucocorticoid secretion
Selective glucocorticoid receptor agonist
Topical glucocorticoids
Topical steroid
Steroid atrophy
Topical steroid withdrawal
Non-steroidal anti-inflammatory drug (NSAID)
References
Further reading
External links
Chemical substances for emergency medicine
Corticosteroids
Glucocorticoids
Hepatotoxins | Glucocorticoid | Chemistry | 5,620 |
41,498,619 | https://en.wikipedia.org/wiki/Albatron%20Technology | Albatron Technology Co. Ltd. () is a Taiwan-based company, primarily known for being a major manufacturer of graphics cards and motherboards based on NVIDIA chipsets in the 2000s that were marketed under the brand Albatron.
History
The company began in 1984 as Chun Yun Electronics, manufacturing TVs. It was renamed "Albatron Technology" in 2002 and expanded its range of products. The company initially made generic "noname" technology hardware, and began branding its products after the name change to Albatron.
The company once had a small market share in consumer computer hardware aftermarket, but now only manufactures peripherals and accessories for consumers as their graphics cards and motherboards manufacturing is now aimed at industrial users only.
Location
The company's headquarters have been located in New Taipei, Taiwan, since 2002. The trading company has more than 60 distributors worldwide, and 4% of its revenues were invested in research and development.
See also
Elitegroup Computer Systems (ECS)
Gigabyte Technology
Micro-Star International (MSI)
ASRock
References
External links
Computer companies of Taiwan
Computer hardware companies
Manufacturing companies established in 1986
Graphics hardware companies
Motherboard companies
Electronics companies of Taiwan
Taiwanese brands
Manufacturing companies based in New Taipei
Taiwanese companies established in 1984 | Albatron Technology | Technology | 252 |
329,877 | https://en.wikipedia.org/wiki/Taijitu | In Chinese philosophy, a taijitu () is a symbol or diagram () representing taiji () in both its monist (wuji) and its dualist (yin and yang) aspects; in application it serves as a deductive and inductive theoretical model. Such a diagram was first introduced by the Neo-Confucian philosopher Zhou Dunyi of the Song Dynasty in his Taijitu shuo ().
The Daozang, a Taoist canon compiled during the Ming dynasty, has at least half a dozen variants of the taijitu. The two most similar are the Taiji Xiantiandao and wujitu () diagrams, both of which have been extensively studied since the Qing period for their possible connection with Zhou Dunyi's taijitu.
Ming period author Lai Zhide simplified the taijitu to a design of two interlocking spirals with two black-and-white dots superimposed on them, which became synonymous with the Yellow River Map. This version was represented in Western literature and popular culture in the late 19th century as the "Great Monad", and the depiction has been known in English as the "yin-yang symbol" since the 1960s. The contemporary Chinese term for the modern symbol is ("the two-part Taiji diagram").
Ornamental patterns with visual similarity to the "yin yang symbol" are found in archaeological artefacts of European prehistory; such designs are sometimes descriptively dubbed "yin yang symbols" in archaeological literature by modern scholars.
Structure
The taijitu consists of five parts. Strictly speaking, the "yin and yang symbol", itself popularly called taijitu, represents the second of these five parts of the diagram.
At the top, an empty circle depicts the absolute (wuji). According to Zhou, wuji is also a synonym for taiji.
A second circle represents the Taiji as harboring Dualism, yin and yang, represented by filling the circle in a black-and-white pattern. In some diagrams, there is a smaller empty circle at the center of this, representing Emptiness as the foundation of duality.
Below this second circle is a five-part diagram representing the Five Agents (Wuxing), representing a further stage in the differentiation of Unity into Multiplicity. The Five Agents are connected by lines indicating their proper sequence, Wood () → Fire () → Earth () → Metal () → Water ().
The circle below the Five Agents represents the conjunction of Heaven and Earth, which in turn gives rise to the "ten thousand things". This stage is also represented by the bagua.
The final circle represents the state of multiplicity, glossed "The ten thousand things are born by transformation" (; simplified )
History
The term taijitu in modern Chinese is commonly used to mean the simple "divided circle" form (), but it may refer to any of several schematic diagrams that contain at least one circle with an inner pattern of symmetry representing yin and yang.
Song and Yuan eras
While the concept of yin and yang dates to Chinese antiquity, the interest in "diagrams" ( tú) is an intellectual fashion of Neo-Confucianism during the Song period (11th century), and it declined again in the Ming period, by the 16th century. During the Mongol Empire and Yuan dynasty, Taoist traditions and diagrams were compiled and published in the encyclopedia Shilin Guangji by Chen Yuanjing.
The original description of a taijitu is due to Song era philosopher Zhou Dunyi (1017–1073), author of the Taijitu shuo (; "Explanation of the Diagram of the Supreme Ultimate"), which became the cornerstone of Neo-Confucianist cosmology. His brief text synthesized aspects of Chinese Buddhism and Taoism with metaphysical discussions in the Yijing.
Zhou's key terms Wuji and Taiji appear in the opening line , which Adler notes could also be translated "The Supreme Polarity that is Non-Polar".
Non-polar (wuji) and yet Supreme Polarity (taiji)! The Supreme Polarity in activity generates yang; yet at the limit of activity it is still. In stillness it generates yin; yet at the limit of stillness it is also active. Activity and stillness alternate; each is the basis of the other. In distinguishing yin and yang, the Two Modes are thereby established. The alternation and combination of yang and yin generate water, fire, wood, metal, and earth. With these five [phases of] qi harmoniously arranged, the Four Seasons proceed through them. The Five Phases are simply yin and yang; yin and yang are simply the Supreme Polarity; the Supreme Polarity is fundamentally Non-polar. [Yet] in the generation of the Five Phases, each one has its nature.
Instead of usual Taiji translations "Supreme Ultimate" or "Supreme Pole", Adler uses "Supreme Polarity" (see Robinet 1990) because Zhu Xi describes it as the alternating principle of yin and yang, and:
insists that taiji is not a thing (hence "Supreme Pole" will not do). Thus, for both Zhou and Zhu, taiji is the yin-yang principle of bipolarity, which is the most fundamental ordering principle, the cosmic "first principle." Wuji as "non-polar" follows from this.
Since the 12th century, there has been a vigorous discussion in Chinese philosophy regarding the ultimate origin of Zhou Dunyi's diagram.
Zhu Xi (12th century) insists that Zhou Dunyi had composed the diagram himself, against the prevailing view that he had received it from Daoist sources. Zhu Xi could not accept a Daoist origin of the design, because it would have undermined the claim of uniqueness attached to the Neo-Confucian concept of dao.
Ming and Qing eras
While Zhou Dunyi (1017–1073) popularized the circular diagram, the introduction of "swirling" patterns, representative of transformation, first appears in the Ming period.
Zhao Huiqian (, 1351–1395) was the first to introduce the "swirling" variant of the taijitu in his Liushu benyi (, 1370s). The diagram is combined with the eight trigrams (bagua) and called the "River Chart spontaneously generated by Heaven and Earth".
By the end of the Ming period, this diagram had become a widespread representation of Chinese cosmology.
The dots were introduced in the later Ming period (replacing the droplet-shapes used earlier, in the 16th century) and are encountered more frequently in the Qing period. The dots represent the seed of yin within yang and the seed of yang within yin; the idea that neither can exist without the other and are never absolute.
Lai Zhide's design is similar to the gakyil (dga' 'khyil or "wheel of joy") symbols of Tibetan Buddhism; but while the Tibetan designs have three or four swirls (representing the Three Jewels or the Four Noble Truths, i.e. as a triskele and a tetraskelion design), Lai Zhide's taijitu has two swirls, terminating in a central circle.
Modern yin-yang symbol
The Ming-era design of the taijitu of two interlocking spirals was a common yin-yang symbol in the first half of the 20th century. The flag of South Korea, originally introduced as the flag of Joseon era Korea in 1882, shows this symbol in red and blue. This was a modernisation of the older (early 19th century) form of the Bat Quai Do used as the Joseon royal standard.
The symbol is referred to as taijitu, simply taiji (or the Supreme Ultimate in English), hetu or "river diagram", "the yin-yang circle", or wuji, as wuji was viewed synonymously with the artistic and philosophical concept of taiji by some Taoists, including Zhou. Zhou viewed the dualistic and paradoxical relationship between the concepts of taiji and wuji, which were and are often thought to be opposite concepts, as a cosmic riddle important for the "beginning...and ending" of a life.
The names of the taijitu are highly subjective and some interpretations of the texts they appear in would only call the principle of taiji those names rather than the symbol.
Since the 1960s, the He tu symbol, which combines the two interlocking spirals with two dots, has more commonly been used as a yin-yang symbol.
In the standard form of the contemporary symbol, one draws on the diameter of a circle two non-overlapping circles, each of which has a diameter equal to the radius of the outer circle. One keeps the line that forms an "S", and one erases or obscures the other line. In 2008 the design was also described by Isabelle Robinet as a "pair of fishes nestling head to tail against each other".
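The construction just described is easy to reproduce programmatically. The following sketch draws the modern symbol with matplotlib (a third-party library); the black-on-right orientation and the dot radius, here one quarter of the small circles' radius, are assumptions, since neither is fixed by the text.

    import matplotlib.pyplot as plt
    from matplotlib.patches import Circle, Wedge

    fig, ax = plt.subplots(figsize=(4, 4))
    # Outer circle split along the vertical diameter: right half dark, left half light.
    ax.add_patch(Wedge((0, 0), 1.0, -90, 90, facecolor="black"))
    ax.add_patch(Wedge((0, 0), 1.0, 90, 270, facecolor="white"))
    # Two non-overlapping circles on the diameter, each with half the outer diameter;
    # filling each with the opposite color keeps one "S" line and hides the other.
    ax.add_patch(Circle((0, 0.5), 0.5, facecolor="black", linewidth=0))
    ax.add_patch(Circle((0, -0.5), 0.5, facecolor="white", linewidth=0))
    # The two dots: the seed of yin within yang, and of yang within yin.
    ax.add_patch(Circle((0, 0.5), 0.125, facecolor="white", linewidth=0))
    ax.add_patch(Circle((0, -0.5), 0.125, facecolor="black", linewidth=0))
    ax.add_patch(Circle((0, 0), 1.0, fill=False, edgecolor="black"))  # clean outline
    ax.set_xlim(-1.1, 1.1)
    ax.set_ylim(-1.1, 1.1)
    ax.set_aspect("equal")
    ax.axis("off")
    plt.show()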
The Soyombo symbol of Mongolia may date from before 1686.
It combines several abstract shapes, including a Taiji symbol illustrating the mutual complement of man and woman. In socialist times, it was alternatively interpreted as two fish symbolizing vigilance, because fish never close their eyes.
The modern symbol has also been widely used in martial arts, particularly tai chi, and Jeet Kune Do, since the 1970s.
In this context, it is generally used to represent the interplay between hard and soft techniques.
The dots in the modern "yin-yang symbol" have been given the additional interpretation of "intense interaction" between the complementary principles, i.e. a flux or flow to achieve harmony and balance.
Similar symbols
Similarities can be seen in the Neolithic–Eneolithic era Cucuteni–Trypillia culture on the territory of present-day Ukraine and Romania. Patterns containing ornament resembling the taijitu, found on archaeological artifacts of that culture, were displayed in the Ukraine pavilion at Expo 2010 in Shanghai, China.
The interlocking design is found in artifacts of the European Iron Age. Similar interlocking designs are found in the Americas: Xicalcoliuhqui.
While this design appears to have become a standard ornamental motif in Iron-Age Celtic culture by the 3rd century BC, found on a wide variety of artifacts, it is not clear what symbolic value was attached to it. Unlike the Chinese symbol, the Celtic yin-yangs lack the element of mutual penetration, and the two halves are not always portrayed in different colors. Comparable designs are also found in Etruscan art.
In computing
Unicode features the he tu symbol in the Miscellaneous Symbols block, at code point U+262F (☯ YIN YANG). The related "double body symbol" is included at U+0FCA (TIBETAN SYMBOL NOR BU NYIS -KHYIL ࿊), in the Tibetan block.
The Soyombo symbol, which includes a taijitu, is available in Unicode as the sequence U+11A9E 𑪞 + U+11A9F 𑪟 + U+11AA0 𑪠.
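A short sketch in Python prints these code points together with their official Unicode names (the Soyombo characters may render as boxes without a suitable font installed):

    import unicodedata

    # Print the code points mentioned above with their official names.
    for cp in (0x262F, 0x0FCA):
        print(f"U+{cp:04X} {chr(cp)} {unicodedata.name(chr(cp), 'UNNAMED')}")

    # The Soyombo symbol containing a taijitu is a three-character sequence.
    soyombo = "".join(chr(cp) for cp in (0x11A9E, 0x11A9F, 0x11AA0))
    print(soyombo)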
See also
Gankyil
Koru
Lauburu
Taegeuk
Three hares
Tomoe
Triskelion
References
Sources
External links
Where does the Chinese Yin Yang Symbol Come From? – chinesefortunecalendar.com
Chart of the Great Ultimate (Taiji tu) – goldenelixir.com)
Iconography
Ornaments
Rotational symmetry
Symbols
Taoist cosmology
Visual motifs
eo:Jino kaj Jango#Tajĝifiguro | Taijitu | Physics,Mathematics | 2,346 |
3,036,289 | https://en.wikipedia.org/wiki/Weil%27s%20conjecture%20on%20Tamagawa%20numbers | In mathematics, the Weil conjecture on Tamagawa numbers is the statement that the Tamagawa number of a simply connected simple algebraic group defined over a number field is 1. In this case, simply connected means "not having a proper algebraic covering" in the algebraic group theory sense, which is not always the topologists' meaning.
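In compact notation, and under the standard normalization of the Tamagawa measure on the adelic points (a convention assumed here rather than spelled out in the article), the conjecture states:

    \tau(G) \;=\; \operatorname{vol}\bigl( G(\mathbb{A}_k) \,/\, G(k) \bigr) \;=\; 1,

where G is a simply connected simple algebraic group over a number field k and \mathbb{A}_k is its ring of adeles.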
History
André Weil calculated the Tamagawa number in many cases of classical groups and observed that it is an integer in all considered cases and that it was equal to 1 in the cases when the group is simply connected. The first observation does not hold for all groups: Takashi Ono found examples where the Tamagawa numbers are not integers. The second observation, that the Tamagawa numbers of simply connected semisimple groups seem to be 1, became known as the Weil conjecture.
Robert Langlands (1966) introduced harmonic analysis methods to prove it for Chevalley groups. K. F. Lai (1980) extended the class of known cases to quasisplit reductive groups. Robert Kottwitz (1988) proved it for all groups satisfying the Hasse principle, which at the time was known for all groups without E8 factors. V. I. Chernousov (1989) removed this restriction by proving the Hasse principle for the resistant E8 case (see strong approximation in algebraic groups), thus completing the proof of Weil's conjecture. In 2011, Jacob Lurie and Dennis Gaitsgory announced a proof of the conjecture for algebraic groups over function fields over finite fields; the first part of the proof has since been formally published, and a proof using a version of the Grothendieck–Lefschetz trace formula is to appear in a second volume.
Applications
Ono used the Weil conjecture to calculate the Tamagawa numbers of all semisimple algebraic groups.
For spin groups, the conjecture implies the known Smith–Minkowski–Siegel mass formula.
See also
Tamagawa number
References
Further reading
Aravind Asok, Brent Doran and Frances Kirwan, "Yang-Mills theory and Tamagawa Numbers: the fascination of unexpected links in mathematics", February 22, 2013
J. Lurie, The Siegel Mass Formula, Tamagawa Numbers, and Nonabelian Poincaré Duality posted June 8, 2012.
Conjectures
Theorems in group theory
Algebraic groups
Diophantine geometry | Weil's conjecture on Tamagawa numbers | Mathematics | 460 |
14,822 | https://en.wikipedia.org/wiki/Irreducible%20fraction | An irreducible fraction (or fraction in lowest terms, simplest form or reduced fraction) is a fraction in which the numerator and denominator are integers that have no other common divisors than 1 (and −1, when negative numbers are considered). In other words, a fraction a/b is irreducible if and only if a and b are coprime, that is, if a and b have a greatest common divisor of 1. In higher mathematics, "irreducible fraction" may also refer to rational fractions such that the numerator and the denominator are coprime polynomials. Every rational number can be represented as an irreducible fraction with positive denominator in exactly one way.
An equivalent definition is sometimes useful: if a and b are integers, then the fraction a/b is irreducible if and only if there is no other equal fraction c/d such that |c| < |a| or |d| < |b|, where |a| means the absolute value of a. (Two fractions a/b and c/d are equal or equivalent if and only if ad = bc.)
For example, 1/4, 5/6, and −101/100 are all irreducible fractions. On the other hand, 2/4 is reducible since it is equal in value to 1/2, and the numerator of 1/2 is less than the numerator of 2/4.
A fraction that is reducible can be reduced by dividing both the numerator and denominator by a common factor. It can be fully reduced to lowest terms if both are divided by their greatest common divisor. In order to find the greatest common divisor, the Euclidean algorithm or prime factorization can be used. The Euclidean algorithm is commonly preferred because it allows one to reduce fractions with numerators and denominators too large to be easily factored.
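A minimal sketch of this procedure in Python, using the standard library's GCD (which implements the Euclidean algorithm); the sign normalization matches the convention of a positive denominator used elsewhere in this article:

    from math import gcd

    def reduce_fraction(numerator: int, denominator: int) -> tuple[int, int]:
        """Return the fraction numerator/denominator in lowest terms,
        normalized to a positive denominator."""
        if denominator == 0:
            raise ZeroDivisionError("denominator must be nonzero")
        g = gcd(numerator, denominator)   # math.gcd uses the Euclidean algorithm
        numerator, denominator = numerator // g, denominator // g
        if denominator < 0:               # move the sign to the numerator
            numerator, denominator = -numerator, -denominator
        return numerator, denominator

    print(reduce_fraction(120, 90))   # (4, 3)
    print(reduce_fraction(5, -10))    # (-1, 2)

The standard library's fractions.Fraction class performs the same normalization automatically on construction.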
Examples
120/90 = 12/9 = 4/3. In the first step both numbers were divided by 10, which is a factor common to both 120 and 90. In the second step, they were divided by 3. The final result, 4/3, is an irreducible fraction because 4 and 3 have no common factors other than 1.
The original fraction could have also been reduced in a single step by using the greatest common divisor of 90 and 120, which is 30. As 120 ÷ 30 = 4 and 90 ÷ 30 = 3, one gets 120/90 = 4/3.
Which method is faster "by hand" depends on the fraction and the ease with which common factors are spotted. In case a denominator and numerator remain that are too large to ensure they are coprime by inspection, a greatest common divisor computation is needed anyway to ensure the fraction is actually irreducible.
Uniqueness
Every rational number has a unique representation as an irreducible fraction with a positive denominator (however 1/(−2) = (−1)/2 although both are irreducible). Uniqueness is a consequence of the unique prime factorization of integers, since a/b = c/d implies ad = bc, and so both sides of the latter must share the same prime factorization, yet a and b share no prime factors so the set of prime factors of a (with multiplicity) is a subset of those of c and vice versa, meaning a = c and by the same argument b = d.
Applications
The fact that any rational number has a unique representation as an irreducible fraction is utilized in various proofs of the irrationality of the square root of 2 and of other irrational numbers. For example, one proof notes that if √2 could be represented as a ratio of integers, then it would have in particular the fully reduced representation a/b where a and b are the smallest possible; but given that a/b equals √2, so does (2b − a)/(a − b) (since cross-multiplying this with a/b shows that they are equal: a(a − b) = b(2b − a) reduces to a² = 2b², which holds because a/b = √2). Since a > b (because √2 is greater than 1), the latter is a ratio of two smaller integers. This is a contradiction, so the premise that the square root of two has a representation as the ratio of two integers is false.
Generalization
The notion of irreducible fraction generalizes to the field of fractions of any unique factorization domain: any element of such a field can be written as a fraction in which denominator and numerator are coprime, by dividing both by their greatest common divisor. This applies notably to rational expressions over a field. The irreducible fraction for a given element is unique up to multiplication of denominator and numerator by the same invertible element. In the case of the rational numbers this means that any number has two irreducible fractions, related by a change of sign of both numerator and denominator; this ambiguity can be removed by requiring the denominator to be positive. In the case of rational functions the denominator could similarly be required to be a monic polynomial.
See also
Anomalous cancellation, an erroneous arithmetic procedure that produces the correct irreducible fraction by cancelling digits of the original unreduced form.
Diophantine approximation, the approximation of real numbers by rational numbers.
References
External links
Fractions (mathematics)
Elementary arithmetic | Irreducible fraction | Mathematics | 1,004 |
56,127,644 | https://en.wikipedia.org/wiki/Starlight%20%28interstellar%20probe%29 | Project Starlight is a research project of the University of California, Santa Barbara to develop a fleet of laser beam-propelled interstellar probes and send them to a star neighboring the Solar System, potentially Alpha Centauri. The project aims to send organisms on board the probe.
Overview
Starlight aims to accelerate the spacecraft with powerful lasers, a method the project refers to as DEEP-IN (Directed Energy Propulsion for Interstellar Exploration), thus allowing them to reach stars near the Solar System in a matter of years, in contrast to traditional propulsion methods, which would require thousands of years. Each spacecraft would be the size of a DVD disc and would be powered by plutonium. They would fly at one-fifth of the speed of light, and in the case of Alpha Centauri, they would arrive after traveling for more than twenty years from Earth.
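The quoted travel time follows from simple arithmetic, sketched below; the distance to Alpha Centauri is an assumed round value and the acceleration phase is ignored.

    # Cruise-time estimate at one-fifth of the speed of light.
    DISTANCE_LY = 4.37          # assumed distance to Alpha Centauri in light-years
    SPEED_FRACTION_OF_C = 0.20  # one-fifth of the speed of light

    travel_time_years = DISTANCE_LY / SPEED_FRACTION_OF_C
    print(f"~{travel_time_years:.0f} years")  # ~22 years, i.e. "more than twenty years"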
History
Starlight is a program of the Experimental Cosmology Group of University of California, Santa Barbara (UCSB), and has received funding from NASA. In 2015, the NASA Innovative Advanced Concepts (NIAC) selected DEEP-IN as a phase-1 project.
Terrestrial biomes in space
One goal of Starlight is to send terrestrial organisms along with the spacecraft, and observe how the interstellar environment and extreme acceleration affect them. This effort is known as Terrestrial Biomes in Space, and the lead candidate is Caenorhabditis elegans, a minuscule nematode. The organisms will spend most of the voyage in a frozen state, and once the spacecraft approaches its target they will be thawed by heat from the onboard plutonium. Following their revival, the organisms will be monitored by various sensors, and the data they produce will be sent back to Earth. C. elegans has been used extensively in biological research as a model organism, as the worm is among the animals with the fewest cells that still possess a nervous system. A backup option to C. elegans is the tardigrade, a micro-animal known for its resilience to various conditions lethal to other animals, such as the vacuum environment of space and strong doses of ionizing radiation.
Planetary protection
NASA's funding does not cover the Terrestrial Biome in Space portion of Starlight, as the experiment may potentially contaminate exoplanets.
See also
Breakthrough Starshot, a similar initiative to Starlight
Interstellar probe
Interstellar travel
, proposed in 2016 for NASA
References
Interstellar travel
Proposed space probes
Alpha Centauri
University of California, Santa Barbara | Starlight (interstellar probe) | Astronomy | 509 |
12,744,480 | https://en.wikipedia.org/wiki/Kamil%20Hornoch | Kamil Hornoch (; born 5 December 1972) is a Czech astronomer who discovered dozens of novae in nearby galaxies. The main belt asteroid 14124 Kamil is named in his honour.
Astronomy
Kamil Hornoch became interested in astronomy in 1984. One year later he started with his scientific observations of comets, meteors, the solar photosphere, variable stars, the planets and occultations. He lives in Lelekovice, a village in the South Moravian Region, the Czech Republic.
Hornoch makes most of his observations from a small observatory in Lelekovice with a reflector telescope. Although he uses a CCD camera, he is also skilled at visual estimates of star luminosity, reaching an accuracy of 0.03 magnitude (confirmed by separate photoelectric measurements). He has also been acknowledged for his numerous estimates and measurements of the brightness and positions of comets and variable stars, as well as position measurements of minor planets.
In 1993 he co-discovered (with Jan Kyselý) the new variable star ES UMa.
He is a member of the Czech Astronomical Society. Since 2007 he has worked as a professional astronomer at the Ondřejov Observatory.
Novae
In the summer of 2002, Hornoch took a series of pictures of the surroundings of the core of the Andromeda Galaxy and discovered his first extragalactic nova. Since that time, galaxy M31 has become his favourite place to observe; he has made 43 nova discoveries and co-discoveries in that region (as of May 2007).
He also discovered two novae in the galaxy M81 in pictures taken by Czech astronomer Pavel Cagaš with his reflector telescope on 8 and 11 April 2007. No nova discovery had been made with such a small telescope at such a distance before. One of the novae, M81 2007 3, grew very luminous, reaching the quite extraordinary absolute magnitude of −10 and an apparent magnitude of 17.6 six days later. It was the brightest nova ever discovered in M81.
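These magnitudes are consistent with M81's distance, as a quick distance-modulus check shows; the formula below ignores interstellar extinction.

    # Distance-modulus check: m - M = 5*log10(d_pc) - 5, extinction ignored.
    m_apparent, M_absolute = 17.6, -10.0   # values quoted above for M81 2007 3
    distance_pc = 10 ** ((m_apparent - M_absolute + 5) / 5)
    print(f"~{distance_pc / 1e6:.1f} Mpc")  # ~3.3 Mpc, close to M81's ~3.6 Mpc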
Awards
In 1996 Kamil Hornoch became the first person to receive the newly established Zdeněk Kvíz Award of the Czech Astronomical Society for his activities in the field of interplanetary matter.
In 2001 Lenka Šarounová named the main belt asteroid 14124 Kamil in his honour.
In 2003 he received the Jindřich Šilhán Award of the Variable Star Observer of the Year () of the Czech Astronomical Society.
In 2006 Kamil Hornoch won the Amateur Achievement Award of the Astronomical Society of the Pacific.
See also
References
1972 births
20th-century astronomers
21st-century astronomers
Amateur astronomers
Czech astronomers
Discoverers of minor planets
Living people | Kamil Hornoch | Astronomy | 538 |
2,069,663 | https://en.wikipedia.org/wiki/SUNMOS | SUNMOS (Sandia/UNM Operating System) is an operating system jointly developed by Sandia National Laboratories and the Computer Science Department at the University of New Mexico. The goal of the project, started in 1991, is to develop a highly portable, yet efficient, operating system for massively parallel-distributed memory systems.
SUNMOS uses a single-tasking kernel and does not provide demand paging. It takes control of all nodes in the distributed system. Once an application is loaded and running, it can manage all the available memory on a node and use the full resources provided by the hardware. Applications are started and controlled from a process called yod that runs on the host node. Yod runs on a Sun frontend for the nCUBE 2, and on a service node on the Intel Paragon.
SUNMOS was developed as a reaction to the heavyweight version of OSF/1 that ran as a single-system image on the Paragon and consumed 8–12 MB of the 16 MB available on each node, leaving little memory available for compute applications. In comparison, SUNMOS used 250 KB of memory per node. Additionally, the overhead of OSF/1 limited the network bandwidth to 35 MB/s, while SUNMOS was able to use 170 MB/s of the peak 200 MB/s available.
The ideas in SUNMOS inspired PUMA, a multitasking variant that only ran on the i860 Paragon. Among the extensions in PUMA was the Portals API, a scalable, high performance message passing API. Intel ported PUMA and Portals to the Pentium Pro based ASCI Red system and named it Cougar. Cray ported Cougar to the Opteron based Cray XT3 and renamed it Catamount. A version of Catamount was released to the public named OpenCatamount.
In 2009, the Catamount lightweight kernel was selected for an R&D 100 Award.
See also
Compute Node Linux
CNK operating system
References
External links
SUNMOS FTP site
A humorous field guide to differences between SUNMOS and OSF
OpenCatamount.
Supercomputer operating systems
Sandia National Laboratories | SUNMOS | Technology | 444 |
6,563,205 | https://en.wikipedia.org/wiki/Alexander%20A.%20Gurshtein | Alexander Aronovich Gurshtein (, Aleksandr Aronovich Gurshteyn; February 21, 1937 – April 3, 2020) was a Soviet/Russian astronomer and historian of science.
Early life
Gurshtein was born to a Jewish family in Moscow (in the former USSR) on February 21, 1937. His father, Aron Sheftelevich Gurshtein, was a literary critic and writer, while his mother, Yelena Vasilievna Resnikova, was a journalist and editor at a Moscow radio station.
In 1941, when Gurshtein was four years old, his father was killed in the Great Patriotic War during the Battle of Moscow. Following his father's death, he was raised by his mother and grandmother.
When Alexander was thirteen, he began visiting the Moscow Planetarium, which sparked his pursuit of astronomy.
Education and career
Gurshtein attended Moscow State Institute of Geodesy and Cartography, and graduated with a degree in astrometry in 1959. Following his graduation, he worked at the Russian Academy of Sciences and in the Soviet Space program during the Space Race of the Cold War.
Gurshtein earned his Candidate of Science from Sternberg State Astronomical Institute, Moscow in 1966 and a Doctor of Science degree in Physics & Mathematics from Pulkovo Astronomical Observatory in Saint Petersburg in 1980.
Gurshtein was active as an astronomer in the space program and held a number of offices in professional organizations, including Head of Council for Astronomical Education and Vice Director of the Institute for History of Science & Technology, both for the Russian Ministry of Education. As a historian of science, he served as editor-in-chief of the Annual on History of Science, published by the Russian Academy of Sciences, and Deputy Editor-in-Chief for the academic monthly, Nature, published by the Russian Academy of Sciences. He was also the author of several books and articles on planetology, holder of five patents, and contributor to many international forums.
In 1995, he took a leave of absence from the Russian Academy and accepted a position as Visiting Professor of Astronomy & History of Science at Mesa State College in Grand Junction, Colorado. In later years he developed a concept of the history of constellations and the zodiac, which was published in American Scientist, Sky & Telescope, and other professional journals.
References
External links
"Did the pre-Indo-Europeans influence the formation of the Western Zodiac?" (abstract)
"Gurshtein's gradualist concept of constellation origins and zodiacal development"
Гурштейн А.А. Минувшие цивилизации в зеркале Зодиака // Природа. 1991. No. 10. С. 57–71.
Гурштейн А.А. Реконструкция происхождения зодиакальных созвездий // На рубежах познания Вселенной (Историко-астрономическиеисследования). Вып. 23. М., 1992. С. 19–63;
Gurshtein A.A. //On the Origin of the Zodiacal Constellations // Vistas in Astronomy. 1993. V. 36. P. 171–190.
Gurshtein A.A. The Zodiac and the Roots of European Civilization. Стара Загора, сентябрь 1993 г.5. Доклад на Международной конференции по археоастрономии Оксфорд-4:
Гурштейн А.А. Археоастрономическое досье: когда родился Зодиак?, ж. "Земля и Вселенная" No. 5,2011. с.48-61. Gurshtein, A.A., Archaeoastronomical Dossier: When Zodiac was born? Zemlya I Vselennaya No. 5, 2011. s.48-61
Гурштейн, А. А. Московский астроном на заре космического века : автобиогр. заметки / А. А. Гурштейн. - М.: НЦССХ им. А. Н. Бакулева РАМН, 2012. - 675 с.
H.J.Smith, A.A.Gurshtein, W.Mendell, International Manned Lunar Base: Beginning the 21st century in Space, Science & Global Security, vol. 2, pp. 209–233, 1991
Gurshtein, A.A., Did the Pre-Indo-Europeans Influence the Formation of the Western Zodiac? Journal of Indo-European Studies. Volume 33, Number 1 & 2, Spring/Summer 2005.
Russian astronomers
Historians of astronomy
1937 births
2020 deaths
Soviet astronomers
Russian Jews | Alexander A. Gurshtein | Astronomy | 1,204 |
53,043,810 | https://en.wikipedia.org/wiki/Silicon%20Mountain%20%28Denver%29 | Silicon Mountain, also known as the "Silicon Flatirons" is a nickname given to the tech hub in the Denver, Colorado metropolitan area and Colorado Springs, Colorado metropolitan area. The name is analogous to Silicon Valley, but refers to the Rocky Mountains beyond the skyline. Denver startups raised $401 million in 2015, and Boulder startups raised $183 million in 2015.
Startups
SolidFire
Zayo Group
Dot Hill Systems
AlchemyAPI
Venture capital
Incubators
The Founder Institute
Techstars
Innovation Pavilion
Fortune 500 Companies
Ball Corporation
CH2M
DaVita
Dish Network
Envision Healthcare
Level 3 Communications
Newmont
Qurate Retail Group
Western Union
See also
Denver Tech Center
Northern Colorado Economic Development Corporation
List of companies with Denver area operations
List of places with "Silicon" names
References
High-technology business districts in the United States
Information technology places | Silicon Mountain (Denver) | Technology | 169 |
41,411,624 | https://en.wikipedia.org/wiki/Men%2C%20Women%20%26%20Children%20%28film%29 | Men, Women & Children is a 2014 American comedy-drama film directed by Jason Reitman and co-written with Erin Cressida Wilson, based on a novel of the same name written by Chad Kultgen that deals with online addiction. The film stars Rosemarie DeWitt, Jennifer Garner, Judy Greer, Dean Norris, Adam Sandler, Ansel Elgort, Kaitlyn Dever, and Timothée Chalamet in his film debut.
The film premiered at the 2014 Toronto International Film Festival on September 6, 2014. It was released on October 3, 2014, by Paramount Pictures.
Plot
Set in a small town in Texas, this film follows several teens and their parents as they struggle in today's technology-obsessed world. Their communication, self-images, and relationships are all affected by the technological age, compounding the usual social difficulties people already encounter.
Donald and Helen Truby are a sexually dissatisfied married couple. Helen starts having affairs through the social media website Ashley Madison, while Donald regularly sees escorts through another site. Donald accidentally catches sight of his wife's Ashley Madison account, then shows up where she's meeting her latest affair. The next day, both admit to having lapses in judgement and agree to ignore that the affairs ever happened.
The Trubys' teenage son Chris, a football player, is addicted to porn and finds himself only able to become aroused by material not deemed socially normal. Hoping to achieve arousal through "traditional means", Chris tries to seduce classmate and cheerleader Hannah. However, as they start to initiate sex, he fails to become aroused. Hannah breaks up with him, subsequently telling everyone that they did in fact have sex.
Hannah longs to be famous, and her mother, Donna, aids her in this goal by running a website based around Hannah. One day at the mall, the two come across auditions for a television series; however, Hannah is disqualified due to the provocative photographs her mother had taken of her and posted to the site. Later on, Donna takes the site down, realizing how damaging it is to Hannah.
Football player Tim quits sports following his parents' divorce, preferring to spend most of his time playing an MMORPG. He is later pulled out of his depression when he begins dating the introverted Brandy Beltmeyer, whose overprotective mother Patricia obsessively monitors Brandy's online activity; Brandy has taken to expressing herself on a secret Tumblr account in retaliation. When the account and her conversations with Tim are discovered, Patricia completely revokes her daughter's internet privileges. Tim's father Kent confronts him, stating that Tim's mother abandoned them both, then deletes the game and demands that Tim continue football next year. Patricia then poses as Brandy and tells Tim that she is uninterested. Dejected, Tim overdoses on his antidepressants and nearly dies. Patricia realizes her protectiveness of her daughter has gone too far, and deactivates the surveillance devices she used to monitor Brandy.
Donna goes to content awareness meetings run by Patricia to learn about what is legally allowed on her daughter's website. There, she meets Kent and starts a relationship with him. After Donna informs him about the website, he wants to end their relationship. However, after reconciling with Tim and realizing how difficult it is to be a single parent, Kent reconnects with her.
Hannah's co-cheerleader Allison Doss has been starving herself for months over the summer, with the support of an online group. Her crush of several years, football player Brandon Lender, finally notices her. She shares her first kiss with him, and later has sex with him upon his insistence, which he treats casually and with disinterest. Allison develops an ectopic pregnancy, culminating in a miscarriage due to malnutrition. When she tells Brandon the news, his only concern is that others will discover they had sex. Realizing how selfish Brandon is, Allison throws a rock through his window in the middle of the night.
The movie ends with the narrator's message that humans should remember to be kind to one another and cherish the earth.
Cast
Emma Thompson (voice) as the narrator
Rosemarie DeWitt as Helen Truby
Jennifer Garner as Patricia Beltmeyer
Judy Greer as Donna Clint
Dean Norris as Kent Mooney
Adam Sandler as Don Truby
Ansel Elgort as Tim Mooney
Kaitlyn Dever as Brandy Beltmeyer
J. K. Simmons as Mr. Doss
David Denman as Jim Vance
Jason Douglas as Ray Beltmeyer
Shane Lynch as Angelique
Dennis Haysbert as Secretluvur
Phil LaMarr as Shrink
Olivia Crocicchia as Hannah Clint
Elena Kampouris as Allison Doss
Travis Tope as Chris Truby
Tina Parker as Mrs. Doss
Will Peltz as Brandon Lender
Kurt Krakowian as Teacher
Timothée Chalamet as Danny Vance
Katherine Hughes as Brooke Benton
Intern AJ as Football Player (uncredited)
Production
By September 4, 2013, Jason Reitman had cast Adam Sandler, Rosemarie DeWitt and Jennifer Garner in the lead roles. By December 16, Emma Thompson, Judy Greer and Dean Norris were cast.
The young cast includes Ansel Elgort, Kaitlyn Dever, Elena Kampouris, Travis Tope, Katherine Hughes, Olivia Crocicchia, and Timothée Chalamet. Other stars are David Denman, Jason Douglas, Dennis Haysbert, Shane Lynch, and J. K. Simmons. Will Peltz also joined the cast, on December 17.
Principal photography began on December 16, 2013, in and around Austin, Texas.
Reception
Box office
Men, Women & Children premiered at the 2014 Toronto International Film Festival on September 6, 2014. The film opened in limited release on October 3, 2014 in 17 theaters and grossed $48,024 with an average of $2,825 per theater, ranking #48 at the box office. In its wide release on October 17 in 608 theaters, the film grossed $306,367 with an average of $504 per theater, ranking #23, making it the fifth lowest opening in a release of 600 theaters or more. The film ultimately earned $705,908 in the United States and $1,534,627 internationally for a total of $2,240,535 worldwide, well below its $16 million production budget.
Critical response
The film received a "rotten" score of 33% on Rotten Tomatoes based on 139 reviews with an average rating of 4.90/10. The critical consensus states: "Men, Women & Children is timely, but director Jason Reitman's overbearing approach to its themes blunts the movie's impact." The film also has a score of 38 out of 100 on Metacritic based on 36 critics, indicating "generally unfavorable reviews". Film critic Robbie Collin felt Men, Women & Children "played like a spoof" with others agreeing the film was "mawkish and clichéd".
References
External links
Men Women and Children at I Love Film
2014 films
2014 comedy-drama films
American comedy-drama films
Films about computing
Films about the Internet
Films about social media
Films about sexuality
2010s English-language films
Films directed by Jason Reitman
Films based on American novels
Films set in 2014
Films shot in Austin, Texas
Paramount Pictures films
Films produced by Mason Novick
Films produced by Jason Reitman
Films with screenplays by Jason Reitman
2010s American films
English-language comedy-drama films | Men, Women & Children (film) | Technology | 1,542 |
2,026,889 | https://en.wikipedia.org/wiki/Garbage%20disposal%20unit | A garbage disposal unit (also known as a waste disposal unit, food waste disposer (FWD), in-sink macerator, garbage disposer, or garburator) is a device, usually electrically powered, installed under a kitchen sink between the sink's drain and the trap. The device shreds food waste into pieces small enough—generally less than 2 mm (0.079 in) in diameter—to pass through plumbing.
History
The garbage disposal unit was invented in 1927 by John W. Hammes, an architect working in Racine, Wisconsin. He applied for a patent in 1933 that was issued in 1935. His InSinkErator company put his disposer on the market in 1940.
Hammes' claim is disputed, as General Electric introduced a garbage disposal unit in 1935, known as the Disposall.
In many cities in the United States in the 1930s and the 1940s, the municipal sewage system had regulations prohibiting placing food waste (garbage) into the system. InSinkErator spent considerable effort, and was highly successful in convincing many localities to rescind these prohibitions.
Many localities in the United States prohibited the use of disposers. For many years, garbage disposers were illegal in New York City because of a perceived threat of damage to the city's sewer system. After a 21-month study with the NYC Department of Environmental Protection, the ban was rescinded in 1997 by local law 1997/071, which amended section 24-518.1, NYC Administrative Code.
In 2008, the city of Raleigh, North Carolina attempted a ban on the replacement and installation of garbage disposers, which also extended to outlying towns sharing the city's municipal sewage system, but rescinded the ban one month later.
Adoption and bans
In the United States, 50% of homes had disposal units as of 2009, compared with only 6% in the United Kingdom and 3% in Canada.
In Britain, Worcestershire County Council and Herefordshire Council started to subsidize the purchase of garbage disposal units in 2005, in order to reduce the amount of waste going to landfill and the carbon footprint of garbage runs. However, the use of macerators was banned for non-household premises in Scotland in 2016 in non-rural areas where food waste collection is available, and banned in Northern Ireland in 2017. They are expected to be banned for businesses in England and Wales in 2023. The intention is to reduce water use.
Many other countries in Europe have banned or intend to ban macerators. The intention is to realise the resource value of food waste, and reduce sewer blockages.
Rationale
Food scraps range from 10% to 20% of household waste, and are a problematic component of municipal waste, creating public health, sanitation and environmental problems at each step, beginning with internal storage and followed by truck-based collection. Burned in waste-to-energy facilities, the high water-content of food scraps means that their heating and burning consumes more energy than it generates; buried in landfills, food scraps decompose and generate methane gas, a greenhouse gas that contributes to climate change.
The premise behind the proper use of a disposer is to effectively regard food scraps as liquid (averaging 70% water, like human waste), and use existing infrastructure (underground sewers and wastewater treatment plants) for its management. Modern wastewater plants are effective at processing organic solids into fertilizer products (known as biosolids), with advanced anaerobic digestion facilities also capturing methane (biogas) for energy production.
Operation
A high-torque, insulated electric motor spins a circular turntable mounted horizontally above it. Induction motors rotate at 1,400–2,800 rpm and have a range of starting torques, depending on the method of starting used. The added weight and size of induction motors may be of concern, depending on the available installation space and construction of the sink bowl. Universal motors, also known as series-wound motors, rotate at higher speeds, have high starting torque, and are usually lighter, but are noisier than induction motors, partially due to the higher speeds and partially because the commutator brushes rub on the slotted commutator.
Inside the grinding chamber there is a rotating metal turntable onto which the food waste drops. Two swiveling and two fixed metal impellers mounted on top of the plate near the edge then fling the food waste against the grind ring repeatedly. Sharp cutting edges in the grind ring break down the waste until it is small enough to pass through openings in the ring. Sometimes the waste goes through a third stage where an undercutter disc further chops it, whereupon it is flushed down the drain.
Usually, there is a partial rubber closure, known as a splashguard, on the top of the disposal unit to prevent food waste from flying back up out of the grinding chamber. It may also be used to attenuate noise from the grinding chamber for quieter operation.
There are two main types of garbage disposers—continuous feed and batch feed. Continuous feed models are used by feeding in waste after being started and are more common. Batch feed units are used by placing waste inside the unit before being started. These types of units are started by placing a specially designed cover over the opening. Some covers manipulate a mechanical switch while others allow magnets in the cover to align with magnets in the unit. Small slits in the cover allow water to flow through. Batch feed models are considered safer, since the top of the disposal is covered during operation, preventing foreign objects from falling in.
Waste disposal units may jam, but can usually be cleared either by forcing the turntable round from above or by turning the motor using a hex-key wrench inserted into the motor shaft from below. Especially hard objects accidentally or deliberately introduced, such as metal cutlery, can damage the waste disposal unit and become damaged themselves, although recent advances, such as swivel impellers, have been made to minimize such damage.
Some higher-end units have an automatic reversing jam clearing feature. By using a slightly more complicated centrifugal starting switch, the split-phase motor rotates in the opposite direction from the previous run each time it is started. This can clear minor jams, but is claimed to be unnecessary by some manufacturers: Since the early 1960s, many disposal units have utilized swivel impellers which make reversing unnecessary.
Some other kinds of garbage disposal units are powered by water pressure, rather than electricity. Instead of the turntable and grind ring described above, this alternative design has a water-powered unit with an oscillating piston with blades attached to chop the waste into fine pieces. Because of this cutting action, they can handle fibrous waste. Water-powered units take longer than electric ones for a given amount of waste and need fairly high water pressure to function properly.
Environmental impact
Kitchen waste disposal units increase the load of organic matter that reaches the water treatment plant, which in turn increases the consumption of oxygen. Metcalf and Eddy quantified this impact as an additional load of biochemical oxygen demand per person per day where disposers are used. An Australian study that compared in-sink food processing to composting alternatives via a life-cycle assessment found that while the in-sink disposer performed well with respect to climate change, acidification, and energy usage, it did contribute to eutrophication and toxicity potentials.
This may result in higher costs for energy needed to supply oxygen in secondary operations. However, if the waste water treatment is finely controlled, the organic carbon in the food may help to keep the bacterial decomposition running, as carbon may be deficient in that process. This increased carbon serves as an inexpensive and continuous source of carbon necessary for biologic nutrient removal.
One result is a larger amount of solid residue from the wastewater treatment process. According to a study at the East Bay Municipal Utility District's wastewater treatment plant funded by the EPA, food waste produces three times as much biogas as municipal sewage sludge. The value of the biogas produced from anaerobic digestion of food waste appears to exceed the cost of processing the food waste and disposing of the residual biosolids (based on a proposal at Los Angeles International Airport (LAX) to divert 8,000 tons per year of bulk food waste).
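A rough illustration of this trade-off follows. Only the 8,000 tons/year diversion figure and the three-to-one yield ratio come from the text above; the per-ton sludge yield and the biogas price are assumptions invented for the example.

```python
# Hypothetical illustration of the biogas trade-off described above.
# Only FOOD_WASTE_TONS_PER_YEAR and the 3x ratio come from the text;
# the sludge yield and biogas value are assumed for illustration.

FOOD_WASTE_TONS_PER_YEAR = 8_000      # from the LAX proposal cited above
SLUDGE_BIOGAS_M3_PER_TON = 100        # assumed yield for sewage sludge
FOOD_WASTE_BIOGAS_M3_PER_TON = 3 * SLUDGE_BIOGAS_M3_PER_TON  # 3x ratio
BIOGAS_VALUE_PER_M3 = 0.15            # assumed value in dollars

annual_biogas_m3 = FOOD_WASTE_TONS_PER_YEAR * FOOD_WASTE_BIOGAS_M3_PER_TON
annual_value = annual_biogas_m3 * BIOGAS_VALUE_PER_M3
print(f"Biogas: {annual_biogas_m3:,.0f} m^3/year, value ~${annual_value:,.0f}")
# With these assumptions: 8,000 x 300 = 2,400,000 m^3/year, ~$360,000/year
```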
In a study at the Hyperion sewage treatment plant in Los Angeles, disposer use showed minimal to no impact on the total biosolids byproduct of sewage treatment, and similarly minimal impact on handling processes, because the high volatile solids destruction (VSD) of food waste leaves a minimal amount of solids in the residue.
Power usage is typically 500–1,500 W, comparable to an electric iron, but only for a very short time, totaling approximately 3–4 kWh of electricity per household per year. Daily water usage varies, but the additional water used is typically comparable to an extra toilet flush per person per day. One survey of these units found a slight increase in household water use.
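The power and energy figures above imply only seconds of operation per day, which the following arithmetic check confirms. The midpoint values are chosen for illustration from the ranges stated in the paragraph.

```python
# Sanity check of the figures above: a 500-1,500 W motor totaling
# roughly 3-4 kWh per household per year implies seconds of use per day.

POWER_KW = 1.0             # midpoint of the 500-1,500 W range
ANNUAL_ENERGY_KWH = 3.5    # midpoint of the 3-4 kWh/year figure

hours_per_year = ANNUAL_ENERGY_KWH / POWER_KW
seconds_per_day = hours_per_year * 3600 / 365
print(f"Implied run time: ~{seconds_per_day:.0f} seconds/day")
# ~3.5 h/year is about 35 seconds of operation per day, consistent
# with "only for a very short time".
```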
References
20th-century inventions
American inventions
Food waste
Home appliances
Products introduced in 1935
Waste treatment technology | Garbage disposal unit | Physics,Chemistry,Technology,Engineering | 1,862 |
25,120,907 | https://en.wikipedia.org/wiki/Birch-bark%20roof | A birch-bark roof (in Finnish: malkakatto or tuohikatto) is a roof construction traditional in Finland and Norway for farmhouses and farm buildings built from logs. The birch-bark roof was the prevailing roof type in rural Finland up until the 1860s, when it was replaced by the use of other materials such as metal sheeting and later roofing felt. The tradition of birch-bark roofs has been revived in recent years as a craft in connection with the restoration of old farm buildings that have been converted into open-air museums.
Construction
The birch bark itself does not form the top layer of the roof. Once the main log frame of the building is constructed, the main horizontal roof poles are laid down; after these come thin timber slats placed at right angles to the base roof poles. On top of these come the layers of birch bark, each row overlapping the one below. The number of layers could vary from two to six depending on the building. On top of the birch-bark layers were then placed long, heavy wooden poles, usually de-barked young trees.
The poles on either side of the pitched roof were interlocked at the roof ridge and held in place at the eaves by an eaves board. The poles nearest the end gables were additionally fixed with tree-root bindings, and rocks were often placed on the roof to add weight. Alternatively, though less commonly, turf could be laid above the birch bark.
Benefits
The main reason for using birch bark was that, when laid in several layers, it acts as an efficient water- and damp-proofing course. The bark was normally stripped from the trees with a knife during the summer.
See also
Sod roof
Board roof
References
External links
National Board of Antiquities
Roofs
Woodworking
Vernacular architecture
Roofing materials | Birch-bark roof | Technology,Engineering | 379 |
29,199,909 | https://en.wikipedia.org/wiki/Stf0%20sulfotransferase | Stf0 sulfotransferases are essential for the biosynthesis of sulfolipid-1 in prokaryotes. They adopt a structure that belongs to the sulfotransferase superfamily, consisting of a single domain with a core four-stranded parallel beta-sheet flanked by alpha-helices.
References
EC 2.8.2
Protein families | Stf0 sulfotransferase | Biology | 79 |