Fluoride is an inorganic, monatomic anion of fluorine, with the chemical formula F− (also written [F]−), whose salts are typically white or colorless. Fluoride salts typically have distinctive bitter tastes and are odorless. Its salts and minerals are important chemical reagents and industrial chemicals, mainly used in the production of hydrogen fluoride for fluorocarbons. Fluoride is classified as a weak base since it only partially associates in solution, but concentrated fluoride is corrosive and can attack the skin.
Fluoride is the simplest fluorine anion. In terms of charge and size, the fluoride ion resembles the hydroxide ion. Fluoride ions occur on Earth in several minerals, particularly fluorite, but are present only in trace quantities in bodies of water in nature.
Nomenclature
Fluorides include compounds that contain ionic fluoride and those in which fluoride does not dissociate. The nomenclature does not distinguish these situations. For example, sulfur hexafluoride and carbon tetrafluoride are not sources of fluoride ions under ordinary conditions.
The systematic name fluoride, the valid IUPAC name, is determined according to additive nomenclature. However, the name fluoride is also used in compositional IUPAC nomenclature, which does not take the nature of bonding into account.
Fluoride is also used non-systematically to describe compounds that release fluoride upon dissolving. Hydrogen fluoride is itself an example of a non-systematic name of this nature. However, it also serves as a trivial name and as the preferred IUPAC name for fluorane.
Occurrence
Fluorine is estimated to be the 13th-most abundant element in Earth's crust and is widely dispersed in nature, entirely in the form of fluorides. The vast majority is held in mineral deposits, the most commercially important of which is fluorite (CaF2). Natural weathering of some kinds of rocks, as well as human activities, releases fluorides into the biosphere through what is sometimes called the fluorine cycle.
In water
Fluoride is naturally present in groundwater, fresh and saltwater sources, as well as in rainwater, particularly in urban areas. Seawater fluoride levels are usually in the range of 0.86 to 1.4 mg/L, and average 1.1 mg/L (milligrams per litre). For comparison, chloride concentration in seawater is about 19 g/L. The low concentration of fluoride reflects the insolubility of the alkaline earth fluorides, e.g., CaF2.
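As a rough consistency check (not from the article: the solubility product of CaF2, about 3.9 × 10−11, and a typical seawater calcium concentration of about 0.01 M are textbook values assumed here), the equilibrium solubility of CaF2 caps dissolved fluoride at roughly the level observed in seawater:

```latex
% Assumed: K_sp(CaF2) ~ 3.9e-11, seawater [Ca2+] ~ 0.01 M
K_{sp} = [\mathrm{Ca^{2+}}][\mathrm{F^{-}}]^{2} \approx 3.9\times10^{-11}
\;\Rightarrow\;
[\mathrm{F^{-}}] \approx \sqrt{\tfrac{3.9\times10^{-11}}{0.01}} \approx 6\times10^{-5}\ \mathrm{M}
\approx 1.2\ \mathrm{mg/L}
```

which is consistent with the 0.86–1.4 mg/L range quoted above.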
Concentrations in fresh water vary more significantly. Surface water such as rivers or lakes generally contains between 0.01 and 0.3 mg/L. Groundwater (well water) concentrations vary even more, depending on the presence of local fluoride-containing minerals. For example, natural levels of under 0.05 mg/L have been detected in parts of Canada but up to 8 mg/L in parts of China; in general, levels rarely exceed 10 mg/L.
In parts of Asia the groundwater can contain dangerously high levels of fluoride, leading to serious health problems. Worldwide, 50 million people receive water from supplies that naturally contain close to the "optimal" level. In other locations the level of fluoride is very low, sometimes leading to fluoridation of public water supplies to bring the level to around 0.7–1.2 ppm. Mining can increase local fluoride levels. Fluoride can also be present in rain, with its concentration increasing significantly upon exposure to volcanic activity or atmospheric pollution derived from burning fossil fuels or other industry, particularly aluminium smelters.
In plants
All vegetation contains some fluoride, which is absorbed from soil and water. Some plants concentrate fluoride from their environment more than others. All tea leaves contain fluoride; however, mature leaves contain as much as 10 to 20 times the fluoride levels of young leaves from the same plant.
Chemical properties
Basicity
Fluoride can act as a base. It can combine with a proton (H+):
F− + H+ → HF
This neutralization reaction forms hydrogen fluoride (HF), the conjugate acid of fluoride.
In aqueous solution, fluoride has a pKb value of 10.8. It is therefore a weak base, and tends to remain as the fluoride ion rather than generating a substantial amount of hydrogen fluoride. That is, the following equilibrium favours the left-hand side in water:
F− + H2O ⇌ HF + OH−
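As a back-of-the-envelope check (the neutral pH of 7 is an assumption introduced here, not stated in the article), the quoted pKb implies that only a tiny fraction of dissolved fluoride is protonated in water:

```latex
% K_b from pK_b = 10.8; [OH-] = 1e-7 M assumed for neutral water
K_{b} = \frac{[\mathrm{HF}][\mathrm{OH^{-}}]}{[\mathrm{F^{-}}]} = 10^{-10.8} \approx 1.6\times10^{-11}
\qquad
\frac{[\mathrm{HF}]}{[\mathrm{F^{-}}]} = \frac{K_{b}}{[\mathrm{OH^{-}}]} \approx \frac{1.6\times10^{-11}}{10^{-7}} \approx 2\times10^{-4}
```

so at neutral pH only about 0.02% of the fluoride exists as HF, consistent with the equilibrium lying to the left.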
However, upon prolonged contact with moisture, soluble fluoride salts will decompose to their respective hydroxides or oxides, as the hydrogen fluoride escapes. Fluoride is distinct in this regard among the halides. The identity of the solvent can have a dramatic effect on the equilibrium, shifting it to the right-hand side and greatly increasing the rate of decomposition.
Structure of fluoride salts
Salts containing fluoride are numerous and adopt myriad structures. Typically the fluoride anion is surrounded by four or six cations, as is typical for other halides. Sodium fluoride and sodium chloride adopt the same structure. For compounds containing more than one fluoride per cation, the structures often deviate from those of the chlorides, as illustrated by the main fluoride mineral fluorite (CaF2) where the Ca2+ ions are surrounded by eight F− centers. In CaCl2, each Ca2+ ion is surrounded by six Cl− centers. The difluorides of the transition metals often adopt the rutile structure whereas the dichlorides have cadmium chloride structures.
Inorganic chemistry
Upon treatment with a standard acid, fluoride salts convert to hydrogen fluoride and metal salts. With strong acids, it can be doubly protonated to give H2F+. Oxidation of fluoride gives fluorine. Solutions of inorganic fluorides in water contain F− and bifluoride (HF2−). Few inorganic fluorides are soluble in water without undergoing significant hydrolysis. In terms of its reactivity, fluoride differs significantly from chloride and other halides, and is more strongly solvated in protic solvents due to its smaller radius/charge ratio. Its closest chemical relative is hydroxide, since both have similar geometries.
Naked fluoride
Most fluoride salts dissolve to give the bifluoride (HF2−) anion. Sources of true F− anions are rare because the highly basic fluoride anion abstracts protons from many, even adventitious, sources. Relatively unsolvated fluoride, which does exist in aprotic solvents, is called "naked". Naked fluoride is a strong Lewis base, and a powerful nucleophile. Some quaternary ammonium salts of naked fluoride include tetramethylammonium fluoride and tetrabutylammonium fluoride. Cobaltocenium fluoride is another example. However, they all lack structural characterization in aprotic solvents. Because of their high basicity, many so-called naked fluoride sources are in fact bifluoride salts. In late 2016, an imidazolium fluoride was synthesized that is the closest approximation of a thermodynamically stable and structurally characterized example of a "naked" fluoride source in an aprotic solvent (acetonitrile). The sterically demanding imidazolium cation stabilizes the discrete anions and protects them from polymerization.
Biochemistry
At physiological pHs, hydrogen fluoride is usually fully ionised to fluoride. In biochemistry, fluoride and hydrogen fluoride are equivalent. Fluorine, in the form of fluoride, is considered to be a micronutrient for human health, necessary to prevent dental cavities and to promote healthy bone growth. The tea plant (Camellia sinensis L.) is a known accumulator of fluorine compounds, released upon forming infusions such as the common beverage. The fluorine compounds decompose into products including fluoride ions. Fluoride is the most bioavailable form of fluorine, and as such, tea is potentially a vehicle for fluoride dosing. Approximately 50% of absorbed fluoride is excreted renally within a twenty-four-hour period. The remainder can be retained in the oral cavity and lower digestive tract. Fasting dramatically increases the rate of fluoride absorption to near 100%, from 60–80% when taken with food. A 2013 study found that consumption of one litre of tea a day can potentially supply the daily recommended intake of 4 mg per day. Some lower quality brands can supply up to 120% of this amount. Fasting can increase this to 150%. The study indicates that tea-drinking communities are at an increased risk of dental and skeletal fluorosis where water fluoridation is also in effect. Fluoride ion in low doses in the mouth reduces tooth decay. For this reason, it is used in toothpaste and water fluoridation. At much higher doses and frequent exposure, fluoride causes health complications and can be toxic.
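One way to reconcile the 120% and 150% figures (this reading is an assumption on my part, not stated in the study) is that the 120% reflects the usual ~80% absorption with food, so near-complete absorption while fasting scales it up proportionally:

```latex
% Assumed interpretation: 120% absorbed at 80% efficiency, 150% when absorption is ~100%
120\% \times \frac{100\%\ \text{(absorption while fasting)}}{80\%\ \text{(absorption with food)}} = 150\%
```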
Applications
Fluoride salts and hydrofluoric acid are the main fluorides of industrial value.
Organofluorine chemistry
Organofluorine compounds are pervasive. Many drugs, polymers, refrigerants, and inorganic compounds are made from fluoride-containing reagents. Often fluorides are converted to hydrogen fluoride, which serves as a major reagent and as a precursor to other reagents. Hydrofluoric acid and its anhydrous form, hydrogen fluoride, are particularly important.
Production of metals and their compounds
The main use of fluoride, in terms of volume, is in the production of cryolite, Na3AlF6, which is used in aluminium smelting. Cryolite was formerly mined but is now derived from hydrogen fluoride. Mined fluorite (CaF2) is a commodity chemical used on a large scale to separate slag in steel-making. Uranium hexafluoride is employed in the separation of uranium isotopes.
Cavity prevention
Fluoride-containing compounds, such as sodium fluoride or sodium monofluorophosphate, are used in topical and systemic fluoride therapy for preventing tooth decay. They are used for water fluoridation and in many products associated with oral hygiene. Originally, sodium fluoride was used to fluoridate water; hexafluorosilicic acid (H2SiF6) and its salt sodium hexafluorosilicate (Na2SiF6) are now more commonly used additives, especially in the United States. The fluoridation of water is known to prevent tooth decay and is considered by the U.S. Centers for Disease Control and Prevention to be "one of 10 great public health achievements of the 20th century". In some countries where large, centralized water systems are uncommon, fluoride is delivered to the populace by fluoridating table salt. For the method of action for cavity prevention, see Fluoride therapy. Fluoridation of water has its critics. Fluoridated toothpaste is in common use. Meta-analyses show the efficacy of 500 ppm fluoride in toothpastes. However, no beneficial effect can be detected when more than one fluoride source is used for daily oral care.
Laboratory reagent
Fluoride salts are commonly used in biological assay processing to inhibit the activity of phosphatases, such as serine/threonine phosphatases. Fluoride mimics the nucleophilic hydroxide ion in these enzymes' active sites. Beryllium fluoride and aluminium fluoride are also used as phosphatase inhibitors, since these compounds are structural mimics of the phosphate group and can act as analogues of the transition state of the reaction.
Dietary recommendations
The U.S. Institute of Medicine (IOM) updated Estimated Average Requirements (EARs) and Recommended Dietary Allowances (RDAs) for some minerals in 1997. Where there was not sufficient information to establish EARs and RDAs, an estimate designated Adequate Intake (AI) was used instead. AIs are typically matched to actual average consumption, with the assumption that there appears to be a need, and that need is met by what people consume. The current AI for women 19 years and older is 3.0 mg/day (includes pregnancy and lactation). The AI for men is 4.0 mg/day. The AI for children ages 1–18 increases from 0.7 to 3.0 mg/day. The major known risk of fluoride deficiency appears to be an increased risk of bacteria-caused tooth cavities. As for safety, the IOM sets tolerable upper intake levels (ULs) for vitamins and minerals when evidence is sufficient. In the case of fluoride the UL is 10 mg/day. Collectively the EARs, RDAs, AIs and ULs are referred to as Dietary Reference Intakes (DRIs).
The European Food Safety Authority (EFSA) refers to the collective set of information as Dietary Reference Values, with Population Reference Intake (PRI) instead of RDA, and Average Requirement instead of EAR. AI and UL are defined the same as in the United States. For women ages 18 and older the AI is set at 2.9 mg/day (including pregnancy and lactation). For men, the value is 3.4 mg/day. For children ages 1–17 years, the AIs increase with age from 0.6 to 3.2 mg/day. These AIs are comparable to the U.S. AIs. The EFSA reviewed safety evidence and set an adult UL at 7.0 mg/day (lower for children).
For U.S. food and dietary supplement labeling purposes, the amount of a vitamin or mineral in a serving is expressed as a percent of Daily Value (%DV). Although there is information to set Adequate Intake, fluoride does not have a Daily Value and is not required to be shown on food labels.
Estimated daily intake
Daily intakes of fluoride can vary significantly according to the various sources of exposure. Values ranging from 0.46 to 3.6–5.4 mg/day have been reported in several studies (IPCS, 1984). In areas where water is fluoridated, this can be expected to be a significant source of fluoride; however, fluoride is also naturally present in virtually all foods and beverages at a wide range of concentrations. The maximum safe daily consumption of fluoride is 10 mg/day for an adult (U.S.) or 7 mg/day (European Union).
The upper limit of fluoride intake from all sources (fluoridated water, food, beverages, fluoride dental products and dietary fluoride supplements) is set at 0.10 mg/kg/day for infants, toddlers, and children through to 8 years old. For older children and adults, who are no longer at risk for dental fluorosis, the upper limit of fluoride is set at 10 mg/day regardless of weight.
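As an illustration of how the weight-based limit compares with the flat adult limit (the 20 kg body weight is a hypothetical example, not taken from the text):

```latex
% Hypothetical 20 kg child under the 0.10 mg/kg/day limit
0.10\ \mathrm{mg/kg/day} \times 20\ \mathrm{kg} = 2\ \mathrm{mg/day}
\qquad\text{vs.}\qquad
10\ \mathrm{mg/day}\ \text{(flat limit for older children and adults)}
```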
Safety
Ingestion
According to the U.S. Department of Agriculture, the Dietary Reference Intakes specify a "highest level of daily nutrient intake that is likely to pose no risk of adverse health effects" of 10 mg/day for most people, corresponding to 10 L of fluoridated water with no risk. For young children the values are smaller, ranging from 0.7 mg/day for infants up to 2.2 mg/day for older children. Water and food sources of fluoride include community water fluoridation, seafood, tea, and gelatin.
Soluble fluoride salts, of which sodium fluoride is the most common, are toxic, and have resulted in both accidental and self-inflicted deaths from acute poisoning. The lethal dose for most adult humans is estimated at 5 to 10 g (which is equivalent to 32 to 64 mg elemental fluoride per kg body weight). A case of a fatal poisoning of an adult with 4 grams of sodium fluoride is documented, and a dose of 120 g sodium fluoride has been survived. For sodium fluorosilicate (Na2SiF6), the median lethal dose (LD50) orally in rats is 125 mg/kg, corresponding to 12.5 g for a 100 kg adult.
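The stated dose conversions can be verified with simple arithmetic (the 70 kg adult body weight is an assumption, not given in the text; fluorine makes up roughly 19/42 ≈ 45% of sodium fluoride by mass):

```latex
% Assumed 70 kg adult; NaF molar mass ~ 42 g/mol, of which 19 g/mol is fluorine
\frac{5\ \mathrm{g\ NaF}\times\frac{19.0}{42.0}}{70\ \mathrm{kg}} \approx 32\ \mathrm{mg\ F^{-}/kg}
\qquad
\frac{10\ \mathrm{g\ NaF}\times\frac{19.0}{42.0}}{70\ \mathrm{kg}} \approx 65\ \mathrm{mg\ F^{-}/kg}
\qquad
125\ \mathrm{mg/kg}\times 100\ \mathrm{kg} = 12.5\ \mathrm{g}
```

in line with the quoted 32–64 mg/kg range and the 12.5 g figure for a 100 kg adult.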
Treatment may involve oral administration of dilute calcium hydroxide or calcium chloride to prevent further absorption, and injection of calcium gluconate to increase the calcium levels in the blood. Hydrogen fluoride is more dangerous than salts such as NaF because it is corrosive and volatile, and can result in fatal exposure through inhalation or upon contact with the skin; calcium gluconate gel is the usual antidote.
In the higher doses used to treat osteoporosis, sodium fluoride can cause pain in the legs and incomplete stress fractures when the doses are too high; it also irritates the stomach, sometimes so severely as to cause ulcers. Slow-release and enteric-coated versions of sodium fluoride do not have significant gastric side effects, and have milder and less frequent complications in the bones. In the lower doses used for water fluoridation, the only clear adverse effect is dental fluorosis, which can alter the appearance of children's teeth during tooth development; this is mostly mild and is unlikely to represent any real effect on aesthetic appearance or on public health. Fluoride is known to enhance bone mineral density at the lumbar spine, but it is not effective against vertebral fractures and provokes more nonvertebral fractures. In areas with naturally occurring high levels of fluoride in groundwater used for drinking water, both dental and skeletal fluorosis can be prevalent and severe.
Hazard maps for fluoride in groundwater
Around one-third of the human population drinks water from groundwater resources. Of this, about 10%, approximately 300 million people, obtain water from groundwater resources that are heavily contaminated with arsenic or fluoride. These trace elements derive mainly from minerals. Maps locating potential problematic wells are available.
Topical
Concentrated fluoride solutions are corrosive. Gloves made of nitrile rubber are worn when handling fluoride compounds. The hazards of solutions of fluoride salts depend on the concentration. In the presence of strong acids, fluoride salts release hydrogen fluoride, which is corrosive, especially toward glass.
Other derivatives
Organic and inorganic anions are produced from fluoride, including:
Bifluoride, used as an etchant for glass
Tetrafluoroberyllate
Hexafluoroplatinate
Tetrafluoroborate, used in organometallic synthesis
Hexafluorophosphate, used as an electrolyte in commercial secondary batteries
Trifluoromethanesulfonate
See also
Per- and polyfluoroalkyl substances
Fluorine-19 nuclear magnetic resonance spectroscopy
Fluoride deficiency
Fluoride selective electrode
Fluoride therapy
Sodium monofluorophosphate
References
External links
"Fluoride in Drinking Water: A Review of Fluoridation and Regulation Issues", Congressional Research Service
U.S. government site for checking status of local water fluoridation
Peer pressure is a direct or indirect influence on peers, i.e., members of social groups with similar interests and experiences, or social statuses. Members of a peer group are more likely to influence a person's beliefs, values, religion and behavior. A group or individual may be encouraged and want to follow their peers by changing their attitudes, values or behaviors to conform to those of the influencing group or individual. For the individual affected by peer pressure, this can have either a positive or a negative effect.
Social groups include both membership groups in which individuals hold "formal" membership (e.g. political parties, trade unions, schools) and cliques in which membership is less clearly defined. However, a person does not need to be a member or be seeking membership of a group to be affected by peer pressure. An individual may be in a crowd, a group of many cliques, and still be affected by peer pressure. Research suggests that organizations as well as individuals are susceptible to peer pressure. For example, an organization may base a decision on current trends in order to gain approval or grow a following.
Peer pressure can affect individuals of all ethnic groups, genders and ages. Researchers have frequently studied the effects of peer pressure on children and on adolescents, and in popular discourse the term "peer pressure" is used most often with reference to those age-groups. Children of adolescent age are faced with finding their identity. Erik Erikson, a developmental psychologist, described this stage as identity versus role confusion: these children are trying to find a sense of belonging and are the most susceptible to peer pressure as a form of acceptance. For children, the themes most commonly studied are their abilities for independent decision-making. For adolescents, peer pressure's relationships to sexual intercourse and substance abuse have been significantly researched. Peer pressure can be experienced through both face-to-face interaction and through digital interaction. Social media offers opportunities for adolescents and adults alike to instill and/or experience pressure every day.
Studies of social networks examine connections between members of social groups, including their use of social media, to better understand mechanisms such as information sharing and peer sanctioning. Sanctions can range from subtle glances that suggest disapproval, to threats and physical violence. Peer sanctioning may enhance either positive or negative behaviors. Whether peer sanctioning will have an effect depends strongly on members' expectations and the possible sanctions actually being applied. It can also depend on a person's position in a social network. Those who are more central in a social network seem more likely to be cooperative, perhaps as a result of how networks form. However, this goes both ways and so they are also more likely to participate in negative behaviors. This may be caused by the repeated social pressures they experience in their networks.
Childhood and adolescence
Children
Imitation plays a large role in children's lives; in order to pick up skills and techniques that they use in their own lives, children are always searching for behaviors and attitudes around them that they can co-opt. In other words, children are influenced by people who are important in their lives, such as friends, parents, celebrities (including YouTubers), singers, dancers, etc. This may explain why children whose parents eat unhealthily or do not live active lifestyles can adopt the same habits as young adults, and why children try to walk when very young. Children are aware of their position in the social hierarchy from a young age: their instinct is to defer to adults' judgements and majority opinions. Similar to the Asch conformity experiments, a study done on groups of preschool children showed that they were influenced by groups of their peers to change their opinion to a demonstrably wrong one. Each child was handed a book with two sets of images on each page, with a group of differently sized animals on the left-hand page and one animal on the right-hand page, and each child was asked to indicate the size of the lone animal. All the books appeared the same, but the last child would sometimes get a book that was different. The children reported their size judgements in turn, and the child being tested was asked last. Before that child answered, however, a group of children working in conjunction with the researchers would sometimes all give an incorrect answer. When asked in the presence of the other children, the last child's response was often the same as his or her peers'. However, when allowed to privately share their responses with a researcher, the children proved much more resistant to their peers' pressure, illustrating the importance of the physical presence of their peers in shaping their opinions.
An observation is that children can monitor and intervene in their peers' behavior through pressure. A study conducted in a remedial kindergarten class, in the Edna A. Hill Child Development Laboratory at the University of Kansas, was designed to measure how children could ease disruptive behavior in their peers through a two-part system. After describing a series of tasks to their classroom that included going to the bathroom, cleaning up, and general classroom behavior, teachers and researchers would observe children's performance on the tasks. The study focused on three children who were clearly identified as being more disruptive than their peers. They looked at their responses to potential techniques. They utilized the two-part system: first, each student would be given points by their teachers for correctly completing tasks with little disruption (e.g. sitting down on a mat for reading time), and if a student reached three points by the end of the day they would receive a prize. The second part brought in peer interaction, where students who reached three points were appointed "peer monitors" whose role was to lead their small groups and assign points at the end of the day. The results were clear-cut, showing that the monitored students' disruption level dropped when teachers started the points system and monitored them, but when peer monitors were introduced the target students' disruption dropped to average rates of 1% for student C1, 8% for student C2, and 11% for student C3 (down from 36%, 62%, and 59%, respectively). Even small children, then, are susceptible to pressure from their peers, and that pressure can be used to effect positive change in academic and social environments.
Adolescence
Adolescence is the time when a person is most susceptible to peer pressure because peers become an important influence on behavior during adolescence, and peer pressure has been called a hallmark of adolescent experience. Children entering this period in life become aware for the first time of the other people around them and realize the importance of perception in their interactions. Peer conformity in young people is most pronounced with respect to style, taste, appearance, ideology, and values. Peer pressure is commonly associated with episodes of adolescent risk-taking because these activities commonly occur in the company of peers. Affiliation with friends who engage in risky behaviors has been shown to be a strong predictor of an adolescent's own behavior. Peer pressure can also have positive effects when youth are pressured by their peers toward positive behavior, such as volunteering for charity, excelling in academics, or participating in a service project. The importance of peer approval declines upon entering adulthood.
Even though socially accepted children tend to have more frequent positive experiences and more opportunities, research shows that social acceptance (being in the popular crowd) may increase the likelihood of engaging in risky behavior, depending on the norms in the group. Groups of popular children showed an increased propensity to engage in risky, drug-related and delinquent behavior when this behavior was likely to receive approval in their groups. Peer pressure was greatest among more popular children because they were the children most attuned to the judgments of their peers, making them more susceptible to group pressures. Gender also has a clear effect on the amount of peer pressure an adolescent experiences: girls report significantly higher pressures to conform to their groups in the form of clothing choices or speech patterns. Additionally, girls and boys reported facing differing amounts of pressure in different areas of their lives, perhaps reflecting a different set of values and priorities for each gender. Both boys and girls are susceptible to peer pressure, but what it revolves around depends on the values, beliefs, or attitudes that their peer groups have or deeply desire. For girls, it typically revolves around physical appearance, including fashion choices such as wearing thong underwear. For boys, it is more likely to revolve around typical masculine ideals, such as athleticism or intellect. Either way, peer pressure tends to follow current trends.
Peer pressure is widely recognized as a major contributor to the initiation of drug use, particularly in adolescents. This has been shown for a variety of substances, including nicotine and alcohol. While this link is well established, moderating factors do exist. For example, parental monitoring is negatively associated with substance use; yet when there is little monitoring, adolescents are more likely to succumb to peer coercion during initiation to substance use, but not during the transition from experimental to regular use. Caldwell and colleagues extended this work by finding that peer pressure was a factor leading to heightened risk in the context of social gatherings with little parental monitoring, and if the individual reported themselves as vulnerable to peer pressure. Conversely, some research has observed that peer pressure can be a protective factor against substance use.
Peer pressure produces a wide array of negative outcomes. Allen and colleagues showed that susceptibility to peer pressure in 13- and 14-year-olds was predictive of not only future response to peer pressure, but also a wider array of functioning. For example, greater depression symptomatology, decreasing popularity, more sexual behavior, and externalizing behavior were greater for more susceptible teens. Of note, substance use was also predicted by peer pressure susceptibility such that greater susceptibility was predictive of greater alcohol and drug use.
Peer pressure and adolescent behaviors
Substance use
Nicotine use
Substance use is likely not attributable to peer pressure alone. Evidence of genetic predispositions for substance use exists, and some researchers have begun to examine gene × environment interactions for peer influence. In a nationally representative sample, adolescents who had a genetic predisposition were more likely to have close friends who were heavy substance users and were, furthermore, more likely to be vulnerable to the adverse influence of these friends. Results from specific candidate gene studies have been mixed. For instance, in a study of nicotine use, Johnson and colleagues found that peer smoking had a lower effect on nicotine dependence for those with the high-risk allele (CHRNA5). This suggests that, for such individuals, social context plays a smaller role in substance use initiation and maintenance, and that interventions for these individuals should be developed with genetics in mind as well.
While tobacco is one of the most widespread forms of nicotine, it is not the only form of nicotine adolescents use. E-cigarette use is on the rise, and over the course of four years, vaporizer use increased ninefold among adolescents. In the United States, youths are commonly introduced to e-cigarettes and vaporizers in their middle- and high-school years; almost 6% of students in this age group reported use of some form of e-cigarettes. The mechanisms behind why adolescents take up vaping largely relate to social-psychological topics such as conformity and acceptance within social groups. Conformity and acceptance can be associated with several factors that are personality- and habit-based. Some of the most often cited criteria include a need to belong, alleviation of emotional or physical pain, and curiosity. The onset and continued use of electronic cigarette products are considered normative behaviors within certain social groups, and through behavioral modifications to fit the norms, adolescents and adults gain acceptance and approval from their peers. Additionally, nicotine abuse in social contexts can be traced to individuals and locations where people feel most comfortable. The sites of initiation, or the first locations where a substance is taken, are most often places such as schools and homes. These locations are familiar spaces for individuals and tend to have a low risk of consequences.
Alcohol use
Though the impact of peer influence in adolescence has been well established, it was unclear at what age this effect begins to diminish. It is accepted that such peer pressure to use alcohol or illicit substances is less likely to exist in elementary school and very young adolescents given the limited access and exposure. Using the Resistance to Peer Influence Scale, Sumter and colleagues found that resistance to peer pressure grew as age increased in a large study of 10- to 18-year-olds. This study also found that girls were generally more resistant to peer influence than boys, particularly at mid-adolescence (i.e. ages 13–15). The higher vulnerability to peer pressure for teenage boys makes sense given the higher rates of substance use in male teens. For girls, increased and positive parental behaviors (e.g. parental social support, consistent discipline) have been shown to be an important contributor to the ability to resist peer pressure to use substances.
It is believed that peer pressure relating to alcohol use in college is caused by a variety of factors, including modeling, social norms, and being offered alcohol. Offering alcohol can be seen as a kind gesture, but in some cases a forceful one. Students may feel that their social position could become compromised if they do not follow the actions of their fellow peers. This relates to modeling, a term used to describe copying or imitating the actions of one's peers in order to fit in. This usually occurs when students give in to peer pressure to seem more attractive to the perceived majority. Lastly, there are common, socially accepted norms that frequently occur in college settings, such as substance abuse and drinking. One of the most commonly used excuses among students for why they drink is that "everyone does it". Upon entering college, it is common to see students begin to increase their alcohol intake, especially those who do not live at home. Because they have shifted from being influenced by their parents to being influenced by their college peers, it is common to see students mirror their peers, most likely due to an increase of modeling to fit into social settings.
Other substances
Besides the impacts of peer pressure on adolescent alcohol and tobacco use, peer pressure plays a role in the use of other substances, such as marijuana and hard drugs. One contributor to peer pressure with marijuana is legalization efforts; the legalization of recreational marijuana may increase adolescent access and decrease stigma, increasing the likelihood of peer exposure and peer pressure. With legalization comes other challenges, such as deregulation and a lack of control of substances like marijuana and non-medical opioids when it comes to safety concerns. On an international scale, contaminants such as fentanyl are seeping into deregulated opioid markets, which dramatically decreases safety and increases risks for opioid toxicity and death.
Peer pressure and social group selection can create a positive feedback loop with marijuana abuse as well as other substances. Through homophily, the sociological concept in which people connect more with others they are similar to, pro-substance use adolescents and adults self-select with others who share their habits. Similar to nicotine, comfort and familiarity with people and places of first initiation are predictors for whether individuals will use substances. Opioid use is closely linked to peer pressure and comfort, as well as a number of other risk factors which connect with other substance use trends. Opioid use is strongly correlated with tobacco use, and "experimentation," or trying several different substances during adolescence, is closely tied to long-term abuse. Additionally, delinquent behaviors and peer selection connect closely with opioid use. Opioid use and distribution outside of prescriptions is commonly associated with crime, and if peer groups contain individuals who commit these crimes, the risk of group abuse increases.
Prevention
Substance use prevention and intervention programs have utilized multiple techniques in order to combat the impact of peer pressure. One major technique is peer influence resistance skills. The known correlational relationship between substance use and relationships with others that use makes resistance skills a natural treatment target. This type of training is meant to help individuals refuse participation with substance use while maintaining their membership in the peer group. Other interventions include normative education approaches (interventions designed to teach students about the true prevalence rates and acceptability of substance use), education interventions that raise awareness of potential dangers of substance use, alcohol awareness training and classroom behavior management. The literature regarding the efficacy of these approaches, however, is mixed. A study in Los Angeles and Orange Counties that established conservative norms and attempted to correct children's beliefs about substance abuse among their peers showed a statistically significant decrease in alcohol, tobacco, and marijuana use, but other studies that systematically reviewed school-based attempts to prevent alcohol misuse in children found "no easily discernible pattern" in both successful and failed programs. A systematic review of intervention programs in schools conducted by Onrust et al. found that programs in elementary school were successful in slightly reducing a student's likelihood to abuse drugs or alcohol. However, this effect started to wear off with programs that targeted older students. Programs that targeted students in grades 8–9 reduced smoking, but not alcohol and other drug abuse, and programs that targeted older children reported no effect at all.
In a non-substance use context, however, research has shown that decision-making training can produce concrete gains in risk perception and decision-making ability among autistic children. When administered the training in several short sessions that taught the children how to recognize risk from peers and react accordingly, the children demonstrated, through post-training assessments, that they were able to identify potential threats and sources of pressure from peers and deflect them far better than non-autistic adolescents in a control group.
Peer pressure and sexual intercourse
There is evidence supporting the conclusion that parental attitudes disapproving of sex tend to lead toward lower levels of adolescent unplanned pregnancy. These disparities are not due solely to parental disposition but also to communication.
A study completed in Cape Town, South Africa, looked at students at four secondary schools in the region. It found a number of unhealthy practices derived from peer pressure: condom use is derided, abstinence is met with threats of ridicule, and sexual activity with multiple partners is treated as a status symbol (especially for males). The students colloquially call those who choose abstinence "umqwayito", which means dried fruit/meat. An important solution for these problems is communication with adults, which the study found to be extremely lacking within adolescent social groups.
Another investigation, completed in 2011, looked at the effect of peer pressure surrounding sexual activity among US-born and Mexico-born Mexican youths. It concluded that US-born Mexican youths are more susceptible to peer pressure, specifically towards sexual relations, than Mexico-born youths. It has been found that Mexico-born youths grow up with stronger familial households than US-born youths, which helps explain why Mexico-born youths are more apt to talk with family than with peers. Less interaction with peers means less influence from peers and more trust in family.
Literature reviews in this field have attempted to analyse the norms present in the interactions and decision making behind these behaviors. A review conducted by Bongardt et al. defined three types of peer norms that led to a person's participation in sexual intercourse: descriptive norms, injunctive norms, and outright peer pressure. Descriptive norms and injunctive norms are both observed behaviors and are thus more indirect forms of pressure, but differ in one key aspect: descriptive norms describe peers' sexual behaviors, but injunctive norms describe peers' attitudes toward those behaviors (e.g. approval or disapproval). The last norm defined by the study is called "peer pressure" by the authors, and is used to describe direct encouragement or pressure by a person's peers to engage in sexual behavior.
The review found that indirect norms (descriptive and injunctive) had a stronger effect on a person's decision to engage in sexual behavior than direct peer pressure. Between the two indirect norms, descriptive norms had a stronger effect: people were likely to try what they thought their peers were engaging in rather than what they thought had approval in their peer group.
Additionally, studies have found a link between self-regulation and the likelihood of engaging in sexual behavior. The more trouble an individual had with self-regulation and self-control growing up, the more likely they were to fall prey to peer pressure that would lead them to engage in risky sexual acts. Based on these findings, it may be advisable to prevent such outcomes through either a decision-making program or by targeting adolescents' ability to self-regulate against possible risks.
Psychological explanations
Neurology and physiological psychology
From a neurological perspective, the medial prefrontal cortex (mPFC) and the striatum play an important role in figuring out the value of specific actions. The mPFC is active when determining "socially tagged" objects, which are objects that peers have expressed an opinion about; the striatum is significant for determining the value of these "socially tagged" objects and rewards in general. An experiment performed by Mason et al. utilizing fMRI scans analyzed individuals who were assigned to indicate if a chosen symbol appeared consecutively. The researchers did not tell the subjects the real purpose of the experiment, which was to collect data regarding mPFC and striatum stimulation. Before the actual experiment began, the subjects were subject to a phase of "social" influence, where they learned which symbols were preferred by other subjects who had completed the experiment (while in actuality these other subjects did not exist). Mason et al. found that determining an object's social value/significance is dependent on combined information from the mPFC and the striatum [along the lines denoted in the beginning of the paragraph]. Without both present and functional, it would be difficult to determine the value of action based upon social circumstances.
A similar experiment was conducted by Stallen, Smidts, and Sanfrey. Twenty-four subjects were manipulated using a minimal group paradigm approach. Unbeknownst to them, they were all selected as part of the "in-group", although there was an established "out-group". Following this socialization, the subjects estimated the number of dots seen on the screen while given information about what an in-group or out-group member chose. Participants were more likely to conform to in-group decisions as compared to out-group ones. The experiment confirmed the importance of the striatum in social influence, suggesting that conformity with the in-group is mediated with a fundamental value signal—rewards. In other words, the brain associates social inclusion with positive reward. The posterior superior temporal sulcus (pSTS), which is associated with perspective taking, appeared to be active as well, which correlated with patients' self-reports of in-group trustworthiness.
In adolescence, risk-taking appears to increase dramatically. Researchers conducted an experiment with adolescent males who were of driving age and measured their risk-taking depending on whether a passenger (a peer of the same age) was in the car. A driving simulation was created, and certain risky scenarios, such as a decaying yellow light as the car was approaching, were modeled and presented to the subjects. Those who were most likely to take risks in the presence of peers (but took fewer risks when there were no passengers) had greater brain activity in the social-cognitive and social-affective brain systems during solo activity (no passengers.) The social-cognitive aspect refers to the ability to gauge what others are thinking and is primarily controlled by the mPFC, right temporal parietal junction, and the posterior cingulate cortex. The social-affective aspect relates to the reward system for committing actions that are accepted or rejected by other people. One side of the reward system is "social pain", which refers to the emotional pain felt by individual due to group repudiation and is associated with heightened activity in the anterior insula and the subgenual anterior cingulate cortex.
Social psychology
An explanation of how the peer pressure process works, called "the identity shift effect," was introduced by social psychologist Wendy Treynor, who weaves together Festinger's two seminal social-psychological theories (on dissonance, which addresses internal conflict, and social comparison, which addresses external conflict) into a unified whole. According to Treynor's original "identity shift effect" hypothesis, the peer pressure process works in the following way: One's state of harmony is disrupted when faced with the threat of external conflict (social rejection) for failing to conform to a group standard. Thus, one conforms to the group standard, but as soon as one does, eliminating this external conflict, internal conflict is introduced (because one has violated one's own standards). To rid oneself of this internal conflict (self-rejection), an "identity shift" is undertaken, where one adopts the group's standards as one's own, thereby eliminating internal conflict (in addition to the formerly eliminated external conflict), returning one to a state of harmony. Although the peer pressure process begins and ends with one in a (conflict-less) state of harmony, as a result of conflict and the conflict resolution process, one leaves with a new identity—a new set of internalized standards.
Social media
Social media provides a massive new digital arena for peer pressure and influence. Research suggests there are a variety of benefits from social media use, such as increased socialization, exposure to ideas, and greater self-confidence. However, there is also evidence of negative influences such as advertising pressure, exposure to inappropriate behavior and/or dialogue, and fake news. These versions of digital peer pressure exist between youth, adults and businesses. In some cases, people can feel pressure to make themselves available 24/7 or to be perfect. Within this digital conversation there can be pressure to conform, especially as people are impacted by the number of times others hit the like button. In 2014, 39% of 789 respondents aged 13–17 felt pressured to post content for likes and comments. The way others portray themselves on social media might lead to young people trying to mimic those qualities or actions in an attempt to conform. In 2014, 40% of 789 respondents aged 13–17 felt the need to post only content that made them look good to others on social media. It may also lead to a fear of missing out, which can pressure youth into irresponsible actions or decisions. Actions and influence on social media may lead to changes in identity, confidence, or habits in real life for children, adolescents, and adults. Another area in which social media and social network groups influence people is in the purchasing of products. When a person is part of an online social networking group, they are more likely to purchase a product if it was recommended by another member of that group than if it were recommended by a random person online. Knowledge about brands, opinions of brands, and purchasing behavior are directly influenced by peers and the media; people's purchase decisions largely stem from what their friends are purchasing. The effects of social networking groups on purchasing products even translate to subscriptions. If a subscription-based product was given to a member of an online social networking group as a gift by another member of the same group, the person receiving the gift is more likely to adopt the cost of the subscription and keep paying for the service.
Peer pressure on social media across cultures
Over 3 billion people across the world use a variety of social media platforms; in turn, the type, frequency, and scope of the resulting peer pressure fluctuate. Some research suggests social media has a greater influence on purchasing decisions for consumers in China than in other countries. In addition, Chinese consumers say that they are more likely to consider buying a product if they see it discussed positively by friends on a social media site. Some countries have a very low usage rate of social media platforms, or have cultures that do not value it as highly. As a result, the power and impact of digital peer pressure may vary throughout the world. Overall, there is limited research on this topic and its global scope.
Historical examples
Holocaust
The Holocaust is one of the most well-known genocides. In the 1940s, Nazi Germany, led by Adolf Hitler, began a systematic purge of the Jewish people living in Europe, killing around six million Jews by the end of World War II. It is clear that some Germans are culpable for the Holocaust; SS officers and soldiers clearly bought into the Jewish genocide and participated as executioners, jailers, and hunters of Jews in hiding. However, not all Germans wanted to kill the Jews. When the concept of peer pressure is brought into the picture, German culpability becomes even harder to assess.
The primary issue revolves around collective responsibility and beliefs. As such, there are two positions, most notably held by Christopher Browning and David Goldhagen.
Browning's Ordinary Men
Christopher Browning, most known for his book Ordinary Men: Reserve Police Battalion 101, relies on an analysis of the men in Reserve Police Battalion 101. The men of the 101st were not ardent Nazis but ordinary middle-aged men of working-class backgrounds from Hamburg. They were drafted but found ineligible for regular military duty. Their test as an Order Police battalion first came in the form of Jozefow, a Jewish ghetto in Poland. The Battalion was ordered to round up the men in the ghetto and kill all women, children, and elderly on sight. During the executions, a few dozen men were granted release of their execution tasks and were reassigned to guard or truck duty. Others tried to stall as long as possible, trying not to be assigned to a firing squad. After the executions were completed, the men drank heavily, shaken by their ordeal.
At the end of his book, Browning supplies his theory on the 101st's actions: a combination of authoritative and peer pressure was a powerful coercive tool. First, the Nazi leadership wanted to keep the country's soldiers psychologically healthy, so soldiers were not forced to commit these murders. Throughout the German ranks, nothing negative happened to the soldiers and policemen who refused to join in on a firing squad or Jewish search party. They would simply be assigned other or additional duties, and perhaps be subject to a little verbal abuse deriding their "cowardice". For the officers, no official sanction was given, but it was well known that being unable to carry out executions was a sign of a "weak" leader, and such an officer would be passed over for promotions. Second, Major Trapp, the head of Battalion 101, consistently offered protection from committing these actions, even going so far as to support one man who was blatantly and vocally against these practices. He established ground rules in which only volunteers took on "Jewish hunts" and raids.
Browning relies on Milgram's experiments on authority to expand his point. Admitting that Trapp was not a particularly strong authority figure, Browning instead points to the Nazi leadership and the orders of the "highest order" that were handed down. Furthermore, according to Browning's analysis, one reason so few men separated themselves from their task was peer pressure—individual policemen did not want to "lose face" in front of their comrades. Some argued that it was better to shoot one and quit than to be a coward immediately. Some superior officers treated those who did not want to execute Jews with disdain; on the other hand, those selected for the executions or Jewish hunts were regarded as real "men" and were verbally praised accordingly. For some, refusing their tasks meant that their compatriots would need to carry the burden, and the guilt of abandoning their comrades (as well as fear of ostracization) compelled them to kill.
Goldhagen's Hitler's Willing Executioners
Daniel Goldhagen, disagreeing with Browning's conclusion, decided to write his own book, Hitler's Willing Executioners. Its release was highly controversial. He argues that the Germans were always anti-Semitic, engaging in a form of "eliminationism". Taking photos of the deceased, going on "Jew-Hunts", death marches near the end of the war, and a general focus on hate (rather than ignorance) are points Goldhagen utilizes in his book.
He does not believe that peer pressure or authoritative pressure can explain why ordinary Germans engaged in these actions. He believes that in order for the policemen in Battalion 101 (and those in similar situations) to kill, they must all be fully committed to the action—no half-heartedness. As he notes, "For that matter, for someone to be pressured into doing something, by peer pressure, everyone else has to want to do it. Peer pressure can, of course, operate on isolated individuals, or small groups, but it depends upon the majority wanting to do it. So the peer pressure argument contradicts itself. If the majority of the people hadn't wanted to kill Jews, then there would have been peer pressure not to do it" (37). Instead, he places a significant emphasis on the German people's anti-Semitism, to the extent of drawing ire from other historians. Browning notes Goldhagen's "uniform portrayal" of Germans, dehumanizing all of the perpetrators without looking at the whole picture. For example, in the town of Niezdow, the Police Battalion executed over a dozen elderly Poles in retaliation for the murder of a German policeman. It is less clear, then, if the Germans in the Police Battalion are antagonistic only towards Jews. The German-Canadian historian Ruth Bettina Birn has—in collaboration with Volker Rieß—checked Goldhagen's archival sources from Ludwigsburg. Their findings confirm the arbitrary nature of his selection and evaluation of existing records as opposed to a more holistic combination of primary sources. Furthermore, Konrad Kwiet, a Holocaust historian, argues that Goldhagen's narrow focus on German anti-Semitism has blinded him to other considerations. He points to the massacres of non-Jews as an example: "[Goldhagen does not shine light] on the motives of "Hitler's willing executioners" in murdering disabled people within the so-called "Euthanasia Program", in liquidating 2.7 million Soviet prisoners of war, in exterminating Romas or in killing hundreds of thousands of other people classified as enemies of the "German People and Nation". The emphasis on German responsibility allows Goldhagen to push aside the willingness of genocidal killers of other nationalities [such as Latvians] who, recruited from the vast army of indigenous collaborators, were often commissioned with the task of carrying out the 'dirty work', such as the murder of women and children, and who, in many cases, surpassed their German masters in their cruelty and spontaneous brutality".
Rwandan genocide
The Rwandan genocide occurred in 1994, with ethnic violence between the Hutu and Tutsi ethnicities. The primary belligerents were the Hutu; however, as with most ethnic conflicts, not all Hutu wanted to kill Tutsi. A survivor, Mectilde, described the Hutu breakdown as follows: 10% helped, 30% forced, 20% reluctant, and 40% willing. For the willing, a rewards structure was put in place. For the unwilling, a punishment system was in effect. The combination, Professor Bhavnani argues, is a behavioral norm enforced by in-group policing. Instead of the typical peer pressure associated with western high school students, the peer pressure within the Rwandan genocide, where Tutsi and Hutu have inter-married, worked under coercion. Property destruction, rape, incarceration, and death faced the Hutu who were unwilling to commit to the genocide or protected the Tutsi from violence.
When observing a sample community of 3,426 in the village of Tare during the genocide, McDoom found that neighborhoods and familial structures are important micro-spaces that helped determine whether an individual would participate in violence. Physical proximity increases the likelihood of social interaction and influence. For example, starting at a set point such as the home of a "mobilizing" agent for the Hutu (any individual who planned or led an attack in the village), the proportion of convicts (individuals convicted of genocide by the gacaca, a local institution of transitional justice that allows villagers to adjudicate many of the perpetrators' crimes themselves) living within a 100 m radius of a resident was almost twice as high for convicts as for non-convicts. As the radius increases, the proportion decreases. These data imply that "social influence" was a major factor. Looking at neighborhoods, an individual was 4% more likely to join the genocide for every percentage point increase in the proportion of convicted perpetrators living within a 100 m radius of them. Looking at familial structures, for any individual, each percentage point increase in the proportion of genocide participants in the individual's household increased their chances of joining the violence by 21 to 25%.
The full picture, however, is more nuanced. The government's extreme control over citizens' daily lives and social affairs facilitated the rapid spread of the genocide and broke down the resolve of some who initially wanted no part in it. First, prior to the genocide, Rwandans' sense of discipline had been instilled and reinforced through weekly umuganda (collective work) sessions, which involved praise for the regime and its leaders and a host of collective activities for the community. Respect for authority and the fear of stepping out of line were strong cultural values in pre-genocide Rwanda, and these values were woven into such activities. Second, the value placed on social conformity only increased in the decades leading up to the genocide, in both social and political life. Peasants were told exactly when and what to farm and could be fined for any lack of compliance. These factors helped drive the killing's fast pace.
Most importantly, ethnic tensions already existed between the groups for a variety of reasons: conflicts over land allocation (farming versus pasture) and the declining price of Rwanda's main export, coffee. These problems compounded a history of earlier conflict. With the introduction of the Second Republic under Habyarimana, Tutsi formerly in power were immediately purged, and racism served as a justification for keeping the Hutu majority in government power. As a result, when the war came, the Hutu had already been introduced to the idea of racism directed at their own peers.
The division in Rwanda had been reinforced for hundreds of years. King Kigeli IV, a Tutsi, centralized Rwandan power in the late 1800s, shortly before European colonization. The Belgian colonial administration later furthered the message of distinct races, allowing Tutsi men to remain the leaders of society.
Applications
Leadership tool
Education
Principals who served as strong "instructional" leaders and introduced new curricula and academic programs were able to create a system of peer pressure at the teaching level, in which teachers placed accountability pressure on one another.
Voting
Peer pressure can be especially effective (more so than door-to-door visits and telephone calls) in getting people to vote. Gerber, Green, and Larimer conducted a large-scale field experiment involving over 180,000 Michigan households in 2006 and four treatments: a reminder to vote; a reminder to vote plus a note informing the household that it was being studied; a mailing that listed the voting records of everyone in the household; and a mailing that listed the voting records of the household members and of their neighbors. The final treatment emphasized peer pressure within a neighborhood: neighbors could view each other's voting habits on the lists, so the social norm that "voting is best for the community" was combined with the fear that individuals' peers would judge them for not voting. Compared to a baseline rate of 29.7% (only the voting reminder), the treatment that utilized peer pressure increased the percentage of household voters by 8.1 percentage points (to 37.8%), exceeding the effect of in-person canvassing and personalized phone calls.
A similar large-scale field experiment conducted by Todd Rogers, Donald P. Green, Carolina Ferrerosa Young, and John Ternovski (2017) studied the impact of a social pressure mailing in the context of a high-salience election, the 2012 Wisconsin gubernatorial election. Social pressure mailers included the line, "We're sending this mailing to you and your neighbors to publicize who does and does not vote." This study found a treatment effect of 1.0 percentage point, a statistically significant but far weaker effect than the 8.1 percentage point effect reported by Gerber, Green, and Larimer. The 2017 study's effects were particularly sizable for low-propensity voters.
Charitable donations
An experiment conducted by Diane Reyniers and Richa Bhalla measured the amounts donated by a group of London School of Economics students. The group was split into individual donors and paired donors. The donation amounts were revealed within each pair; the pair was then given time to discuss their amounts and revise them as necessary. On average, paired subjects donated 3.64 pounds sterling while individuals donated 2.55 pounds. Furthermore, in pairs where one subject donated significantly more than the other, the latter would on average increase their donation by 0.55 pounds. This suggests that peer pressure "shames" individuals for making smaller donations. But when controlling for donation amount, paired subjects were significantly less happy with their donation than individual subjects, suggesting that paired subjects felt coerced into donating more than they otherwise would have. This leads to a dilemma: charities do better by approaching groups of people (such as friends), but this may increase donor discomfort, which in turn affects future donations.
Organizational researchers have found a generally similar phenomenon among large corporations: executives and managers of large companies look to similar organizations in their industry or headquarters city to figure out the appropriate level of corporate charitable donations, and those that make smaller donations might be seen as stingy and suffer damage to their reputations.
Criminal justice
There are a number of applications of peer pressure to adolescent exposure to the criminal justice and juvenile justice systems, relating to disproportionate minority contact, differential involvement, and topics such as codes of the street. Tens of thousands of juveniles offend and make contact with the criminal justice system per day, which has significant impacts on communities and neighborhoods. Neighborhoods and social contexts contribute strongly to crime outcomes through social disorganization and connections within communities. There is a significant correlation between a lack of social connection between individuals and their neighborhoods and the likelihood of offending and recidivism. The literature is mixed regarding whether neighborhood conditions cause substance use, which is more strongly correlated with peer pressure and differential social environments. This can be examined as a partial correlation, in which peer pressure is a confound that strongly influences the relationship between social disorganization and negative outcomes such as substance use and crime. Risky behaviors and lifestyles are closely related to all types of substance use, as seen in the connections between crime and drug possession. Delinquent behaviors and tendencies toward crime are most commonly associated with "hard drugs" such as opioids and prescription drugs.
See also
References
Further reading
Lowe, M. L., & Haws, K. L. (2014). (Im)moral Support: The Social Outcomes of Parallel Self-Control Decisions. Journal of Consumer Research, 41(2), 489–505.
Group processes
Youth
Conformity
Popular psychology
Social influence
Majority–minority relations | Peer pressure | [
"Biology"
] | 8,964 | [
"Behavior",
"Conformity",
"Human behavior"
] |
155,715 | https://en.wikipedia.org/wiki/Pyroelectricity | Pyroelectricity (from Greek: pyr (πυρ), "fire" and electricity) is a property of certain crystals which are naturally electrically polarized and as a result contain large electric fields. Pyroelectricity can be described as the ability of certain materials to generate a temporary voltage when they are heated or cooled. The change in temperature modifies the positions of the atoms slightly within the crystal structure, so that the polarization of the material changes. This polarization change gives rise to a voltage across the crystal. If the temperature stays constant at its new value, the pyroelectric voltage gradually disappears due to leakage current. The leakage can be due to electrons moving through the crystal, ions moving through the air, or current leaking through a voltmeter attached across the crystal.
Explanation
Pyroelectric charge in minerals develops on the opposite faces of asymmetric crystals. The direction in which the charge propagates is usually constant throughout a pyroelectric material, but in some materials this direction can be changed by a nearby electric field. These materials are said to exhibit ferroelectricity.
All known pyroelectric materials are also piezoelectric. Despite being pyroelectric, novel materials such as boron aluminum nitride (BAlN) and boron gallium nitride (BGaN) have zero piezoelectric response for strain along the c-axis at certain compositions, the two properties being closely related. However, note that some piezoelectric materials have a crystal symmetry that does not allow pyroelectricity.
Pyroelectric materials are mostly hard crystalline materials; however, soft pyroelectricity can be achieved by using electrets.
Pyroelectricity is measured as the change in net polarization (a vector) proportional to a change in temperature. The total pyroelectric coefficient measured at constant stress is the sum of the pyroelectric coefficients at constant strain (primary pyroelectric effect) and the piezoelectric contribution from thermal expansion (secondary pyroelectric effect). Under normal circumstances, even polar materials do not display a net dipole moment. As a consequence, there are no electric dipole equivalents of bar magnets because the intrinsic dipole moment is neutralized by "free" electric charge that builds up on the surface by internal conduction or from the ambient atmosphere. Polar crystals only reveal their nature when perturbed in some fashion that momentarily upsets the balance with the compensating surface charge.
Spontaneous polarization is temperature dependent, so a good perturbation probe is a change in temperature which induces a flow of charge to and from the surfaces. This is the pyroelectric effect. All polar crystals are pyroelectric, so the 10 polar crystal classes are sometimes referred to as the pyroelectric classes. Pyroelectric materials can be used as infrared and millimeter wavelength radiation detectors.
An electret is the electrical equivalent of a permanent magnet.
Mathematical description
The pyroelectric coefficient may be described as the change in the spontaneous polarization vector with temperature:
p_i = ∂P_S,i / ∂T
where p_i (C·m−2·K−1) is the vector of pyroelectric coefficients and P_S,i is the spontaneous polarization.
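As an illustration of how this coefficient is used in practice, the sketch below estimates the short-circuit current produced by a heated pyroelectric element from i = p · A · dT/dt; the coefficient, electrode area and heating rate are assumed, order-of-magnitude values rather than properties of any particular crystal.

```python
# Illustrative estimate of the short-circuit pyroelectric current
# i = p * A * dT/dt, following the coefficient definition above.
# All numbers are assumed, order-of-magnitude values.

p = 2.3e-4    # pyroelectric coefficient, C m^-2 K^-1 (assumed)
area = 1e-6   # electrode area, m^2 (about 1 mm^2, assumed)
dT_dt = 0.5   # rate of temperature change, K s^-1 (assumed)

current = p * area * dT_dt
print(f"pyroelectric current ~ {current:.2e} A")  # about 1.2e-10 A
```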
History
The first record of the pyroelectric effect was made in 1707 by Johann Georg Schmidt, who noted that the "[hot] tourmaline could attract the ashes from the warm or burning coals, as the magnet does iron, but also repelling them again [after the contact]". In 1717 Louis Lemery noticed, as Schmidt had, that small scraps of non-conducting material were first attracted to tourmaline, but then repelled by it once they contacted the stone. In 1747 Linnaeus first related the phenomenon to electricity (he called tourmaline Lapidem Electricum, "the electric stone"), although this was not proven until 1756 by Franz Ulrich Theodor Aepinus.
Research into pyroelectricity became more sophisticated in the 19th century. In 1824 Sir David Brewster gave the effect the name it has today. Both William Thomson in 1878 and Woldemar Voigt in 1897 helped develop a theory for the processes behind pyroelectricity. Pierre Curie and his brother, Jacques Curie, studied pyroelectricity in the 1880s, leading to their discovery of some of the mechanisms behind piezoelectricity.
The first record of pyroelectricity is often mistakenly attributed to Theophrastus (c. 314 BC). The misconception arose soon after the discovery of the pyroelectric properties of tourmaline, which led mineralogists of the time to associate the legendary stone Lyngurium with it. Lyngurium is described in the work of Theophrastus as being similar to amber, without any mention of pyroelectric properties.
Crystal classes
All crystal structures belong to one of thirty-two crystal classes based on the number of rotational axes and reflection planes they possess that leave the crystal structure unchanged (point groups). Of the thirty-two crystal classes, twenty-one are non-centrosymmetric (not having a centre of symmetry). Of these twenty-one, twenty exhibit direct piezoelectricity, the remaining one being the cubic class 432. Ten of these twenty piezoelectric classes are polar, i.e., they possess a spontaneous polarization, having a dipole in their unit cell, and exhibit pyroelectricity. If this dipole can be reversed by the application of an electric field, the material is said to be ferroelectric. Any dielectric material develops a dielectric polarization (electrostatics) when an electric field is applied, but a substance which has such a natural charge separation even in the absence of a field is called a polar material. Whether or not a material is polar is determined solely by its crystal structure. Only 10 of the 32 point groups are polar. All polar crystals are pyroelectric, so the ten polar crystal classes are sometimes referred to as the pyroelectric classes.
Piezoelectric crystal classes: 1, 2, m, 222, mm2, 4, -4, 422, 4mm, -42m, 3, 32, 3m, 6, -6, 622, 6mm, -62m, 23, -43m
Pyroelectric: 1, 2, m, mm2, 3, 3m, 4, 4mm, 6, 6mm
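As a quick sanity check on the lists above, the following sketch encodes them as Python sets and verifies the hierarchy described in this section: the ten pyroelectric (polar) classes are a subset of the twenty piezoelectric classes. The point-group labels are taken directly from the lists quoted here.

```python
# The 20 piezoelectric and 10 pyroelectric (polar) crystal classes,
# as listed above, and a check that the latter is a subset of the former.

piezoelectric = {"1", "2", "m", "222", "mm2", "4", "-4", "422", "4mm",
                 "-42m", "3", "32", "3m", "6", "-6", "622", "6mm",
                 "-62m", "23", "-43m"}
pyroelectric = {"1", "2", "m", "mm2", "3", "3m", "4", "4mm", "6", "6mm"}

assert len(piezoelectric) == 20 and len(pyroelectric) == 10
assert pyroelectric <= piezoelectric  # every polar class is piezoelectric
print("All", len(pyroelectric), "pyroelectric classes are piezoelectric.")
```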
Related effects
Two effects which are closely related to pyroelectricity are ferroelectricity and piezoelectricity. Normally materials are very nearly electrically neutral on the macroscopic level. However, the positive and negative charges which make up the material are not necessarily distributed in a symmetric manner. If the sum of charge times distance for all elements of the basic cell does not equal zero the cell will have an electric dipole moment (a vector quantity). The dipole moment per unit volume is defined as the dielectric polarization. If this dipole moment changes with the effect of applied temperature changes, applied electric field, or applied pressure, the material is pyroelectric, ferroelectric, or piezoelectric, respectively.
The ferroelectric effect is exhibited by materials which possess an electric polarization in the absence of an externally applied electric field such that the polarization can be reversed if the electric field is reversed. Since all ferroelectric materials exhibit a spontaneous polarization, all ferroelectric materials are also pyroelectric (but not all pyroelectric materials are ferroelectric).
The piezoelectric effect is exhibited by crystals (such as quartz or ceramic) in which an electric voltage appears across the material when pressure is applied. Similar to the pyroelectric effect, the phenomenon is due to the asymmetric structure of the crystals, which allows ions to move more easily along one axis than along the others. As pressure is applied, each side of the crystal takes on an opposite charge, resulting in a voltage drop across the crystal.
Pyroelectricity should not be confused with thermoelectricity: In a typical demonstration of pyroelectricity, the whole crystal is changed from one temperature to another, and the result is a temporary voltage across the crystal. In a typical demonstration of thermoelectricity, one part of the device is kept at one temperature and the other part at a different temperature, and the result is a permanent voltage across the device as long as there is a temperature difference. Both effects convert temperature change to electrical potential, but the pyroelectric effect converts temperature change over time into electrical potential, while the thermoelectric effect converts temperature change with position into electrical potential.
Pyroelectric materials
Although artificial pyroelectric materials have been engineered, the effect was first discovered in minerals such as tourmaline. The pyroelectric effect is also present in bone and tendon.
The most important example is gallium nitride, a semiconductor. The large electric fields in this material are detrimental in light emitting diodes (LEDs), but useful for the production of power transistors.
Progress has been made in creating artificial pyroelectric materials, usually in the form of a thin film, using gallium nitride (GaN), caesium nitrate (CsNO3), polyvinyl fluorides, derivatives of phenylpyridine, and cobalt phthalocyanine. Lithium tantalate (LiTaO3) is a crystal exhibiting both piezoelectric and pyroelectric properties, which has been used to create small-scale nuclear fusion ("pyroelectric fusion"). Recently, pyroelectric and piezoelectric properties have been discovered in doped hafnium oxide (HfO2), which is a standard material in CMOS manufacturing.
Applications
Heat sensors
Very small changes in temperature can produce a pyroelectric potential. Passive infrared sensors are often designed around pyroelectric materials, as the heat of a human or animal from several feet away is enough to generate a voltage.
Power generation
A pyroelectric can be repeatedly heated and cooled (analogously to a heat engine) to generate usable electrical power. An example of a heat engine is the movement of the pistons in an internal combustion engine like that found in a gasoline powered automobile.
One group calculated that a pyroelectric in an Ericsson cycle could reach 50% of Carnot efficiency, while a different study found a material that could, in theory, reach 84-92% of Carnot efficiency (these efficiency values are for the pyroelectric itself, ignoring losses from heating and cooling the substrate, other heat-transfer losses, and all other losses elsewhere in the system).
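To make these fractions concrete, the short sketch below converts "x% of Carnot efficiency" into overall efficiencies for an assumed pair of reservoir temperatures; the 500 K and 300 K values are arbitrary illustrative choices, not figures from the cited studies.

```python
# Worked example of the efficiency figures quoted above.  The reservoir
# temperatures are assumed values chosen only to show how fractions of
# Carnot efficiency translate into absolute numbers.

T_hot, T_cold = 500.0, 300.0        # kelvin (assumed)
carnot = 1.0 - T_cold / T_hot       # ideal Carnot efficiency, 0.40 here

for fraction in (0.50, 0.84, 0.92): # fractions of Carnot cited above
    print(f"{fraction:.0%} of Carnot -> {fraction * carnot:.1%} overall")
# 50% of Carnot -> 20.0%, 84% -> 33.6%, 92% -> 36.8% overall
```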
Possible advantages of pyroelectric generators for generating electricity (as compared to the conventional heat engine plus electrical generator) include:
Harvesting energy from waste-heat
Potentially lower operating temperatures
Less bulky equipment
Fewer moving parts.
Although a few patents have been filed for such a device, such generators do not appear to be anywhere close to commercialization.
Nuclear fusion
Pyroelectric materials have been used to generate large electric fields necessary to steer deuterium ions in a nuclear fusion process. This is known as pyroelectric fusion.
See also
Electrocaloric effect, an opposite effect of pyroelectricity
Kelvin probe force microscope
Lithium tantalate
Thermoelectricity
Zinc oxide
References
Gautschi, Gustav (2002). Piezoelectric Sensorics: Force, Strain, Pressure, Acceleration and Acoustic Emission Sensors, Materials and Amplifiers. Springer.
External links
Pyroelectric Detectors for THz applications WiredSense
Pyroelectric Infrared Detectors DIAS Infrared
DoITPoMS Teaching and Learning Package- "Pyroelectric Materials"
Lithium Tantalate (LiTaO3)
laser detection with lithium tantalate
Optical and Dielectric Properties of Sr(x)Ba(1-x)Nb(2)O(6)
Dielectric and Electrical Properties of Ce,Mn:SBN
Thermodynamics
Electrical phenomena
Energy conversion
Crystals | Pyroelectricity | [
"Physics",
"Chemistry",
"Materials_science",
"Mathematics"
] | 2,581 | [
"Physical phenomena",
"Crystallography",
"Electrical phenomena",
"Crystals",
"Thermodynamics",
"Dynamical systems"
] |
155,725 | https://en.wikipedia.org/wiki/Sodium%20bicarbonate | Sodium bicarbonate (IUPAC name: sodium hydrogencarbonate), commonly known as baking soda or bicarbonate of soda, is a chemical compound with the formula NaHCO3. It is a salt composed of a sodium cation (Na+) and a bicarbonate anion (HCO3−). Sodium bicarbonate is a white solid that is crystalline but often appears as a fine powder. It has a slightly salty, alkaline taste resembling that of washing soda (sodium carbonate). The natural mineral form is nahcolite, although it is more commonly found as a component of the mineral trona.
As it has long been known and widely used, the salt has many different names such as baking soda, bread soda, cooking soda, brewing soda and bicarbonate of soda and can often be found near baking powder in stores. The term baking soda is more common in the United States, while bicarbonate of soda is more common in Australia, the United Kingdom, and New Zealand. Abbreviated colloquial forms such as sodium bicarb, bicarb soda, bicarbonate, and bicarb are common.
The prefix bi- in "bicarbonate" comes from an outdated naming system predating molecular knowledge. It is based on the observation that there is twice as much carbonate (CO3−2) per sodium in sodium bicarbonate (NaHCO3) as there is in sodium carbonate (Na2CO3). The modern chemical formulas of these compounds now express their precise chemical compositions which were unknown when the name bi-carbonate of potash was coined (see also: bicarbonate).
Uses
Cooking
In cooking, baking soda is primarily used in baking as a leavening agent. When it reacts with acid or is heated, carbon dioxide is released, which causes expansion of the batter and forms the characteristic texture and grain in cakes, quick breads, soda bread, and other baked and fried foods. When an acid is used, the acid–base reaction can be generically represented as follows:
NaHCO3 + H+ → Na+ + CO2 + H2O
Acidic materials that induce this reaction include hydrogen phosphates, cream of tartar, lemon juice, yogurt, buttermilk, cocoa, and vinegar. Baking soda may be used together with sourdough, which is acidic, making a lighter product with a less acidic taste. Since the reaction occurs slowly at room temperature, mixtures (cake batter, etc.) can be allowed to stand without rising until they are heated in the oven.
Heat can also by itself cause sodium bicarbonate to act as a raising agent in baking because of thermal decomposition, which releases carbon dioxide at baking temperatures, as follows:
2 NaHCO3 → Na2CO3 + H2O + CO2
When used this way on its own, without the presence of an acidic component (whether in the batter or by the use of a baking powder containing acid), only half the available CO2 is released (one CO2 molecule is formed for every two equivalents of NaHCO3). Additionally, in the absence of acid, thermal decomposition of sodium bicarbonate also produces sodium carbonate, which is strongly alkaline and gives the baked product a bitter, soapy taste and a yellow color.
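A minimal sketch of this stoichiometry, assuming standard molar masses and an arbitrary 10 g portion of baking soda, compares the CO2 yield of the acid route with that of plain thermal decomposition:

```python
# The acid route releases one CO2 per NaHCO3, while thermal decomposition
# releases one CO2 per two NaHCO3, i.e. half as much gas from the same
# amount of soda.  Molar masses are approximate.

M_NaHCO3 = 84.01   # g/mol
M_CO2 = 44.01      # g/mol

mass_soda = 10.0   # grams of baking soda (arbitrary example amount)
moles_soda = mass_soda / M_NaHCO3

co2_acid = moles_soda * M_CO2           # NaHCO3 + H+ -> Na+ + CO2 + H2O
co2_thermal = 0.5 * moles_soda * M_CO2  # 2 NaHCO3 -> Na2CO3 + H2O + CO2

print(f"acid route:    {co2_acid:.2f} g CO2")     # about 5.24 g
print(f"thermal route: {co2_thermal:.2f} g CO2")  # about 2.62 g
```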
Baking powder
Baking powder, also sold for cooking, contains around 30% of bicarbonate, and various acidic ingredients that are activated by the addition of water, without the need for additional acids in the cooking medium. Many forms of baking powder contain sodium bicarbonate combined with calcium acid phosphate, sodium aluminium phosphate, or cream of tartar. Baking soda is alkaline; the acid used in baking powder avoids a metallic taste when the chemical change during baking creates sodium carbonate.
Food additive
It is often used in conjunction with other food additives in bottled water to add taste. Its European Union E number is E500.
Pyrotechnics
Sodium bicarbonate is one of the main components of the common "black snake" firework. The effect is caused by the thermal decomposition, which produces carbon dioxide gas to produce a long snake-like ash as a combustion product of the other main component, sucrose. Sodium bicarbonate also delays combustion reactions through the release of carbon dioxide and water, both of which are flame retardants, when heated.
Mild disinfectant
It has weak disinfectant properties and it may be an effective fungicide against some organisms. As baking soda will absorb musty smells, it has become a reliable method for used booksellers when making books less malodorous.
Fire extinguisher
Sodium bicarbonate can be used to extinguish small grease or electrical fires by being thrown over the fire, as heating of sodium bicarbonate releases carbon dioxide. However, it should not be applied to fires in deep fryers; the sudden release of gas may cause the grease to splatter. Sodium bicarbonate is used in BC dry chemical fire extinguishers as an alternative to the more corrosive monoammonium phosphate in ABC extinguishers. The alkaline nature of sodium bicarbonate makes it the only dry chemical agent, besides Purple-K, that was used in large-scale fire suppression systems installed in commercial kitchens.
Sodium bicarbonate has several fire-extinguishing mechanisms that act simultaneously. It decomposes into water and carbon dioxide when heated, an endothermic reaction that deprives the fire of heat. In addition, it forms intermediates that can scavenge the free radicals which are responsible for the propagation of fire. With grease fires specifically, it also has a mild saponification effect, producing a soapy foam that can help smother the fire.
Neutralization of acids
Sodium bicarbonate reacts spontaneously with acids, releasing CO2 gas as a reaction product. It is commonly used to neutralize unwanted acid solutions or acid spills in chemical laboratories. It is not appropriate to use sodium bicarbonate to neutralize base even though it is amphoteric, reacting with both acids and bases.
Sports supplement
Sodium bicarbonate is taken as a sports supplement to improve muscular endurance. Studies conducted mostly in males have shown that sodium bicarbonate is most effective in enhancing performance in short-term, high-intensity activities.
Agriculture
Sodium bicarbonate can prevent the growth of fungi when applied on leaves, although it will not kill the fungus. Excessive amounts of sodium bicarbonate can cause discolouration of fruits (two percent solution) and chlorosis (one percent solution). Sodium bicarbonate is also commonly used as a free choice dietary supplement in sheep to help prevent bloat.
Medical uses and health
Sodium bicarbonate mixed with water can be used as an antacid to treat acid indigestion and heartburn. Its reaction with stomach acid produces salt, water, and carbon dioxide:
NaHCO3 + HCl → NaCl + H2O + CO2(g)
A mixture of sodium bicarbonate and polyethylene glycol such as PegLyte, dissolved in water and taken orally, is an effective gastrointestinal lavage preparation and laxative prior to gastrointestinal surgery, gastroscopy, etc.
Intravenous sodium bicarbonate in an aqueous solution is sometimes used for cases of acidosis, or when insufficient sodium or bicarbonate ions are in the blood. In cases of respiratory acidosis, the infused bicarbonate ion drives the carbonic acid/bicarbonate buffer of plasma to the left, and thus raises the pH. For this reason, sodium bicarbonate is used in medically supervised cardiopulmonary resuscitation. Infusion of bicarbonate is indicated only when the blood pH is markedly low (< 7.1–7.0).
HCO3− is used for treatment of hyperkalemia, as it will drive K+ back into cells during periods of acidosis. Since sodium bicarbonate can cause alkalosis, it is sometimes used to treat aspirin overdoses. Aspirin requires an acidic environment for proper absorption, and a basic environment will diminish aspirin absorption in cases of overdose. Sodium bicarbonate has also been used in the treatment of tricyclic antidepressant overdose. It can also be applied topically as a paste, with three parts baking soda to one part water, to relieve some kinds of insect bites and stings (as well as accompanying swelling).
Some alternative practitioners, such as Tullio Simoncini, have promoted baking soda as a cancer cure, which the American Cancer Society has warned against due to both its unproven effectiveness and potential danger in use. Edzard Ernst has called the promotion of sodium bicarbonate as a cancer cure "one of the more sickening alternative cancer scams I have seen for a long time".
Sodium bicarbonate can be added to local anaesthetics, to speed up the onset of their effects and make their injection less painful. It is also a component of Moffett's solution, used in nasal surgery.
It has been proposed that acidic diets weaken bones. One systematic meta-analysis of the research shows no such effect. Another also finds that there is no evidence that alkaline diets improve bone health, but suggests that there "may be some value" to alkaline diets for other reasons.
Antacid (such as baking soda) solutions have been prepared and used by protesters to alleviate the effects of exposure to tear gas during protests.
Similarly to its use in baking, sodium bicarbonate is used together with a mild acid such as tartaric acid as the excipient in effervescent tablets: when such a tablet is dropped in a glass of water, the carbonate leaves the reaction medium as carbon dioxide gas (HCO3− + H+ → H2O + CO2↑ or, more precisely, HCO3− + H3O+ → 2 H2O + CO2↑). This makes the tablet disintegrate, leaving the medication suspended and/or dissolved in the water together with the resulting salt (in this example, sodium tartrate).
Personal hygiene
Sodium bicarbonate is also used as an ingredient in some mouthwashes. It has anticaries and abrasive properties. It works as a mechanical cleanser on the teeth and gums, neutralizes the production of acid in the mouth, and also acts as an antiseptic to help prevent infections. Sodium bicarbonate in combination with other ingredients can be used to make a dry or wet deodorant. Sodium bicarbonate may be used as a buffering agent, combined with table salt, when creating a solution for nasal irrigation.
It is used in eye hygiene to treat blepharitis. This is done by adding a teaspoon of sodium bicarbonate to cool water that was recently boiled followed by gentle scrubbing of the eyelash base with a cotton swab dipped in the solution.
Veterinary uses
Sodium bicarbonate is used as a cattle feed supplement, in particular as a buffering agent for the rumen.
Cleaning agent
Sodium bicarbonate is used in a process to remove paint and corrosion called sodablasting. As a blasting medium, sodium bicarbonate is used to remove surface contamination from softer and less resilient substrates such as aluminium, copper, or timber that could be damaged by silica sand abrasive media.
A manufacturer recommends a paste made from baking soda with minimal water as a gentle scouring powder. Such a paste can be useful in removing surface rust because the rust forms a water-soluble compound when in a concentrated alkaline solution. Cold water should be used since hot-water solutions can corrode steel. Sodium bicarbonate attacks the thin protective oxide layer that forms on aluminium, making it unsuitable for cleaning this metal. A solution in warm water will remove the tarnish from silver when the silver is in contact with a piece of aluminium foil. Baking soda is commonly added to washing machines as a replacement for water softener and to remove odors from clothes. When diluted with warm water, it is also almost as effective in removing heavy tea and coffee stains from cups as sodium hydroxide.
During the Manhattan Project to develop the nuclear bomb in the early 1940s, the chemical toxicity of uranium was an issue. Uranium oxides were found to stick very well to cotton cloth and did not wash out with soap or laundry detergent. However, the uranium would wash out with a 2% solution of sodium bicarbonate. Clothing can become contaminated with toxic dust of depleted uranium (DU), which is very dense and hence used for counterweights in civilian contexts and in armour-piercing projectiles. DU is not removed by normal laundering; washing with baking soda in about 2 gallons (7.5 L) of water will help remove it.
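As a rough illustration of what a 2% solution means in practice, the sketch below estimates how much baking soda corresponds to roughly 2% by weight in 2 gallons (about 7.5 L) of water; the density and the dilute-solution approximation are assumptions made for the sake of arithmetic, not laundering instructions.

```python
# Rough arithmetic for a ~2% w/w baking soda solution in ~7.5 L of water.
# Water density is taken as 1 g/mL and the solute mass is approximated
# as fraction * water mass, which is adequate for a dilute solution.

water_mass_g = 7.5 * 1000      # about 7500 g of water
target_fraction = 0.02         # 2% by weight

baking_soda_g = target_fraction * water_mass_g
print(f"~{baking_soda_g:.0f} g of baking soda (about {baking_soda_g/28.35:.0f} oz)")
# ~150 g, roughly 5 oz
```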
Odor control
It is often claimed that baking soda is an effective odor remover and recommended that an open box be kept in the refrigerator to absorb odor. This idea was promoted by the leading U.S. brand of baking soda, Arm & Hammer, in an advertising campaign starting in 1972. Though this campaign is considered a classic of marketing, leading within a year to more than half of American refrigerators containing a box of baking soda, there is little evidence that it is effective in this application.
Education
An educational science experiment known as the "Baking Soda and Vinegar Volcano" uses the acid-base reaction with vinegar acid to mimic a volcanic eruption. The rapid production of CO2 causes the liquid to foam up and overflow its container. Other ingredients such as dish soap and food coloring can be added to enhance the visual effect. If this reaction is performed inside of a closed vessel (such as a bottle) with no way for gas to escape, it can cause an explosion if the pressure is high enough.
Chemistry
Sodium bicarbonate is an amphoteric compound. Aqueous solutions are mildly alkaline due to the formation of carbonic acid and hydroxide ion:
HCO3− + H2O → H2CO3 + OH−
Sodium bicarbonate can sometimes be used as a mild neutralization agent and a safer alternative to strong bases like sodium hydroxide. Reaction of sodium bicarbonate and an acid produces a salt and carbonic acid, which readily decomposes to carbon dioxide and water:
NaHCO3 + HCl → NaCl + H2O + CO2
H2CO3 → H2O + CO2(g)
Sodium bicarbonate reacts with acetic acid (found in vinegar), producing sodium acetate, water, and carbon dioxide:
NaHCO3 + CH3COOH → CH3COONa + H2O + CO2(g)
Sodium bicarbonate reacts with bases such as sodium hydroxide to form carbonates:
NaHCO3 + NaOH → Na2CO3 + H2O
Thermal decomposition
When heated, sodium bicarbonate gradually decomposes into sodium carbonate, water, and carbon dioxide; the conversion is faster at higher temperatures:
2 NaHCO3 → Na2CO3 + H2O + CO2
Most bicarbonates undergo this dehydration reaction. Further heating at much higher temperatures converts the carbonate into the oxide:
Na2CO3 → Na2O + CO2
The generation of carbon dioxide and water partially explain the fire-extinguishing properties of NaHCO3, although other factors like heat absorption and radical scavenging are more significant.
Natural occurrence
In nature, sodium bicarbonate occurs almost exclusively as either nahcolite or trona. Trona is more common, as nahcolite is more soluble in water and the chemical equilibrium between the two minerals favors trona. Significant nahcolite deposits are in the United States, Botswana and Kenya, Uganda, Turkey, and Mexico. The biggest trona deposits are in the Green River basin in Wyoming.
Nahcolite is sometimes found as a component of oil shale.
Stability and shelf life
If kept cool (room temperature) and dry (an airtight container is recommended to keep out moist air), sodium bicarbonate can be kept without a significant amount of decomposition for at least two or three years.
History
The word natron has been in use in many languages throughout modern times (in the forms anatron, natrum and natron) and originated (like Spanish, French and English natron, as well as the element name 'sodium') via Arabic naṭrūn (or anatrūn; cf. the Lower Egyptian Wadi El Natrun, the "Natron Valley", where a mixture of sodium carbonate and sodium hydrogen carbonate was used for the dehydration of mummies) from Greek nítron (νίτρον) (Herodotus; Attic lítron (λίτρον)), which can be traced back to the ancient Egyptian ntr. The Greek nítron (soda, saltpeter) was also used in Latin (sal) nitrum and in German Salniter (the source of the words nitrogen, nitrate, etc.). The word saleratus, from Latin sal æratus (meaning "aerated salt"), was widely used in the 19th century for both sodium bicarbonate and potassium bicarbonate.
In 1791, French chemist Nicolas Leblanc produced sodium carbonate (also known as soda ash). Pharmacist Valentin Rose the Younger is credited with the discovery of sodium bicarbonate in 1801 in Berlin. In 1846, two American bakers, John Dwight and Austin Church, established the first factory in the United States to produce baking soda from sodium carbonate and carbon dioxide.
Saleratus, potassium or sodium bicarbonate, is mentioned in the novel Captains Courageous by Rudyard Kipling as being used extensively in the 1800s in commercial fishing to prevent freshly caught fish from spoiling.
In 1919, US Senator Lee Overman declared that bicarbonate of soda could cure the Spanish flu. In the midst of the debate on 26 January 1919, he interrupted the discussion to announce the discovery of a cure. "I want to say, for the benefit of those who are making this investigation," he reported, "that I was told by a judge of a superior court in the mountain country of North Carolina they have discovered a remedy for this disease." The purported cure implied a critique of modern science and an appreciation for the simple wisdom of simple people. "They say that common baking soda will cure the disease," he continued, "that they have cured it with it, that they have no deaths up there at all; they use common baking soda, which cures the disease."
Production
Sodium bicarbonate is produced industrially from sodium carbonate:
Na2CO3 + CO2 + H2O → 2 NaHCO3
It is produced on the scale of about 100,000 tonnes/year (as of 2001) with a worldwide production capacity of 2.4 million tonnes per year (as of 2002). Commercial quantities of baking soda are also produced by a similar method: soda ash, mined in the form of the ore trona, is dissolved in water and treated with carbon dioxide. Sodium bicarbonate precipitates as a solid from this solution.
Regarding the Solvay process, sodium bicarbonate is an intermediate in the reaction of sodium chloride, ammonia, and carbon dioxide. The product, however, shows low purity (75%).
NaCl + CO2 + NH3 + H2O → NaHCO3 + NH4Cl
Although of no practical value, NaHCO3 may be obtained by the reaction of carbon dioxide with an aqueous solution of sodium hydroxide:
CO2 + NaOH → NaHCO3
Mining
Naturally occurring deposits of nahcolite (NaHCO3) are found in the Eocene-age (55.8–33.9 Mya) Green River Formation, Piceance Basin in Colorado. Nahcolite was deposited as beds during periods of high evaporation in the basin. It is commercially mined using common underground mining techniques such as bore, drum, and longwall mining in a fashion very similar to coal mining.
It is also produced by solution mining, pumping heated water through nahcolite beds and crystallizing the dissolved nahcolite through a cooling crystallization process.
Since nahcolite is sometimes found in shale, it can be produced as a co-product of shale oil extraction, where it is recovered by solution mining.
In popular culture
Sodium bicarbonate, as "bicarbonate of soda", was a frequent source of punch lines for Groucho Marx in Marx Brothers movies. In Duck Soup, Marx plays the leader of a nation at war. In one scene, he receives a message from the battlefield that his general is reporting a gas attack, and Groucho tells his aide: "Tell him to take a teaspoonful of bicarbonate of soda and a half a glass of water." In A Night at the Opera, Groucho's character addresses the opening night crowd at an opera by saying of the lead tenor: "Signor Lassparri comes from a very famous family. His mother was a well-known bass singer. His father was the first man to stuff spaghetti with bicarbonate of soda, thus causing and curing indigestion at the same time."
In the Joseph L. Mankiewicz classic All About Eve, the Max Fabian character (Gregory Ratoff) has an extended scene with Margo Channing (Bette Davis) in which, suffering from heartburn, he requests and then drinks bicarbonate of soda, eliciting a prominent burp. Channing promises to always keep a box of bicarb with Max's name on it.
See also
Carbonic acid
List of ineffective cancer treatments
List of minerals
Natron
Potassium bicarbonate
Trona
References
Bibliography
External links
International Chemical Safety Card 1044
Acid salts
Antacids
Bases (chemistry)
Bicarbonates
Chemical substances for emergency medicine
Fire suppression agents
Household chemicals
Leavening agents
Sodium compounds
E-number additives
Powders
Food powders | Sodium bicarbonate | [
"Physics",
"Chemistry"
] | 4,536 | [
"Acid salts",
"Chemical substances for emergency medicine",
"Salts",
"Materials",
"Powders",
"Chemicals in medicine",
"Bases (chemistry)",
"Matter"
] |
155,726 | https://en.wikipedia.org/wiki/Sodium%20carbonate | Sodium carbonate (also known as washing soda, soda ash and soda crystals) is the inorganic compound with the formula and its various hydrates. All forms are white, odourless, water-soluble salts that yield alkaline solutions in water. Historically, it was extracted from the ashes of plants grown in sodium-rich soils, and because the ashes of these sodium-rich plants were noticeably different from ashes of wood (once used to produce potash), sodium carbonate became known as "soda ash". It is produced in large quantities from sodium chloride and limestone by the Solvay process, as well as by carbonating sodium hydroxide which is made using the chloralkali process.
Hydrates
Sodium carbonate is obtained as three hydrates and as the anhydrous salt:
sodium carbonate decahydrate (natron), Na2CO3·10H2O, which readily effloresces to form the monohydrate.
sodium carbonate heptahydrate (not known in mineral form), Na2CO3·7H2O.
sodium carbonate monohydrate (thermonatrite), Na2CO3·H2O. Also known as crystal carbonate.
anhydrous sodium carbonate (natrite), also known as calcined soda, is formed by heating the hydrates. It is also formed when sodium hydrogencarbonate is heated (calcined) e.g. in the final step of the Solvay process.
The decahydrate is formed from water solutions crystallizing in the temperature range −2.1 to +32.0 °C, the heptahydrate in the narrow range 32.0 to 35.4 °C and above this temperature the monohydrate forms. In dry air the decahydrate and heptahydrate lose water to give the monohydrate. Other hydrates have been reported, e.g. with 2.5 units of water per sodium carbonate unit ("Penta hemihydrate").
Washing soda
Sodium carbonate decahydrate (Na2CO3·10H2O), also known as washing soda, is the most common hydrate of sodium carbonate containing 10 molecules of water of crystallization. Soda ash is dissolved in water and crystallized to get washing soda.
It is one of the few metal carbonates that is soluble in water.
Applications
Some common applications of sodium carbonate include:
As a cleansing agent for domestic purposes like washing clothes. Sodium carbonate is a component of many dry soap powders. It has detergent properties through the process of saponification, which converts fats and grease to water-soluble salts (specifically, soaps).
It is used for lowering the hardness of water (see Water softening below).
It is used in the manufacture of glass, soap, and paper (see Glass manufacture below).
It is used in the manufacture of sodium compounds like borax (sodium borate).
Glass manufacture
Sodium carbonate serves as a flux for silica (SiO2, melting point 1,713 °C), lowering the melting point of the mixture to something achievable without special materials. This "soda glass" is mildly water-soluble, so some calcium carbonate is added to the melt mixture to make the glass insoluble. Bottle and window glass ("soda–lime glass" with transition temperature ~570 °C) is made by melting such mixtures of sodium carbonate, calcium carbonate, and silica sand (silicon dioxide (SiO2)). When these materials are heated, the carbonates release carbon dioxide. In this way, sodium carbonate is a source of sodium oxide. Soda–lime glass has been the most common form of glass for centuries. It is also a key input for tableware glass manufacturing.
Water softening
Hard water usually contains calcium or magnesium ions. Sodium carbonate is used for removing these ions and replacing them with sodium ions.
Sodium carbonate is a water-soluble source of carbonate. The calcium and magnesium ions form insoluble solid precipitates upon treatment with carbonate ions, for example:
Ca2+ + CO32− → CaCO3(s)
The water is softened because it no longer contains dissolved calcium ions and magnesium ions.
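For a sense of the quantities involved, the sketch below computes the stoichiometric soda ash dose for an assumed dissolved calcium concentration, using the one-to-one ratio between carbonate and calcium ions implied by the precipitation reaction above; the 80 mg/L figure is an arbitrary example, not a water-quality standard.

```python
# One carbonate ion precipitates one calcium ion (Ca2+ + CO3^2- -> CaCO3),
# so the stoichiometric soda ash dose scales directly with calcium content.
# Molar masses are approximate; the calcium level is an assumed example.

M_Ca = 40.08        # g/mol
M_Na2CO3 = 105.99   # g/mol

calcium_mg_per_L = 80.0                       # dissolved Ca2+ (assumed)
moles_ca = calcium_mg_per_L / 1000 / M_Ca     # mol of Ca2+ per litre
dose_mg_per_L = moles_ca * M_Na2CO3 * 1000    # mg of Na2CO3 per litre
print(f"stoichiometric Na2CO3 dose ~ {dose_mg_per_L:.0f} mg/L")  # ~212 mg/L
```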
Food additive and cooking
Sodium carbonate has several uses in cuisine, largely because it is a stronger base than baking soda (sodium bicarbonate) but weaker than lye (which may refer to sodium hydroxide or, less commonly, potassium hydroxide). Alkalinity affects gluten production in kneaded doughs, and also improves browning by reducing the temperature at which the Maillard reaction occurs. To take advantage of the former effect, sodium carbonate is therefore one of the components of kansui, a solution of alkaline salts used to give Japanese ramen noodles their characteristic flavour and chewy texture; a similar solution is used in Chinese cuisine to make lamian, for similar reasons. Cantonese bakers similarly use sodium carbonate as a substitute for lye-water to give moon cakes their characteristic texture and improve browning. In German cuisine (and Central European cuisine more broadly), breads such as pretzels and lye rolls traditionally treated with lye to improve browning can be treated instead with sodium carbonate; sodium carbonate does not produce quite as strong a browning as lye, but is much safer and easier to work with.
Sodium carbonate is used in the production of sherbet powder. The cooling and fizzing sensation results from the endothermic reaction between sodium carbonate and a weak acid, commonly citric acid, releasing carbon dioxide gas, which occurs when the sherbet is moistened by saliva.
Sodium carbonate also finds use in the food industry as a food additive (European Food Safety Authority number E500) as an acidity regulator, anticaking agent, raising agent, and stabilizer. It is also used in the production of snus to stabilize the pH of the final product.
While it is less likely to cause chemical burns than lye, care must still be taken when working with sodium carbonate in the kitchen, as it is corrosive to aluminum cookware, utensils, and foil.
Other applications
Sodium carbonate is also used as a relatively strong base in various fields. As a common alkali, it is preferred in many chemical processes because it is cheaper than sodium hydroxide and far safer to handle. Its mildness especially recommends its use in domestic applications.
For example, it is used as a pH regulator to maintain stable alkaline conditions necessary for the action of the majority of photographic film developing agents. It is also a common additive in swimming pools and aquarium water to maintain a desired pH and carbonate hardness (KH). In dyeing with fiber-reactive dyes, sodium carbonate (often under a name such as soda ash fixative or soda ash activator) is used as mordant to ensure proper chemical bonding of the dye with cellulose (plant) fiber. It is also used in the froth flotation process to maintain a favourable pH as a float conditioner besides CaO and other mildly basic compounds.
Precursor to other compounds
Sodium bicarbonate (NaHCO3), or baking soda, also a component in fire extinguishers, is often generated from sodium carbonate. Although NaHCO3 is itself an intermediate product of the Solvay process, the heating needed to remove the ammonia that contaminates it decomposes some NaHCO3, making it more economical to react finished Na2CO3 with CO2:
Na2CO3 + CO2 + H2O → 2 NaHCO3
In a related reaction, sodium carbonate is used to make sodium bisulfite (NaHSO3), which is used for the "sulfite" method of separating lignin from cellulose. This reaction is exploited for removing sulfur dioxide from flue gases in power stations:
Na2CO3 + 2 SO2 + H2O → 2 NaHSO3 + CO2
This application has become more common, especially where stations have to meet stringent emission controls.
Sodium carbonate is used by the cotton industry to neutralize the sulfuric acid needed for acid delinting of fuzzy cottonseed.
It is also used to form carbonates of other metals by ion exchange, often with the other metals' sulphates.
Miscellaneous
Sodium carbonate is used by the brick industry as a wetting agent to reduce the amount of water needed to extrude the clay. In casting, it is referred to as "bonding agent" and is used to allow wet alginate to adhere to gelled alginate. Sodium carbonate is used in toothpastes, where it acts as a foaming agent and an abrasive, and to temporarily increase mouth pH.
Sodium carbonate is also used in the processing and tanning of animal hides.
Physical properties
The integral enthalpy of solution of sodium carbonate is −28.1 kJ/mol for a 10% w/w aqueous solution. The Mohs hardness of sodium carbonate monohydrate is 1.3.
Occurrence as natural mineral
Sodium carbonate is soluble in water, and can occur naturally in arid regions, especially in mineral deposits (evaporites) formed when seasonal lakes evaporate. Deposits of the mineral natron have been mined from dry lake bottoms in Egypt since ancient times, when natron was used in the preparation of mummies and in the early manufacture of glass.
The anhydrous mineral form of sodium carbonate is quite rare and is called natrite. Sodium carbonate also erupts from Ol Doinyo Lengai, Tanzania's unique volcano, and it is presumed to have erupted from other volcanoes in the past; but because these minerals are unstable at the Earth's surface, such deposits are likely to have eroded away. All three mineralogical forms of sodium carbonate, as well as trona (trisodium hydrogendicarbonate dihydrate), are also known from ultra-alkaline pegmatitic rocks, which occur for example in the Kola Peninsula in Russia.
Extraterrestrially, known sodium carbonate is rare. Deposits have been identified as the source of bright spots on Ceres, where interior material has been brought to the surface. While there are carbonates on Mars, and these are expected to include sodium carbonate, deposits have yet to be confirmed; some explain this absence as the result of a global dominance of low pH in previously aqueous Martian soil.
Production
The initial large-scale chemical procedure was established in England in 1823 to manufacture soda ash.
Mining
Trona, also known as trisodium hydrogendicarbonate dihydrate (Na3HCO3CO3·2H2O), is mined in several areas of the US and provides nearly all the US consumption of sodium carbonate. Large natural deposits found in 1938, such as the one near Green River, Wyoming, have made mining more economical than industrial production in North America. There are important reserves of trona in Turkey; two million tons of soda ash have been extracted from the reserves near Ankara.
Barilla and kelp
Several "halophyte" (salt-tolerant) plant species and seaweed species can be processed to yield an impure form of sodium carbonate, and these sources predominated in Europe and elsewhere until the early 19th century. The land plants (typically glassworts or saltworts) or the seaweed (typically Fucus species) were harvested, dried, and burned. The ashes were then "lixivated" (washed with water) to form an alkali solution. This solution was boiled dry to create the final product, which was termed "soda ash"; this very old name derives from the Arabic word soda, in turn applied to Salsola soda, one of the many species of seashore plants harvested for production. "Barilla" is a commercial term applied to an impure form of potash obtained from coastal plants or kelp.
The sodium carbonate concentration in soda ash varied very widely, from 2–3 percent for the seaweed-derived form ("kelp"), to 30 percent for the best barilla produced from saltwort plants in Spain. Plant and seaweed sources for soda ash, and also for the related alkali "potash", became increasingly inadequate by the end of the 18th century, and the search for commercially viable routes to synthesizing soda ash from salt and other chemicals intensified.
Leblanc process
In 1792, the French chemist Nicolas Leblanc patented a process for producing sodium carbonate from salt, sulfuric acid, limestone, and coal. In the first step, sodium chloride is treated with sulfuric acid in the Mannheim process. This reaction produces sodium sulfate (salt cake) and hydrogen chloride:
2 NaCl + H2SO4 → Na2SO4 + 2 HCl
The salt cake and crushed limestone (calcium carbonate) were then reduced by heating with coal. This conversion entails two parts. First is the carbothermic reaction whereby the coal, a source of carbon, reduces the sulfate to sulfide:
Na2SO4 + 2 C → Na2S + 2 CO2
The second stage is the reaction to produce sodium carbonate and calcium sulfide:
Na2S + CaCO3 → Na2CO3 + CaS
This mixture is called black ash. The soda ash is extracted from the black ash with water. Evaporation of this extract yields solid sodium carbonate. This extraction process was termed lixiviating.
The hydrochloric acid produced by the Leblanc process was a major source of air pollution, and the calcium sulfide byproduct also presented waste disposal issues. However, it remained the major production method for sodium carbonate until the late 1880s.
Solvay process
In 1861, the Belgian industrial chemist Ernest Solvay developed a method for making sodium carbonate by first reacting sodium chloride, ammonia, water, and carbon dioxide to generate sodium bicarbonate and ammonium chloride:
NaCl + NH3 + CO2 + H2O → NaHCO3 + NH4Cl
The resulting sodium bicarbonate was then converted to sodium carbonate by heating it, releasing water and carbon dioxide:
2 NaHCO3 → Na2CO3 + H2O + CO2
Meanwhile, the ammonia was regenerated from the ammonium chloride byproduct by treating it with the lime (calcium oxide) left over from carbon dioxide generation:
2 NH4Cl + CaO → 2 NH3 + CaCl2 + H2O
The Solvay process recycles its ammonia. It consumes only brine and limestone, and calcium chloride is its only waste product. The process is substantially more economical than the Leblanc process, which generates two waste products, calcium sulfide and hydrogen chloride. The Solvay process quickly came to dominate sodium carbonate production worldwide. By 1900, 90% of sodium carbonate was produced by the Solvay process, and the last Leblanc process plant closed in the early 1920s.
The second step of the Solvay process, heating sodium bicarbonate, is used on a small scale by home cooks and in restaurants to make sodium carbonate for culinary purposes (including pretzels and alkali noodles). The method is appealing to such users because sodium bicarbonate is widely sold as baking soda, and the temperatures required to convert baking soda to sodium carbonate are readily achieved in conventional kitchen ovens.
Hou's process
This process was developed by the Chinese chemist Hou Debang in the 1930s. Carbon dioxide, obtained as a by-product of steam reforming, is pumped through a saturated solution of sodium chloride and ammonia to produce sodium bicarbonate by these reactions:
NH3 + CO2 + H2O → NH4HCO3
NH4HCO3 + NaCl → NH4Cl + NaHCO3
The sodium bicarbonate is collected as a precipitate due to its low solubility and then heated to yield pure sodium carbonate, as in the last step of the Solvay process. More sodium chloride is added to the remaining solution of ammonium and sodium chlorides; more ammonia is also pumped into this solution at 30–40 °C. The solution temperature is then lowered to below 10 °C. The solubility of ammonium chloride is higher than that of sodium chloride at 30 °C and lower at 10 °C. Because of this temperature-dependent solubility difference and the common-ion effect, ammonium chloride precipitates out of the sodium chloride solution.
The Chinese name of Hou's process, lianhe zhijian fa, means "coupled manufacturing alkali method": Hou's process is coupled to the Haber process and offers better atom economy by eliminating the production of calcium chloride, since any ammonia generated is used up by the reaction. The by-product ammonium chloride can be sold as a fertilizer.
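A minimal sketch of this atom-economy comparison, assuming the simplified overall equations for the two processes and treating calcium chloride as waste but ammonium chloride as a saleable product:

```python
# Rough illustration of the atom-economy point above, using the overall
# reactions: Solvay  2 NaCl + CaCO3 -> Na2CO3 + CaCl2
#            Hou     2 NaCl + 2 NH3 + CO2 + H2O -> Na2CO3 + 2 NH4Cl
# CaCl2 is counted as waste, NH4Cl as a saleable fertilizer.  Molar
# masses are approximate; this is a sketch, not a process model.

M = {"Na2CO3": 105.99, "CaCl2": 110.98, "NH4Cl": 53.49}

solvay_useful = M["Na2CO3"]
solvay_total = M["Na2CO3"] + M["CaCl2"]

hou_useful = M["Na2CO3"] + 2 * M["NH4Cl"]
hou_total = hou_useful  # no by-product is discarded

print(f"Solvay: {solvay_useful / solvay_total:.0%} of product mass is useful")  # ~49%
print(f"Hou:    {hou_useful / hou_total:.0%} of product mass is useful")        # 100%
```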
See also
Residual sodium carbonate index
References
Further reading
External links
American Natural Soda Ash Company
International Chemical Safety Card 1135
FMC Wyoming Corporation
Use of sodium carbonate in dyeing
Sodium carbonate manufacturing by synthetic processes
Carbonates
E-number additives
Household chemicals
Photographic chemicals
Sodium compounds
soda ash | Sodium carbonate | [
"Chemistry"
] | 3,317 | [
"Combustion",
"Types of ash"
] |
155,747 | https://en.wikipedia.org/wiki/Travel | Travel is the movement of people between distant geographical locations. Travel can be done by foot, bicycle, automobile, train, boat, bus, airplane, ship or other means, with or without luggage, and can be one way or round trip. Travel can also include relatively short stays between successive movements, as in the case of tourism.
Etymology
The origin of the word "travel" is most likely lost to history. The term "travel" may originate from the Old French word travail, which means 'work'. According to the Merriam-Webster dictionary, the first known use of the word travel was in the 14th century. It also states that the word comes from a Middle English verb meaning to torment, labor, strive, or journey, and earlier from an Old French verb meaning to work strenuously or toil.
In English, people still occasionally use the word travail, which means struggle. According to Simon Winchester in his book The Best Travelers' Tales (2004), the words travel and travail both share an even more ancient root: a Roman instrument of torture called the tripalium (in Latin it means "three stakes", as in to impale). This link may reflect the extreme difficulty of travel in ancient times. Travel in modern times may or may not be much easier, depending upon the destination. Travel to Mount Everest, the Amazon rainforest, extreme tourism, and adventure travel are more difficult forms of travel. Travel can also be more difficult depending on the method of travel, such as by bus, cruise ship, or even by bullock cart.
Purpose and motivation
Reasons for traveling include recreation, holidays, rejuvenation, tourism or vacationing, research travel, the gathering of information, visiting people, volunteer travel for charity, migration to begin life somewhere else, religious pilgrimages and mission trips, business travel, trade, commuting, obtaining health care, waging or fleeing war, for the enjoyment of traveling, or other reasons. Travelers may use human-powered transport such as walking or bicycling; or vehicles, such as public transport, automobiles, trains, ferries, boats, cruise ships and airplanes.
Motives for travel include:
Pleasure
Relaxation
Discovery and exploration
Adventure
Intercultural communications
Taking personal time for building interpersonal relationships.
Avoiding stress
Forming memories
Cultural experiences
Volunteering
Festivals and events
History
Travel dates back to antiquity, when wealthy Greeks and Romans would travel for leisure to their summer homes and villas in cities such as Pompeii and Baiae. While early travel tended to be slower, more dangerous, and more dominated by trade and migration, cultural and technological advances over many years have tended to mean that travel has become easier and more accessible. Humankind has come a long way in transportation since Christopher Columbus sailed to the New World from Spain in 1492, an expedition which took over 10 weeks to reach its final destination; by the 21st century, aircraft allow overnight travel from Spain to the United States.
Travel in the Middle Ages offered hardships and challenges, though it was important to the economy and to society. The wholesale sector depended (for example) on merchants dealing with/through caravans or sea-voyagers, end-user retailing often demanded the services of many itinerant peddlers wandering from village to hamlet, gyrovagues (wandering monks) and wandering friars brought theology and pastoral support to neglected areas, traveling minstrels toured, and armies ranged far and wide in various crusades and in sundry other wars. Pilgrimages were common in both the European and Islamic world and involved streams of travelers both locally and internationally.
In the late 16th century, it became fashionable for young European aristocrats and wealthy upper-class men to travel to significant European cities as part of their education in the arts and literature. This was known as the Grand Tour, and included cities such as London, Paris, Venice, Florence, and Rome. However, the French Revolution brought with it the end of the Grand Tour.
Travel by water often provided more comfort and speed than land-travel, at least until the advent of a network of railways in the 19th century. Travel for the purpose of tourism is reported to have started around this time, when people began to travel for fun as travel was no longer a hard and challenging task. This was capitalized on by people like Thomas Cook, who sold tourism packages in which trains and hotels were booked together. Airships and airplanes took over much of the role of long-distance surface travel in the 20th century, notably after the Second World War, when there was a surplus of both aircraft and pilots. Air travel has become so ubiquitous in the 21st century that one woman, Alexis Alford, visited all 196 countries before the age of 21.
Geographic types
Travel may be local, regional, national (domestic) or international. In some countries, non-local internal travel may require an internal passport, while international travel typically requires a passport and visa. Tours are a common type of travel. Examples of travel tours are expedition cruises, small group tours, and river cruises.
Safety
Authorities emphasize the importance of taking precautions to ensure travel safety. When traveling abroad, the odds favor a safe and incident-free trip; however, travelers can be subject to difficulties, crime and violence. Some safety considerations include being aware of one's surroundings, avoiding being the target of a crime, leaving copies of one's passport and itinerary information with trusted people, obtaining medical insurance valid in the country being visited, and registering with one's national embassy when arriving in a foreign country. Many countries do not recognize drivers' licenses from other countries; however, most countries accept international driving permits. Automobile insurance policies issued in one's own country are often invalid in foreign countries, and it is often a requirement to obtain temporary auto insurance valid in the country being visited. It is also advisable to become oriented with the driving rules and regulations of destination countries. Wearing a seat belt is highly advisable for safety reasons; many countries have penalties for violating seatbelt laws.
There are three main statistics which may be used to compare the safety of various forms of travel: deaths per billion journeys, deaths per billion hours travelled, and deaths per billion kilometres travelled (based on a Department of the Environment, Transport and the Regions survey in October 2000).
See also
Environmental impact of aviation
Layover
List of travelers
Mode of transport
Recreational travel
Science tourism
Transport
Tourism
References
External links
Tourism
Tourist activities
Transport culture | Travel | [
"Physics"
] | 1,279 | [
"Physical systems",
"Transport",
"Transport culture",
"Travel"
] |
155,758 | https://en.wikipedia.org/wiki/Gravity%20assist | A gravity assist, gravity assist maneuver, swing-by, or generally a gravitational slingshot in orbital mechanics, is a type of spaceflight flyby which makes use of the relative movement (e.g. orbit around the Sun) and gravity of a planet or other astronomical object to alter the path and speed of a spacecraft, typically to save propellant and reduce expense.
Gravity assistance can be used to accelerate a spacecraft, that is, to increase or decrease its speed or redirect its path. The "assist" is provided by the motion of the gravitating body as it pulls on the spacecraft. Any gain or loss of kinetic energy and linear momentum by a passing spacecraft is correspondingly lost or gained by the gravitational body, in accordance with Newton's Third Law. The gravity assist maneuver was first used in 1959 when the Soviet probe Luna 3 photographed the far side of Earth's Moon, and it was used by interplanetary probes from Mariner 10 onward, including the two Voyager probes' notable flybys of Jupiter and Saturn.
Explanation
A gravity assist around a planet changes a spacecraft's velocity (relative to the Sun) by entering and leaving the gravitational sphere of influence of a planet. The sum of the kinetic energies of both bodies remains constant (see elastic collision). A slingshot maneuver can therefore be used to change the spaceship's trajectory and speed relative to the Sun.
A close terrestrial analogy is provided by a tennis ball bouncing off the front of a moving train. Imagine standing on a train platform, and throwing a ball at 30 km/h toward a train approaching at 50 km/h. The driver of the train sees the ball approaching at 80 km/h and then departing at 80 km/h after the ball bounces elastically off the front of the train. Because of the train's motion, however, that departure is at 130 km/h relative to the train platform; the ball has added twice the train's velocity to its own.
Translating this analogy into space: in the planet reference frame, the spaceship has a vertical velocity of v relative to the planet. After the slingshot occurs the spaceship is leaving on a course 90 degrees to that which it arrived on. It will still have a speed of v, but in the horizontal direction. In the Sun reference frame, the planet has a horizontal velocity of v, and by using the Pythagorean theorem, the spaceship initially has a total speed of √2·v (about 1.41v). After the spaceship leaves the planet, it will have a velocity of v + v = 2v, gaining approximately 0.6v.
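A minimal numeric sketch of this idealized 90-degree flyby (the 30 km/s figure, roughly Earth's orbital speed, is assumed for illustration):

```python
import math

v = 30.0  # km/s: planet's orbital speed; also the probe's speed relative to the planet

# Planet frame: the probe arrives "vertically" at speed v and leaves "horizontally" at speed v.
v_in_planet = (0.0, -v)    # arriving
v_out_planet = (v, 0.0)    # leaving, turned 90 degrees

# Sun frame: add the planet's (horizontal) velocity to each.
v_planet = (v, 0.0)
v_in_sun = (v_in_planet[0] + v_planet[0], v_in_planet[1] + v_planet[1])
v_out_sun = (v_out_planet[0] + v_planet[0], v_out_planet[1] + v_planet[1])

speed_in = math.hypot(*v_in_sun)     # sqrt(2)*v, about 42.4 km/s
speed_out = math.hypot(*v_out_sun)   # 2*v = 60.0 km/s
print(speed_in, speed_out, speed_out - speed_in)   # the gain is about 0.59*v
```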
This oversimplified example cannot be refined without additional details regarding the orbit, but if the spaceship travels in a path which forms a hyperbola, it can leave the planet in the opposite direction without firing its engine. This example is one of many trajectories and gains of speed the spaceship can experience.
This explanation might seem to violate the conservation of energy and momentum, apparently adding velocity to the spacecraft out of nothing, but the spacecraft's effects on the planet must also be taken into consideration to provide a complete picture of the mechanics involved. The linear momentum gained by the spaceship is equal in magnitude to that lost by the planet, so the spacecraft gains velocity and the planet loses velocity. However, the planet's enormous mass compared to the spacecraft makes the resulting change in its speed negligibly small even when compared to the orbital perturbations planets undergo due to interactions with other celestial bodies on astronomically short timescales. For example, one metric ton is a typical mass for an interplanetary space probe whereas Jupiter has a mass of almost 2 × 10^24 metric tons. Therefore, a one-ton spacecraft passing Jupiter will theoretically cause the planet to lose approximately 5 × 10^−25 km/s of orbital velocity for every km/s of velocity relative to the Sun gained by the spacecraft. For all practical purposes the effects on the planet can be ignored in the calculation.
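A rough check of these orders of magnitude, as a sketch using the one-tonne probe and the approximate Jupiter mass quoted above:

```python
m_probe = 1.0        # metric tons, typical interplanetary probe
m_jupiter = 1.9e24   # metric tons, approximate mass of Jupiter

# Momentum conservation: whatever velocity the probe gains, the planet loses in proportion.
dv_probe = 1.0                               # km/s gained by the probe relative to the Sun
dv_jupiter = dv_probe * m_probe / m_jupiter  # km/s lost by Jupiter
print(f"{dv_jupiter:.1e} km/s")              # ~5e-25 km/s, utterly negligible
```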
Realistic portrayals of encounters in space require the consideration of three dimensions. The same principles apply as above except adding the planet's velocity to that of the spacecraft requires vector addition as shown below.
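A sketch of that vector bookkeeping (illustrative numbers only): subtract the planet's velocity to get the planet-relative velocity, rotate it by the deflection angle produced by the encounter, and add the planet's velocity back.

```python
import numpy as np

def flyby(v_sc_sun, v_planet_sun, deflection_deg, axis=(0.0, 0.0, 1.0)):
    """Rotate the planet-relative velocity by the deflection angle and return
    the spacecraft's post-flyby velocity in the Sun frame (all values in km/s)."""
    v_sc_sun, v_planet_sun = np.asarray(v_sc_sun), np.asarray(v_planet_sun)
    v_inf = v_sc_sun - v_planet_sun          # velocity relative to the planet
    k = np.asarray(axis, dtype=float)
    k /= np.linalg.norm(k)
    a = np.radians(deflection_deg)
    # Rodrigues' rotation formula about the axis k; the speed relative to the planet is unchanged.
    v_inf_out = (v_inf * np.cos(a) + np.cross(k, v_inf) * np.sin(a)
                 + k * np.dot(k, v_inf) * (1 - np.cos(a)))
    return v_planet_sun + v_inf_out          # back to the Sun frame

# Example: a probe overtaken path past a planet moving at 13 km/s, turned by 60 degrees.
v_out = flyby([9.0, 5.0, 0.0], [13.0, 0.0, 0.0], 60.0)
print(v_out, np.linalg.norm(v_out))
```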
Due to the reversibility of orbits, gravitational slingshots can also be used to reduce the speed of a spacecraft. Both Mariner 10 and MESSENGER performed this maneuver to reach Mercury.
If more speed is needed than available from gravity assist alone, a rocket burn near the periapsis (closest planetary approach) uses the least fuel. A given rocket burn always provides the same change in velocity (Δv), but the change in kinetic energy is proportional to the vehicle's velocity at the time of the burn. Therefore the maximum kinetic energy is obtained when the burn occurs at the vehicle's maximum velocity (periapsis). The Oberth effect describes this technique in more detail.
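A small sketch of why the periapsis burn pays off: the same delta-v adds more kinetic energy the faster the vehicle is already moving (speeds chosen for illustration):

```python
dv = 1.0  # km/s, fixed burn

for v in (3.0, 10.0, 30.0):                  # vehicle speed at the moment of the burn, km/s
    de = 0.5 * (v + dv)**2 - 0.5 * v**2      # specific kinetic energy gained, km^2/s^2 = MJ/kg
    print(f"burn at {v:4.1f} km/s -> energy gain {de:.1f} MJ/kg")
# The gain grows with v (de = v*dv + dv^2/2), so burning at periapsis, where v is largest, is best.
```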
Historical origins
In his paper "To Those Who Will Be Reading in Order to Build", published in 1938 but dated 1918–1919, Yuri Kondratyuk suggested that a spacecraft traveling between two planets could be accelerated at the beginning and end of its trajectory by using the gravity of the two planets' moons. The portion of his manuscript considering gravity-assists received no later development and was not published until the 1960s. In his 1925 paper "Problems of Flight by Jet Propulsion: Interplanetary Flights", Friedrich Zander showed a deep understanding of the physics behind the concept of gravity assist and its potential for the interplanetary exploration of the solar system.
Italian engineer Gaetano Crocco was first to calculate an interplanetary journey considering multiple gravity-assists.
The gravity assist maneuver was first used in 1959 when the Soviet probe Luna 3 photographed the far side of the Moon. The maneuver relied on research performed under the direction of Mstislav Keldysh at the Keldysh Institute of Applied Mathematics.
In 1961, Michael Minovitch, a UCLA graduate student who worked at NASA's Jet Propulsion Laboratory (JPL), developed a gravity assist technique that would later be used for Gary Flandro's Planetary Grand Tour idea.
During the summer of 1964 at the NASA JPL, Gary Flandro was assigned the task of studying techniques for exploring the outer planets of the solar system. In this study he discovered the rare alignment of the outer planets (Jupiter, Saturn, Uranus, and Neptune) and conceived the Planetary Grand Tour multi-planet mission utilizing gravity assist to reduce mission duration from forty years to less than ten.
Purpose
A spacecraft traveling from Earth to an inner planet will increase its relative speed because it is falling toward the Sun, and a spacecraft traveling from Earth to an outer planet will decrease its speed because it is leaving the vicinity of the Sun.
Although the orbital speed of an inner planet is greater than that of the Earth, a spacecraft traveling to an inner planet, even at the minimum speed needed to reach it, is still accelerated by the Sun's gravity to a speed notably greater than the orbital speed of that destination planet. If the spacecraft's purpose is only to fly by the inner planet, then there is typically no need to slow the spacecraft. However, if the spacecraft is to be inserted into orbit about that inner planet, then there must be some way to slow it down.
Similarly, while the orbital speed of an outer planet is less than that of the Earth, a spacecraft leaving the Earth at the minimum speed needed to travel to some outer planet is slowed by the Sun's gravity to a speed far less than the orbital speed of that outer planet. Therefore, there must be some way to accelerate the spacecraft when it reaches that outer planet if it is to enter orbit about it.
Rocket engines can certainly be used to increase and decrease the speed of the spacecraft. However, rocket thrust takes propellant, propellant has mass, and even a small change in velocity (known as Δv, or "delta-v", the delta symbol being used to represent a change and "v" signifying velocity) translates to a far larger requirement for propellant needed to escape Earth's gravity well. This is because not only must the primary-stage engines lift the extra propellant, they must also lift the extra propellant beyond that which is needed to lift that additional propellant. The liftoff mass requirement increases exponentially with an increase in the required delta-v of the spacecraft.
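The exponential growth comes from the Tsiolkovsky rocket equation; a minimal sketch with an assumed exhaust velocity and dry mass:

```python
import math

def propellant_mass(delta_v, dry_mass, v_exhaust):
    """Propellant needed for a given delta-v (Tsiolkovsky rocket equation)."""
    mass_ratio = math.exp(delta_v / v_exhaust)   # full mass divided by empty mass
    return dry_mass * (mass_ratio - 1.0)

v_e = 4.4      # km/s, roughly a hydrogen/oxygen engine (assumed)
dry = 1000.0   # kg of spacecraft structure and payload (assumed)
for dv in (1.0, 3.0, 6.0, 9.0, 12.0):            # km/s
    print(f"delta-v {dv:4.1f} km/s -> {propellant_mass(dv, dry, v_e):8.0f} kg propellant")
```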
Because additional fuel is needed to lift fuel into space, space missions are designed with a tight propellant "budget", known as the "delta-v budget". The delta-v budget is in effect the total propellant that will be available after leaving the earth, for speeding up, slowing down, stabilization against external buffeting (by particles or other external effects), or direction changes, if it cannot acquire more propellant. The entire mission must be planned within that capability. Therefore, methods of speed and direction change that do not require fuel to be burned are advantageous, because they allow extra maneuvering capability and course enhancement, without spending fuel from the limited amount which has been carried into space. Gravity assist maneuvers can greatly change the speed of a spacecraft without expending propellant, and can save significant amounts of propellant, so they are a very common technique to save fuel.
Limits
The main practical limit to the use of a gravity assist maneuver is that planets and other large masses are seldom in the right places to enable a voyage to a particular destination. For example, the Voyager missions which started in the late 1970s were made possible by the "Grand Tour" alignment of Jupiter, Saturn, Uranus and Neptune. A similar alignment will not occur again until the middle of the 22nd century. That is an extreme case, but even for less ambitious missions there are years when the planets are scattered in unsuitable parts of their orbits.
Another limitation is the atmosphere, if any, of the available planet. The closer the spacecraft can approach, the faster its periapsis speed as gravity accelerates the spacecraft, allowing for more kinetic energy to be gained from a rocket burn. However, if a spacecraft gets too deep into the atmosphere, the energy lost to drag can exceed that gained from the planet's velocity. On the other hand, the atmosphere can be used to accomplish aerobraking. There have also been theoretical proposals to use aerodynamic lift as the spacecraft flies through the atmosphere. This maneuver, called an aerogravity assist, could bend the trajectory through a larger angle than gravity alone, and hence increase the gain in energy.
Even in the case of an airless body, there is a limit to how close a spacecraft may approach. The magnitude of the achievable change in velocity depends on the spacecraft's approach velocity and the planet's escape velocity at the point of closest approach (limited by either the surface or the atmosphere.)
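Under the common patched-conic approximation this limit has a simple closed form: the hyperbolic excess speed v_inf is turned through an angle set by the periapsis radius r_p and the body's gravitational parameter μ, giving |Δv| = 2·v_inf / (1 + r_p·v_inf²/μ). A sketch with approximate values for a Jupiter flyby (the periapsis radius is an assumption for illustration):

```python
import math

def flyby_delta_v(v_inf_kms, r_p_km, mu_km3s2):
    """Magnitude of the heliocentric velocity change from an unpowered flyby."""
    e = 1.0 + r_p_km * v_inf_kms**2 / mu_km3s2     # eccentricity of the hyperbolic pass
    turn = 2.0 * math.asin(1.0 / e)                # total deflection angle, radians
    return 2.0 * v_inf_kms * math.sin(turn / 2.0)  # equals 2*v_inf/e

mu_jupiter = 1.267e8   # km^3/s^2, approximate
r_p = 2.0e5            # km, a periapsis a few Jupiter radii out (assumed)
print(flyby_delta_v(10.0, r_p, mu_jupiter))        # km/s
```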
Interplanetary slingshots using the Sun itself are not possible because the Sun is at rest relative to the Solar System as a whole. However, thrusting when near the Sun has the same effect as the powered slingshot described as the Oberth effect. This has the potential to magnify a spacecraft's thrusting power enormously, but is limited by the spacecraft's ability to resist the heat.
A rotating black hole might provide additional assistance, if its spin axis is aligned the right way. General relativity predicts that a large spinning mass produces frame-dragging: close to the object, space itself is dragged around in the direction of the spin. Any ordinary rotating object produces this effect. Although attempts to measure frame dragging about the Sun have produced no clear evidence, experiments performed by Gravity Probe B have detected frame-dragging effects caused by Earth. General relativity predicts that a spinning black hole is surrounded by a region of space, called the ergosphere, within which standing still (with respect to the black hole's spin) is impossible, because space itself is dragged at the speed of light in the same direction as the black hole's spin. The Penrose process may offer a way to gain energy from the ergosphere, although it would require the spaceship to dump some "ballast" into the black hole, and the spaceship would have had to expend energy to carry the "ballast" to the black hole.
Notable examples of use
Luna 3
The gravity assist maneuver was first attempted in 1959 for Luna 3, to photograph the far side of the Moon. The satellite did not gain speed, but its orbit was changed in a way that allowed successful transmission of the photos.
Pioneer 10
NASA's Pioneer 10 is a space probe launched in 1972 that completed the first mission to the planet Jupiter. Thereafter, Pioneer 10 became the first of five artificial objects to achieve the escape velocity needed to leave the Solar System. In December 1973, the Pioneer 10 spacecraft became the first to use the gravitational slingshot effect to reach the escape velocity needed to leave the Solar System.
Pioneer 11
Pioneer 11 was launched by NASA in 1973, to study the asteroid belt, the environment around Jupiter and Saturn, solar winds, and cosmic rays. It was the first probe to encounter Saturn, the second to fly through the asteroid belt, and the second to fly by Jupiter. To get to Saturn, the spacecraft got a gravity assist on Jupiter.
Mariner 10
The Mariner 10 probe was the first spacecraft to use the gravitational slingshot effect to reach another planet, passing by Venus on 5 February 1974 on its way to becoming the first spacecraft to explore Mercury.
Voyager 1
Voyager 1 was launched by NASA on September 5, 1977. It gained the energy to escape the Sun's gravity by performing slingshot maneuvers around Jupiter and Saturn. The spacecraft still communicates with the Deep Space Network to receive routine commands and to transmit data to Earth. Real-time distance and velocity data is provided by NASA and JPL. As of January 12, 2020, it is the most distant human-made object from Earth.
Voyager 2
Voyager 2 was launched by NASA on August 20, 1977, to study the outer planets. Its trajectory took longer to reach Jupiter and Saturn than its twin spacecraft but enabled further encounters with Uranus and Neptune.
Galileo
The Galileo spacecraft was launched by NASA in 1989 and on its route to Jupiter received three gravity assists, one from Venus (February 10, 1990) and two from Earth (December 8, 1990 and December 8, 1992). The spacecraft reached Jupiter in December 1995. Gravity assists also allowed Galileo to fly by two asteroids, 243 Ida and 951 Gaspra.
Ulysses
In 1990, NASA launched the ESA spacecraft Ulysses to study the polar regions of the Sun. All the planets orbit approximately in a plane aligned with the equator of the Sun. Thus, to enter an orbit passing over the poles of the Sun, the spacecraft would have to eliminate the speed it inherited from the Earth's orbit around the Sun and gain the speed needed to orbit the Sun in the pole-to-pole plane. It was achieved by a gravity assist from Jupiter on February 8, 1992.
MESSENGER
The MESSENGER mission (launched in August 2004) made extensive use of gravity assists to slow its speed before orbiting Mercury. The MESSENGER mission included one flyby of Earth, two flybys of Venus, and three flybys of Mercury before finally arriving at Mercury in March 2011 with a velocity low enough to permit orbit insertion with available fuel. Although the flybys were primarily orbital maneuvers, each provided an opportunity for significant scientific observations.
Cassini
The Cassini–Huygens spacecraft was launched from Earth on 15 October 1997, followed by gravity assist flybys of Venus (26 April 1998 and 21 June 1999), Earth (18 August 1999), and Jupiter (30 December 2000). Transit to Saturn took 6.7 years, with the spacecraft arriving on 1 July 2004. Its trajectory was called "the Most Complex Gravity-Assist Trajectory Flown to Date" in 2019.
After entering orbit around Saturn, the Cassini spacecraft used multiple Titan gravity assists to achieve significant changes in the inclination of its orbit as well, so that instead of staying nearly in the equatorial plane, the spacecraft's flight path was inclined well out of the plane of the rings. A typical Titan encounter changed the spacecraft's velocity by 0.75 km/s, and the spacecraft made 127 Titan encounters. These encounters enabled an orbital tour with a wide range of periapsis and apoapsis distances, various alignments of the orbit with respect to the Sun, and orbital inclinations from 0° to 74°. The multiple flybys of Titan also allowed Cassini to fly by other moons, such as Rhea and Enceladus.
Rosetta
The Rosetta probe, launched in March 2004, used four gravity assist maneuvers (including one just 250 km from the surface of Mars, and three assists from Earth) to accelerate throughout the inner Solar System. That enabled it to fly by the asteroids 21 Lutetia and 2867 Šteins as well as eventually match the velocity of the comet 67P/Churyumov–Gerasimenko at the rendezvous point in August 2014.
New Horizons
New Horizons was launched by NASA in 2006, and reached Pluto in 2015. In 2007 it performed a gravity assist on Jupiter.
Juno
The Juno spacecraft was launched on August 5, 2011 (UTC). Its trajectory used a gravity assist speed boost from Earth, accomplished by an Earth flyby in October 2013, two years after launch. In that way Juno changed its orbit (and speed) toward its final goal, Jupiter, arriving after only five years.
Parker Solar Probe
The Parker Solar Probe, launched by NASA in 2018, has seven planned Venus gravity assists. Each gravity assist brings the Parker Solar Probe progressively closer to the Sun. As of 2022, the spacecraft had performed five of its seven assists. The Parker Solar Probe's mission will make the closest approach to the Sun by any space mission. The mission's final planned gravity assist maneuver, completed on November 6, 2024, prepared it for three final solar flybys passing within 3.8 million miles of the surface of the Sun on December 24, 2024.
Solar Orbiter
Solar Orbiter was launched by ESA in 2020. In its initial cruise phase, which lasted until November 2021, Solar Orbiter performed two gravity-assist manoeuvres around Venus and one around Earth to alter the spacecraft's trajectory, guiding it towards the innermost regions of the Solar System. The first close solar pass took place on 26 March 2022 at around a third of Earth's distance from the Sun.
BepiColombo
BepiColombo is a joint mission of the European Space Agency (ESA) and the Japan Aerospace Exploration Agency (JAXA) to the planet Mercury. It was launched on 20 October 2018. It will use the gravity assist technique once with Earth, twice with Venus, and six times with Mercury. It will arrive in 2025. BepiColombo is named after Giuseppe (Bepi) Colombo, a pioneer of this type of maneuver.
Lucy
Lucy was launched by NASA on 16 October 2021. It gained one gravity assist from Earth on the 16th of October, 2022, and after a flyby of the main-belt asteroid 152830 Dinkinesh it will gain another in 2024. In 2025, it will fly by the inner main-belt asteroid 52246 Donaldjohanson. In 2027, it will arrive at the Trojan cloud (the Greek camp of asteroids that orbits about 60° ahead of Jupiter), where it will fly by four Trojans, 3548 Eurybates (with its satellite), 15094 Polymele, 11351 Leucus, and 21900 Orus. After these flybys, Lucy will return to Earth in 2031 for another gravity assist toward the Trojan cloud (the Trojan camp which trails about 60° behind Jupiter), where it will visit the binary Trojan 617 Patroclus with its satellite Menoetius in 2033.
In fiction
In the novel 2001: A Space Odyssey – but not the movie – Discovery performs such a manoeuvre to gain speed as it goes around Jupiter. As Arthur C. Clarke made clear at various times, the location of TMA-2 was switched from near Saturn (in the novel) to near Jupiter (in the movie).
See also
3753 Cruithne, an asteroid which periodically has gravitational slingshot encounters with Earth
Delta-v budget
Low-energy transfer, a type of gravitational assist where a spacecraft is gravitationally snagged into orbit by a celestial body. This method is usually executed in the Earth-Moon system.
Dynamical friction
Flyby anomaly, an anomalous delta-v increase during gravity assists
Gravitational keyhole
Interplanetary Transport Network
n-body problem
Oberth effect, applying thrust near closest approach in a gravity well
Pioneer H, first Out-Of-The-Ecliptic mission (OOE) proposed, for Jupiter and solar (Sun) observations
STEREO, a gravity-assisted mission which used Earth's Moon to eject two spacecraft from Earth's orbit into heliocentric orbit
Notes
References
External links
Basics of Space Flight: A Gravity Assist Primer at NASA.gov
Spaceflight and Spacecraft: Gravity Assist, discussion at Phy6.org
Double-ball drop experiment
Astrodynamics
Soviet inventions
Orbital maneuvers
Spacecraft propulsion
Assist
Articles containing video clips | Gravity assist | [
"Engineering"
] | 4,356 | [
"Astrodynamics",
"Aerospace engineering"
] |
155,760 | https://en.wikipedia.org/wiki/Hohmann%20transfer%20orbit | In astronautics, the Hohmann transfer orbit () is an orbital maneuver used to transfer a spacecraft between two orbits of different altitudes around a central body. For example, a Hohmann transfer could be used to raise a satellite's orbit from low Earth orbit to geostationary orbit. In the idealized case, the initial and target orbits are both circular and coplanar. The maneuver is accomplished by placing the craft into an elliptical transfer orbit that is tangential to both the initial and target orbits. The maneuver uses two impulsive engine burns: the first establishes the transfer orbit, and the second adjusts the orbit to match the target.
The Hohmann maneuver often uses the lowest possible amount of impulse (which consumes a proportional amount of delta-v, and hence propellant) to accomplish the transfer, but requires a relatively longer travel time than higher-impulse transfers. In some cases where one orbit is much larger than the other, a bi-elliptic transfer can use even less impulse, at the cost of even greater travel time.
The maneuver was named after Walter Hohmann, the German scientist who published a description of it in his 1925 book Die Erreichbarkeit der Himmelskörper (The Attainability of Celestial Bodies). Hohmann was influenced in part by the German science fiction author Kurd Lasswitz and his 1897 book Two Planets.
When used for traveling between celestial bodies, a Hohmann transfer orbit requires that the starting and destination points be at particular locations in their orbits relative to each other. Space missions using a Hohmann transfer must wait for this required alignment to occur, which opens a launch window. For a mission between Earth and Mars, for example, these launch windows occur every 26 months. A Hohmann transfer orbit also determines a fixed time required to travel between the starting and destination points; for an Earth-Mars journey this travel time is about 9 months. When transfer is performed between orbits close to celestial bodies with significant gravitation, much less delta-v is usually required, as the Oberth effect may be employed for the burns.
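The 26-month spacing is the Earth–Mars synodic period, which follows directly from the two orbital periods; a small sketch (approximate periods assumed):

```python
T_earth = 365.25   # days
T_mars = 687.0     # days

# Synodic period: time for the same Earth-Mars alignment to recur.
synodic = 1.0 / (1.0 / T_earth - 1.0 / T_mars)
print(f"{synodic:.0f} days  (~{synodic / 30.44:.0f} months)")   # ~780 days, ~26 months
```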
Hohmann transfers are often used for these situations as well, but low-energy transfers, which take into account the thrust limitations of real engines and take advantage of the gravity wells of both planets, can be more fuel efficient.
Example
The diagram shows a Hohmann transfer orbit to bring a spacecraft from a lower circular orbit into a higher one. It is an elliptic orbit that is tangential both to the lower circular orbit the spacecraft is to leave (cyan, labeled 1 on diagram) and the higher circular orbit that it is to reach (red, labeled 3 on diagram). The transfer orbit (yellow, labeled 2 on diagram) is initiated by firing the spacecraft's engine to add energy and raise the apoapsis. When the spacecraft reaches the apoapsis, a second engine firing adds energy to raise the periapsis, putting the spacecraft in the larger circular orbit.
Due to the reversibility of orbits, a similar Hohmann transfer orbit can be used to bring a spacecraft from a higher orbit into a lower one; in this case, the spacecraft's engine is fired in the opposite direction to its current path, slowing the spacecraft and lowering the periapsis of the elliptical transfer orbit to the altitude of the lower target orbit. The engine is then fired again at the lower distance to slow the spacecraft into the lower circular orbit.
The Hohmann transfer orbit is based on two instantaneous velocity changes. Extra fuel is required to compensate for the fact that the bursts take time; this is minimized by using high-thrust engines to minimize the duration of the bursts. For transfers in Earth orbit, the two burns are labelled the perigee burn and the apogee burn (or apogee kick); more generally, for bodies that are not the Earth, they are labelled periapsis and apoapsis burns. Alternatively, the second burn to circularize the orbit may be referred to as a circularization burn.
Type I and Type II
An ideal Hohmann transfer orbit transfers between two circular orbits in the same plane and traverses exactly 180° around the primary. In the real world, the destination orbit may not be circular, and may not be coplanar with the initial orbit. Real world transfer orbits may traverse slightly more, or slightly less, than 180° around the primary. An orbit which traverses less than 180° around the primary is called a "Type I" Hohmann transfer, while an orbit which traverses more than 180° is called a "Type II" Hohmann transfer.
Transfer orbits can go more than 360° around the primary. These multiple-revolution transfers are sometimes referred to as Type III and Type IV, where a Type III is a Type I plus 360°, and a Type IV is a Type II plus 360°.
Uses
A Hohmann transfer orbit can be used to transfer an object's orbit toward another object, as long as they co-orbit a more massive body. In the context of Earth and the Solar System, this includes any object which orbits the Sun. An example of where a Hohmann transfer orbit could be used is to bring an asteroid, orbiting the Sun, into contact with the Earth.
Calculation
For a small body orbiting another much larger body, such as a satellite orbiting Earth, the total energy of the smaller body is the sum of its kinetic energy and potential energy, and this total energy also equals half the potential at the average distance (the semi-major axis):

E = (1/2)·m·v² − μ·m/r = −μ·m/(2a)

Solving this equation for velocity results in the vis-viva equation,

v² = μ·(2/r − 1/a)
where:
v is the speed of an orbiting body,
μ = GM is the standard gravitational parameter of the primary body, assuming the orbiting body's mass is negligible compared with the primary's mass M (for Earth, this is μ ≈ 3.986 × 10^14 m³/s²),
r is the distance of the orbiting body from the primary focus,
a is the semi-major axis of the body's orbit.
Therefore, the delta-v (Δv) required for the Hohmann transfer can be computed as follows, under the assumption of instantaneous impulses:

Δv1 = √(μ/r1) · (√(2·r2/(r1 + r2)) − 1)

to enter the elliptical orbit at r = r1 from the r1 circular orbit, where r2 is the aphelion of the resulting elliptical orbit, and

Δv2 = √(μ/r2) · (1 − √(2·r1/(r1 + r2)))

to leave the elliptical orbit at r = r2 to the r2 circular orbit,
where r1 and r2 are respectively the radii of the departure and arrival circular orbits;
the smaller (greater) of r1 and r2 corresponds to the periapsis distance (apoapsis distance) of the Hohmann elliptical transfer orbit. Typically, μ is given in units of m³/s², so be sure to use meters, not kilometers, for r1 and r2. The total Δv is then:

Δv_total = Δv1 + Δv2
Whether moving into a higher or lower orbit, by Kepler's third law, the time taken to transfer between the orbits is

t_H = (1/2)·√(4π²·a_H³/μ) = π·√((r1 + r2)³/(8μ))

(one half of the orbital period for the whole ellipse), where a_H = (r1 + r2)/2 is the length of the semi-major axis of the Hohmann transfer orbit.
In application to traveling from one celestial body to another it is crucial to start the maneuver at the time when the two bodies are properly aligned. Considering the target's angular velocity

ω2 = √(μ/r2³),

the angular alignment α (in radians) at the time of start between the source object and the target object shall be

α = π − ω2·t_H = π·(1 − (1/(2√2))·√((r1/r2 + 1)³))
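Putting the last two formulas to work for an Earth-to-Mars transfer (a sketch; circular, coplanar orbits and approximate constants are assumed):

```python
import math

mu_sun = 1.327e11          # km^3/s^2, approximate
AU = 1.496e8               # km
r1, r2 = 1.0 * AU, 1.524 * AU   # Earth and Mars orbital radii (circular approximation)

a_h = (r1 + r2) / 2.0                         # semi-major axis of the transfer ellipse
t_h = math.pi * math.sqrt(a_h**3 / mu_sun)    # transfer time, seconds
omega2 = math.sqrt(mu_sun / r2**3)            # Mars's angular velocity, rad/s
alpha = math.pi - omega2 * t_h                # required initial phase angle, radians

print(f"transfer time ~{t_h / 86400:.0f} days, alignment angle ~{math.degrees(alpha):.0f} deg")
# roughly 259 days and 44 degrees for this idealized model
```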
Example
Consider a geostationary transfer orbit, beginning at r1 = 6,678 km (altitude 300 km) and ending in a geostationary orbit with r2 = 42,164 km (altitude 35,786 km).
In the smaller circular orbit the speed is 7.73 km/s; in the larger one, 3.07 km/s. In the elliptical orbit in between the speed varies from 10.15 km/s at the perigee to 1.61 km/s at the apogee.
Therefore the Δv for the first burn is 10.15 − 7.73 = 2.42 km/s, for the second burn 3.07 − 1.61 = 1.46 km/s, and for both together 3.88 km/s.
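These figures can be reproduced directly from the vis-viva equation; a short sketch (the results agree with the rounded numbers above to within about 0.01 km/s):

```python
import math

mu = 3.986e14                  # m^3/s^2, Earth's standard gravitational parameter
r1, r2 = 6.678e6, 4.2164e7     # m: LEO (300 km altitude) and geostationary radii
a = (r1 + r2) / 2.0            # transfer ellipse semi-major axis

def vis_viva(r, a):
    return math.sqrt(mu * (2.0 / r - 1.0 / a))

v_circ1, v_circ2 = vis_viva(r1, r1), vis_viva(r2, r2)   # about 7.73 and 3.07 km/s
v_peri, v_apo = vis_viva(r1, a), vis_viva(r2, a)        # about 10.15 and 1.61 km/s

dv1, dv2 = v_peri - v_circ1, v_circ2 - v_apo
print(f"dv1 = {dv1/1e3:.2f} km/s, dv2 = {dv2/1e3:.2f} km/s, total = {(dv1 + dv2)/1e3:.2f} km/s")
```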
This is greater than the Δv required for an escape orbit: 10.93 − 7.73 = 3.20 km/s. Applying a Δv at the low Earth orbit (LEO) of only 0.78 km/s more (3.20 − 2.42) would give the rocket escape velocity, and that extra 0.78 km/s is less than the Δv of 1.46 km/s required to circularize the geosynchronous orbit. This illustrates the Oberth effect: at large speeds the same Δv provides more specific orbital energy, and the energy increase is maximized if one spends the Δv as quickly as possible, rather than spending some, being decelerated by gravity, and then spending some more to overcome the deceleration (of course, the objective of a Hohmann transfer orbit is different).
Worst case, maximum delta-v
As the example above demonstrates, the Δv required to perform a Hohmann transfer between two circular orbits is not the greatest when the destination radius is infinite. (Escape speed is √2 times orbital speed, so the Δv required to escape is √2 − 1, or 41.4%, of the orbital speed.) The required Δv is greatest, at about 53.6% of the smaller orbital speed, when the radius of the larger orbit is 15.5817... times that of the smaller orbit. This number is the positive root of the cubic equation x³ = 15x² + 9x + 1. For higher orbit ratios the Δv required for the second burn decreases faster than that for the first increases.
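A quick numerical check of this claim (a sketch scanning the ratio of orbital radii):

```python
import math

def hohmann_total_dv_over_v1(ratio):
    """Total Hohmann delta-v divided by the initial circular orbit speed, for r2/r1 = ratio."""
    dv1 = math.sqrt(2.0 * ratio / (1.0 + ratio)) - 1.0
    dv2 = (1.0 - math.sqrt(2.0 / (1.0 + ratio))) / math.sqrt(ratio)
    return dv1 + dv2

worst = max((hohmann_total_dv_over_v1(2 + i * 0.001), 2 + i * 0.001) for i in range(48001))
print(worst)   # about (0.536, 15.58): the worst case is ~53.6% of v1 near r2/r1 ~ 15.58
```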
Application to interplanetary travel
When used to move a spacecraft from orbiting one planet to orbiting another, the Oberth effect allows the use of less delta-v than the sum of the delta-v for separate manoeuvres to escape the first planet, followed by a Hohmann transfer to the second planet, followed by insertion into an orbit around the other planet.
For example, consider a spacecraft travelling from Earth to Mars. At the beginning of its journey, the spacecraft will already have a certain velocity and kinetic energy associated with its orbit around Earth. During the burn the rocket engine applies its delta-v, but the kinetic energy increases as a square law, until it is sufficient to escape the planet's gravitational potential, and then burns more so as to gain enough energy to get into the Hohmann transfer orbit (around the Sun). Because the rocket engine is able to make use of the initial kinetic energy of the propellant, far less delta-v is required over and above that needed to reach escape velocity, and the optimum situation is when the transfer burn is made at minimum altitude (low periapsis) above the planet. The delta-v needed is only 3.6 km/s, only about 0.4 km/s more than needed to escape Earth, even though this results in the spacecraft going 2.9 km/s faster than the Earth as it heads off for Mars (see table below).
At the other end, the spacecraft must decelerate for the gravity of Mars to capture it. This capture burn should optimally be done at low altitude to also make best use of the Oberth effect. Therefore, relatively small amounts of thrust at either end of the trip are needed to arrange the transfer compared to the free space situation.
However, with any Hohmann transfer, the alignment of the two planets in their orbits is crucial – the destination planet and the spacecraft must arrive at the same point in their respective orbits around the Sun at the same time. This requirement for alignment gives rise to the concept of launch windows.
The term lunar transfer orbit (LTO) is used for the Moon.
It is possible to apply the formula given above to calculate the Δv in km/s needed to enter a Hohmann transfer orbit to arrive at various destinations from Earth (assuming circular orbits for the planets). In this table, the column labeled "Δv to enter Hohmann orbit from Earth's orbit" gives the change from Earth's velocity to the velocity needed to get on a Hohmann ellipse whose other end will be at the desired distance from the Sun. The column labeled "LEO height" gives the velocity needed (in a non-rotating frame of reference centered on the earth) when 300 km above the Earth's surface. This is obtained by adding to the specific kinetic energy the square of the escape velocity (10.9 km/s) from this height. The column "LEO" is simply the previous speed minus the LEO orbital speed of 7.73 km/s.
Note that in most cases, Δv from LEO is less than the Δv to enter Hohmann orbit from Earth's orbit.
To get to the Sun, it is actually not necessary to use a Δv of 24 km/s. One can use 8.8 km/s to go very far away from the Sun, then use a negligible Δv to bring the angular momentum to zero, and then fall into the Sun. This can be considered a sequence of two Hohmann transfers, one up and one down. Also, the table does not give the values that would apply when using the Moon for a gravity assist. There are also possibilities of using one planet, like Venus which is the easiest to get to, to assist getting to other planets or the Sun.
Comparison to other transfers
Bi-elliptic transfer
The bi-elliptic transfer consists of two half-elliptic orbits. From the initial orbit, a first burn expends delta-v to boost the spacecraft into the first transfer orbit with an apoapsis at some point away from the central body. At this point a second burn sends the spacecraft into the second elliptical orbit with periapsis at the radius of the final desired orbit, where a third burn is performed, injecting the spacecraft into the desired orbit.
While they require one more engine burn than a Hohmann transfer and generally require a greater travel time, some bi-elliptic transfers require a lower amount of total delta-v than a Hohmann transfer when the ratio of final to initial semi-major axis is 11.94 or greater, depending on the intermediate semi-major axis chosen.
The idea of the bi-elliptical transfer trajectory was first published by Ary Sternfeld in 1934.
Low-thrust transfer
Low-thrust engines can perform an approximation of a Hohmann transfer orbit, by creating a gradual enlargement of the initial circular orbit through carefully timed engine firings. This requires a change in velocity (delta-v) that is greater than the two-impulse transfer orbit and takes longer to complete.
Engines such as ion thrusters are more difficult to analyze with the delta-v model. These engines offer a very low thrust and at the same time, much higher delta-v budget, much higher specific impulse, lower mass of fuel and engine. A 2-burn Hohmann transfer maneuver would be impractical with such a low thrust; the maneuver mainly optimizes the use of fuel, but in this situation there is relatively plenty of it.
If only low-thrust maneuvers are planned on a mission, then continuously firing a low-thrust, but very high-efficiency engine might generate a higher delta-v and at the same time use less propellant than a conventional chemical rocket engine.
Going from one circular orbit to another by gradually changing the radius simply requires the same delta-v as the difference between the two speeds. Such a maneuver requires more delta-v than a two-burn Hohmann transfer maneuver, but does so with continuous low thrust rather than short applications of high thrust.
The amount of propellant mass used measures the efficiency of the maneuver plus the hardware employed for it. The total delta-v used measures the efficiency of the maneuver only. For electric propulsion systems, which tend to be low-thrust, the high efficiency of the propulsive system usually compensates for the higher delta-V compared to the more efficient Hohmann maneuver.
Transfer orbits using electrical propulsion or low-thrust engines optimize the transfer time to reach the final orbit and not the delta-v as in the Hohmann transfer orbit. For geostationary orbit, the initial orbit is set to be supersynchronous and by thrusting continuously in the direction of the velocity at apogee, the transfer orbit transforms to a circular geosynchronous one. This method however takes much longer to achieve due to the low thrust injected into the orbit.
Interplanetary Transport Network
In 1997, a set of orbits known as the Interplanetary Transport Network (ITN) was published, providing even lower propulsive delta-v (though much slower and longer) paths between different orbits than Hohmann transfer orbits. The Interplanetary Transport Network is different in nature than Hohmann transfers because Hohmann transfers assume only one large body whereas the Interplanetary Transport Network does not. The Interplanetary Transport Network is able to achieve the use of less propulsive delta-v by employing gravity assist from the planets.
See also
Bi-elliptic transfer
Delta-v budget
Geostationary transfer orbit
Halo orbit
Lissajous orbit
List of orbits
Orbital mechanics
Citations
General and cited sources
Further reading
Astrodynamics
Spacecraft propulsion
Orbital maneuvers
Types of orbit | Hohmann transfer orbit | [
"Engineering"
] | 3,516 | [
"Astrodynamics",
"Aerospace engineering"
] |
155,823 | https://en.wikipedia.org/wiki/Sievert | The sievert (symbol: Sv) is a unit in the International System of Units (SI) intended to represent the stochastic health risk of ionizing radiation, which is defined as the probability of causing radiation-induced cancer and genetic damage. The sievert is important in dosimetry and radiation protection. It is named after Rolf Maximilian Sievert, a Swedish medical physicist renowned for work on radiation dose measurement and research into the biological effects of radiation.
The sievert is used for radiation dose quantities such as equivalent dose and effective dose, which represent the risk of external radiation from sources outside the body, and committed dose, which represents the risk of internal irradiation due to inhaled or ingested radioactive substances. According to the International Commission on Radiological Protection (ICRP), one sievert results in a 5.5% probability of eventually developing fatal cancer based on the disputed linear no-threshold model of ionizing radiation exposure.
To calculate the value of stochastic health risk in sieverts, the physical quantity absorbed dose is converted into equivalent dose and effective dose by applying factors for radiation type and biological context, published by the ICRP and the International Commission on Radiation Units and Measurements (ICRU). One sievert equals 100 rem, which is an older, CGS radiation unit.
Conventionally, deterministic health effects due to acute tissue damage that is certain to happen, produced by high dose rates of radiation, are compared to the physical quantity absorbed dose measured by the unit gray (Gy).
Definition
CIPM definition of the sievert
The SI definition given by the International Committee for Weights and Measures (CIPM) says:
"The quantity dose equivalent H is the product of the absorbed dose D of ionizing radiation and the dimensionless factor Q (quality factor) defined as a function of linear energy transfer by the ICRU"
H = Q × D
The value of Q is not defined further by CIPM, but it requires the use of the relevant ICRU recommendations to provide this value.
The CIPM also says that "in order to avoid any risk of confusion between the absorbed dose D and the dose equivalent H, the special names for the respective units should be used, that is, the name gray should be used instead of joules per kilogram for the unit of absorbed dose D and the name sievert instead of joules per kilogram for the unit of dose equivalent H".
In summary:
gray: quantity D—absorbed dose
1 Gy = 1 joule/kilogram—a physical quantity. 1 Gy is the deposit of a joule of radiation energy per kilogram of matter or tissue.
sievert: quantity H—equivalent dose
1 Sv = 1 joule/kilogram—a biological effect. The sievert represents the equivalent biological effect of the deposit of a joule of radiation energy in a kilogram of human tissue. The ratio to absorbed dose is denoted by Q.
ICRP definition of the sievert
The ICRP definition of the sievert is:
"The sievert is the special name for the SI unit of equivalent dose, effective dose, and operational dose quantities. The unit is joule per kilogram."
The sievert is used for a number of dose quantities which are described in this article and are part of the international radiological protection system devised and defined by the ICRP and ICRU.
External dose quantities
When the sievert is used to represent the stochastic effects of external ionizing radiation on human tissue, the radiation doses received are measured in practice by radiometric instruments and dosimeters and are called operational quantities. To relate these actual received doses to likely health effects, protection quantities have been developed to predict the likely health effects using the results of large epidemiological studies. Consequently, this has required the creation of a number of different dose quantities within a coherent system developed by the ICRU working with the ICRP.
The external dose quantities and their relationships are shown in the accompanying diagram. The ICRU is primarily responsible for the operational dose quantities, based upon the application of ionising radiation metrology, and the ICRP is primarily responsible for the protection quantities, based upon modelling of dose uptake and biological sensitivity of the human body.
Naming conventions
The ICRU/ICRP dose quantities have specific purposes and meanings, but some use common words in a different order. There can be confusion between, for instance, equivalent dose and dose equivalent.
Although the CIPM definition states that the linear energy transfer function (Q) of the ICRU is used in calculating the biological effect, the ICRP in 1990 developed the "protection" dose quantities effective and equivalent dose which are calculated from more complex computational models and are distinguished by not having the phrase dose equivalent in their name. Only the operational dose quantities which still use Q for calculation retain the phrase dose equivalent. However, there are joint ICRU/ICRP proposals to simplify this system by changes to the operational dose definitions to harmonise with those of protection quantities. These were outlined at the 3rd International Symposium on Radiological Protection in October 2015, and if implemented would make the naming of operational quantities more logical by introducing "dose to lens of eye" and "dose to local skin" as equivalent doses.
In the USA there are differently named dose quantities which are not part of the ICRP nomenclature.
Physical quantities
These are directly measurable physical quantities in which no allowance has been made for biological effects. Radiation fluence is the number of radiation particles impinging per unit area per unit time, kerma is the ionising effect on air of gamma rays and X-rays and is used for instrument calibration, and absorbed dose is the amount of radiation energy deposited per unit mass in the matter or tissue under consideration.
Operational quantities
Operational quantities are measured in practice, and are the means of directly measuring dose uptake due to exposure, or predicting dose uptake in a measured environment. In this way they are used for practical dose control, by providing an estimate or upper limit for the value of the protection quantities related to an exposure. They are also used in practical regulations and guidance.
The calibration of individual and area dosimeters in photon fields is performed by measuring the collision "air kerma free in air" under conditions of secondary electron equilibrium. Then the appropriate operational quantity is derived applying a conversion coefficient that relates the air kerma to the appropriate operational quantity. The conversion coefficients for photon radiation are published by the ICRU.
Simple (non-anthropomorphic) "phantoms" are used to relate operational quantities to measured free-air irradiation. The ICRU sphere phantom is based on the definition of an ICRU 4-element tissue-equivalent material which does not really exist and cannot be fabricated. The ICRU sphere is a theoretical 30 cm diameter "tissue equivalent" sphere consisting of a material with a density of 1 g·cm−3 and a mass composition of 76.2% oxygen, 11.1% carbon, 10.1% hydrogen and 2.6% nitrogen. This material is specified to most closely approximate human tissue in its absorption properties. According to the ICRP, the ICRU "sphere phantom" in most cases adequately approximates the human body as regards the scattering and attenuation of penetrating radiation fields under consideration. Thus radiation of a particular energy fluence will have roughly the same energy deposition within the sphere as it would in the equivalent mass of human tissue.
To allow for back-scattering and absorption of the human body, the "slab phantom" is used to represent the human torso for practical calibration of whole body dosimeters. The slab phantom is 300 mm × 300 mm × 150 mm in depth to represent the human torso.
The joint ICRU/ICRP proposals outlined at the 3rd International Symposium on Radiological Protection in October 2015 to change the definition of operational quantities would not change the present use of calibration phantoms or reference radiation fields.
Protection quantities
Protection quantities are calculated models, and are used as "limiting quantities" to specify exposure limits to ensure, in the words of ICRP, "that the occurrence of stochastic health effects is kept below unacceptable levels and that tissue reactions are avoided". These quantities cannot be measured in practice but their values are derived using models of external dose to internal organs of the human body, using anthropomorphic phantoms. These are 3D computational models of the body which take into account a number of complex effects such as body self-shielding and internal scattering of radiation. The calculation starts with organ absorbed dose, and then applies radiation and tissue weighting factors.
As protection quantities cannot practically be measured, operational quantities must be used to relate them to practical radiation instrument and dosimeter responses.
Instrument and dosimetry response
This is an actual reading obtained from such as an ambient dose gamma monitor, or a personal dosimeter. Such instruments are calibrated using radiation metrology techniques which will trace them to a national radiation standard, and thereby relate them to an operational quantity. The readings of instruments and dosimeters are used to prevent the uptake of excessive dose and to provide records of dose uptake to satisfy radiation safety legislation; such as in the UK, the Ionising Radiations Regulations 1999.
Calculating protection dose quantities
The sievert is used in external radiation protection for equivalent dose (the external-source, whole-body exposure effects, in a uniform field), and effective dose (which depends on the body parts irradiated).
These dose quantities are weighted averages of absorbed dose designed to be representative of the stochastic health effects of radiation, and use of the sievert implies that appropriate weighting factors have been applied to the absorbed dose measurement or calculation (expressed in grays).
The ICRP calculation provides two weighting factors to enable the calculation of protection quantities.
1. The radiation factor WR, which is specific for radiation type R – This is used in calculating the equivalent dose HT which can be for the whole body or for individual organs.
2. The tissue weighting factor WT, which is specific for tissue type T being irradiated. This is used with WR to calculate the contributory organ doses to arrive at an effective dose E for non-uniform irradiation.
When a whole body is irradiated uniformly only the radiation weighting factor WR is used, and the effective dose equals the whole body equivalent dose. But if the irradiation of a body is partial or non-uniform the tissue factor WT is used to calculate dose to each organ or tissue. These are then summed to obtain the effective dose. In the case of uniform irradiation of the human body, these summate to 1, but in the case of partial or non-uniform irradiation, they will summate to a lower value depending on the organs concerned; reflecting the lower overall health effect. The calculation process is shown on the accompanying diagram. This approach calculates the biological risk contribution to the whole body, taking into account complete or partial irradiation, and the radiation type or types.
The values of these weighting factors are conservatively chosen to be greater than the bulk of experimental values observed for the most sensitive cell types, based on averages of those obtained for the human population.
Radiation type weighting factor WR
Since different radiation types have different biological effects for the same deposited energy, a corrective radiation weighting factor WR, which is dependent on the radiation type and on the target tissue, is applied to convert the absorbed dose measured in the unit gray to determine the equivalent dose. The result is given the unit sievert.
The equivalent dose is calculated by multiplying the absorbed energy, averaged by mass over an organ or tissue of interest, by a radiation weighting factor appropriate to the type and energy of radiation. To obtain the equivalent dose for a mix of radiation types and energies, a sum is taken over all types of radiation energy dose:

H_T = Σ_R (W_R · D_T,R)

where
H_T is the equivalent dose absorbed by tissue T,
D_T,R is the absorbed dose in tissue T by radiation type R and
W_R is the radiation weighting factor defined by regulation.
Thus for example, an absorbed dose of 1 Gy by alpha particles will lead to an equivalent dose of 20 Sv.
This may seem to be a paradox. It implies that the energy of the incident radiation field in joules has increased by a factor of 20, thereby violating the laws of conservation of energy. However, this is not the case. The sievert is used only to convey the fact that a gray of absorbed alpha particles would cause twenty times the biological effect of a gray of absorbed x-rays. It is this biological component that is being expressed when using sieverts rather than the actual energy delivered by the incident absorbed radiation.
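A small sketch of this conversion; the radiation weighting factors shown follow the ICRP Publication 103 recommendations and are listed here for illustration:

```python
# Radiation weighting factors W_R (ICRP 103: photons and electrons 1, protons 2, alpha particles 20).
W_R = {"photon": 1.0, "electron": 1.0, "proton": 2.0, "alpha": 20.0}

def equivalent_dose_sv(absorbed_doses_gy):
    """Equivalent dose H_T in sieverts from absorbed doses in grays, keyed by radiation type."""
    return sum(W_R[kind] * dose for kind, dose in absorbed_doses_gy.items())

# 1 Gy of alpha radiation alone gives 20 Sv, as in the text.
print(equivalent_dose_sv({"alpha": 1.0}))                    # 20.0
print(equivalent_dose_sv({"photon": 0.5, "alpha": 0.01}))    # 0.5*1 + 0.01*20 = 0.7 Sv
```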
Tissue type weighting factor WT
The second weighting factor is the tissue factor WT, but it is used only if there has been non-uniform irradiation of a body. If the body has been subject to uniform irradiation, the effective dose equals the whole body equivalent dose, and only the radiation weighting factor WR is used. But if there is partial or non-uniform body irradiation the calculation must take account of the individual organ doses received, because the sensitivity of each organ to irradiation depends on their tissue type. This summed dose from only those organs concerned gives the effective dose for the whole body. The tissue weighting factor is used to calculate those individual organ dose contributions.
The ICRP values for WT are given in the table shown here.
The article on effective dose gives the method of calculation. The absorbed dose is first corrected for the radiation type to give the equivalent dose, and then corrected for the tissue receiving the radiation. Some tissues like bone marrow are particularly sensitive to radiation, so they are given a weighting factor that is disproportionately large relative to the fraction of body mass they represent. Other tissues like the hard bone surface are particularly insensitive to radiation and are assigned a disproportionately low weighting factor.
In summary, the sum of tissue-weighted doses to each irradiated organ or tissue of the body adds up to the effective dose for the body. The use of effective dose enables comparisons of overall dose received regardless of the extent of body irradiation.
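A rough illustration of how tissue-weighted organ doses combine into an effective dose, under the same caveats as the sketch above: the tissue factors listed are only a small subset of the ICRP 103 values and the function is illustrative, not the full ICRP calculation.

```python
# Effective dose: E = sum over tissues T of W_T * H_T, with H_T in sievert.
# Only a handful of the ICRP 103 tissue weighting factors are listed here.
TISSUE_WEIGHTING = {
    "red_bone_marrow": 0.12,
    "lung": 0.12,
    "thyroid": 0.04,
    "bone_surface": 0.01,
    "skin": 0.01,
}

def effective_dose_sv(equivalent_dose_sv_by_tissue):
    """Sum W_T * H_T over the irradiated tissues, giving effective dose in sievert."""
    return sum(TISSUE_WEIGHTING[tissue] * h_sv
               for tissue, h_sv in equivalent_dose_sv_by_tissue.items())

# 10 mSv equivalent dose to the lungs alone contributes 0.12 * 10 mSv,
# i.e. about 1.2 mSv of effective dose to the whole body.
print(effective_dose_sv({"lung": 0.010}))
```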
Operational quantities
The operational quantities are used in practical applications for monitoring and investigating external exposure situations. They are defined for practical operational measurements and assessment of doses in the body. Three external operational dose quantities were devised to relate operational dosimeter and instrument measurements to the calculated protection quantities. Also devised were two phantoms, the ICRU "slab" and "sphere" phantoms, which relate these quantities to incident radiation quantities using the Q(L) calculation.
Ambient dose equivalent
This is used for area monitoring of penetrating radiation and is usually expressed as the quantity H*(10). This means the radiation is equivalent to that found 10 mm within the ICRU sphere phantom in the direction of origin of the field. An example of penetrating radiation is gamma rays.
Directional dose equivalent
This is used for monitoring of low-penetrating radiation and is usually expressed as the quantity H′(0.07). This means the radiation is equivalent to that found at a depth of 0.07 mm in the ICRU sphere phantom. Examples of low-penetrating radiation are alpha particles, beta particles and low-energy photons. This dose quantity is used for the determination of equivalent dose to tissues such as the skin or the lens of the eye. In radiological protection practice, the value of Ω (the direction of incidence) is usually not specified, as the dose is usually at a maximum at the point of interest.
Personal dose equivalent
This is used for individual dose monitoring, such as with a personal dosimeter worn on the body. The recommended depth for assessment is 10 mm which gives the quantity Hp(10).
Proposals for changing the definition of protection dose quantities
In order to simplify the calculation of operational quantities and assist in the comprehension of radiation dose protection quantities, ICRP Committee 2 and ICRU Report Committee 26 began in 2010 an examination of different means of achieving this through dose coefficients related to effective dose or absorbed dose.
Specifically:
1. For area monitoring of the effective dose to the whole body it would be: H = Φ × conversion coefficient
The driver for this is that H∗(10) is not a reasonable estimate of effective dose due to high energy photons, as a result of the extension of particle types and energy ranges to be considered in ICRP report 116. This change would remove the need for the ICRU sphere and introduce a new quantity called Emax.
2. For individual monitoring, to measure deterministic effects on the eye lens and skin, it would be: D = Φ × conversion coefficient for absorbed dose.
The driver for this is the need to measure the deterministic effect, which, it is suggested, is more appropriate than the stochastic effect. This would calculate equivalent dose quantities Hlens and Hskin.
This would remove the need for the ICRU Sphere and the Q-L function. Any changes would replace ICRU report 51, and part of report 57.
A final draft report was issued in July 2017 by ICRU/ICRP for consultation.
Internal dose quantities
The sievert is used for human internal dose quantities in calculating committed dose. This is dose from radionuclides which have been ingested or inhaled into the human body, and thereby "committed" to irradiate the body for a period of time. The concepts of calculating protection quantities described for external radiation apply, but as the source of radiation is within the tissue of the body, the calculation of absorbed organ dose uses different coefficients and irradiation mechanisms.
The ICRP defines the committed effective dose, E(t), as the sum of the products of the committed organ or tissue equivalent doses and the appropriate tissue weighting factors WT, where t is the integration time in years following the intake. The commitment period is taken to be 50 years for adults, and to age 70 years for children.
The ICRP further states "For internal exposure, committed effective doses are generally determined from an assessment of the intakes of radionuclides from bioassay measurements or other quantities (e.g., activity retained in the body or in daily excreta). The radiation dose is determined from the intake using recommended dose coefficients".
A committed dose from an internal source is intended to carry the same effective risk as the same amount of equivalent dose applied uniformly to the whole body from an external source, or the same amount of effective dose applied to part of the body.
Health effects
Ionizing radiation has deterministic and stochastic effects on human health. Deterministic (acute tissue effect) events happen with certainty, with the resulting health conditions occurring in every individual who received the same high dose. Stochastic (cancer induction and genetic) events are inherently random, with most individuals in a group failing to ever exhibit any causal negative health effects after exposure, while an indeterministic random minority do, often with the resulting subtle negative health effects being observable only after large detailed epidemiology studies.
The use of the sievert implies that only stochastic effects are being considered, and to avoid confusion deterministic effects are conventionally compared to values of absorbed dose expressed by the SI unit gray (Gy).
Stochastic effects
Stochastic effects are those that occur randomly, such as radiation-induced cancer. The consensus of nuclear regulators, governments and the UNSCEAR is that the incidence of cancers due to ionizing radiation can be modeled as increasing linearly with effective dose at a rate of 5.5% per sievert. This is known as the linear no-threshold model (LNT model). Some argue that this LNT model is now outdated and should be replaced with a threshold below which the body's natural cell processes repair damage and/or replace damaged cells. There is general agreement that the risk is much higher for infants and fetuses than adults, higher for the middle-aged than for seniors, and higher for women than for men, though there is no quantitative consensus about this.
Deterministic effects
The deterministic (acute tissue damage) effects that can lead to acute radiation syndrome only occur in the case of acute high doses (≳ 0.1 Gy) and high dose rates (≳ 0.1 Gy/h) and are conventionally not measured using the unit sievert, but use the unit gray (Gy).
A model of deterministic risk would require different weighting factors (not yet established) than are used in the calculation of equivalent and effective dose.
ICRP dose limits
The ICRP recommends a number of limits for dose uptake in table 8 of report 103. These limits are "situational", for planned, emergency and existing situations. Within these situations, limits are given for the following groups:
Planned exposure – limits given for occupational, medical and public
Emergency exposure – limits given for occupational and public exposure
Existing exposure – All persons exposed
For occupational exposure, the limit is 50 mSv in a single year with a maximum of 100 mSv in a consecutive five-year period, and for the public an average of 1 mSv (0.001 Sv) of effective dose per year, not including medical and occupational exposures.
For comparison, natural radiation levels inside the United States Capitol are such that a human body would receive an additional dose rate of 0.85 mSv/a, close to the regulatory limit, because of the uranium content of the granite structure. According to the conservative ICRP model, someone who spent 20 years inside the capitol building would have an extra one in a thousand chance of getting cancer, over and above any other existing risk (calculated as: 20 a·0.85 mSv/a·0.001 Sv/mSv·5.5%/Sv ≈ 0.1%). However, that "existing risk" is much higher; an average American would have a 10% chance of getting cancer during this same 20-year period, even without any exposure to artificial radiation (see Epidemiology of cancer and natural cancer rates).
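The risk arithmetic in the Capitol example can be checked directly; a short sketch using only the numbers quoted above (the 5.5%/Sv coefficient from the linear no-threshold model and the 0.85 mSv/a dose rate).

```python
# LNT risk estimate for the Capitol example:
# 20 years * 0.85 mSv/a * 5.5% per sievert of effective dose.
dose_rate_sv_per_year = 0.85e-3   # 0.85 mSv/a
years_exposed = 20
risk_per_sievert = 0.055          # 5.5%/Sv, the LNT coefficient quoted above

extra_risk = dose_rate_sv_per_year * years_exposed * risk_per_sievert
print(f"{extra_risk:.2%}")        # about 0.09%, i.e. roughly 1 in 1000
```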
Dose examples
Significant radiation doses are not frequently encountered in everyday life. The following examples can help illustrate relative magnitudes; these are meant to be examples only, not a comprehensive list of possible radiation doses. An "acute dose" is one that occurs over a short and finite period of time, while a "chronic dose" is a dose that continues for an extended period of time so that it is better described by a dose rate.
Dose examples
Dose rate examples
All conversions between hours and years have assumed continuous presence in a steady field, disregarding known fluctuations, intermittent exposure and radioactive decay. Converted values are shown in parentheses. "/a" is "per annum", which means per year. "/h" means "per hour".
History
The sievert has its origin in the röntgen equivalent man (rem) which was derived from CGS units. The International Commission on Radiation Units and Measurements (ICRU) promoted a switch to coherent SI units in the 1970s, and announced in 1976 that it planned to formulate a suitable unit for equivalent dose. The ICRP pre-empted the ICRU by introducing the sievert in 1977.
The sievert was adopted by the International Committee for Weights and Measures (CIPM) in 1980, five years after adopting the gray. The CIPM then issued an explanation in 1984, recommending when the sievert should be used as opposed to the gray. That explanation was updated in 2002 to bring it closer to the ICRP's definition of equivalent dose, which had changed in 1990. Specifically, the ICRP had introduced equivalent dose, renamed the quality factor (Q) to radiation weighting factor (WR), and dropped another weighting factor "N" in 1990. In 2002, the CIPM similarly dropped the weighting factor "N" from their explanation but otherwise kept other old terminology and symbols. This explanation only appears in the appendix to the SI brochure and is not part of the definition of the sievert.
Common SI usage
Frequently used SI prefixes are the millisievert (1 mSv = 0.001 Sv) and microsievert (1 μSv = 0.000 001 Sv) and commonly used units for time derivative or "dose rate" indications on instruments and warnings for radiological protection are μSv/h and mSv/h. Regulatory limits and chronic doses are often given in units of mSv/a or Sv/a, where they are understood to represent an average over the entire year. In many occupational scenarios, the hourly dose rate might fluctuate to levels thousands of times higher for a brief period of time, without infringing on the annual limits. The conversion from hours to years varies because of leap years and exposure schedules, but approximate conversions are:
1 mSv/h = 8.766 Sv/a
114.1 μSv/h = 1 Sv/a
Conversion from hourly rates to annual rates is further complicated by seasonal fluctuations in natural radiation, decay of artificial sources, and intermittent proximity between humans and sources. The ICRP once adopted fixed conversions for occupational exposure, although these have not appeared in recent documents:
8 h = 1 day
40 h = 1 week
50 weeks = 1 year
Therefore, for occupational exposures of that time period (see the calculation sketch after these conversions),
1 mSv/h = 2 Sv/a
500 μSv/h = 1 Sv/a
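These hourly-to-annual conversions follow from the assumed number of exposure hours per year; a brief sketch (8,766 h is the average calendar year, 2,000 h the 8 h × 5 d × 50 wk occupational year of the older ICRP convention; the function name is illustrative only).

```python
HOURS_PER_CALENDAR_YEAR = 8766      # 365.25 days * 24 h
HOURS_PER_OCCUPATIONAL_YEAR = 2000  # 8 h/day * 5 days/week * 50 weeks

def annual_dose_sv(rate_sv_per_hour, hours_per_year):
    """Convert a steady dose rate in Sv/h into an annual dose in Sv/a."""
    return rate_sv_per_hour * hours_per_year

print(annual_dose_sv(1e-3, HOURS_PER_CALENDAR_YEAR))        # 8.766 Sv/a
print(annual_dose_sv(114.1e-6, HOURS_PER_CALENDAR_YEAR))    # ~1.0 Sv/a
print(annual_dose_sv(1e-3, HOURS_PER_OCCUPATIONAL_YEAR))    # 2.0 Sv/a
print(annual_dose_sv(500e-6, HOURS_PER_OCCUPATIONAL_YEAR))  # 1.0 Sv/a
```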
Ionizing radiation quantities
The following table shows radiation quantities in SI and non-SI units:
Although the United States Nuclear Regulatory Commission permits the use of the units curie, rad, and rem alongside SI units, the European Union European units of measurement directives required that their use for "public health ... purposes" be phased out by 31 December 1985.
Rem equivalence
An older unit for the dose equivalent is the rem, still often used in the United States. One sievert is equal to 100 rem:
See also
Acute radiation syndrome
Becquerel (disintegrations per second)
Counts per minute
Exposure (radiation)
Rutherford (unit)
Sverdrup (a non-SI unit of volume transport with the same symbol Sv as sievert)
Explanatory notes
References
External links
Eurados - The European radiation dosimetry group
Radiation health effects
Radiobiology
Radioactivity
Units of radiation dose
Units of radioactivity
Radiation protection
SI derived units | Sievert | [
"Physics",
"Chemistry",
"Materials_science",
"Mathematics",
"Biology"
] | 5,398 | [
"Radiation health effects",
"Units of measurement",
"Radiobiology",
"Quantity",
"Units of radioactivity",
"Units of radiation dose",
"Nuclear physics",
"Radiation effects",
"Radioactivity"
] |
155,829 | https://en.wikipedia.org/wiki/Curie%20%28unit%29 | The curie (symbol Ci) is a non-SI unit of radioactivity originally defined in 1910. According to a notice in Nature at the time, it was to be named in honour of Pierre Curie, but was considered at least by some to be in honour of Marie Curie as well, and is in later literature considered to be named for both.
It was originally defined as "the quantity or mass of radium emanation in equilibrium with one gram of radium (element)", but is currently defined as 1 Ci = 3.7 × 10^10 decays per second after more accurate measurements of the activity of 226Ra (which has a specific activity of 3.66 × 10^10 Bq/g).
In 1975 the General Conference on Weights and Measures gave the becquerel (Bq), defined as one nuclear decay per second, official status as the SI unit of activity.
Therefore:
1 Ci = 3.7 × 10^10 Bq = 37 GBq
and
1 Bq ≅ 2.703 × 10^−11 Ci ≅ 27 pCi
While its continued use is discouraged by the National Institute of Standards and Technology (NIST) and other bodies, the curie is still widely used throughout government, industry and medicine in the United States and in other countries.
At the 1910 meeting, which originally defined the curie, it was proposed to make it equivalent to 10 nanograms of radium (a practical amount). But Marie Curie, after initially accepting this, changed her mind and insisted on one gram of radium. According to Bertram Boltwood, Marie Curie thought that "the use of the name 'curie' for so infinitesimally small [a] quantity of anything was altogether inappropriate".
The power emitted in radioactive decay corresponding to one curie can be calculated by multiplying the decay energy by approximately 5.93 mW / MeV.
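A numerical illustration of that rule of thumb; the figure of roughly 5.93 mW/MeV is simply 3.7 × 10^10 decays per second multiplied by the energy of one MeV in joules (the function name and the 5 MeV example are illustrative only).

```python
CURIE_BQ = 3.7e10          # decays per second in one curie
MEV_IN_JOULES = 1.602e-13  # energy of one MeV in joules

def watts_per_curie(decay_energy_mev):
    """Power released by 1 Ci of a nuclide with the given energy per decay."""
    return CURIE_BQ * decay_energy_mev * MEV_IN_JOULES

print(watts_per_curie(1.0))  # ~0.00593 W, i.e. the ~5.93 mW/MeV rule of thumb
print(watts_per_curie(5.0))  # ~0.03 W for a nuclide releasing 5 MeV per decay
```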
A radiotherapy machine may have roughly 1000 Ci of a radioisotope such as caesium-137 or cobalt-60. This quantity of radioactivity can produce serious health effects with only a few minutes of close-range, unshielded exposure.
Radioactive decay can lead to the emission of particulate radiation or electromagnetic radiation. Ingesting even small quantities of some particulate emitting radionuclides may be fatal. For example, the median lethal dose (LD-50) for ingested polonium-210 is 240 μCi; about 53.5 nanograms.
The typical human body contains roughly 0.1 μCi (14 mg) of naturally occurring potassium-40. A human body containing about 16 kg of carbon (see Composition of the human body) would also have about 24 nanograms or 0.1 μCi of carbon-14. Together, these would result in a total of approximately 0.2 μCi or 7400 decays per second inside the person's body (mostly from beta decay but some from gamma decay).
As a measure of quantity
Units of activity (the curie and the becquerel) also refer to a quantity of radioactive atoms. Because the probability of decay is a fixed physical quantity, for a known number of atoms of a particular radionuclide, a predictable number will decay in a given time. The number of decays that will occur in one second in one gram of atoms of a particular radionuclide is known as the specific activity of that radionuclide.
The activity of a sample decreases with time because of decay.
The rules of radioactive decay may be used to convert activity to an actual number of atoms. They state that 1 Ci of radioactive atoms would follow the expression
N (atoms) × λ (s−1) = 1 Ci = 3.7 × 10^10 Bq,
and so
N = 3.7 × 10^10 Bq / λ,
where λ is the decay constant in s−1.
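A minimal sketch of that conversion, assuming only the relations quoted above together with λ = ln 2 / t½; the carbon-14 half-life used in the example (about 5,730 years) is a standard value, not a figure given in this article.

```python
import math

CURIE_BQ = 3.7e10          # decays per second in one curie
SECONDS_PER_YEAR = 3.156e7

def atoms_for_one_curie(half_life_years):
    """Number of atoms N with an activity of 1 Ci: N = (3.7e10 Bq) / lambda."""
    decay_constant = math.log(2) / (half_life_years * SECONDS_PER_YEAR)
    return CURIE_BQ / decay_constant

# carbon-14 (half-life about 5,730 years): roughly 1e22 atoms per curie
print(f"{atoms_for_one_curie(5730):.2e}")
```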
Here are some examples, ordered by half-life:
Radiation related quantities
The following table shows radiation quantities in SI and non-SI units:
See also
Geiger counter
Ionizing radiation
Radiation burn
Radiation exposure
Radiation poisoning
United Nations Scientific Committee on the Effects of Atomic Radiation
References
Non-SI metric units
Radioactivity
Units of radioactivity
Pierre Curie
Radium | Curie (unit) | [
"Physics",
"Chemistry",
"Mathematics"
] | 832 | [
"Non-SI metric units",
"Quantity",
"Units of radioactivity",
"Radioactivity",
"Nuclear physics",
"Units of measurement"
] |
155,835 | https://en.wikipedia.org/wiki/Becquerel | The becquerel (symbol: Bq) is the unit of radioactivity in the International System of Units (SI). One becquerel is defined as an activity of one decay per second, on average, for aperiodic activity events of a radionuclide. For applications relating to human health this is a small quantity, and SI multiples of the unit are commonly used.
The becquerel is named after Henri Becquerel, who shared a Nobel Prize in Physics with Pierre and Marie Curie in 1903 for their work in discovering radioactivity.
Definition
1 Bq = 1 s−1
A special name was introduced for the reciprocal second (s−1) to represent radioactivity to avoid potentially dangerous mistakes with prefixes. For example, 1 μs−1 would mean 10^6 disintegrations per second (1 μs−1 = 1/(10^−6 s) = 10^6 s−1), whereas 1 μBq would mean 1 disintegration per 1 million seconds. Other names considered were hertz (Hz), a special name already in use for the reciprocal second (for periodic events of any kind), and fourier (Fr; after Joseph Fourier). The hertz is now only used for periodic phenomena. While 1 Hz replaces the deprecated term cycle per second, 1 Bq refers to one event per second on average for aperiodic radioactive decays.
The gray (Gy) and the becquerel (Bq) were introduced in 1975. Between 1953 and 1975, absorbed dose was often measured with the rad. Decay activity was given with the curie before 1946 and often with the rutherford between 1946 and 1975.
Unit capitalization and prefixes
As with every International System of Units (SI) unit named after a person, the first letter of its symbol is uppercase (Bq). However, when an SI unit is spelled out in English, it should always begin with a lowercase letter (becquerel)—except in a situation where any word in that position would be capitalized, such as at the beginning of a sentence or in material using title case.
Like any SI unit, Bq can be prefixed; commonly used multiples are kBq (kilobecquerel, 10^3 Bq), MBq (megabecquerel, 10^6 Bq, equivalent to 1 rutherford), GBq (gigabecquerel, 10^9 Bq), TBq (terabecquerel, 10^12 Bq), and PBq (petabecquerel, 10^15 Bq). Large prefixes are common for practical uses of the unit.
Examples
For practical applications, 1 Bq is a small unit. For example, there is roughly 0.017 g of potassium-40 in a typical human body, producing about 4,400 decays per second (Bq).
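The potassium-40 figure can be reproduced from first principles; a sketch assuming the standard half-life (about 1.25 × 10^9 years) and molar mass (about 40 g/mol) of potassium-40, which are not quoted in this article.

```python
import math

AVOGADRO = 6.022e23
SECONDS_PER_YEAR = 3.156e7

def activity_bq(mass_grams, molar_mass_g_per_mol, half_life_years):
    """Activity A = lambda * N for a pure sample of a single radionuclide."""
    n_atoms = mass_grams / molar_mass_g_per_mol * AVOGADRO
    decay_constant = math.log(2) / (half_life_years * SECONDS_PER_YEAR)
    return decay_constant * n_atoms

# ~0.017 g of potassium-40 (molar mass ~40 g/mol, half-life ~1.25e9 years)
# gives roughly 4,500 Bq, consistent with the figure quoted above.
print(round(activity_bq(0.017, 40, 1.25e9)))
```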
The activity of radioactive americium in a home smoke detector is about 37 kBq (1 μCi).
The global inventory of carbon-14 is estimated to be 8.5 × 10^18 Bq (8.5 EBq, 8.5 exabecquerels).
These examples are useful for comparing the amount of activity of these radioactive materials, but should not be confused with the amount of exposure to ionizing radiation that these materials represent. The level of exposure and thus the absorbed dose received are what should be considered when assessing the effects of ionizing radiation on humans.
Relation to the curie
The becquerel succeeded the curie (Ci), an older, non-SI unit of radioactivity based on the activity of 1 gram of radium-226. The curie is defined as 3.7 × 10^10 Bq, or 37 GBq.
Conversion factors:
1 Ci = 3.7 × 10^10 Bq = 37 GBq
1 μCi = 37,000 Bq = 37 kBq
1 Bq ≅ 2.7 × 10^−11 Ci ≅ 27 pCi
1 MBq = 0.027 mCi
Relation to other radiation-related quantities
The following table shows radiation quantities in SI and non-SI units. WR (formerly the 'Q' factor) is a factor that scales the biological effect for different types of radiation, relative to x-rays (e.g. 1 for beta radiation, 20 for alpha radiation, and a complicated function of energy for neutrons). In general, conversion between rates of emission, the density of radiation, the fraction absorbed, and the biological effects, requires knowledge of the geometry between source and target, the energy and the type of the radiation emitted, among other factors.
See also
Background radiation
Banana equivalent dose
Counts per minute
Ionizing radiation
Orders of magnitude (radiation)
Radiation poisoning
Relative biological effectiveness
References
External links
Derived units on the International Bureau of Weights and Measures (BIPM) web site
SI derived units
Units of radioactivity
Units of frequency | Becquerel | [
"Chemistry",
"Mathematics"
] | 939 | [
"Quantity",
"Units of radioactivity",
"Radioactivity",
"Units of frequency",
"Units of measurement"
] |
155,869 | https://en.wikipedia.org/wiki/Lux | The lux (symbol: lx) is the unit of illuminance, or luminous flux per unit area, in the International System of Units (SI). It is equal to one lumen per square metre. In photometry, this is used as a measure of the irradiance, as perceived by the spectrally unequally responding human eye, of light that hits or passes through a surface. It is analogous to the radiometric unit watt per square metre, but with the power at each wavelength weighted according to the luminosity function, a model of human visual brightness perception, standardized by the CIE and ISO. In English, "lux" is used as both the singular and plural form.
The word is derived from the Latin word for "light", lux.
Explanation
Illuminance
Illuminance is a measure of how much luminous flux is spread over a given area. One can think of luminous flux (with the unit lumen) as a measure of the total "amount" of visible light present, and the illuminance as a measure of the intensity of illumination on a surface. A given amount of light will illuminate a surface more dimly if it is spread over a larger area, so illuminance is inversely proportional to area when the luminous flux is held constant.
One lux is equal to one lumen per square metre:
1 lx = 1 lm/m2 = 1 cd·sr/m2.
A flux of 1000 lumens, spread uniformly over an area of 1 square metre, lights up that square metre with an illuminance of 1000 lux. However, the same 1000 lumens spread out over 10 square metres produces a dimmer illuminance of only 100 lux.
Achieving an illuminance of 500 lx might be possible in a home kitchen with a single fluorescent light fixture with an output of 12,000 lumens. To light a factory floor with dozens of times the area of the kitchen would require dozens of such fixtures. Thus, lighting a larger area to the same illuminance (lux) requires a greater luminous flux (lumen).
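A small sketch of the flux–area relationship described above (the function name and numbers are purely illustrative).

```python
def illuminance_lux(luminous_flux_lumens, area_m2):
    """Illuminance in lux = luminous flux in lumens / area in square metres."""
    return luminous_flux_lumens / area_m2

print(illuminance_lux(1000, 1))   # 1000 lx
print(illuminance_lux(1000, 10))  # 100 lx
```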
As with other named SI units, SI prefixes can be used. For example, 1 kilolux (klx) is 1000 lx.
Here are some examples of the illuminance provided under various conditions:
The illuminance provided by a light source on a surface perpendicular to the direction to the source is a measure of the strength of that source as perceived from that location. For instance, a star of apparent magnitude 0 provides 2.08 microlux (μlx) at the Earth's surface. A barely perceptible magnitude 6 star provides 8 nanolux (nlx). The unobscured Sun provides an illumination of up to 100 kilolux (klx) on the Earth's surface, the exact value depending on time of year and atmospheric conditions. This direct normal illuminance is related to the solar illuminance constant Esc, equal to about 128,000 lux (see Sunlight and Solar constant).
The illuminance on a surface depends on how the surface is tilted with respect to the source. For example, a pocket flashlight aimed at a wall will produce a given level of illumination if aimed perpendicular to the wall, but if the flashlight is aimed at increasing angles to the perpendicular (maintaining the same distance), the illuminated spot becomes larger and so is less highly illuminated. When a surface is tilted at an angle to a source, the illumination provided on the surface is reduced because the tilted surface subtends a smaller solid angle from the source, and therefore it receives less light. For a point source, the illumination on the tilted surface is reduced by a factor equal to the cosine of the angle between a ray coming from the source and the normal to the surface. In practical lighting problems, given information on the way light is emitted from each source and the distance and geometry of the lighted area, a numerical calculation can be made of the illumination on a surface by adding the contributions of every point on every light source.
Relationship between illuminance and irradiance
Like all photometric units, the lux has a corresponding "radiometric" unit. The difference between any photometric unit and its corresponding radiometric unit is that radiometric units are based on physical power, with all wavelengths being weighted equally, while photometric units take into account the fact that the human eye's image-forming visual system is more sensitive to some wavelengths than others, and accordingly every wavelength is given a different weight. The weighting factor is known as the luminosity function.
The lux is one lumen per square metre (lm/m2), and the corresponding radiometric unit, which measures irradiance, is the watt per square metre (W/m2). There is no single conversion factor between lux and W/m2; there is a different conversion factor for every wavelength, and it is not possible to make a conversion unless one knows the spectral composition of the light.
The peak of the luminosity function is at 555 nm (green); the eye's image-forming visual system is more sensitive to light of this wavelength than any other. For monochromatic light of this wavelength, the amount of illuminance for a given amount of irradiance is maximum: 683.002 lx per 1 W/m2; the irradiance needed to make 1 lx at this wavelength is about 1.464 mW/m2. Other wavelengths of visible light produce fewer lux per watt-per-meter-squared. The luminosity function falls to zero for wavelengths outside the visible spectrum.
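For monochromatic light the conversion is a single multiplication; a sketch assuming the value of the luminosity function V(λ) is known (in general it must be looked up in the CIE tables).

```python
MAX_LUMINOUS_EFFICACY = 683.002  # lm/W, the peak of the luminosity function at 555 nm

def monochromatic_illuminance_lux(irradiance_w_per_m2, v_lambda):
    """E_v = 683.002 * V(lambda) * E_e for light of a single wavelength."""
    return MAX_LUMINOUS_EFFICACY * v_lambda * irradiance_w_per_m2

# At 555 nm, V(lambda) = 1: about 1.464 mW/m2 of irradiance produces ~1 lx.
print(monochromatic_illuminance_lux(1.464e-3, 1.0))
```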
For a light source with mixed wavelengths, the number of lumens per watt can be calculated by means of the luminosity function. In order to appear reasonably "white", a light source cannot consist solely of the green light to which the eye's image-forming visual photoreceptors are most sensitive, but must include a generous mixture of red and blue wavelengths, to which they are much less sensitive.
This means that white (or whitish) light sources produce far fewer lumens per watt than the theoretical maximum of 683.002 lm/W. The ratio between the actual number of lumens per watt and the theoretical maximum is expressed as a percentage known as the luminous efficiency. For example, a typical incandescent light bulb has a luminous efficiency of only about 2%.
In reality, individual eyes vary slightly in their luminosity functions. However, photometric units are precisely defined and precisely measurable. They are based on an agreed-upon standard luminosity function based on measurements of the spectral characteristics of image-forming visual photoreception in many individual human eyes.
Use in video-camera specifications
Specifications for video cameras such as camcorders and surveillance cameras often include a minimal illuminance level in lux at which the camera will record a satisfactory image. A camera with good low-light capability will have a lower lux rating. Still cameras do not use such a specification, since longer exposure times can generally be used to make pictures at very low illuminance levels, as opposed to the case in video cameras, where a maximal exposure time is generally set by the frame rate.
Non-SI units of illuminance
The corresponding unit in English and American traditional units is the foot-candle. One foot candle is about 10.764 lx. Since one foot-candle is the illuminance cast on a surface by a one-candela source one foot away, a lux could be thought of as a "metre-candle", although this term is discouraged because it does not conform to SI standards for unit names.
One phot (ph) equals 10 kilolux (10 klx).
One nox (nx) equals 1 millilux (1 mlx) at light color 2042 K or 2046 K (formerly 2360 K).
In astronomy, apparent magnitude is a measure of the illuminance of a star on the Earth's atmosphere. A star with apparent magnitude 0 is 2.54 microlux outside the earth's atmosphere, and 82% of that (2.08 microlux) under clear skies. A magnitude 6 star (just barely visible under good conditions) would be 8.3 nanolux. A standard candle (one candela) a kilometre away would provide an illuminance of 1 microlux—about the same as a magnitude 1 star.
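These star-brightness figures follow from the magnitude scale, in which a difference of five magnitudes corresponds to a factor of 100 in illuminance; a sketch using the 2.08 μlx ground-level value for a magnitude-0 star quoted above (the function name is illustrative only).

```python
MAG0_ILLUMINANCE_LX = 2.08e-6   # magnitude-0 star at the Earth's surface, clear sky

def star_illuminance_lx(apparent_magnitude):
    """Ground-level illuminance of a star; 5 magnitudes = a factor of 100."""
    return MAG0_ILLUMINANCE_LX * 10 ** (-0.4 * apparent_magnitude)

print(star_illuminance_lx(6))   # ~8.3e-9 lx (8.3 nanolux)
print(star_illuminance_lx(1))   # ~8.3e-7 lx, roughly the 1 microlux of a candela at 1 km
```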
Legacy Unicode symbol
Unicode includes a symbol for "lx": ㏓ (U+33D3). It is a legacy code to accommodate old code pages in some Asian languages. Use of this code is not recommended in new documents.
SI photometry units
See also
Exposure value
References
External links
Radiometry and photometry FAQ Professor Jim Palmer's Radiometry FAQ page (University of Arizona).
SI derived units
Units of illuminance | Lux | [
"Mathematics"
] | 1,836 | [
"Quantity",
"Units of illuminance",
"Units of measurement"
] |
155,899 | https://en.wikipedia.org/wiki/Human%20ecology | Human ecology is an interdisciplinary and transdisciplinary study of the relationship between humans and their natural, social, and built environments. The philosophy and study of human ecology has a diffuse history with advancements in ecology, geography, sociology, psychology, anthropology, zoology, epidemiology, public health, and home economics, among others.
Historical development
The roots of ecology as a broader discipline can be traced to the Greeks and a lengthy list of developments in natural history science. Ecology also has notably developed in other cultures. Traditional knowledge, as it is called, includes the human propensity for intuitive knowledge, intelligent relations, understanding, and for passing on information about the natural world and the human experience. The term ecology was coined by Ernst Haeckel in 1866 and defined by direct reference to the economy of nature.
Like other contemporary researchers of his time, Haeckel adopted his terminology from Carl Linnaeus where human ecological connections were more evident. In his 1749 publication, Specimen academicum de oeconomia naturae, Linnaeus developed a science that included the economy and polis of nature. Polis stems from its Greek roots for a political community (originally based on the city-states), sharing its roots with the word police in reference to the promotion of growth and maintenance of good social order in a community. Linnaeus was also the first to write about the close affinity between humans and primates. Linnaeus presented early ideas found in modern aspects to human ecology, including the balance of nature while highlighting the importance of ecological functions (ecosystem services or natural capital in modern terms): "In exchange for performing its function satisfactorily, nature provided a species with the necessaries of life" The work of Linnaeus influenced Charles Darwin and other scientists of his time who used Linnaeus' terminology (i.e., the economy and polis of nature) with direct implications on matters of human affairs, ecology, and economics.
Ecology is not just biological, but a human science as well. An early and influential social scientist in the history of human ecology was Herbert Spencer. Spencer was influenced by and reciprocated his influence onto the works of Charles Darwin. Herbert Spencer coined the phrase "survival of the fittest", he was an early founder of sociology where he developed the idea of society as an organism, and he created an early precedent for the socio-ecological approach that was the subsequent aim and link between sociology and human ecology.
The history of human ecology has strong roots in geography and sociology departments of the late 19th century. In this context a major historical development or landmark that stimulated research into the ecological relations between humans and their urban environments was founded in George Perkins Marsh's book Man and Nature; or, physical geography as modified by human action, which was published in 1864. Marsh was interested in the active agency of human-nature interactions (an early precursor to urban ecology or human niche construction) in frequent reference to the economy of nature.
In 1894, an influential sociologist at the University of Chicago named Albion W. Small collaborated with sociologist George E. Vincent and published a "'laboratory guide' to studying people in their 'every-day occupations.'" This was a guidebook that trained students of sociology how they could study society in a way that a natural historian would study birds. Their publication "explicitly included the relation of the social world to the material environment."
The first English-language use of the term "ecology" is credited to American chemist and founder of the field of home economics, Ellen Swallow Richards. Richards first introduced the term as "oekology" in 1892, and subsequently developed the term "human ecology".
The term "human ecology" first appeared in Ellen Swallow Richards' 1907 Sanitation in Daily Life, where it was defined as "the study of the surroundings of human beings in the effects they produce on the lives of men". Richard's use of the term recognized humans as part of rather than separate from nature. The term made its first formal appearance in the field of sociology in the 1921 book "Introduction to the Science of Sociology", published by Robert E. Park and Ernest W. Burgess (also from the sociology department at the University of Chicago). Their student, Roderick D. McKenzie helped solidify human ecology as a sub-discipline within the Chicago school. These authors emphasized the difference between human ecology and ecology in general by highlighting cultural evolution in human societies.
Human ecology has a fragmented academic history with developments spread throughout a range of disciplines, including: home economics, geography, anthropology, sociology, zoology, and psychology. Some authors have argued that geography is human ecology. Much historical debate has hinged on the placement of humanity as part or as separate from nature. In light of the branching debate of what constitutes human ecology, recent interdisciplinary researchers have sought a unifying scientific field they have titled coupled human and natural systems that "builds on but moves beyond previous work (e.g., human ecology, ecological anthropology, environmental geography)." Other fields or branches related to the historical development of human ecology as a discipline include cultural ecology, urban ecology, environmental sociology, and anthropological ecology. Even though the term ‘human ecology' was popularized in the 1920s and 1930s, studies in this field had been conducted since the early nineteenth century in England and France.
In 1969, College of the Atlantic in Bar Harbor, Maine, was founded as a school of human ecology. Since its first enrolled class of 32 students, the college has grown into a small liberal arts institution with about 350 students and 35 full-time faculty. Every graduate receives a degree in human ecology, an interdisciplinary major which each student designs to fit their own interests and needs.
Biological ecologists have traditionally been reluctant to study human ecology, gravitating instead to the allure of wild nature. Human ecology has a history of focusing attention on humans' impact on the biotic world. Paul Sears was an early proponent of applying human ecology, addressing topics aimed at the population explosion of humanity, global resource limits, pollution, and published a comprehensive account on human ecology as a discipline in 1954. He saw the vast "explosion" of problems humans were creating for the environment and reminded us that "what is important is the work to be done rather than the label." "When we as a profession learn to diagnose the total landscape, not only as the basis of our culture, but as an expression of it, and to share our special knowledge as widely as we can, we need not fear that our work will be ignored or that our efforts will be unappreciated." Recently, the Ecological Society of America has added a Section on Human Ecology, indicating the increasing openness of biological ecologists to engage with human dominated systems and the acknowledgement that most contemporary ecosystems have been influenced by human action.
Overview
Human ecology has been defined as a type of analysis applied to the relations in human beings that was traditionally applied to plants and animals in ecology. Toward this aim, human ecologists (which can include sociologists) integrate diverse perspectives from a broad spectrum of disciplines covering "wider points of view". In its 1972 premier edition, the editors of Human Ecology: An Interdisciplinary Journal gave an introductory statement on the scope of topics in human ecology. Their statement provides a broad overview on the interdisciplinary nature of the topic:
Genetic, physiological, and social adaptation to the environment and to environmental change;
The role of social, cultural, and psychological factors in the maintenance or disruption of ecosystems;
Effects of population density on health, social organization, or environmental quality;
New adaptive problems in urban environments;
Interrelations of technological and environmental changes;
The development of unifying principles in the study of biological and cultural adaptation;
The genesis of maladaptions in human biological and cultural evolution;
The relation of food quality and quantity to physical and intellectual performance and to demographic change;
The application of computers, remote sensing devices, and other new tools and techniques
Forty years later in the same journal, Daniel G. Bates (2012) notes lines of continuity in the discipline and the way it has changed:
Today there is greater emphasis on the problems facing individuals and how actors deal with them with the consequence that there is much more attention to decision-making at the individual level as people strategize and optimize risk, costs and benefits within specific contexts. Rather than attempting to formulate a cultural ecology or even a specifically "human ecology" model, researchers more often draw on demographic, economic and evolutionary theory as well as upon models derived from field ecology.
While theoretical discussions continue, research published in Human Ecology Review suggests that recent discourse has shifted toward applying principles of human ecology. Some of these applications focus instead on addressing problems that cross disciplinary boundaries or transcend those boundaries altogether. Scholarship has increasingly tended away from Gerald L. Young's idea of a "unified theory" of human ecological knowledge—that human ecology may emerge as its own discipline—and more toward the pluralism best espoused by Paul Shepard: that human ecology is healthiest when "running out in all directions". But human ecology is neither anti-discipline nor anti-theory, rather it is the ongoing attempt to formulate, synthesize, and apply theory to bridge the widening schism between man and nature. This new human ecology emphasizes complexity over reductionism, focuses on changes over stable states, and expands ecological concepts beyond plants and animals to include people.
Application to epidemiology and public health
The application of ecological concepts to epidemiology has similar roots to those of other disciplinary applications, with Carl Linnaeus having played a seminal role. However, the term appears to have come into common use in the medical and public health literature in the mid-twentieth century. This was strengthened in 1971 by the publication of Epidemiology as Medical Ecology, and again in 1987 by the publication of a textbook on Public Health and Human Ecology. An "ecosystem health" perspective has emerged as a thematic movement, integrating research and practice from such fields as environmental management, public health, biodiversity, and economic development. Drawing in turn from the application of concepts such as the social-ecological model of health, human ecology has converged with the mainstream of global public health literature.
Connection to home economics
In addition to its links to other disciplines, human ecology has a strong historical linkage to the field of home economics through the work of Ellen Swallow Richards, among others. However, as early as the 1960s, a number of universities began to rename home economics departments, schools, and colleges as human ecology programs. In part, this name change was a response to perceived difficulties with the term home economics in a modernizing society, and reflects a recognition of human ecology as one of the initial choices for the discipline which was to become home economics. Current human ecology programs include the University of Wisconsin School of Human Ecology, the Cornell University College of Human Ecology, and the University of Alberta's Department of Human Ecology, among others.
Niche of the Anthropocene
Changes to the Earth by human activities have been so great that a new geological epoch named the Anthropocene has been proposed. The human niche or ecological polis of human society, as it was known historically, has created entirely new arrangements of ecosystems as we convert matter into technology. Human ecology has created anthropogenic biomes (called anthromes). The habitats within these anthromes reach out through our road networks to create what has been called technoecosystems containing technosols. Technodiversity exists within these technoecosystems. In direct parallel to the concept of the ecosphere, human civilization has also created a technosphere. The way that the human species engineers or constructs technodiversity into the environment threads back into the processes of cultural and biological evolution, including the human economy.
Ecosystem services
The ecosystems of planet Earth are coupled to human environments. Ecosystems regulate the global geophysical cycles of energy, climate, soil nutrients, and water that in turn support and grow natural capital (including the environmental, physiological, cognitive, cultural, and spiritual dimensions of life). Ultimately, every manufactured product in human environments comes from natural systems. Ecosystems are considered common-pool resources because ecosystems do not exclude beneficiaries and they can be depleted or degraded. For example, green space within communities provides sustainable health services that reduce mortality and regulate the spread of vector-borne disease. Research shows that people who are more engaged with and who have regular access to natural areas benefit from lower rates of diabetes, heart disease and psychological disorders. These ecological health services are regularly depleted through urban development projects that do not factor in the common-pool value of ecosystems.
The ecological commons delivers a diverse supply of community services that sustains the well-being of human society. The Millennium Ecosystem Assessment, an international UN initiative involving more than 1,360 experts worldwide, identifies four main ecosystem service types having 30 sub-categories stemming from natural capital. The ecological commons includes provisioning (e.g., food, raw materials, medicine, water supplies), regulating (e.g., climate, water, soil retention, flood retention), cultural (e.g., science and education, artistic, spiritual), and supporting (e.g., soil formation, nutrient cycling, water cycling) services.
Sixth mass extinction
Global assessments of biodiversity indicate that the current epoch, the Holocene (or Anthropocene) is a sixth mass extinction. Species loss is accelerating at 100–1000 times faster than average background rates in the fossil record. The field of conservation biology involves ecologists that are researching, confronting, and searching for solutions to sustain the planet's ecosystems for future generations.
"Human activities are associated directly or indirectly with nearly every aspect of the current extinction spasm."
Nature is a resilient system. Ecosystems regenerate, withstand, and are forever adapting to fluctuating environments. Ecological resilience is an important conceptual framework in conservation management and it is defined as the preservation of biological relations in ecosystems that persevere and regenerate in response to disturbance over time.
However, persistent, systematic, large and non-random disturbance caused by the niche-constructing behavior of human beings, including habitat conversion and land development, has pushed many of the Earth's ecosystems to the extent of their resilience thresholds. Three planetary thresholds have already been crossed, including biodiversity loss, climate change, and nitrogen cycles. These biophysical systems are ecologically interrelated and are naturally resilient, but human civilization has transitioned the planet to an Anthropocene epoch and the ecological state of the Earth is deteriorating rapidly, to the detriment of humanity. The world's fisheries and oceans, for example, are facing dire challenges as the threat of global collapse appears imminent, with serious ramifications for the well-being of humanity.
While the Anthropocene is yet to be classified as an official epoch, current evidence suggest that "an epoch-scale boundary has been crossed within the last two centuries." The ecology of the planet is further threatened by global warming, but investments in nature conservation can provide a regulatory feedback to store and regulate carbon and other greenhouse gases.
Ecological footprint
In 1992, William Rees developed the ecological footprint concept. The ecological footprint and its close analog the water footprint has become a popular way of accounting for the level of impact that human society is imparting on the Earth's ecosystems. All indications are that the human enterprise is unsustainable as the footprint of society is placing too much stress on the ecology of the planet. The WWF 2008 living planet report and other researchers report that human civilization has exceeded the bio-regenerative capacity of the planet. This means that the footprint of human consumption is extracting more natural resources than can be replenished by ecosystems around the world.
Ecological economics
Ecological economics is an economic science that extends its methods of valuation onto nature in an effort to address the inequity between market growth and biodiversity loss. Natural capital is the stock of materials or information stored in biodiversity that generates services that can enhance the welfare of communities. Population losses are the more sensitive indicator of natural capital than are species extinction in the accounting of ecosystem services. The prospect for recovery in the economic crisis of nature is grim. Populations, such as local ponds and patches of forest are being cleared away and lost at rates that exceed species extinctions. The mainstream growth-based economic system adopted by governments worldwide does not include a price or markets for natural capital. This type of economic system places further ecological debt onto future generations.
Human societies are increasingly being placed under stress as the ecological commons is diminished through an accounting system that has incorrectly assumed "... that nature is a fixed, indestructible capital asset." The current wave of threats, including massive extinction rates and concurrent loss of natural capital to the detriment of human society, is happening rapidly. This is called a biodiversity crisis, because 50% of the world's species are predicted to go extinct within the next 50 years. Conventional monetary analyses are unable to detect or deal with these sorts of ecological problems. Multiple global ecological economic initiatives are being promoted to solve this problem. For example, governments of the G8 met in 2007 and set forth The Economics of Ecosystems and Biodiversity (TEEB) initiative:
In a global study we will initiate the process of analyzing the global economic benefit of biological diversity, the costs of the loss of biodiversity and the failure to take protective measures versus the costs of effective conservation.
The work of Kenneth E. Boulding is notable for building on the integration between ecology and its economic origins. Boulding drew parallels between ecology and economics, most generally in that they are both studies of individuals as members of a system, and indicated that the "household of man" and the "household of nature" could somehow be integrated to create a perspective of greater value.
Interdisciplinary approaches
Human ecology expands functionalism from ecology to the human mind. People's perception of a complex world is a function of their ability to be able to comprehend beyond the immediate, both in time and in space. This concept manifested in the popular slogan promoting sustainability: "think global, act local." Moreover, people's conception of community stems from not only their physical location but their mental and emotional connections and varies from "community as place, community as way of life, or community of collective action."
In the last century, the world has faced several challenges, including environmental degradation, public health issues, and climate change. Addressing these issues requires interdisciplinary and transdisciplinary interventions, allowing for a comprehensive understanding of the intricate connections between human societies and the environment. In the early years, human ecology was still deeply enmeshed in its respective disciplines: geography, sociology, anthropology, psychology, and economics. Scholars through the 1970s until present have called for a greater integration between all of the scattered disciplines that has each established formal ecological research.
In art
While some of the early writers considered how art fit into a human ecology, it was Sears who posed the idea that in the long run human ecology will in fact look more like art. Bill Carpenter (1986) calls human ecology the "possibility of an aesthetic science", renewing dialogue about how art fits into a human ecological perspective. According to Carpenter, human ecology as an aesthetic science counters the disciplinary fragmentation of knowledge by examining human consciousness.
In education
While the reputation of human ecology in institutions of higher learning is growing, there is no human ecology at the primary or secondary education levels, with one notable exception, Syosset High School, in Long Island, New York. Educational theorist Sir Kenneth Robinson has called for diversification of education to promote creativity in academic and non-academic (i.e., educate their "whole being") activities to implement a "new conception of human ecology".
Furthermore, the College of the Atlantic in Bar Harbor, Maine offers a Master of Philosophy in Human Ecology. This degree consists of two components: the first is a set of nine courses and the second is a thesis. Currently, however, the college is not accepting applications to this program.
Bioregionalism and urban ecology
In the late 1960s, ecological concepts started to become integrated into the applied fields, namely architecture, landscape architecture, and planning. Ian McHarg called for a future when all planning would be "human ecological planning" by default, always bound up in humans' relationships with their environments. He emphasized local, place-based planning that takes into consideration all the "layers" of information from geology to botany to zoology to cultural history. Proponents of the new urbanism movement, like James Howard Kunstler and Andres Duany, have embraced the term human ecology as a way to describe the problem of—and prescribe the solutions for—the landscapes and lifestyles of an automobile-oriented society. Duany has called the human ecology movement "the agenda for the years ahead." While McHargian planning is still widely respected, the landscape urbanism movement seeks a new understanding between human and environment relations. Among these theorists is Frederick Steiner, who published Human Ecology: Following Nature's Lead in 2002, which focuses on the relationships among landscape, culture, and planning. The work highlights the beauty of scientific inquiry by revealing those purely human dimensions which underlie our concepts of ecology. While Steiner discusses specific ecological settings, such as cityscapes and waterscapes, and the relationships between socio-cultural and environmental regions, he also takes a diverse approach to ecology—considering even the unique synthesis between ecology and political geography. Dieter Steiner's 2003 Human Ecology: Fragments of Anti-fragmentary view of the world is an important exposé of recent trends in human ecology. Part literature review, the book is divided into four sections: "human ecology", "the implicit and the explicit", "structuration", and "the regional dimension". Much of the work stresses the need for transdisciplinarity, antidualism, and wholeness of perspective.
Key journals
Ecology and Society
Human Ecology: An Interdisciplinary Journal
Human Ecology Review
Journal of Human Ecology and Sustainability
See also
Agroecology
Collaborative intelligence
College of the Atlantic
Contact zone
Ecological overshoot
Environmental anthropology
Environmental archaeology
Environmental communication
Environmental economics
Environmental racism
Ecology, especially human ecology
Environmental psychology
Environmental sociology
Ecological systems theory
Ecosemiotics
Family and consumer science
Green economy
Home economics
Human behavioral ecology
Human ecosystem
Industrial ecology
Integrated landscape management
Otium
Political ecology
Rural sociology
Sociobiology
Social ecology (theory)
Spome
Urie Bronfenbrenner
Ernest Burgess
John Paul Goode
Robert E. Park
Louis Wirth
Rights of nature
Anthropogenic metabolism
Anthroposphere
Collective consciousness
Scale (analytical tool)
Ecological civilization
References
Further reading
Cohen, J. 1995. How Many People Can the Earth Support? New York: Norton and Co.
Dyball, R. and Newell, B. 2015 Understanding Human Ecology: A Systems Approach to Sustainability London, England: Routledge.
Henderson, Kirsten, and Michel Loreau. "An ecological theory of changing human population dynamics." People and Nature 1.1 (2019): 31–43.
Eisenberg, E. 1998. The Ecology of Eden. New York: Knopf.
Hansson, L.O. and B. Jungen (eds.). 1992. Human Responsibility and Global Change. Göteborg, Sweden: University of Göteborg.
Hens, L., R.J. Borden, S. Suzuki and G. Caravello (eds.). 1998. Research in Human Ecology: An Interdisciplinary Overview. Brussels, Belgium: Vrije Universiteit Brussel (VUB) Press.
Marten, G.G. 2001. Human Ecology: Basic Concepts for Sustainable Development. Sterling, VA: Earthscan.
McDonnell, M.J. and S.T. Pickett. 1993. Humans as Components of Ecosystems: The Ecology of Subtle Human Effects and Populated Areas. New York: Springer-Verlag.
Miller, J.R., R.M. Lerner, L.B. Schiamberg and P.M. Anderson. 2003. Encyclopedia of Human Ecology. Santa Barbara, CA: ABC-CLIO.
Polunin, N. and J.H. Burnett. 1990. Maintenance of the Biosphere. (Proceedings of the 3rd International Conference on Environmental Future — ICEF). Edinburgh: University of Edinburgh Press.
Quinn, J.A. 1950. Human Ecology. New York: Prentice-Hall.
Sargent, F. (ed.). 1974. Human Ecology. New York: American Elsevier.
Suzuki, S., R.J. Borden and L. Hens (eds.). 1991. Human Ecology — Coming of Age: An International Overview. Brussels, Belgium: Vrije Universiteit Brussel (VUB) Press.
Tengstrom, E. 1985. Human Ecology — A New Discipline?: A Short Tentative Description of the Institutional and Intellectual History of Human Ecology. Göteborg, Sweden: Humanekologiska Skrifter.
Theodorson, G.A. 1961. Studies in Human Ecology. Evanston, IL: Row, Peterson and Co.
Wyrostkiewicz, M. 2013. "Human Ecology. An Outline of the Concept and the Relationship between Man and Nature". Lublin, Poland: Wydawnictwo KUL
Young, G.L. (ed.). 1989. Origins of Human Ecology. Stroudsburg, PA: Hutchinson Ross.
External links
Environmental studies
Human geography | Human ecology | [
"Environmental_science"
] | 5,192 | [
"Human geography",
"Environmental social science",
"Human ecology"
] |
156,089 | https://en.wikipedia.org/wiki/Alexander%20Prokhorov | Alexander Mikhailovich Prokhorov (born Alexander Michael Prochoroff; 11 July 1916 – 8 January 2002) was a Russian physicist who researched lasers and masers in the former Soviet Union, work for which he shared the Nobel Prize in Physics in 1964 with Charles Hard Townes and Nikolay Basov.
Early life
Alexander Michael Prochoroff was born on 11 July 1916 at Russell Road, Peeramon, Queensland, Australia (now 322 Gadaloff Road, Butchers Creek, situated about 30 km from Atherton), to Mikhail Ivanovich Prokhorov and Maria Ivanovna (née Mikhailova), Russian revolutionaries who had emigrated from Russia to escape repression by the tsarist regime. As a child he attended Butchers Creek State School.
In 1923, after the October Revolution and the Russian Civil War, the family returned to Russia. In 1934, Prokhorov entered the Saint Petersburg State University to study physics. He was a member of the Komsomol from 1930 to 1944. Prokhorov graduated with honors in 1939 and moved to Moscow to work at the Lebedev Physical Institute, in the oscillations laboratory headed by academician N. D. Papaleksi. His research there was devoted to propagation of radio waves in the ionosphere. At the onset of World War II, in June 1941, he joined the Red Army. Prokhorov fought in the infantry, was wounded twice in battles, and was awarded three medals, including the Medal For Courage in 1946. He was demobilized in 1944 and returned to the Lebedev Institute where, in 1946, he defended his Ph.D. thesis on "Theory of Stabilization of Frequency of a Tube Oscillator in the Theory of a Small Parameter".
Research
In 1947, Prokhorov started working on coherent radiation emitted by electrons orbiting in a cyclic particle accelerator called a synchrotron. He demonstrated that the emission is mostly concentrated in the microwave spectral range. His results became the basis of his habilitation on "Coherent Radiation of Electrons in the Synchrotron Accelerator", defended in 1951. By 1950, Prokhorov was assistant chief of the oscillation laboratory. Around that time, he formed a group of young scientists to work on radiospectroscopy of molecular rotations and vibrations, and later on quantum electronics. The group focused on a special class of molecules which have three (non-degenerate) moments of inertia. The research combined experiment and theory. In 1954, Prokhorov became head of the laboratory. Together with Nikolay Basov he developed the theoretical grounds for the creation of a molecular oscillator and constructed such a device based on ammonia. They also proposed a method for the production of population inversion using inhomogeneous electric and magnetic fields. Their results were first presented at a national conference in 1952, but not published until 1954–1955.
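As a brief aside (not drawn from the article itself): the reason population inversion requires such special methods follows from elementary statistical mechanics, since in thermal equilibrium the upper level of any transition is always less populated than the lower one. The sketch below states this in standard notation, where N1 and N2 are the populations of the lower and upper levels.

```latex
% Sketch: why population inversion needs a non-equilibrium trick.
% In thermal equilibrium the Boltzmann factor gives, for E_2 > E_1 and T > 0,
\[
  \frac{N_2}{N_1} \;=\; \exp\!\left(-\frac{E_2 - E_1}{k_B T}\right) \;<\; 1 ,
\]
% so stimulated emission can never outweigh absorption and no net amplification
% occurs. A maser therefore requires N_2 > N_1, obtained for instance by spatially
% separating upper-state molecules from lower-state ones in a molecular beam,
% which is what inhomogeneous electric or magnetic fields can accomplish.
```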
In 1955, Prokhorov started his research in the field of electron paramagnetic resonance (EPR). He focused on relaxation times of ions of the iron group elements in a lattice of aluminium oxide, but also investigated other, "non-optical", topics, such as magnetic phase transitions in DPPH. In 1957, while studying ruby, a chromium-doped variation of aluminium oxide, he came upon the idea of using this material as an active medium of a laser. As a new type of laser resonator, he proposed, in 1958, an "open type" cavity design, which is widely used today. In 1963, together with A. S. Selivanenko, he suggested a laser using two-quantum transitions. For his pioneering work on lasers and masers, in 1964, he received the Nobel Prize in Physics shared with Nikolay Basov and Charles Hard Townes.
Posts and awards
In 1959, Prokhorov became a professor at Moscow State University – the most prestigious university in the Soviet Union; the same year, he was awarded the Lenin Prize. In 1960, he became a member of the Russian Academy of Sciences and was elected Academician in 1966. In 1967, he was awarded his first Order of Lenin (he received five of them during his life, in 1967, 1969, 1975, 1981 and 1986). In 1968, he became vice-director of the Lebedev Institute and in 1971 took the position of Head of Laboratory at another prestigious Soviet institution, the Moscow Institute of Physics and Technology. In the same year, he was elected a member of the American Academy of Arts and Sciences. In 1983 he was elected a Member of the German Academy of Sciences Leopoldina. Between 1982 and 1998, Prokhorov served as acting director of the General Physics Institute of the Russian Academy of Sciences, and after 1998 as honorary director. After his death in 2002, the institute was renamed the A. M. Prokhorov General Physics Institute of the Russian Academy of Sciences. Prokhorov was a Member and one of the Honorary Presidents of the International Academy of Science, Munich, and in 1993 supported the foundation and development of the Russian Section of the International Academy of Science, Moscow.
In 1969, Prokhorov became a Hero of Socialist Labour, the highest degree of distinction in the Soviet Union for achievements in national economy and culture. He received the second such award in 1986. Starting in 1969, he was the chief editor of the Great Soviet Encyclopedia. He was awarded the Frederic Ives Medal, the highest distinction of the Optical Society of America (OSA), in 2000 and became an Honorary OSA Member in 2001. The same year, he was awarded the Demidov Prize.
He died on 8 January 2002 in Moscow and was buried at Novodevichy Cemetery.
Politics
Prokhorov became a member of the Communist Party in 1950. In 1983, together with three other academicians – Andrey Tikhonov, Anatoly Dorodnitsyn and Georgy Skryabin – he signed the famous open letter called "When they lose honor and conscience" (Когда теряют честь и совесть), denouncing Andrey Sakharov's article in Foreign Affairs.
Family
Both of Prokhorov's parents died during World War II. Prokhorov married geographer Galina Shelepina in 1941, and they had a son, Kiril, born in 1945. Following his father, Kiril Prokhorov became a physicist in the field of optics and is currently leading a laser-related laboratory at the A. M. Prokhorov General Physics Institute.
Honours and awards
Mandelstam Prize (1948)
Lenin Prize (1959)
Five Orders of Lenin (including 11 May 1981)
Order of the Patriotic War, 1st class (1985)
Nobel Prize in Physics (1964)
Hero of Socialist Labour, twice (1969, 1986)
Medal For Courage
USSR State Prize (1980)
Order of Merit for the Fatherland, 2nd class (1996)
State Prize of the Russian Federation (1998)
Frederic Ives Medal (2000)
Demidov Prize (2001)
Lomonosov Gold Medal (Moscow State University, 1987)
Award of the Council of Ministers
State Prize of the Russian Federation in science and technology (2003, posthumously) for the development of scientific and technological foundations of metrological support of measurements of length in the microwave and nanometer ranges and their application in microelectronics and nanotechnology
Foreign Member of the Czechoslovak Academy of Sciences (1982)
Jubilee Medal "In Commemoration of the 100th Anniversary since the Birth of Vladimir Il'ich Lenin"
Medal "For the Victory over Germany in the Great Patriotic War 1941–1945"
Jubilee Medal "Twenty Years of Victory in the Great Patriotic War 1941-1945"
Jubilee Medal "Thirty Years of Victory in the Great Patriotic War 1941-1945"
Jubilee Medal "Forty Years of Victory in the Great Patriotic War 1941-1945"
Medal "For Valiant Labour in the Great Patriotic War 1941-1945"
Medal "Veteran of Labour"
Jubilee Medal "50 Years of the Armed Forces of the USSR"
Medal "In Commemoration of the 800th Anniversary of Moscow"
Medal "In Commemoration of the 850th Anniversary of Moscow"
Books
A. M. Prokhorov (Editor in Chief), J. M. Buzzi, P. Sprangle, K. Wille. Coherent Radiation Generation and Particle Acceleration, 1992. Research Trends in Physics series published by the American Institute of Physics Press (presently Springer, New York)
V. Stefan and A. M. Prokhorov (Editors). Diamond Science and Technology Vol 1: Laser Diamond Interaction. Plasma Diamond Reactors (Stefan University Press Series on Frontiers in Science and Technology), 1999.
V. Stefan and A. M. Prokhorov (Editors). Diamond Science and Technology Vol 2 (Stefan University Press Series on Frontiers in Science and Technology), 1999.
References
External links
including the Nobel Lecture, 11 December 1964 Quantum Electronics
Prokhorov's role in the invention of lasers and masers
Prokhorov's grave in Novodevichy cemetery
1916 births
2002 deaths
Academic staff of Moscow State University
Academic staff of the Moscow Institute of Physics and Technology
Full Members of the USSR Academy of Sciences
Full Members of the Russian Academy of Sciences
Nobel laureates in Physics
Soviet Nobel laureates
Demidov Prize laureates
Recipients of the USSR State Prize
Recipients of the Lenin Prize
State Prize of the Russian Federation laureates
Heroes of Socialist Labour
Recipients of the Order of Lenin
Recipients of the Order "For Merit to the Fatherland", 2nd class
Recipients of the Medal "For Courage" (Russia)
Recipients of the Lomonosov Gold Medal
Australian people of Russian descent
Experimental physicists
Optical physicists
Laser researchers
Soviet inventors
Soviet physicists
Soviet military personnel of World War II
Saint Petersburg State University alumni
Australian emigrants
Immigrants to the Soviet Union
Communist Party of the Soviet Union members
Burials at Novodevichy Cemetery
Spectroscopists
Members of the German National Academy of Sciences Leopoldina
Members of the German Academy of Sciences at Berlin
Russian scientists | Alexander Prokhorov | [
"Physics",
"Chemistry",
"Technology"
] | 2,045 | [
"Physical chemists",
"Spectrum (physical sciences)",
"Analytical chemists",
"Recipients of the Lomonosov Gold Medal",
"Spectroscopists",
"Science and technology awards",
"Spectroscopy"
] |
156,267 | https://en.wikipedia.org/wiki/Cosplay | Cosplay, a blend word of "costume play", is an activity and performance art in which participants called cosplayers wear costumes and fashion accessories to represent a specific character. Cosplayers often interact to create a subculture, and a broader use of the term "cosplay" applies to any costumed role-playing in venues apart from the stage. Any entity that lends itself to dramatic interpretation may be taken up as a subject. Favorite sources include anime, cartoons, comic books, manga, television series, rock music performances, video games and in some cases, original characters. The term has been adopted as slang, often in politics, to mean someone pretending to play a role or take on a personality disingenuously.
Cosplay grew out of the practice of fan costuming at science fiction conventions, beginning with Morojo's "futuristicostumes" created for the 1st World Science Fiction Convention held in New York City in 1939. The Japanese term was coined in 1984. A rapid growth in the number of people cosplaying as a hobby since the 1990s has made the phenomenon a significant aspect of popular culture in Japan, as well as in other parts of East Asia and in the Western world. Cosplay events are common features of fan conventions, and today there are many dedicated conventions and competitions, as well as social networks, websites, and other forms of media centered on cosplay activities. Cosplay is very popular among all genders, and it is not unusual to see crossplay, also referred to as gender-bending.
Etymology
The term "cosplay" is a Japanese blend word of the English terms costume and play. The term was coined by of Studio Hard after he attended the 1984 World Science Fiction Convention (Worldcon) in Los Angeles and saw costumed fans, which he later wrote about in an article for the Japanese magazine . Takahashi decided to coin a new word rather than use the existing translation of the English term "masquerade" because that translates into Japanese as "an aristocratic costume party", which did not match his experience of the Worldcon. The coinage reflects a common Japanese method of abbreviation in which the first two moras of a pair of words are used to form an independent compound: 'costume' becomes kosu (コス) and 'play' becomes pure (プレ).
History
Pre-20th century
Masquerade balls were a feature of the Carnival season in the 15th century, and involved increasingly elaborate allegorical Royal Entries, pageants, and triumphal processions celebrating marriages and other dynastic events of late medieval court life. They were extended into costumed public festivities in Italy during the 16th century Renaissance, generally elaborate dances held for members of the upper classes, which were particularly popular in Venice.
In April 1877, Jules Verne sent out almost 700 invitations for an elaborate costume ball, where several of the guests showed up dressed as characters from Verne's novels.
Costume parties (American English) or fancy dress parties (British English) were popular from the 19th century onwards. Costuming guides of the period, such as Samuel Miller's Male Character Costumes (1884) or Ardern Holt's Fancy Dresses Described (1887), feature mostly generic costumes, whether that be period costumes, national costumes, objects or abstract concepts such as "Autumn" or "Night". Most specific costumes described therein are for historical figures although some are sourced from fiction, like The Three Musketeers or Shakespeare characters.
By March 1891, a literal call by one Herbert Tibbits for what would today be described as "cosplayers" was advertised for an event held from 5–10 March that year at the Royal Albert Hall in London, for the so-named Vril-Ya Bazaar and Fete based on a science fiction novel and its characters, published two decades earlier.
Fan costuming
A.D. Condo's science fiction comic strip character Mr. Skygack, from Mars (a Martian ethnographer who comically misunderstands many Earthly affairs) is arguably the first fictional character that people emulated by wearing costumes, as in 1908 Mr. and Mrs. William Fell of Cincinnati, Ohio, are reported to have attended a masquerade at a skating rink wearing Mr. Skygack and Miss Dillpickles costumes. Later, in 1910, an unnamed woman won first prize at a masquerade ball in Tacoma, Washington, wearing another Skygack costume.
The first people to wear costumes to attend a convention were science fiction fans Forrest J Ackerman and Myrtle R. Douglas, known in fandom as Morojo. They attended the 1939 1st World Science Fiction Convention (Nycon or 1st Worldcon) in the Caravan Hall, New York, US dressed in "futuristicostumes", including green cape and breeches, based on the pulp magazine artwork of Frank R. Paul and the 1936 film Things to Come, designed and created by Douglas.
Ackerman later stated that he thought everyone was supposed to wear a costume at a science fiction convention, although only he and Douglas did.
Fan costuming caught on, however, and the 2nd Worldcon (1940) had both an unofficial masquerade held in Douglas' room and an official masquerade as part of the programme. David Kyle won the masquerade wearing a Ming the Merciless costume created by Leslie Perri, while Robert A. W. Lowndes received second place with a Bar Senestro costume (from the novel The Blind Spot by Austin Hall and Homer Eon Flint). Other costumed attendees included guest of honor E. E. Smith as Northwest Smith (from C. L. Moore's series of short stories) and both Ackerman and Douglas wearing their futuristicostumes again. Masquerades and costume balls continued to be part of World Science Fiction Convention tradition thereafter. Early Worldcon masquerade balls featured a band, dancing, food and drinks. Contestants either walked across a stage or a cleared area of the dance floor.
Ackerman wore a "Hunchbackerman of Notre Dame" costume to the 3rd Worldcon (1941), which included a mask designed and created by Ray Harryhausen, but soon stopped wearing costumes to conventions. Douglas wore an Akka costume (from A. Merritt's novel The Moon Pool), the mask again made by Harryhausen, to the 3rd Worldcon and a Snake Mother costume (another Merritt costume, from The Snake Mother) to the 4th Worldcon (1946). Terminology was yet unsettled; the 1944 edition of Jack Speer's Fancyclopedia used the term costume party.
Rules governing costumes became established in response to specific costumes and costuming trends. The first nude contestant at a Worldcon masquerade was in 1952, but the height of this trend was in the 1970s and early 1980s, with a few every year. This eventually led to the "No Costume is No Costume" rule, which banned full nudity, although partial nudity was still allowed as long as it was a legitimate representation of the character. Mike Resnick describes the best of the nude costumes as Kris Lundi wearing a harpy costume to the 32nd Worldcon (1974) (she received an honorable mention in the competition). Another costume that instigated a rule change was an attendee at the 20th Worldcon (1962) whose blaster prop fired a jet of real flame, which led to fire being banned. At the 30th WorldCon (1972), artist Scott Shaw wore a costume composed largely of peanut butter to represent his own underground comix character called "The Turd". The peanut butter rubbed off, doing damage to soft furnishings and other people's costumes, and then began to go rancid under the heat of the lighting. Food, odorous, and messy substances were banned as costume elements after that event.
Costuming spread with the science fiction conventions and the interaction of fandom. The earliest known instance of costuming at a convention in the United Kingdom was at the London Science Fiction Convention (1953) but this was only as part of a play. However, members of the Liverpool Science Fantasy Society attended the 1st Cytricon (1955), in Kettering, wearing costumes and continued to do so in subsequent years. The 15th Worldcon (1957) brought the first official convention masquerade to the UK. The 1960 Eastercon in London may have been the first British-based convention to hold an official fancy dress party as part of its programme.
The joint winners were Ethel Lindsay and Ina Shorrock as two of the titular witches from the novel The Witches of Karres by James H. Schmitz.
Star Trek conventions began in 1969, with major conventions following from 1972, and they have featured cosplay throughout.
In Japan, costuming at conventions was a fan activity from at least the 1970s, especially after the launch of the Comiket convention in December 1975. Costuming at this time was known as kasō (仮装). The first documented case of costuming at a fan event in Japan was at Ashinocon (1978), in Hakone, at which future science fiction critic Mari Kotani wore a costume based on the cover art for Edgar Rice Burroughs' novel A Fighting Man of Mars. In an interview Kotani states that there were about twenty costumed attendees at the convention's costume party—made up of members of her Triton of the Sea fan club and a Kansai group that was an antecedent of the Gainax anime studio—with most attendees in ordinary clothing. One of the Kansai group, an unnamed friend of Yasuhiro Takeda, wore an impromptu Tusken Raider costume (from the film Star Wars) made from one of the host-hotel's rolls of toilet paper. Costume contests became a permanent part of the Nihon SF Taikai conventions from Tokon VII in 1980.
Possibly the first costume contest held at a comic book convention was at the 1st Academy Con held at Broadway Central Hotel in New York in August 1965. Roy Thomas, future editor-in-chief of Marvel Comics but then just transitioning from a fanzine editor to a professional comic book writer, attended in a Plastic Man costume.
The first Masquerade Ball held at San Diego Comic-Con was in 1974 during the convention's 6th event. Voice actress June Foray was the master of ceremonies. Future scream queen Brinke Stevens won first place wearing a Vampirella costume. Ackerman (who was the creator of Vampirella) was in attendance and posed with Stevens for photographs. They became friends and, according to Stevens "Forry and his wife, Wendayne, soon became like my god parents." Photographer Dan Golden saw a photograph of Stevens in the Vampirella costume while visiting Ackerman's house, leading to him hiring her for a non-speaking role in her first student film, Zyzak is King (1980), and later photographing her for the cover of the first issue of Femme Fatales (1992). Stevens attributes these events to launching her acting career.
As early as a year after the 1975 release of The Rocky Horror Picture Show, audience members began dressing as characters from the movie and role-playing (although the initial incentive for dressing-up was free admission) in often highly accurate costumes.
Costume-Con, a conference dedicated to costuming, was first held in January 1983. The International Costumers Guild, Inc., originally known as the Greater Columbia Fantasy Costumer's Guild, was launched after the 3rd Costume-Con (1985) as a parent organization and to support costuming.
Cosplay
Costuming had been a fan activity in Japan from the 1970s, and it became much more popular in the wake of Takahashi's report. The new term did not catch on immediately, however. It was a year or two after the article was published before it was in common use among fans at conventions. It was in the 1990s, after exposure on television and in magazines, that the term and practice of cosplaying became common knowledge in Japan.
The first cosplay cafés appeared in the Akihabara area of Tokyo in the late 1990s. A temporary maid café was set up at the Tokyo Character Collection event in August 1998 to promote the video game Welcome to Pia Carrot 2 (1997). An occasional Pia Carrot Restaurant was held at the shop Gamers in Akihabara in the years up to 2000. Being linked to specific intellectual properties limited the lifespan of these cafés, which was solved by using generic maids, leading to the first permanent establishment, Cure Maid Café, which opened in March 2001.
The first World Cosplay Summit was held on 12 October 2003 at the Rose Court Hotel in Nagoya, Japan, with five cosplayers invited from Germany, France and Italy. There was no contest until 2005, when the World Cosplay Championship began. The first winners were the Italian team of Giorgia Vecchini, Francesca Dani and Emilia Fata Livia.
Worldcon masquerade attendance peaked in the 1980s and started to fall thereafter. This trend was reversed when the concept of cosplay was re-imported from Japan.
Practice of cosplay
Cosplay costumes vary greatly and can range from simple themed clothing to highly detailed costumes. It is generally considered different from Halloween and Mardi Gras costume wear, as the intention is to replicate a specific character, rather than to reflect the culture and symbolism of a holiday event. As such, when in costume, some cosplayers often seek to adopt the affect, mannerisms, and body language of the characters they portray (with "out of character" breaks). The characters chosen to be cosplayed may be sourced from any movie, TV series, book, comic book, video game, music band, anime, or manga. Some cosplayers even choose to cosplay an original character of their own design or a fusion of different genres (e.g., a steampunk version of a character), and it is a part of the ethos of cosplay that anybody can be anything, as with genderbending, crossplay, or drag, a cosplayer playing a character of another ethnicity, or a hijabi portraying Captain America.
Costumes
Cosplayers obtain their apparel through many different methods. Manufacturers produce and sell packaged outfits for use in cosplay, with varying levels of quality. These costumes are often sold online, but also can be purchased from dealers at conventions. Japanese manufacturers of cosplay costumes reported a profit of 35 billion yen in 2008. A number of individuals also work on commission, creating custom costumes, props, or wigs designed and fitted to the individual. Other cosplayers, who prefer to create their own costumes, still provide a market for individual elements, and various raw materials, such as unstyled wigs, hair dye, cloth and sewing notions, liquid latex, body paint, costume jewelry, and prop weapons.
Cosplay represents an act of embodiment. Cosplay has been closely linked to the presentation of self, yet cosplayers' ability to perform is limited by their physical features. The accuracy of a cosplay is judged based on the ability to accurately represent a character through the body, and individual cosplayers frequently are faced by their own "bodily limits" such as level of attractiveness, body size, and disability that often restrict and confine how accurate the cosplay is perceived to be. Authenticity is measured by a cosplayer's individual ability to translate on-screen manifestation to the cosplay itself. Some have argued that cosplay can never be a true representation of the character; instead, it can only be read through the body, and that true embodiment of a character is judged based on nearness to the original character form. Cosplaying can also help some of those with self-esteem problems.
Many cosplayers create their own outfits, referencing images of the characters in the process. In the creation of the outfits, much time is given to detail and qualities, thus the skill of a cosplayer may be measured by how difficult the details of the outfit are and how well they have been replicated. Because of the difficulty of replicating some details and materials, cosplayers often educate themselves in crafting specialties such as textiles, sculpture, face paint, fiberglass, fashion design, woodworking, and other uses of materials in the effort to render the look and texture of a costume accurately. Cosplayers often wear wigs in conjunction with their outfit to further improve the resemblance to the character. This is especially necessary for anime and manga or video-game characters who often have unnaturally colored and uniquely styled hair. Simpler outfits may be compensated for their lack of complexity by paying attention to material choice and overall high quality.
To look more like the characters they are portraying, cosplayers might also engage in various forms of body modification. Cosplayers may opt to change their skin color using make-up to more closely simulate the race of the character they are adopting. Contact lenses that match the color of their character's eyes are a common form of this, especially in the case of characters with particularly unique eyes as part of their trademark look. Contact lenses that make the pupil look enlarged to visually echo the large eyes of anime and manga characters are also used. Another form of body modification in which cosplayers engage is to copy any tattoos or special markings their character might have. Temporary tattoos, permanent marker, body paint, and in rare cases, permanent tattoos, are all methods used by cosplayers to achieve the desired look. Permanent and temporary hair dye, spray-in hair coloring, and specialized extreme styling products are all used by some cosplayers whose natural hair can achieve the desired hairstyle. It is also commonplace for them to shave off their eyebrows to gain a more accurate look.
Some anime and video game characters have weapons or other accessories that are hard to replicate, and conventions have strict rules regarding those weapons, but most cosplayers engage in some combination of methods to obtain all the items necessary for their costumes; for example, they may commission a prop weapon, sew their own clothing, buy character jewelry from a cosplay accessory manufacturer, or buy a pair of off-the-rack shoes, and modify them to match the desired look.
Presentation
Cosplay may be presented in a number of ways and places. A subset of cosplay culture is centered on sex appeal, with cosplayers specifically choosing characters known for their attractiveness or revealing costumes. However, wearing a revealing costume can be a sensitive issue while appearing in public. People appearing naked at American science fiction fandom conventions during the 1970s were so common that a "no costume is no costume" rule was introduced. Some conventions throughout the United States, such as Phoenix Comicon (now known as Phoenix Fan Fusion) and Penny Arcade Expo, have also issued rules under which they reserve the right to ask attendees to leave or change their costumes if deemed to be inappropriate to a family-friendly environment or something of a similar nature.
Conventions
The most popular form of presenting a cosplay publicly is by wearing it to a fan convention. Multiple conventions dedicated to anime and manga, comics, TV shows, video games, science fiction, and fantasy may be found all around the world. Cosplay-centered conventions include Cosplay Mania in the Philippines and EOY Cosplay Festival in Singapore.
The single largest event featuring cosplay is the semiannual doujinshi market, Comic Market (Comiket), held in Japan during summer and winter. Comiket attracts hundreds of thousands of manga and anime fans, where thousands of cosplayers congregate on the roof of the exhibition center. In North America, the highest-attended fan conventions featuring cosplayers are San Diego Comic-Con and New York Comic Con held in the United States, and the anime-specific Anime North in Toronto, Otakon held in Washington, D.C. and Anime Expo held in Los Angeles. Europe's largest event is Japan Expo held in Paris, while the London MCM Expo and the London Super Comic Convention are the most notable in the UK. Supanova Pop Culture Expo is Australia's biggest event.
Star Trek conventions have featured cosplay for many decades. These include Destination Star Trek, a UK convention, and Star Trek Las Vegas, a US convention.
At various comic fairs, "Thematic Areas" are set up where cosplayers can take photos in an environment modelled on that of the game or animation from which their characters are taken. Sometimes the cosplayers are part of the area, playing the role of staff with the task of entertaining the other visitors. Some examples are the thematic areas dedicated to Star Wars or to Fallout. The areas are usually set up by not-for-profit fan associations, but at some major fairs it is possible to visit areas set up directly by the developers of the video games or the producers of the anime.
Photography
The appearance of cosplayers at public events makes them a popular draw for photographers. As this became apparent in the late 1980s, a new variant of cosplay developed in which cosplayers attended events mainly for the purpose of modeling their characters for still photography rather than engaging in continuous role play. Rules of etiquette were developed to minimize awkward situations involving boundaries. Cosplayers pose for photographers and photographers do not press them for personal contact information or private sessions, follow them out of the area, or take photos without permission. The rules allow the collaborative relationship between photographers and cosplayers to continue with the least inconvenience to each other.
Some cosplayers choose to have a professional photographer take high quality images of them in their costumes posing as the character. Cosplayers and photographers frequently exhibit their work online and sometimes sell their images.
Competitions
As the popularity of cosplay has grown, many conventions have come to feature a contest surrounding cosplay that may be the main feature of the convention. Contestants present their cosplay, and to be judged for an award the cosplay must often be self-made. The contestants may choose to perform a skit, which may consist of a short performed script or dance with optional accompanying audio, video, or images shown on a screen overhead. Other contestants may simply choose to pose as their characters. Often, contestants are briefly interviewed on stage by a master of ceremonies. The audience is given a chance to take photos of the cosplayers. Cosplayers may compete solo or in a group. Awards are presented, and these awards may vary greatly. Generally, a best cosplayer award, a best group award, and runner-up prizes are given. Awards may also go to the best skit and a number of cosplay skill subcategories, such as master tailor, master weapon-maker, master armorer, and so forth.
The most well-known cosplay contest event is the World Cosplay Summit, selecting cosplayers from 40 countries to compete in the final round in Nagoya, Japan. Some other international events include European Cosplay Gathering (finals taking place at Japan Expo in Paris), EuroCosplay (finals taking place at London MCM Comic Con), and the Nordic Cosplay Championship (finals taking place at NärCon in Linköping, Sweden).
Common cosplay judging criteria
This table contains a list of the most common cosplay competition judging criteria, as seen from World Cosplay Summit, Cyprus Comic Con, and ReplayFX.
Gender issues
Portraying a character of the opposite sex is called crossplay. The practicality of crossplay and cross-dress stems in part from the abundance in manga of male characters with delicate and somewhat androgynous features. Such characters, known as bishōnen (lit. "pretty boy"), are the Asian equivalent of the elfin boy archetype represented in Western tradition by figures such as Peter Pan and Ariel.
Male to female cosplayers may experience issues when trying to portray a female character because it is hard to maintain the sexualized femininity of a character. Male cosplayers may also be subjected to discrimination, including homophobic comments and being touched without permission. This affects men possibly even more often than it affects women, despite inappropriate contact already being a problem for women who cosplay, as is "slut-shaming".
Animegao kigurumi players, a niche group in the realm of cosplay, are often male cosplayers who use zentai and stylized masks to represent female anime characters. These cosplayers completely hide their real features so the original appearance of their characters may be reproduced as literally as possible, and to display all the abstractions and stylizations such as oversized eyes and tiny mouths often seen in Japanese cartoon art. This does not mean that only males perform animegao or that masks are only female.
Harassment issues
"Cosplay Is Not Consent", a movement started in 2013 by Rochelle Keyhan, Erin Filson, and Anna Kegler, brought attention to the issue of sexual harassment in the convention attending cosplay community. Harassment of cosplayers include photography without permission, verbal abuse, touching, and groping. Harassment is not limited to women in provocative outfits as male cosplayers talked about being bullied for not fitting certain costume and characters.
Starting in 2014, New York Comic Con placed large signs at the entrance stating that "Cosplay is Not Consent". Attendees were reminded to ask permission for photos and respect the person's right to say no. The movement against sexual harassment against cosplayers has continued to gain momentum and awareness since being publicized. Traditional mainstream news media like The Mercury News and Los Angeles Times have reported on the topic, bringing awareness of sexual harassment to those outside of the cosplay community.
Ethnicity issues
As cosplay has entered more mainstream media, ethnicity becomes a controversial point. Cosplayers of different skin color than the character are often ridiculed for not being 'accurate' or 'faithful'. Many cosplayers feel as if anyone can cosplay any character, but it becomes complicated when cosplayers are not respectful of the character's ethnicity. These views against non-white cosplayers within the community have been attributed to the lack of representation in the industry and in media. Issues such as blackface, brownface, and yellowface are still controversial since a large part of the cosplay community see these as separate problems, or simply an acceptable part of cosplay.
Cosplay models
Cosplay has influenced the advertising industry, in which cosplayers are often used for event work previously assigned to agency models. Some cosplayers have thus transformed their hobby into profitable, professional careers. Japan's entertainment industry has been home to professional cosplayers since the rise of Comiket and Tokyo Game Show. The phenomenon is most apparent in Japan but exists to some degree in other countries as well. Professional cosplayers who profit from their art may experience problems related to copyright infringement.
A cosplay model, also known as a cosplay idol, cosplays costumes for anime and manga or video game companies. Good cosplayers are viewed as fictional characters in the flesh, in much the same way that film actors come to be identified in the public mind with specific roles. Cosplayers have modeled for print magazines like Cosmode and a successful cosplay model can become the brand ambassador for companies like Cospa. Some cosplay models can achieve significant recognition. While there are many significant cosplay models, Yaya Han was described as having emerged "as a well-recognized figure both within and outside cosplay circuits". Jessica Nigri used her recognition in cosplay to gain other opportunities such as voice acting and her own documentary on Rooster Teeth. Liz Katz used her fanbase to take her cosplay from a hobby to a successful business venture, sparking debate throughout the cosplay community over whether cosplayers should be allowed to fund and profit from their work.
In the 2000s, cosplayers started to push the boundaries of cosplay into eroticism, paving the way for "erocosplay". The advent of social media, coupled with crowdfunding platforms like Patreon and OnlyFans, has allowed cosplay models to turn cosplay into profitable full-time careers.
Cosplay by country or region
Cosplay in Japan
Cosplayers in Japan used to refer to themselves as reiyā (レイヤー), pronounced "layer". Currently in Japan, cosplayers are more commonly called kosupure (コスプレ), pronounced "ko-su-pray", as reiyā is more often used to describe layers (i.e. hair, clothes, etc.). Words like cute (kawaii (可愛い)) and cool (kakko ī (かっこ いい)) were often used to describe these changes, expressions that were tied with notions of femininity and masculinity. Those who photograph players are called cameko, short for camera kozō or camera boy. Originally, the cameko gave prints of their photos to players as gifts. Increased interest in cosplay events, both on the part of photographers and cosplayers willing to model for them, has led to formalization of procedures at events such as Comiket. Photography takes place within a designated area removed from the exhibit hall. In Japan, costumes are generally not welcome outside of conventions or other designated areas.
Since 1998, Tokyo's Akihabara district contains a number of cosplay restaurants, catering to devoted anime and cosplay fans, where the waitresses at such cafés dress as video game or anime characters; maid cafés are particularly popular. In Japan, Tokyo's Harajuku district is the favorite informal gathering place to engage in cosplay in public. Events in Akihabara also draw many cosplayers.
is a form of Japanese cosplay where the players use body paint to make their skin color match that of the character they are playing. This allows them to represent anime or video game characters with non-human skin colors.
A 2014 survey for the Comic Market convention in Japan noted that approximately 75% of cosplayers attending the event are female.
Cosplay in other Asian countries
Cosplay is common in many East Asian countries. For example, it is a major part of the Comic World conventions taking place regularly in South Korea, Hong Kong and Taiwan. Historically, the practice of dressing up as characters from works of fiction can be traced back as far as late Ming dynasty China in the 17th century.
Cosplay in Western countries
Western cosplay's origins are based primarily in science fiction and fantasy fandoms. It is also more common for Western cosplayers to recreate characters from live-action series than it is for Japanese cosplayers. Western costumers also include subcultures of hobbyists who participate in Renaissance faires, live action role-playing games, and historical reenactments. Competition at science fiction conventions typically includes the masquerade (where costumes are presented on stage and judged formally) and hall costumes.
The increasing popularity of Japanese animation outside of Asia during the late 2000s led to an increase in American and other Western cosplayers who portray manga and anime characters. Anime conventions have become more numerous in the West in the previous decade, now competing with science fiction, comic book and historical conferences in attendance. At these gatherings, cosplayers, like their Japanese counterparts, meet to show off their work, be photographed, and compete in costume contests. Convention attendees also just as often dress up as Western comic book or animated characters, or as characters from movies and video games.
Differences in taste still exist across cultures: some costumes that are worn without hesitation by Japanese cosplayers tend to be avoided by Western cosplayers, such as outfits that evoke Nazi uniforms. Some Western cosplayers have also encountered questions of legitimacy when playing characters of canonically different racial backgrounds, and people can be insensitive to cosplayers playing as characters who are canonically of other skin color. Western cosplayers of anime characters may also be subjected to particular mockery.
In contrast to Japan, the wearing of costumes in public is more accepted in the UK, Ireland, US, Canada and other western countries. These countries have a longer tradition of Halloween costumes, fan costuming and other such activities. As a result, for example, costumed convention attendees can often be seen at local restaurants and eateries, beyond the boundaries of the convention or event.
Media
Magazines and books
Japan is home to two especially popular cosplay magazines, Cosmode (コスモード) and ASCII Media Works' Dengeki Layers (電撃Layers). Cosmode has the largest share in the market and an English-language digital edition. Another magazine, aimed at a broader, worldwide audience is CosplayGen. In the United States, Cosplay Culture began publication in February 2015. Other magazines include CosplayZine featuring cosplayers from all over the world since October 2015, and Cosplay Realm Magazine which was started in April 2017. There are many books on the subject of cosplay as well.
Documentaries and reality shows
Cosplay Encyclopedia, a 1996 film about Japanese cosplay released by Japan Media Supply. It was released in subtitled VHS by Anime Works in 1999, eventually being released onto DVD in 2002.
Otaku Unite!, a 2004 film about otaku subculture, features extensive footage of cosplayers.
Akihabara Geeks, a 2005 Japanese short film.
Animania: The Documentary is a 2007 film that explores the cosplay cultural phenomenon in North America, following four cosplayers from various ethnicities as they prepare to compete at Anime North, Canada's largest anime convention.
Conventional Dress is a short documentary about cosplay at Dragon Con made by Celia Pearce and her students in 2008.
Cosplayers: The Movie, released in 2009 by Martell Brothers Studios for free viewing on YouTube and Crunchyroll, explores the anime subculture in North America with footage from anime conventions and interviews with fans, voice actors and artists.
"I'm a Fanboy", a 2009 episode of the MTV series True Life, focusing on fandom and cosplay.
Fanboy Confessional, a 2011 Space Channel series that featured an episode on cosplay and cosplayers from the perspective of an insider.
Comic-Con Episode IV: A Fan's Hope, a 2011 film about four attendees of San Diego Comic-Con, including a cosplayer.
America's Greatest Otaku, a 2011 TV series where contenders included cosplayers.
Cosplayers UK: The Movie, a 2011 film following a small selection of cosplayers at the London MCM Expo.
My Other Me: A Film About Cosplayers, chronicling a year in the life of three different cosplayers: a veteran cosplayer who launched a career from cosplay, a young 14-year-old first-timer, and a transgender man who found himself through cosplay. It was released in 2013 and was a featured segment on The Electric Playground.
Heroes of Cosplay, a reality show on cosplay that premiered in 2013 on the Syfy network. It follows nine cosplayers as they create their costumes, travel to conventions and compete in contests.
"24 Hours With A Comic Con Character", a segment from CNNMoney following around a known cosplayer while she prepared for and attended New York Comic Con.
WTF is Cosplay?, a reality show that premiered in 2015 on the Channel 4 network. It follows six cosplayers throughout their day-to-day lives and what cosplay means to them.
Call to Cosplay, a competition reality show that premiered in 2014 on Myx TV. It is a cosplay design competition show where contestants are tasked with creating costumes based on a theme and time constraints.
Cosplay Melee, a competition reality show on cosplay that premiered in 2017 on the Syfy network.
Cosplay Culture, a 90-minute documentary that follows cosplayers during preparation and conventions in Canada, Japan and Romania. It includes a visit to Akihabara (Japan), a geek Mardi Gras parade in New Orleans and a historic overview explaining the origin of cosplay.
Other media
Cosplay Complex, a 2002 anime miniseries.
Downtown no Gaki no Tsukai ya Arahende!!, a Japanese TV variety show that includes the Cosplay Bus Tour series segment.
Super Cosplay War Ultra, a 2004 freeware fighting game.
A large number of erotic and pornographic films feature cosplaying actresses; many such films come from the Japanese company TMA.
Cosplay groups and organizations
501st Legion
Rebel Legion
See also
Anime and manga fandom
Costume party
Costumed character
Escapism
Fan labor
Furry fandom
Halloween costume
Iga Ueno Ninja Festa
Japanese pop culture in the United States
Japanese street fashion
List of cosplayers
Lolita fashion
Look-alike
Quadrobers
Real-life superhero
Sexual roleplay
Uniform fetishism
Zombie walk
Notes
References
External links
International Cosplay Day
1984 neologisms
Anime and manga terminology
Fandom
Costume design
Japanese subcultures
Japanese youth culture
Nerd culture
Otaku
Video game culture | Cosplay | [
"Engineering"
] | 7,730 | [
"Costume design",
"Design"
] |
156,300 | https://en.wikipedia.org/wiki/Pyrokinesis | Pyrokinesis is a psychic ability allowing a person to create and control fire with the mind. As with other parapsychological phenomena, there is no conclusive evidence in support of the actual existence of pyrokinesis. Many alleged cases are hoaxes; the result of trickery.
Etymology
The word pyrokinesis (from Greek pyr meaning fire, kinesis meaning movement) was popularized by horror novelist Stephen King in his 1980 novel Firestarter to describe the ability to create and control fire with the mind, though its use predates the novel. The word is intended to be parallel to telekinesis, with S. T. Joshi describing it as a "singularly unfortunate coinage" and noting that the correct analogy to telekinesis would "not be 'pyrokinesis' but 'telepyrosis' (fire from a distance)".
History
A. W. Underwood, a 19th-century African-American, achieved minor celebrity status with the ability to set items ablaze. Magicians and scientists have suggested concealed pieces of phosphorus may have instead been responsible. The phosphorus could be readily ignited by breath or rubbing. Skeptical investigator Joe Nickell has written that Underwood may have used a "chemical-combustion technique, and still other means. Whatever the exact method — and the phosphorus trick might be the most likely — the possibilities of deception far outweigh any occult powers hinted at by Charles Fort or others."
The medium Daniel Dunglas Home was known for performing fire feats and handling a heated lump of coal taken from a fire. The magician Henry R. Evans wrote that the coal handling was a juggling trick, performed by Home using a hidden piece of platinum. Hereward Carrington described Evans' hypothesis as "certainly ingenious" but pointed out that William Crookes, an experienced chemist, was present at a séance whilst Home performed the feat and would have known how to distinguish between coal and platinum. Frank Podmore wrote that most of the fire feats could easily have been performed by conjuring tricks and sleight of hand, but that hallucination and sense-deception may have explained Crookes' claim about observing flames from Home's fingers.
Joseph McCabe has written that Home's alleged feats of pyrokinesis were weak and unsatisfactory, he noted that they were performed in dark conditions amongst unreliable witnesses. McCabe suggested the coal handling was probably a "piece of asbestos from Home's pocket".
Sometimes claims of pyrokinesis are published in the context of fire ghosts, such as Canneto di Caronia fires and the 1982 Italian case of a young Scottish nanny, Carole Compton.
In March 2011, a three-year-old girl in Antique, a Philippine province with a strong tradition of mysticism and folklore, gained local media attention for the supposed supernatural power to predict or create fires. The town mayor said he witnessed a pillow ignite after the girl said "fire... pillow." Others claimed to have witnessed the girl either predicting or causing fires without any physical contact with the objects. A pastor claimed to have exorcised the girl, and police failed to find anything abnormal, although a paranormal proponent claimed that she must have inherited those powers from a previous life. The story of the alleged "fire starter" was featured on the June 22, 2020 episode of Kapuso Mo, Jessica Soho. After several objects around the house ignited, local residents flocked to the girl's house to learn of the circumstances and emergency services visited to investigate.
There is no scientifically known method for the brain to trigger explosions or fires.
See also
Fire (classical element)
Firewalking
Fire breathing
Spontaneous human combustion
References
Further reading
Gordon Stein. (1993). Encyclopedia of Hoaxes. Gale Research.
John G. Taylor. (1980). Science and the Supernatural: An Investigation of Paranormal Phenomena Including Psychic Healing, Clairvoyance, Telepathy, and Precognition by a Distinguished Physicist and Mathematician. Temple Smith.
Fire
Paranormal hoaxes
Paranormal terminology
Psychic powers
1980s neologisms | Pyrokinesis | [
"Chemistry"
] | 835 | [
"Combustion",
"Fire"
] |
156,310 | https://en.wikipedia.org/wiki/Radicle | In botany, the radicle is the first part of a seedling (a growing plant embryo) to emerge from the seed during the process of germination. The radicle is the embryonic root of the plant, and grows downward in the soil (the shoot emerges from the plumule). Above the radicle is the embryonic stem or hypocotyl, supporting the cotyledon(s).
As the embryonic root, it is the first structure to emerge from the seed, pushing down into the soil so that the seedling can take up water before sending up its shoot and leaves to begin photosynthesis.
The radicle emerges from a seed through the micropyle. Radicles in seedlings are classified into two main types. Those pointing away from the seed coat scar or hilum are classified as antitropous, and those pointing towards the hilum are syntropous.
If the radicle begins to decay, the seedling undergoes pre-emergence damping off. This disease appears on the radicle as darkened spots. Eventually, it causes death of the seedling.
The plumule is the embryonic shoot; it emerges after the radicle.
In 1880, Charles Darwin published a book about plants he had studied, The Power of Movement in Plants, where he mentions the radicle.
See also
Plant perception (physiology)
References
Plant anatomy
Plant morphology
Plant intelligence | Radicle | [
"Biology"
] | 299 | [
"Plant morphology",
"Plant intelligence",
"Plants"
] |
156,331 | https://en.wikipedia.org/wiki/Thomas%20Gold | Thomas Gold (May 22, 1920 – June 22, 2004) was an Austrian-born astrophysicist, who also held British and American citizenship. He was a professor of astronomy at Cornell University, a member of the U.S. National Academy of Sciences, and a Fellow of the Royal Society (London). Gold was one of three young Cambridge scientists who in 1948 proposed the now mostly abandoned "steady state" hypothesis of the universe. Gold's work crossed boundaries of academic and scientific disciplines, into biophysics, astronomy, aerospace engineering, and geophysics.
Early life
Gold was born on May 22, 1920, in Vienna, Austria, to Max Gold, a wealthy Jewish industrialist who before the war ran one of Austria's largest mining and metal fabrication companies, and German former actress Josefine Martin. Following the economic downfall of the European mining industry in the late 1920s, Max Gold moved his family to Berlin, where he had taken a job as director of a metal trading company. Following the start of Nazi leader Adolf Hitler's anti-Jewish campaigns in 1933, Gold and his family left Germany because of his father's heritage. The family travelled through Europe for the next few years. Gold attended boarding school at the Lyceum Alpinum Zuoz in Zuoz, Switzerland, where he quickly proved to be a clever, competitive and physically and mentally aggressive individual. Gold finished his schooling at Zuoz in 1938, and fled with his family to England after the German invasion of Austria in early 1938. Gold entered Trinity College, Cambridge in 1939 and began studying mechanical sciences. In May 1940, just as Hitler was commencing his advance in Belgium and France, Gold was sent into internment as an enemy alien by the British government. It was on the first night of internment, at an army barracks in Bury St Edmunds, that he met his future collaborator and close friend, Hermann Bondi.
Gold spent most of his nearly 15 months of internment in a camp in Canada, after which he returned to England and reentered Cambridge University, where he abandoned his study of mechanical sciences for physics. After graduating with a pass (Ordinary) degree in June 1942, Gold worked briefly as an agricultural labourer and lumberjack in northern England before joining Bondi and Fred Hoyle on naval research into radar ground clutter near Dunsfold, Surrey. The three men would spend their off-duty hours in "intense and wide-ranging scientific discussion" on topics such as cosmology, mathematics and astrophysics. Within months, Gold was placed in charge of constructing new radar systems. Gold determined how landing craft could use radar to navigate to the appropriate landing spot on D-Day and also discovered that the German navy had fitted snorkels to its U-boats, making them operable underwater while still taking in air from above the surface.
Schooling and work in England
Immediately after the war, Hoyle and Bondi returned to Cambridge, while Gold stayed with naval research until 1947. He then began working at Cambridge's Cavendish Laboratory to help construct the world's largest magnetron, a device invented by two British scientists in 1940 that generated intense microwaves for radar. Soon after, Gold joined R. J. Pumphrey, a zoologist at the Cambridge Zoology Laboratory who had served as the deputy head of radar naval research during the war, to study the effect of resonance on the human ear.
Theory of human hearing
Via simple experimentation in 1946, Gold found that the degree of resonance observed in the cochlea was not in accordance with the level of damping that would be expected from the viscosity of the watery liquid that fills the inner ear. As recounted by Freeman Dyson, who was one of the fellow students at Cambridge whom Gold experimented on, the procedure was "simple, elegant, and original." Gold built his experimental apparatus out of war surplus Navy electronics and headphones. This was equipment that Gold had used during his World War II assignment to the Royal Navy as a radar and radio communications specialist.
In 1948 he published two papers on his results; one described the theory and the other reporting the experimental results. His theory was that the ear operates instead in the same way as does a "regenerative radio receiver" by adding energy at the same frequency it is trying to detect. (Later this became known as otoacoustic emission.) Although Gold won a prize fellowship from Trinity College for his thesis on this proposed mechanism of hearing and obtained a junior lectureship at the Cavendish Laboratory, his theory was widely ignored by ear specialists and physiologists, such as future (1961) Nobel Prize winner Georg von Békésy, who did not believe the cochlea operated under a feedback system. Later, however, researchers discovered that Gold's hypothesis had been correct. As reported in one of the science obituaries published about Gold in 2004, "Ignored for over 30 years, his research was rediscovered in the 1970s when physiologists discovered the tiny hair cells that act as amplifiers in the inner ear."
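Gold's "regenerative receiver" analogy can be illustrated with a toy model (an illustrative sketch, not Gold's own analysis): a driven, damped oscillator whose feedback returns energy in proportion to velocity, partially cancelling the viscous damping. All parameter values below (omega0, gamma, gain, force) are arbitrary illustrative choices.

```python
import numpy as np

# Toy model of an "active" resonator: feedback acts as negative damping,
# so the effective damping is (gamma - gain) and the resonance sharpens
# as the gain approaches the oscillation threshold (gain = gamma).

def amplitude(omega, omega0=1.0, gamma=0.5, gain=0.0, force=1.0):
    """Steady-state amplitude of x'' + (gamma - gain) x' + omega0^2 x = force*cos(omega t)."""
    damping = gamma - gain
    return force / np.sqrt((omega0**2 - omega**2)**2 + (damping * omega)**2)

omegas = np.linspace(0.5, 1.5, 2001)
for gain in (0.0, 0.4, 0.49):  # 0.49 sits just below the oscillation threshold
    response = amplitude(omegas, gain=gain)
    peak_omega = omegas[np.argmax(response)]
    approx_q = 1.0 / (0.5 - gain)  # ~ omega0 / effective damping, with omega0 = 1
    print(f"gain={gain:.2f}  peak near omega={peak_omega:.3f}  "
          f"peak amplitude={response.max():.1f}  Q~{approx_q:.0f}")
```

With no feedback the response is broad and low; just below the threshold it becomes tall and narrow, which is the qualitative behaviour Gold argued the cochlea must exhibit despite the damping of its fluid.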
Steady-state theory
Gold began discussing problems in physics with Hoyle and Bondi again, centering on the issues over redshift and Hubble's law. This led the three to all start questioning the Big Bang theory originally proposed by Georges Lemaître in 1931 and later advanced by George Gamow, which suggested that the universe expanded from an extremely dense and hot state and continues to expand today. As recounted in a 1978 interview with physicist and historian Spencer R. Weart, Gold believed that there was reason to think that the creation of matter was "done all the time and then none of the problems about fleeting moments arise. It can be just in a steady state with the expansion taking things apart as fast as new matter comes into being and condenses into new galaxies".
Two papers were published in 1948 discussing the "steady-state theory" as an alternative to the Big Bang: one by Hermann Bondi and Gold, the other by Fred Hoyle. In their seminal paper, Bondi and Gold asserted that although the universe is expanding, it nevertheless does not change its look over time; it has no beginning and no end. They proposed the perfect cosmological principle as the underpinning of their theory, which held that the universe is homogeneous and isotropic in space and time. On the large scale, they argued that there "is nothing outstanding about any place in the universe, and that those differences which do exist are only of local significance; that seen on a large scale the universe is homogeneous." However, since the universe was not characterized by a lack of evolution, distinguishing features or recognizable direction of time, they postulated that there had to be large-scale motions in the universe. They highlighted two possible types of motion: large-scale expansion and its reverse, large-scale contraction. They estimated that within the expanding universe, hydrogen atoms were being created out of a vacuum at a rate of one atom per cubic meter per 10⁹ years. This creation of matter would keep the density of the universe constant as it expanded. Gold and Bondi also stated that the issues with time scale that had plagued other cosmological theories – such as the discrepancy between the age of the universe as calculated by Hubble and dating of radioactive decay in terrestrial rocks – were absent for the steady-state theory.
It was not until the 1960s that major problems with the steady-state theory began to emerge, when observations apparently supported the idea that the universe was in fact changing: quasars and radio galaxies were found only at large distances (therefore existing only in the distant past), not in closer galaxies. Whereas the Big Bang theory predicted as much, steady state predicted that such objects would be found everywhere, including close to our own galaxy, since evolution would be more evenly distributed, not observed only at great distances. In addition, proponents of the theory predicted that in addition to hydrogen atoms, antimatter would also be produced, as with cosmic gamma ray background from the annihilation of protons and antiprotons and X-ray emitting gas from the creation of neutrons.
For most cosmologists, the refutation of the steady-state theory came with the discovery of the cosmic microwave background radiation in 1965, which was predicted by the Big Bang theory. Stephen Hawking said that the fact that microwave radiation had been found, and that it was thought to be left over from the Big Bang, was "the final nail in the coffin of the steady-state theory." Bondi conceded that the theory had been disproved, but Hoyle and Gold remained unconvinced for a number of years. Gold even supported Hoyle's modified steady-state theory; however, by 1998 he started to express some doubts about the theory, but maintained that despite its faults, the theory helped improve understanding regarding the origin of the universe.
Extra-galactic radio signals
In 1951, at a meeting of the Royal Astronomical Society, Gold proposed that the source of recent radio signals detected from space was outside the Milky Way galaxy, much to the derision of radio astronomer Martin Ryle and several mathematical cosmologists. However, a year later, a distant source was identified and Gold announced at an International Astronomical Union meeting in Rome that his theory had been proven. Ryle would later take Gold's argument as proof of extragalactic evolution, claiming that it invalidated the steady-state theory.
Shock wave origin of magnetic storms
Gold left Cambridge in 1952 to become the chief assistant to Astronomer Royal Harold Spencer Jones at the Royal Greenwich Observatory in Herstmonceux, Sussex, England. While there, Gold attracted some controversy by suggesting that the interaction of charged particles from the Sun with the Earth's magnetic field, which creates magnetic storms in the upper atmosphere, was an example of a collisionless shock wave. The theory was widely disputed until 1957, when American scientists conducted a simulation using a shock tube and showed that Gold's proposal held up to scrutiny.
Astrophysics work in the USA
Gold resigned from the Royal Observatory following Spencer-Jones's retirement and moved to the United States in 1956, where he served as Professor of Astronomy (1957–1958) and Robert Wheeler Wilson Professor of Applied Astronomy (1958–1959) at Harvard University. In early 1959, he accepted an appointment at Cornell University, which had offered him the opportunity to set up an interdisciplinary unit for radiophysics and space research, and take charge of the Department of Astronomy. At the time, there was only one other faculty member in the department. Gold would serve as director of the Center for Radiophysics and Space Research until 1981, establishing Cornell as a leading hub of scientific research. During his tenure, Gold hired famed astronomers Carl Sagan and Frank Drake, helped establish the world's largest radio telescope at the Arecibo Observatory in Puerto Rico and the Cornell-Sydney University Astronomy Center with Harry Messel. In addition, Gold served as Assistant Vice President for Research from 1969–1971 and the John L. Wetherill Professor of Astronomy from 1971 until his retirement in 1986.
Solar nanoflares and Earth's magnetosphere
In 1959, Gold expanded on his previous prediction of a collisionless shock wave, arguing that solar flares would eject material into magnetic clouds to produce a shock front that would result in geomagnetic storms. He also coined the term "magnetosphere" in his paper "Motions in the Magnetosphere of the Earth" to describe "the region above the ionosphere in which the magnetic field of the Earth has a dominant control over the motions of gas and fast charged particles ... [which was] known to extend out to a distance of the order of 10 Earth radii". A 2015 paper titled "Modelling nanoflares in active regions and implications for coronal heating mechanisms," attributes the initial idea of the cause of magnetic storms above Earth to Gold: "The heating of the solar corona by small, impulsive heating events appears to date to a discussion by Gold [1], and the subsequent more quantitative proposal of Levine [2,3] that small coronal current sheets were responsible for the heating."
Panspermia and pulsars
In 1960, Gold collaborated again with Fred Hoyle to show that magnetic energy fueled solar flares and that flares were triggered when opposite magnetic loops interact and release their stored energy.
In 1960, Gold suggested a "garbage theory" for the origin of life, thus constituting a kind of "accidental panspermia". The theory proposes that life on Earth might have spread from a pile of waste products accidentally dumped on Earth long ago by extraterrestrials.
In 1968, a Cambridge radio astronomy postgraduate student Jocelyn Bell Burnell and her doctoral adviser Antony Hewish discovered a pulsing radio source with a period of 1.337 seconds. The source – which was termed "pulsar" – emitted beams of electromagnetic radiation at a very short and consistent interval. Gold proposed that these objects were rapidly rotating neutron stars. Gold argued that due to their strong magnetic fields and high rotational speed, pulsars would emit radiation similar to a rotating beacon. Gold's conclusion was initially not well received by the scientific community; in fact, he was refused permission to present his theory at the first international conference on pulsars. However, Gold's theory became widely accepted following the discovery of a pulsar in the Crab Nebula using the Arecibo radio telescope, opening the door for future advancements in solid-state physics and astronomy. Anthony Tucker of The Guardian remarked that Gold's discovery paved the way for Stephen Hawking's groundbreaking research into black holes.
Moon dust and NASA
From the 1950s, Gold served as a consultant to NASA and held positions on several national space committees, including the President's Science Advisory Committee, as the United States tried to develop its space program. At the time, scientists were engaged in a heated debate over the physical properties of the Moon's surface. In 1955, he predicted that the Moon was covered by a layer of fine rock powder stemming from "the ceaseless bombardment of its surface by Solar System debris". This led to the dust being jokingly referred to as "Gold dust" or "Gold's dust". Gold initially suggested that astronauts would sink into the dust, but upon later analysis of impact craters and electrostatic fields, he determined that the astronauts' boots would sink only three centimeters into the Moon's surface. In any case, NASA sent unmanned Surveyors to analyze the conditions on the surface of the Moon. Gold was ridiculed by fellow scientists, not only for his hypothesis, but for the approach he took in communicating NASA's concerns to the American public; in particular, some experts were infuriated with his usage of the term "Moon dust" in reference to lunar regolith. When the Apollo 11 crew landed on the Moon in 1969 and brought back the first samples of lunar rocks, researchers found that lunar soil was in fact powdery. Gold said the findings were consistent with his hypothesis, noting that "in one area as they walked along, they sank in between five and eight inches". However, Gold received little credit for his correct prediction, and was even criticized for his original prediction of a deep layer of lunar dust. Gold had also contributed to the Apollo program by designing the Apollo Lunar Surface Closeup Camera (ALSCC) (a kind of stereo camera) used on the Apollo 11, 12, and 14 missions.
In the 1970s and 1980s, Gold was a vocal critic of NASA's Space Shuttle program, deriding claims that the agency could fly 50 missions a year or that it could have low budget costs. NASA officials warned Gold that if he testified his concerns before Congress, his research proposals would lose their support from NASA. Gold ignored the warning and testified before a Congressional committee headed by Senator Walter Mondale. In a letter to NASA administrator James C. Fletcher, George Low wrote that "Gold should realize that being funded by the Government and NASA is a privilege, and that it would make little sense for us to fund him as long as his views are what they are now". Gold recalled the aftermath of his testimony in a 1983 interview with astronomy historian David H. DeVorkin:
I had a very hard time with NASA, year after year. I got some more money, but eventually it fizzled out, after three years or so after this event. My applications, which previously each year had always gone through very smoothly, were turned down. I would then have to go to Washington, discuss it with them, and I then would get a certain fraction of it resurrected. For several years running this happened, and then eventually it fizzled permanently, and I've not tried to get any money out of NASA since.
...
I was certainly regarded as persona non grata with NASA after that. I had a very hard time. Shortly after that Noel Hinners became the Space Science administrator, and he used to joke about it and say, "Oh. Tommy's got to come to his annual pilgrimage to Washington," and regarded it as very funny, but then he'd always give me some money. But always clearly as a persona non grata.
Contrarian views in geology and biology
Abiogenic origins of petroleum
Thomas Gold first became interested in the origins of petroleum in the 1950s, postulating a theory on the abiogenic formation of fossil fuels. Gold engaged in thorough discussion on the matter with Fred Hoyle, who even included a chapter on "Gold's Pore Theory" in his 1955 book Frontiers in Astronomy.
While Russian scientists had long been at work explicating possible abiogenic origins of petroleum, the Cold War blocked knowledge of their publications until the 1990s. Thus, Thomas Gold was credited with the idea in the United States when current events prompted him to submit an opinion piece to the Wall Street Journal in June 1977 titled, "Rethinking the origins of oil and gas." Concerns about the gasoline shortages that began in 1973 were still troubling the economy. A striking discovery in the deep sea just four months earlier (February 1977) was another impetus: Exploration and photography of a deep-sea hydrothermal vent showed a dense amount of life living on chemical energy. Stationary organisms depending on vent outflows included albino clams and tube worms larger than ever seen in surface marine ecosystems. Most astonishing was that such ecosystems were based on microbial life living entirely on chemosynthetic rather than photosynthetic ways of capturing energy and building living cells.
Science communicator Paul Davies explained Gold's theory in this way: "Conventional wisdom has it that oil and coal are remnants of ancient surface life that became buried and subjected to extremes of temperature and pressure. Gold maintains that these deposits are not fossil fuels in the normal sense, but the products of primordial hydrocarbons dating from the time of the Earth's formation. He claims that over the aeons the volatile gases migrate towards the surface through cracks in the crust, and either leak into the atmosphere as methane, become trapped in sub-surface gas fields, or are robbed of their hydrogen to become oil, tar or carbonaceous material like coal."
As to the ubiquity of abiotic hydrocarbons in the solar system, a 1999 profile of Gold in the Washington Post quoted him as saying, "it always seemed absurd to me to see petroleum hydrocarbons on other planets, where there was obviously never any vegetation, even as we insist that on Earth they must be biological in origin."
Earthquakes from rising methane
Having established the theoretical foundations of his abiogenic petroleum hypothesis, Gold began in-depth thinking and research on the kinds of empirical evidence that might weigh in its favor. The first step was a collaboration with a former graduate student of his at Cornell University: Steven Soter. Soter had received his PhD in astronomy in 1971 and had recently concluded another faculty collaboration at Cornell: working with Carl Sagan in the writing of the television series, Cosmos: A Personal Voyage. Gold and Soter teamed up to investigate the knowns and unknowns about earthquakes from the standpoint of plausible causation by or regular co-occurrence with sudden escape of large volumes of methane gas. The result was a series of papers, including two with "earthquakes" in the title: "North Sea-quakes" (New Scientist 1979) and "Fluid Ascent through the Solid Lithosphere and its Relation to Earthquakes" (Pure and Applied Geophysics 1985). Their 1980 article in Scientific American was titled "The Deep-Earth-Gas Hypothesis" and the explanatory value of the idea was presented as providing "a unified basis for explaining a number of otherwise rather puzzling phenomena that either give warning of earthquakes or accompany them." Even so, they cautioned, "The sampling of such gases is just beginning, and the data will not yet support confident conclusions."
The puzzling phenomena associated with earthquakes include "flames that shoot from the ground, earthquake lights, fierce bubbling in bodies of water, sulfurous air and the strange behavior of animals, loud explosive and hissing noises, and visible waves rolling slowly along alluvial ground." They constructed a map of the world depicting major oil-producing regions and areas with historical seismic activity. Several oil-rich regions, such as Alaska, Texas, the Caribbean, Mexico, Venezuela, the Persian Gulf, the Urals, Siberia, and Southeast Asia, were shown to be lying on major earthquake belts. Gold and Soter suggested that these belts may explain the upward migration of gases originating at depth. "The fact is that oil and gas fields show a distinct association with such earthquake-prone regions. The association suggests to us that the deep faults may provide a conduit for the continuous input of nonbiological methane streaming up from below. Moreover, the upward migration of methane and other gases in fault zones may contribute to the triggering of earthquakes."
He also pointed to the abundance of helium in oil and gas reserves as evidence for "a deep source of the hydrocarbons". Moreover, a few oil reserves thought to have been exhausted were suddenly generating vast amounts of crude oil. From this, Gold proposed that the Earth may possess a virtually endless supply – suggesting as much as "at least 500 million years' worth of gas" – of fossil fuels.
The helium anomaly
In later publications, Gold emphasized that the immense amount of helium gas that surges upward during commercial petroleum production at some sites is proof in itself that substantial quantities of lightweight gases have indeed persisted at depth since the amalgamation of cosmic debris into planet Earth during the birth of this solar system. In his 1998 book, Gold closed his fourth chapter, "Evidence for Deep-Earth Gas," with a section titled "The Association of Helium with Hydrocarbons."
A test: Drilling deep into granite
Gold began testing his abiogenic petroleum theory in 1986. With the backing of a group of investors, Vattenfall and the Gas Research Institute, drilling of a deep borehole – named Gravberg-1 – commenced into the bedrock near Lake Siljan in Sweden. This was the site of a large meteor crater, which would have "opened channels deep enough for the methane to migrate upward" and formed deposits in caprock just a few miles beneath the surface. He estimated that the fractures near Lake Siljan reached down nearly 40 kilometers into the earth.
In 1987, approximately of drilling lubricant disappeared nearly into the ground, leading Gold to believe that the lubricant had fallen into a methane reservoir. Soon after, the team brought up nearly 100 liters of black oily sludge to the surface. Gold claimed that the sludge contained both oil and remnants of archaebacteria. He argued that "it suggests there is an enormous sphere of life, of biology, at deeper levels in the ground than we have had any knowledge of previously" and that this evidence would "destroy the orthodox argument that since oil contains biological molecules, oil reserves must have derived from biological material".
The announcement of Gold's findings was met with mixed reactions, ranging from "furious incredulity" to "deep skepticism". Geochemist Geoffrey P. Glasby speculated that the sludge could have been formed from the Fischer–Tropsch process, a catalyzed chemical reaction in which synthesis gas, a mixture of carbon monoxide and hydrogen, is converted into liquid hydrocarbons. Critics also dismissed Gold's archaebacteria finding, stating that "since micro-organisms cannot survive at such depth, the bacteria prove that the well has been contaminated from the surface". Geochemist Paul Philp analyzed the sludge and concluded that he could not differentiate between the samples of sludge and oil seep found in sedimentary shale rocks near the surface. He reasoned that oil had migrated from the shale down to the granite deep in the ground. Gold disputed Philp's finding, believing that the oil and gas could have just as easily migrated up to the surface: "They would have it that the oil and gas we found down there was from the five feet of sediments on the top – had seeped all the way down six kilometres down into the granite. I mean, such complete absurdity: you can imagine sitting there with five feet of soil and six kilometres underneath of dense granitic rock, and that methane produced up there has crawled all the way down in preference to water. Absolute nonsense."
In light of the controversy surrounding the sludge and possible drill contamination, Gold abandoned the project at Gravberg-1, calling it a "complete fiasco", and redesigned the experiment by replacing his oil-based drilling lubricant with a water-based one.
The drill hit oil in the spring of 1989, but only collected about . Gold stated, "It was not coming up at a rate at which you could sell it, but it showed there was oil down there." The drill then ran into technical problems and was stopped at a depth of 6.8 kilometers. The hole was closed, but a second hole was opened for drilling closer to the "center of the impact ring where there was even less sedimentary rock". By October 1991, the drill hit oil at a depth of 3.8 kilometers, but many skeptics remained unconvinced of the site's prospects. Geologist John R. Castaño concluded that there was insufficient evidence of the mantle as the hydrocarbon source and that it was unlikely that the Siljan site could be used as a commercial gas field. In 2019, a study of gases and secondary carbonate minerals revealed that long-term microbial methanogenesis has occurred in situ deep within the fracture system of the crater (for at least 80 million years) and with an obvious spatial link to seep oils of surficial sedimentary origin, at odds with Gold's theories of deep abiotic gas migration.
Gold's later views on the drilling results can be found in chapter 6, "The Siljan Experiment," of his 1998 book. Another section of the book titled "The Upwelling Theory of Coal Formation" presents another argument in favor of the abiogenic model that he had not presented in an earlier paper. Similarly, he also presents arguments pertaining to the origin of diamonds and that microbial processes are the cause of mineral concentrations at depth.
Dispute nearly forgotten
In 1996 a paper published in the journal Social Studies of Science was titled, "Which Came First, the Fossil or the Fuel?" The author concluded:
Beginning in the late 1970s, Gold revived the 'abiogenic' theory, which holds that hydrocarbons are primordial, not remnants of decayed biology. By contesting the central tenet of petroleum geology, Gold precipitated a bitter scientific controversy. Both sides employed novel rhetorical strategies in order to impute interests, to contest expertise, to recruit allies from peripheral disciplines, and to claim the mantle of scientific method; and both managed to construct plausible interpretations of the available data.
The author reported that even the Siljan drilling results had not been sufficient to fully resolve the long-standing dispute about origins, although Gold's hypothesis is favored by only a very slim minority. As of 2024, "fossil fuel" is still the prevailing term widely in use in reference to petroleum resources, both within academia and without. Such terminology includes 21st century communications about the causes of anthropogenic climate change and proffered solutions to the crisis, such as the 2023 IPCC report, "Climate Change 2023 Synthesis Report: Summary for Policymakers." Overall, within the academic disciplines of geobiology and petroleum geology, criticism of Gold's abiogenic theory has been severe, though not universal. The dispute is more set aside and forgotten than resolved.
As to Gold's claim that distinct chemosynthetic microbes are active and ubiquitous at depth, the established authorities have moved in his direction. (See next section.)
"Deep Hot Biosphere" theory
In a 1992 paper, "The Deep, Hot Biosphere", Gold first suggested that microbial life is widespread in the porosity of the crust of the Earth, down to depths of several kilometers, where rising temperatures finally set a limit. The subsurface life obtains its energy not from photosynthesis but from chemical sources in fluids migrating upwards through the crust. The mass of the deep biosphere may be comparable to that of the surface biosphere. Subsurface life may be widespread on other bodies in the solar system and throughout the universe, even on worlds unaccompanied by other stars.
A 1993 article by journalist William Broad, published in The New York Times and titled "Strange New Microbes Hint at a Vast Subterranean World," carried Gold's thesis to public attention. The article began, "New forms of microbial life are being discovered in such abundance deep inside the Earth that some scientists are beginning to suspect that the planet has a hidden biosphere extending miles down whose total mass may rival or exceed that of all surface life. If a deep biosphere does exist, scientists say, its discovery will rewrite textbooks while shedding new light on the mystery of life's origins. Even skeptics say the thesis is intriguing enough to warrant new studies of the subterranean realm."
The 1993 article also features how Gold's thesis expands possibilities for astrobiology research: "Dr. Thomas Gold, an astrophysicist at Cornell University known for bold theorizing, has speculated that subterranean life may dot the cosmos, secluded beneath the surfaces of planets and moons and energized by geological processes, with no need for the warming radiation of nearby stars. He wrote in The Proceedings of the National Academy of Sciences last year that the solar system might harbor at least 10 deep biospheres. 'Such life may be widely disseminated in the universe,' he said, 'since planetary type bodies with similar subsurface conditions may be common as solitary objects in space, as well as in other solar-type systems.'"
Gold also published a book of the same title, The Deep Hot Biosphere, in 1999, which expanded on the arguments in his 1992 paper and included speculations on the origin of life and on horizontal gene transfer. According to Gold, bacteria feeding on the oil accounts for the presence of biological debris in hydrocarbon fuels, obviating the need to resort to a biogenic theory for the origin of the latter. The flows of underground hydrocarbons may also explain oddities in the concentration of other mineral deposits. In short, Gold said about the origin of natural hydrocarbons (petroleum and natural gas): Hydrocarbons are not biology reworked by geology (as the traditional view would hold), but rather geology reworked by biology.
Freeman Dyson wrote the foreword to Gold's 1999 book, where he concluded, "Gold's theories are always original, always important, usually controversial — and usually right. It is my belief, based on fifty years of observation of Gold as a friend and colleague, that the deep hot biosphere is all of the above: original, important, controversial — and right." (Dyson also delivered a eulogy at Gold's memorial service, a segment of which pertaining to the deep hot biosphere theory is posted on YouTube.)
Following Gold's death, scientific discoveries amplified and also shifted understanding of the deep hot biosphere into what is now generally called the deep biosphere. However, it is only at great depth that naturally occurring geochemical processes induced by intense heat and pressure produce elemental hydrogen and carbon dioxide upon which novel metabolisms of life (especially among the primitive Archaea) could have evolved. A retrospective paper published in the same journal as Gold's 1992 paper featured the metabolic and genetic discoveries of life forms at depth that Gold's paper inspired. Titled "The Deep, Hot Biosphere: Twenty-five years of retrospection," the authors conclude:
The pioneering ideas proposed by Thomas Gold inspired a generation of researchers in the field of geobiology to dive deeper into the possibilities of subsurface life, spawning hundreds of relevant publications.... Deep hydrocarbon deposits on Mars, Titan, and worlds beyond could play host to life similar to that in Earth’s own crust. The techniques used to better study and understand deep, hot biospheres on Earth could then be applied to robotically probe targets in deep space as we move into the next century of scientific discovery. Technology is advancing at a rate wherein we may find that Gold’s deep, hot biosphere is not only true, but common across the universe.
A term Gold coined in his 1999 book carries forward, too, and is a reminder of the worldview shift he advocated. The term is "surface chauvinism". Gold wrote, "In retrospect, it is not hard to understand why the scientific community has typically sought only surface life in the heavens. Scientists have been hindered by a sort of 'surface chauvinism.'" In 2024 NASA launched Europa Clipper, the first spacecraft dedicated to investigating whether one of the moons of an outer planet that Gold had pointed to as a prospect for deep life (Europa, a moon of Jupiter) might indeed harbor the physical and chemical conditions essential for carbon-based life.
Academic legacy
Throughout his academic career, Gold received a number of honors and distinctions. He was a Fellow of the Royal Astronomical Society (1948), the Royal Society (1964), the American Geophysical Union (1962), the American Academy of Arts and Sciences (1974), and the American Astronautical Society, a member of the American Philosophical Society (1972), the United States National Academy of Sciences (1974) and the International Academy of Astronautics, and an Honorary Fellow of Trinity College, Cambridge (1986). In addition, he served as President of the New York Astronomical Society from 1981 to 1986. Gold won the John Frederick Lewis Award from the American Philosophical Society in 1972 for his paper "The Nature of the Lunar Surface: Recent Evidence" and the Humboldt Prize from the Alexander von Humboldt Foundation in 1979. In 1985, Gold won the prestigious Gold Medal of the Royal Astronomical Society, an award whose recipients include Fred Hoyle, Hermann Bondi, Martin Ryle, Edwin Hubble, James Van Allen, Fritz Zwicky, Hannes Alfvén and Albert Einstein. Gold did not earn a doctorate, but received an honorary Doctor of Science degree from Cambridge University in 1969.
Following his death in 2004, obituaries laying out the breadth of his scientific inquiries appeared in a number of scientific journals. In the journal Nature, Hermann Bondi wrote "Tommy Gold will long be remembered as a singular scientist who stepped into any field where he thought an option was being overlooked. He was also unusual in working mainly theoretically, but using little mathematics, relying instead on his profound intuitive understanding of physics." The obituary in Physics Today included a listing of topics he delved into: "the alignment of galactic dust, the instability of Earth’s axis of rotation, the dusty lunar surface, the Sun's cosmic rays, and plasmas and magnetic fields in the solar system ... the origin of solar flares, the nature of time, molecules and masers in the interstellar medium, rotating neutron stars and the nature of pulsars, terrestrial sources of hydrocarbons, and the deep Earth biosphere."
Gold's boldness in his approach is another aspect of his legacy. The obituary in the Bulletin of the American Astronomical Society called attention to his being "regarded by some as a scientific maverick who delighted in controversy. In reality, he was an iconoclast whose strength was in penetrating analysis of the assumptions on which some of our most important theories are based.... Tommy's paradigm-changing ideas in astronomy and planetary science, while original and bold, were also highly controversial. With his radical work on the origin of natural gas and petroleum, the controversy is likely to continue.... He will be remembered as one of the most interesting, dynamic and influential scientists of his generation." The obituary in The Guardian stated that Gold would "dive into new territory to open up problems unseen by others — in biophysics, astrophysics, space engineering, or geophysics. Controversy followed him everywhere. Possessing profound scientific intuition and open-minded rigour, he usually ended up challenging the cherished assumptions of others and, to the discomfiture of the scientific establishment, often found them wanting. His stature and influence were international."
Personal life
Gold married his first wife, Merle Eleanor Tuberg, an American astrophysicist who had worked with Subrahmanyan Chandrasekhar, in Cambridge in 1947. He had three daughters with her – Linda, Lucy, and Tanya. After divorcing her, Gold married Carvel Lee Beyer in 1972. With her, he had a daughter Lauren.
Thomas Gold died at the age of 84 from complications due to heart disease, at Cayuga Medical Center in Ithaca, New York. He was buried in the Pleasant Grove Cemetery in Ithaca. He was survived by his wife, four daughters, and six grandchildren.
Selected publications
See also
Abiogenic petroleum origin
Gold universe
Astronomy
Astrophysics
Petroleum
Theoretical astrophysics
Notes
References
External links
The Origin of Methane (and Oil) in the Crust of the Earth (Thomas Gold) U.S.G.S. Professional Paper 1570, The Future of Energy Gases, 1993
Thomas Gold interview
Thomas Gold Publications – Harvard
National Academy of Sciences Biographical Memoir
1920 births
2004 deaths
Alumni of Trinity College, Cambridge
American aerospace engineers
20th-century American astronomers
American biophysicists
American geophysicists
American people of Austrian-Jewish descent
20th-century Austrian astronomers
Austrian biophysicists
Cornell University faculty
American cosmologists
Fellows of the Royal Astronomical Society
Foreign fellows of the Royal Astronomical Society
Fellows of the Royal Society
Fellows of Trinity College, Cambridge
Harvard University faculty
Members of the United States National Academy of Sciences
Panspermia
Scientists from Vienna
Recipients of the Gold Medal of the Royal Astronomical Society
Fellows of the American Geophysical Union
20th-century American engineers
Alumni of Lyceum Alpinum Zuoz | Thomas Gold | [
"Biology"
] | 8,021 | [
"Biological hypotheses",
"Origin of life",
"Panspermia"
] |
156,339 | https://en.wikipedia.org/wiki/Corporate%20farming | Corporate farming is the practice of large-scale agriculture on farms owned or greatly influenced by large companies. This includes corporate ownership of farms and the sale of agricultural products, as well as the roles of these companies in influencing agricultural education, research, and public policy through funding initiatives and lobbying efforts.
The definition and effects of corporate farming on agriculture are widely debated, though sources that describe large businesses in agriculture as "corporate farms" may portray them negatively.
Definitions and usage
The varied and fluid meanings of "corporate farming" have resulted in conflicting definitions of the term, with implications in particular for legal definitions.
Legal definitions
Most legal definitions of corporate farming in the United States pertain to tax laws, anti-corporate farming laws, and census data collection. These definitions mostly reference farm income, indicating farms over a certain threshold as corporate farms, as well as ownership of the farm, specifically targeting farms that do not pass ownership through family lines.
Common definitions
In public discourse, the term "corporate farming" lacks a firmly established definition and is variously applied. However, several features of the term's usage frequently arise:
It is largely used as a pejorative with strong negative connotations.
It most commonly refers to corporations that are large-scale farms, that market agricultural technologies (in particular pesticides, fertilizers, and GMOs), that have significant economic and political influence, or that exhibit some combination of the three.
It is usually used in opposition to family farms and new agricultural movements, such as sustainable agriculture and the local food movement.
Family farms
"Family farm" and "corporate farm" are often defined as mutually exclusive terms, with the two having different interests. This mostly stems from the widespread assumption that family farms are small farms while corporate farms are large-scale operations. While it is true that the majority of small farms are family owned, many large farms are also family businesses, including some of the largest farms in the US.
According to the Food and Agricultural Organization of the United Nations (FAO), a family farm "is a means of organizing agricultural, forestry, fisheries, pastoral and aquaculture production which is managed and operated by a family and predominantly reliant on family labour, both women's and men's. The family and the farm are linked, coevolve and combine economic, environmental, reproductive, social and cultural functions."
Additionally, there are large economic and legal incentives for family farmers to incorporate their businesses.
Contract farming
Farming contracts are agreements between a farmer and a buyer that stipulate what the farmer will grow and how much they will grow, usually in return for guaranteed purchase of the product or financial support in purchasing inputs (e.g. feed for livestock growers). In most instances of contract farming, the farm is family owned while the buyer is a larger corporation. This makes it difficult to distinguish the contract farmers from "corporate farms," because they are family farms but with significant corporate influence. This subtle distinction left a loophole in many state laws that prohibited corporate farming, effectively allowing corporations to farm in these states as long as they contracted with local farm owners.
Non-farm entities
Many people also choose to include non-farming entities in their definitions of corporate farming. Beyond the farm contractors mentioned above, companies commonly considered part of the term include Cargill, Monsanto, and DuPont Pioneer, among others. These corporations do not have production farms, meaning they do not produce a significant amount of farm products. However, their role in producing and selling agricultural supplies and their purchase and processing of farm products often leads to them being grouped with corporate farms. Although grouping them this way is technically imprecise, it is widely considered substantively accurate, since including these companies in the term "corporate farming" captures their real influence over agriculture.
Arguments against corporate farming
Family farms maintain traditions, including environmental stewardship, and may take longer views than companies seeking profits. Family farmers may also have greater knowledge about soil and crop types, terrain, weather and other features specific to particular local areas of land; such knowledge can be passed from parent to child over generations and would be harder for corporate managers to acquire.
North America
In Canada, 17.4 percent of farms are owned by family corporations and 2.4 percent by non-family corporations. In Canada (as in some other jurisdictions) conversion of a sole proprietorship family farm to a family corporation can have tax planning benefits, and in some cases, the difference in combined provincial and federal taxation rates is substantial. Also, for farm families with significant off-farm income, incorporating the farm can provide some shelter from high personal income tax rates. Another important consideration can be some protection of the corporate shareholders from liability. Incorporating a family farm can also be useful as a succession tool, among other reasons because it can maintain a family farm as a viable operation where subdivision of the farm into smaller operations among heirs might result in farm sizes too small to be viable.
The 2012 US Census of Agriculture indicates that 5.06 percent of US farms are corporate farms. These include family corporations (4.51 percent) and non-family corporations (0.55 percent). Of the family farm corporations, 98 percent are small corporations, with 10 or fewer stockholders. Of the non-family farm corporations, 90 percent are small corporations, with 10 or fewer stockholders. Non-family corporate farms account for 1.36 percent of US farmland area. Family farms (including family corporate farms) account for 96.7 percent of US farms and 89 percent of US farmland area; a USDA study estimated that family farms accounted for 85 percent of US gross farm income in 2011. Other farmland in the US is accounted for by several other categories, including single proprietorships where the owner is not the farm operator, non-family partnerships, estates, trusts, cooperatives, collectives, institutional, research, experimental and American Indian Reservation farms.
In the US, the average size of a non-family corporate farm is 1078 acres, i.e. smaller than the average family corporate farm (1249 acres) and smaller than the average partnership farm (1131 acres).
US farm laws
To date, nine US states have enacted laws that restrict or prohibit corporate farming. The first such laws were enacted in the 1930s by Kansas and North Dakota. In the 1970s, similar laws were passed in Iowa, Minnesota, Missouri, South Dakota and Wisconsin. In 1982, after a failed attempt to pass an anti–corporate farming law, the citizens of Nebraska enacted by initiative a similar amendment to their state constitution. The citizens of South Dakota similarly amended their state constitution in 1998.
All nine laws have similar content. They all restrict corporate ability to own and operate on farmland. They all outline exceptions for specific types of corporations. Generally, family farm corporations are exempted, although certain conditions may have to be fulfilled for such exemption (e.g. one or more of: shareholders within a specified degree of kinship owning a majority of voting stock, no shareholders other than natural persons, limited number of shareholders, at least one family member residing on the farm). However, the laws vary significantly in how they define a corporate farm, and in the specific restrictions. Definitions of a farm can include any and all farm operations, or be dependent on the source of income, as in Iowa, where 60 percent of income must come from farm products. Additionally, these laws can target a corporation's use of the land, meaning that companies can own but not farm the land, or they may outright prohibit corporations from buying and owning farmland. The precise wording of these laws has significant impact on how corporations can participate in agriculture in these states with the ultimate goal of protecting and empowering the family farm.
Europe
Family farms across Europe are heavily protected by EU regulations, which have been driven in particular by French farmers and the French custom of splitting land inheritance between children, which produces many very small family farms. In regions such as East Anglia, UK, some agribusiness is practiced through company ownership, but most large UK land estates are still owned by wealthy families such as traditional aristocrats, as encouraged by favourable inheritance tax rules.
Most farming in the Soviet Union and its Eastern Bloc satellite states was collectivized. After the dissolution of those states via the revolutions of 1989 and the dissolution of the Soviet Union, decades of decollectivization and land reform have occurred, with the details varying substantially by country.
Asia
Pakistan
As Pakistan's population has surged, the country has gradually turned from a net food exporter into a net food importer, straining its economy and food security. In response, the Pakistani military has led an initiative to set up corporate farming, a project called the Green Pakistan Initiative, intended to drastically increase production of essential food supplies for both domestic consumption and export.
Africa
Corporate farming has begun to take hold in some African countries, where listed companies such as Zambia's Zambeef are operated as large businesses by MBA-trained managers. In some cases, this has prompted debates about land ownership where shares have been bought by international investors, especially from China.
Middle East
Some oil-rich Middle Eastern countries operate corporate farms, including large-scale irrigation of desert land for cropping, sometimes through partially or fully state-owned companies, especially with regard to water resource management.
See also
Agribusiness
Agricultural education
Food industry
History of agriculture
Intensive farming
List of agricultural universities and colleges
Organic farming
Outline of agriculture
Sustainable agriculture
United States Department of Agriculture
External links
"Family farming is a lifestyle" 2014 - International Year of Family Farming – European Economic and Social Committee
References
Commercial farming
Intensive farming
Corporations | Corporate farming | [
"Chemistry"
] | 1,930 | [
"Eutrophication",
"Intensive farming"
] |
156,411 | https://en.wikipedia.org/wiki/Galois%20connection | In mathematics, especially in order theory, a Galois connection is a particular correspondence (typically) between two partially ordered sets (posets). Galois connections find applications in various mathematical theories. They generalize the fundamental theorem of Galois theory about the correspondence between subgroups and subfields, discovered by the French mathematician Évariste Galois.
A Galois connection can also be defined on preordered sets or classes; this article presents the common case of posets.
The literature contains two closely related notions of "Galois connection". In this article, we will refer to them as (monotone) Galois connections and antitone Galois connections.
A Galois connection is rather weak compared to an order isomorphism between the involved posets, but every Galois connection gives rise to an isomorphism of certain sub-posets, as will be explained below.
The term Galois correspondence is sometimes used to mean a bijective Galois connection; this is simply an order isomorphism (or dual order isomorphism, depending on whether we take monotone or antitone Galois connections).
Definitions
(Monotone) Galois connection
Let (A, ≤) and (B, ≤) be two partially ordered sets. A monotone Galois connection between these posets consists of two monotone functions: F : A → B and G : B → A, such that for all a in A and b in B, we have
F(a) ≤ b if and only if a ≤ G(b).
In this situation, F is called the lower adjoint of G and G is called the upper adjoint of F. Mnemonically, the upper/lower terminology refers to where the function application appears relative to ≤. The term "adjoint" refers to the fact that monotone Galois connections are special cases of pairs of adjoint functors in category theory as discussed further below. Other terminology encountered here is left adjoint (respectively right adjoint) for the lower (respectively upper) adjoint.
An essential property of a Galois connection is that an upper/lower adjoint of a Galois connection uniquely determines the other:
F(a) is the least element y of B with a ≤ G(y), and
G(b) is the largest element x of A with F(x) ≤ b.
A consequence of this is that if F or G is bijective then each is the inverse of the other, i.e. F = G⁻¹.
Given a Galois connection with lower adjoint F and upper adjoint G, we can consider the compositions G ∘ F : A → A, known as the associated closure operator, and F ∘ G : B → B, known as the associated kernel operator. Both are monotone and idempotent, and we have a ≤ G(F(a)) for all a in A and F(G(b)) ≤ b for all b in B.
A Galois insertion of B into A is a Galois connection in which the kernel operator F ∘ G is the identity on B, and hence G is an order isomorphism of B onto the set of closed elements (G ∘ F)[A] of A.
Antitone Galois connection
The above definition is common in many applications today, and prominent in lattice and domain theory. However the original notion in Galois theory is slightly different. In this alternative definition, a Galois connection is a pair of antitone, i.e. order-reversing, functions F : A → B and G : B → A between two posets A and B, such that
b ≤ F(a) if and only if a ≤ G(b).
The symmetry of F and G in this version erases the distinction between upper and lower, and the two functions are then called polarities rather than adjoints. Each polarity uniquely determines the other, since
F(a) is the largest element b with a ≤ G(b), and
G(b) is the largest element a with b ≤ F(a).
The compositions G ∘ F : A → A and F ∘ G : B → B are the associated closure operators; they are monotone idempotent maps with the property a ≤ G(F(a)) for all a in A and b ≤ F(G(b)) for all b in B.
The implications of the two definitions of Galois connections are very similar, since an antitone Galois connection between A and B is just a monotone Galois connection between A and the order dual B^op of B. All of the below statements on Galois connections can thus easily be converted into statements about antitone Galois connections.
Examples
Bijections
The bijection of a pair of functions f : X → Y and g : Y → X, each other's inverse, forms a (trivial) Galois connection, as follows. Because the equality relation is reflexive, transitive and antisymmetric, it is, trivially, a partial order, making X and Y partially ordered sets. Since f(x) = y if and only if x = g(y), we have a Galois connection.
Monotone Galois connections
Floor; ceiling
A monotone Galois connection between the set of integers and the set of real numbers, each with its usual ordering, is given by the usual embedding function of the integers into the reals and the floor function truncating a real number to the greatest integer less than or equal to it. The embedding of integers is customarily done implicitly, but to show the Galois connection we make it explicit. So let F : ℤ → ℝ denote the embedding function, with F(n) = n, while G : ℝ → ℤ denotes the floor function, so G(x) = ⌊x⌋. The equivalence F(n) ≤ x ⟺ n ≤ G(x) then translates to
n ≤ x if and only if n ≤ ⌊x⌋.
This is valid because the variable n is restricted to the integers. The well-known properties of the floor function can be derived by elementary reasoning from this Galois connection.
The dual orderings give another monotone Galois connection, now with the ceiling function: x ≤ n if and only if ⌈x⌉ ≤ n.
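As a concrete check, the defining equivalences of both connections can be verified by brute force on a grid of sample values. The following minimal Python sketch is an illustration added for concreteness; the helper names and the sampled ranges are arbitrary choices, not standard notation.

```python
import math
from fractions import Fraction

# Monotone Galois connection between the integers and the reals:
# the embedding n |-> n is the lower adjoint, floor is the upper adjoint.
def embed(n: int) -> Fraction:
    return Fraction(n)

def floor_(x: Fraction) -> int:
    return math.floor(x)

def ceil_(x: Fraction) -> int:
    return math.ceil(x)

# Sample integers and rationals (exact rationals instead of floats,
# so the comparisons are not affected by rounding).
ints = range(-5, 6)
reals = [Fraction(p, q) for p in range(-12, 13) for q in (1, 2, 3, 4)]

# Defining property of the (embedding, floor) connection:
#   embed(n) <= x  iff  n <= floor(x)
assert all((embed(n) <= x) == (n <= floor_(x)) for n in ints for x in reals)

# Dual orderings give the (ceiling, embedding) connection:
#   ceil(x) <= n  iff  x <= embed(n)
assert all((ceil_(x) <= n) == (x <= embed(n)) for n in ints for x in reals)

print("floor and ceiling adjunctions verified on the sample grid")
```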
Power set; implication and conjunction
For an order-theoretic example, let L be some set, and let A and B both be the power set of L, ordered by inclusion. Pick a fixed subset K of L. Then the maps F and G, where F(X) = K ∩ X, and G(Y) = Y ∪ (L \ K), form a monotone Galois connection, with F being the lower adjoint. A similar Galois connection whose lower adjoint is given by the meet (infimum) operation can be found in any Heyting algebra. Especially, it is present in any Boolean algebra, where the two mappings can be described by F(x) = (a ∧ x) and G(y) = (y ∨ ¬a) = (a ⇒ y). In logical terms: "implication from a" is the upper adjoint of "conjunction with a".
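For a finite universe this adjunction can be checked exhaustively. The sketch below is illustrative only; the concrete universe L, the fixed subset K, and the function names mirror the example above but are otherwise arbitrary choices.

```python
from itertools import chain, combinations

def powerset(s):
    """All subsets of s, as frozensets."""
    s = list(s)
    return [frozenset(c) for c in chain.from_iterable(
        combinations(s, r) for r in range(len(s) + 1))]

L = {0, 1, 2, 3}          # the ambient set
K = {1, 3}                # a fixed subset of L

def F(X):                 # lower adjoint: "conjunction with K"
    return X & K

def G(Y):                 # upper adjoint: "implication from K"
    return Y | (L - K)

subsets = powerset(L)

# Defining property: F(X) <= Y  iff  X <= G(Y), where <= is inclusion.
assert all((F(X) <= Y) == (X <= G(Y)) for X in subsets for Y in subsets)

print("conjunction/implication adjunction verified on all",
      len(subsets) ** 2, "pairs of subsets")
```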
Lattices
Further interesting examples for Galois connections are described in the article on completeness properties. Roughly speaking, it turns out that the usual functions ∨ and ∧ are lower and upper adjoints to the diagonal map X → X × X. The least and greatest elements of a partial order are given by lower and upper adjoints to the unique function X → {1} into a one-element poset. Going further, even complete lattices can be characterized by the existence of suitable adjoints. These considerations give some impression of the ubiquity of Galois connections in order theory.
Transitive group actions
Let a group G act transitively on a set X and pick some point x in X. Consider
Ω, the set of blocks containing x. Further, let Σ consist of the subgroups of G containing the stabilizer of x.
Then, the correspondence Ω → Σ that sends a block B to its setwise stabilizer {g in G : gB = B}
is a monotone, one-to-one Galois connection. As a corollary, one can establish that doubly transitive actions have no blocks other than the trivial ones (singletons or the whole of X): this follows from the stabilizers being maximal in G in that case. See Doubly transitive group for further discussion.
Image and inverse image
If f : X → Y is a function, then for any subset M of X we can form the image F(M) = f[M] = {f(m) : m ∈ M}, and for any subset N of Y we can form the inverse image G(N) = f⁻¹[N] = {x ∈ X : f(x) ∈ N}. Then F and G form a monotone Galois connection between the power set of X and the power set of Y, both ordered by inclusion ⊆. There is a further adjoint pair in this situation: for a subset M of X, define H(M) = {y ∈ Y : f⁻¹[{y}] ⊆ M}. Then G and H form a monotone Galois connection between the power set of Y and the power set of X. In the first Galois connection, G is the upper adjoint, while in the second Galois connection it serves as the lower adjoint.
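Both adjoint pairs can be verified exhaustively for a small finite function. The following Python sketch is an illustration; the sets, the function f, and the helper names are arbitrary choices made for the example.

```python
from itertools import chain, combinations

def powerset(s):
    s = list(s)
    return [frozenset(c) for c in chain.from_iterable(
        combinations(s, r) for r in range(len(s) + 1))]

X = {0, 1, 2, 3}
Y = {"a", "b", "c"}
f = {0: "a", 1: "a", 2: "b", 3: "c"}   # an arbitrary function X -> Y

def image(M):       # lower adjoint of the first connection
    return frozenset(f[m] for m in M)

def preimage(N):    # upper adjoint of the first, lower adjoint of the second
    return frozenset(x for x in X if f[x] in N)

def H(M):           # upper adjoint of the second connection
    return frozenset(y for y in Y if all(f[x] != y or x in M for x in X))

PX, PY = powerset(X), powerset(Y)

# First connection: image(M) subset of N  iff  M subset of preimage(N)
assert all((image(M) <= N) == (M <= preimage(N)) for M in PX for N in PY)

# Second connection: preimage(N) subset of M  iff  N subset of H(M)
assert all((preimage(N) <= M) == (N <= H(M)) for M in PX for N in PY)

print("both adjoint pairs involving f verified")
```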
In the case of a quotient map q : G → G/N between algebraic objects (such as groups), this connection is called the lattice theorem: subgroups of G connect to subgroups of G/N, and the closure operator on subgroups of G is given by H ↦ HN.
Span and closure
Pick some mathematical object X that has an underlying set, for instance a group, ring, vector space, etc. For any subset S of X, let F(S) be the smallest subobject of X that contains S, i.e. the subgroup, subring or subspace generated by S. For any subobject U of X, let G(U) be the underlying set of U. (We can even take X to be a topological space, let F(S) be the closure of S, and take as "subobjects of X" the closed subsets of X.) Now F and G form a monotone Galois connection between subsets of X and subobjects of X, if both are ordered by inclusion. F is the lower adjoint.
Syntax and semantics
A very general comment of William Lawvere is that syntax and semantics are adjoint: take A to be the set of all logical theories (axiomatizations) reverse ordered by strength, and B the power set of the set of all mathematical structures. For a theory T, let Mod(T) be the set of all structures that satisfy the axioms T; for a set of mathematical structures S, let Th(S) be the minimum of the axiomatizations that approximate S (in first-order logic, this is the set of sentences that are true in all structures in S). We can then say that S is a subset of Mod(T) if and only if Th(S) logically entails T: the "semantics functor" Mod and the "syntax functor" Th form a monotone Galois connection, with semantics being the upper adjoint.
Antitone Galois connections
Galois theory
The motivating example comes from Galois theory: suppose L/K is a field extension. Let A be the set of all subfields of L that contain K, ordered by inclusion ⊆. If E is such a subfield, write Gal(L/E) for the group of field automorphisms of L that hold E fixed. Let B be the set of subgroups of Gal(L/K), ordered by inclusion ⊆. For such a subgroup H, define Fix(H) to be the field consisting of all elements of L that are held fixed by all elements of H. Then the maps E ↦ Gal(L/E) and H ↦ Fix(H) form an antitone Galois connection.
Algebraic topology: covering spaces
Analogously, given a path-connected topological space X, there is an antitone Galois connection between subgroups of the fundamental group π₁(X) and path-connected covering spaces of X. In particular, if X is semi-locally simply connected, then for every subgroup G of π₁(X), there is a covering space with G as its fundamental group.
Linear algebra: annihilators and orthogonal complements
Given an inner product space V, we can form the orthogonal complement F(X) = X⊥ of any subspace X of V. This yields an antitone Galois connection between the set of subspaces of V and itself, ordered by inclusion; both polarities are equal to F.
Given a vector space V and a subset X of V we can define its annihilator X⁰, consisting of all elements of the dual space V* of V that vanish on X. Similarly, given a subset Y of V*, we define its annihilator Y⁰ = {x ∈ V : φ(x) = 0 for all φ ∈ Y}. This gives an antitone Galois connection between the subsets of V and the subsets of V*.
Algebraic geometry
In algebraic geometry, the relation between sets of polynomials and their zero sets is an antitone Galois connection.
Fix a natural number n and a field K and let A be the set of all subsets of the polynomial ring K[x₁, ..., xₙ] ordered by inclusion ⊆, and let B be the set of all subsets of Kⁿ ordered by inclusion ⊆. If S is a set of polynomials, define the variety of zeros as
V(S) = {x ∈ Kⁿ : f(x) = 0 for all f ∈ S},
the set of common zeros of the polynomials in S. If U is a subset of Kⁿ, define I(U) as the ideal of polynomials vanishing on U, that is
I(U) = {f ∈ K[x₁, ..., xₙ] : f(x) = 0 for all x ∈ U}.
Then V and I form an antitone Galois connection.
The closure on Kⁿ is the closure in the Zariski topology, and if the field K is algebraically closed, then the closure on the polynomial ring is the radical of the ideal generated by S.
More generally, given a commutative ring R (not necessarily a polynomial ring), there is an antitone Galois connection between radical ideals in the ring and Zariski closed subsets of the affine variety Spec(R).
More generally, there is an antitone Galois connection between ideals in the ring and subschemes of the corresponding affine variety.
Connections on power sets arising from binary relations
Suppose and are arbitrary sets and a binary relation over and is given. For any subset of , we define Similarly, for any subset of , define Then and yield an antitone Galois connection between the power sets of and , both ordered by inclusion ⊆.
Up to isomorphism all antitone Galois connections between power sets arise in this way. This follows from the "Basic Theorem on Concept Lattices". Theory and applications of Galois connections arising from binary relations are studied in formal concept analysis. That field uses Galois connections for mathematical data analysis. Many algorithms for Galois connections can be found in the respective literature.
The general concept lattice in its primitive version incorporates both the monotone and antitone Galois connections to furnish its upper and lower bounds of nodes for the concept lattice, respectively.
Properties
In the following, we consider a (monotone) Galois connection (f, g), where f : A → B is the lower adjoint as introduced above. Some helpful and instructive basic properties can be obtained immediately. By the defining property of Galois connections, f(x) ≤ f(x) is equivalent to x ≤ g(f(x)), for all x in A. By a similar reasoning (or just by applying the duality principle for order theory), one finds that f(g(y)) ≤ y, for all y in B. These properties can be described by saying the composite f∘g is deflationary, while g∘f is inflationary (or extensive).
Now consider x, y in A such that x ≤ y. Then using the above one obtains x ≤ g(f(y)). Applying the basic property of Galois connections, one can now conclude that f(x) ≤ f(y). But this just shows that f preserves the order of any two elements, i.e. it is monotone. Again, a similar reasoning yields monotonicity of g. Thus monotonicity does not have to be included in the definition explicitly. However, mentioning monotonicity helps to avoid confusion about the two alternative notions of Galois connections.
Another basic property of Galois connections is the fact that f(g(f(x))) = f(x), for all x in A. Since g∘f is inflationary as shown above and f is monotone, we find that

f(g(f(x))) ≥ f(x).

On the other hand, since f∘g is deflationary, applying it to the element f(x) gives

f(g(f(x))) ≤ f(x).

This shows the desired equality. Furthermore, we can use this property to conclude that

f(g(f(g(y)))) = f(g(y)) and g(f(g(f(x)))) = g(f(x)),

i.e., f∘g and g∘f are idempotent.
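These identities are easy to verify numerically for a concrete monotone Galois connection. The following Python sketch (an illustration added here, not part of the original text) uses the adjunction between multiplication by 3 and floor division by 3 on the natural numbers, for which 3x ≤ y holds exactly when x ≤ y // 3.

```python
# Monotone Galois connection between the natural numbers and themselves:
#   lower adjoint  f(x) = 3*x,   upper adjoint  g(y) = y // 3,
# since 3*x <= y holds exactly when x <= y // 3.
f = lambda x: 3 * x
g = lambda y: y // 3

for x in range(100):
    assert x <= g(f(x))              # g o f is inflationary
    assert f(g(f(x))) == f(x)        # f o g o f = f
for y in range(100):
    assert f(g(y)) <= y              # f o g is deflationary
    assert g(f(g(y))) == g(y)        # g o f o g = g
    assert f(g(f(g(y)))) == f(g(y))  # f o g is idempotent

print("all identities hold on the sample range")
```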
It can be shown (see Blyth or Erné for proofs) that a function f is a lower (respectively upper) adjoint if and only if f is a residuated mapping (respectively residual mapping). Therefore, the notion of residuated mapping and monotone Galois connection are essentially the same.
Closure operators and Galois connections
The above findings can be summarized as follows: for a Galois connection, the composite g∘f is monotone (being the composite of monotone functions), inflationary, and idempotent. This states that g∘f is in fact a closure operator on A. Dually, f∘g is monotone, deflationary, and idempotent. Such mappings are sometimes called kernel operators. In the context of frames and locales, the composite g∘f is called the nucleus induced by f. Nuclei induce frame homomorphisms; a subset of a locale is called a sublocale if it is given by a nucleus.
Conversely, any closure operator c on some poset A gives rise to the Galois connection with lower adjoint f being just the corestriction of c to the image of c (i.e. as a surjective mapping onto the closure system c(A)). The upper adjoint g is then given by the inclusion of c(A) into A, that maps each closed element to itself, considered as an element of A. In this way, closure operators and Galois connections are seen to be closely related, each specifying an instance of the other. Similar conclusions hold true for kernel operators.
The above considerations also show that closed elements of A (elements x with g(f(x)) = x) are mapped by f to elements within the range of the kernel operator f∘g, and vice versa.
Existence and uniqueness of Galois connections
Another important property of Galois connections is that lower adjoints preserve all suprema that exist within their domain. Dually, upper adjoints preserve all existing infima. From these properties, one can also conclude monotonicity of the adjoints immediately. The adjoint functor theorem for order theory states that the converse implication is also valid in certain cases: especially, any mapping between complete lattices that preserves all suprema is the lower adjoint of a Galois connection.
In this situation, an important feature of Galois connections is that one adjoint uniquely determines the other. Hence one can strengthen the above statement to guarantee that any supremum-preserving map between complete lattices is the lower adjoint of a unique Galois connection. The main property to derive this uniqueness is the following: For every x in A, f(x) is the least element y of B such that x ≤ g(y). Dually, for every y in B, g(y) is the greatest x in A such that f(x) ≤ y. The existence of a certain Galois connection now implies the existence of the respective least or greatest elements, no matter whether the corresponding posets satisfy any completeness properties. Thus, when one adjoint of a Galois connection is given, the other can be defined via this same property.
On the other hand, some monotone function f is a lower adjoint if and only if each set of the form {x ∈ A : f(x) ≤ b}, for b in B, contains a greatest element. Again, this can be dualized for the upper adjoint.
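As an illustration of this uniqueness, the upper adjoint of a supremum-preserving map between finite power-set lattices can be computed directly from the property that g(b) is the greatest element a with f(a) ≤ b. The Python sketch below is an added example, not from the original text; the ground sets, the function h and the helper name upper_adjoint are arbitrary choices.

```python
from itertools import combinations

def powerset(base):
    """All subsets of base, as frozensets."""
    base = list(base)
    return [frozenset(c) for r in range(len(base) + 1)
            for c in combinations(base, r)]

A = powerset({1, 2})        # poset A: subsets of {1, 2}, ordered by inclusion
B = powerset({1, 2, 3})     # poset B: subsets of {1, 2, 3}, ordered by inclusion

h = {1: 1, 2: 3}            # an arbitrary function {1, 2} -> {1, 2, 3}
f = lambda S: frozenset(h[s] for s in S)   # direct image: preserves all unions

def upper_adjoint(f, A, B):
    """g(b) = greatest a in A with f(a) <= b (exists since f preserves suprema)."""
    def g(b):
        candidates = [a for a in A if f(a) <= b]
        # the union of all candidates is itself a candidate here, so the
        # largest candidate (by cardinality) is the greatest one
        return max(candidates, key=len)
    return g

g = upper_adjoint(f, A, B)

for a in A:
    for b in B:
        assert (f(a) <= b) == (a <= g(b))  # the defining Galois-connection property
print(g(frozenset({1, 2})))                # preimage of {1, 2} under h: frozenset({1})
```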
Galois connections as morphisms
Galois connections also provide an interesting class of mappings between posets which can be used to obtain categories of posets. Especially, it is possible to compose Galois connections: given Galois connections (f, g) between posets A and B and (f′, g′) between posets B and C, the composite (f′ ∘ f, g ∘ g′) is also a Galois connection. When considering categories of complete lattices, this can be simplified to considering just mappings preserving all suprema (or, alternatively, infima). Mapping complete lattices to their duals, these categories display a self-duality that is quite fundamental for obtaining other duality theorems. More special kinds of morphisms that induce adjoint mappings in the other direction are the morphisms usually considered for frames (or locales).
Connection to category theory
Every partially ordered set can be viewed as a category in a natural way: there is a unique morphism from x to y if and only if x ≤ y. A monotone Galois connection is then nothing but a pair of adjoint functors between two categories that arise from partially ordered sets. In this context, the upper adjoint is the right adjoint while the lower adjoint is the left adjoint. However, this terminology is avoided for Galois connections, since there was a time when posets were transformed into categories in a dual fashion, i.e. with morphisms pointing in the opposite direction. This led to a complementary notation concerning left and right adjoints, which today is ambiguous.
Applications in the theory of programming
Galois connections may be used to describe many forms of abstraction in the theory of abstract interpretation of programming languages.
Notes
References
The following books and survey articles include Galois connections using the monotone definition:
Brian A. Davey and Hilary A. Priestley: Introduction to Lattices and Order, Cambridge University Press, 2002.
Marcel Erné, Jürgen Koslowski, Austin Melton, George E. Strecker, A primer on Galois connections, in: Proceedings of the 1991 Summer Conference on General Topology and Applications in Honor of Mary Ellen Rudin and Her Work, Annals of the New York Academy of Sciences, Vol. 704, 1993, pp. 103–125. (Freely available online in various file formats; it presents many examples and results, as well as notes on the different notations and definitions that arose in this area.)
Some publications using the original (antitone) definition:
Thomas Scott Blyth, Lattices and Ordered Algebraic Structures, Springer, 2005.
Galois theory
Order theory
Abstract interpretation
Closure operators | Galois connection | [
"Mathematics"
] | 4,010 | [
"Order theory",
"Closure operators"
] |
156,428 | https://en.wikipedia.org/wiki/Microtechnology | Microtechnology is technology whose features have dimensions of the order of one micrometre (one millionth of a metre, or 10⁻⁶ metre, or 1 μm). It focuses on physical and chemical processes as well as the production or manipulation of structures with one-micrometre magnitude.
Development
Around 1970, scientists learned that by arraying large numbers of microscopic transistors on a single chip, microelectronic circuits could be built that dramatically improved performance, functionality, and reliability, all while reducing cost and increasing volume. This development led to the Information Revolution.
More recently, scientists have learned that not only electrical devices, but also mechanical devices, may be miniaturized and batch-fabricated, promising the same benefits to the mechanical world as integrated circuit technology has given to the electrical world. While electronics now provide the ‘brains’ for today's advanced systems and products, micro-mechanical devices can provide the sensors and actuators — the eyes and ears, hands and feet — which interface to the outside world.
Today, micromechanical devices are the key components in a wide range of products such as automobile airbags, ink-jet printers, blood pressure monitors, and projection display systems. It seems clear that in the not-too-distant future these devices will be as pervasive as electronics. The process has also become more precise, driving the dimensions of the technology down to the sub-micrometre range, as demonstrated by advanced microelectronic circuits that have reached below 20 nm.
Micro electromechanical systems
The term MEMS, for Micro Electro Mechanical Systems, was coined in the 1980s to describe new, sophisticated mechanical systems on a chip, such as micro electric motors, resonators, gears, and so on. Today, the term MEMS in practice is used to refer to any microscopic device with a mechanical function, which can be fabricated in a batch process (for example, an array of microscopic gears fabricated on a microchip would be considered a MEMS device but a tiny laser-machined stent or watch component would not). In Europe, the term MST for Micro System Technology is preferred, and in Japan MEMS are simply referred to as "micromachines". The distinctions in these terms are relatively minor and are often used interchangeably.
Though MEMS processes are generally classified into a number of categories – such as surface machining, bulk machining, LIGA, and EFAB – there are indeed thousands of different MEMS processes. Some produce fairly simple geometries, while others offer more complex 3-D geometries and more versatility. A company making accelerometers for airbags would need a completely different design and process to produce an accelerometer for inertial navigation. Changing from an accelerometer to another inertial device such as a gyroscope requires an even greater change in design and process, and most likely a completely different fabrication facility and engineering team.
MEMS technology has generated a tremendous amount of excitement, due to the vast range of important applications where MEMS can offer previously unattainable performance and reliability standards. In an age where everything must be smaller, faster, and cheaper, MEMS offers a compelling solution. MEMS have already had a profound impact on certain applications such as automotive sensors and inkjet printers. The emerging MEMS industry is already a multibillion-dollar market. It is expected to grow rapidly and become one of the major industries of the 21st century. Cahners In-Stat Group has projected sales of MEMS to reach $12B by 2005. The European NEXUS group projects even larger revenues, using a more inclusive definition of MEMS.
Microtechnology is often constructed using photolithography. Lightwaves are focused through a mask onto a surface. They solidify a chemical film. The soft, unexposed parts of the film are washed away. Then acid etches away the material not protected.
Microtechnology's most famous success is the integrated circuit. It has also been used to construct micromachinery. As an offshoot of researchers attempting to further miniaturize microtechnology, nanotechnology emerged in the 1980s, particularly after the invention of new microscopy techniques. These produced materials and structures with dimensions of 1–100 nm.
Items constructed at the microscopic level
The following items have been constructed on a scale of 1 micrometre using photolithography:
Electronics:
wires
resistors
transistors
thermionic valves
diodes
sensors
capacitors
Machinery:
electric motors
gears
levers
bearings
hinges
Fluidics:
valves
channels
pumps
turbines
See also
Microfabrication
References
External links
Institute for Micromachine and Microfabrication Research at Simon Fraser University
Nanotechnology
Semiconductor device fabrication
Technology by type | Microtechnology | [
"Materials_science",
"Engineering"
] | 978 | [
"Nanotechnology",
"Semiconductor device fabrication",
"Materials science",
"Microtechnology"
] |
156,431 | https://en.wikipedia.org/wiki/M%C3%BCller-Lyer%20illusion | The Müller-Lyer illusion is an optical illusion consisting of three stylized arrows. When viewers are asked to place a mark on the figure at the midpoint, they tend to place it more towards the "tail" end. The illusion was devised by Franz Carl Müller-Lyer (1857–1916), a German sociologist, in 1889.
A variation of the same effect (and the most common form in which it is seen today) consists of a set of arrow-like figures. Straight line segments of equal length comprise the "shafts" of the arrows, while shorter line segments (called the fins) protrude from the ends of the shaft. The fins can point inwards to form an arrow "head" or outwards to form an arrow "tail". The line segment forming the shaft of the arrow with two tails is perceived to be longer than that forming the shaft of the arrow with two heads.
Variation in perception
Research has shown that sensation of the Müller-Lyer illusion can vary. Around the turn of the 20th century, W. H. R. Rivers noted that indigenous people of the Australian Murray Island were less susceptible to the Müller-Lyer illusion than were Europeans. Rivers suggested that this difference may be because Europeans live in more rectilinear environments than the islanders. Similar results were also observed by John W. Berry in his work on Inuit, urban Scots, and the Temne people in the 1960s.
In 1963, Segall, Campbell and Herskovitz compared susceptibility to four different visual illusions in three population samples of Caucasians, twelve of Africans, and one from the Philippines. For the Müller-Lyer illusion, the mean fractional misperception of the length of the line segments varied from 1.4% to 20.3%. The three European-derived samples were the three most susceptible samples, while the San foragers of the Kalahari desert were the least susceptible.
In 1965, following a debate between Donald T. Campbell and Melville J. Herskovits on whether culture can influence such basic aspects of perception as the length of a line, they suggested that their student Marshall Segall investigate the problem. In their definitive paper of 1966, they investigated seventeen cultures and showed that people in different cultures differ substantially in how they experience the Müller-Lyer stimuli. They wrote that "European and American city dwellers have a much higher percentage of rectangularity in their environments than non-Europeans and so are more susceptible to that illusion."
They also used the word "carpentered" for the environments that Europeans mostly live in - characterized by straight lines, right angles, and square corners.
These conclusions were challenged in later work by Gustav Jahoda, who compared members of an African tribe living in a traditional rural environment with members of same group living in African cities. Here, no significant difference in susceptibility to the M-L illusion was found. Subsequent work by Jahoda suggested that retinal pigmentation may have a role in the differing perceptions on this illusion, and this was verified later by Pollack (1970). It is believed now that not "carpenteredness", but the density of pigmentation in the eye is related to susceptibility to the M-L illusion. Dark-skinned people often have denser eye pigmentation.
A later study was conducted in 1978 by Ahluwalia on children and young adults from Zambia. Subjects from rural areas were compared with subjects from urban areas. The subjects from urban areas were shown to be considerably more susceptible to the illusion, as were younger subjects. While this by no means confirms the carpentered world hypothesis as such, it provides evidence that differences in the environment can create differences in the perception of the Müller-Lyer illusion, even within a given culture. Experiments have been reported suggesting that pigeons perceive the standard Müller-Lyer illusion, but not the reversed. Experiments on parrots have also been reported with similar results.
Perspective explanation
One possible explanation, given by Richard Gregory, is that the Müller-Lyer illusion occurs because the visual system learns that the "angles in" configuration corresponds to a rectilinear object, such as the convex corner of a room, which is closer, and the "angles out" configuration corresponds to an object which is far away, such as the concave corner of a room. However, in a recent report Catherine Howe and Dale Purves contradicted Gregory's explanation:
Although Gregory's intuition about the empirical significance of the Müller-Lyer stimulus points in the right general direction (i.e., an explanation based on past experience with the sources of such stimuli), convex and concave corners contribute little if anything to the Müller-Lyer effect.
Neural nets in the visual system of human beings learn how to make a very efficient interpretation of 3D scenes. That is why when somebody goes away from us, we do not perceive them as getting shorter. And when we stretch one arm and look at the two hands we do not perceive one hand smaller than the other. Visual illusions are sometimes held to show us that what we see is an image created in our brain. Our brain supposedly projects the image of the smaller hand to its correct distance in our internal 3D model. This is what is called the size constancy mechanism hypothesis.
In the Müller-Lyer illusion, the visual system would in this explanation detect the depth cues, which are usually associated with 3D scenes, and incorrectly decide it is a 3D drawing. Then the size constancy mechanism would make us see an erroneous length of the object which, for a true perspective drawing, would be farther away.
In the perspective drawing in the figure, we see that in usual scenes the heuristic works quite well. The width of the rug should obviously be considered shorter than the length of the wall in the back.
Centroid explanation
According to the so-called centroid hypothesis, judgments of distance between visual objects are strongly affected by the neural computation of the centroids of the luminance profiles of the objects, in that the position of the centroid of an image determines its perceived location. Morgan et al. suggest that the visual procedure of centroid extraction is causally related to a spatial pooling of the positional signals evoked by the neighboring object parts. Though the integration coarsens the positional acuity, such pooling seems to be quite biologically substantiated since it allows fast and reliable assessment of the location of the visual object as a whole, irrespective of its size, the shape complexity, and illumination conditions. Concerning the Müller-Lyer and similar illusions, the pattern of neural excitation evoked by a contextual flank (e.g., the Müller-Lyer wings themselves) overlaps with that caused by the stimulus terminator (e.g., the wings' apex), thereby leading (due to the shift of the centroid of summed excitation) to its perceptual displacement. The crucial point in the centroid explanation regarding the positional shifts of the stimulus terminators in the direction of the centroids of contextual flanks was confirmed in psychophysical examination of illusory figures with rotating distractors. The relative displacement of all stimulus terminators leads to misjudgment of distances between them; that is, the illusion occurs as a side effect due to the necessarily low spatial resolution of the neural mechanism of assessment of the relative location of the visual objects. Besides, it was shown that the well-known asymmetry in manifestation of the wings-in and wings-out modifications of the Müller-Lyer illusion can be successfully explained by supplemental effects of the filled-space illusion.
References
External links
Müller-Lyer Illusion
Dynamic Müller-Lyer Illusion by Gianni A. Sarcone
Visual explanation of the color spreading effect in the Müller-Lyer illusion
The Müller-Lyer illusion explained by the statistics of image–source relationships
NAKAMURA Noriyuki (Müller-Lyer Illusion in pigeons)
The Muller-Lyer Illusion explained by Rochester Institute of Technology
Optical illusions | Müller-Lyer illusion | [
"Physics"
] | 1,645 | [
"Optical phenomena",
"Physical phenomena",
"Optical illusions"
] |
10,730,655 | https://en.wikipedia.org/wiki/NGC%20265 | NGC 265 is an open cluster of stars in the southern constellation of Tucana. It is located in the Small Magellanic Cloud, a nearby dwarf galaxy. The cluster was discovered by English astronomer John Herschel on April 11, 1834. J. L. E. Dreyer described it as, "faint, pretty small, round", and added it as the 265th entry in his New General Catalogue.
This cluster has an angular core radius of and a physical radius of approximately . It has a combined mass of around 4,200 times the mass of the Sun and is around 250 million years old. The metallicity of the cluster – what astronomers term the abundance of elements with higher atomic number than helium – is at around −0.62, or only 24% of that in the Sun. The turn-off mass for the cluster, when a star of that mass begins to evolve off the main sequence into a giant, is about 4.0 to .
See also
NGC 290
References
External links
ESA Hubble space telescope site: Hubble picture in information on NGC 265
HubbleSite NewsCenter: Information on NGC 265 and the Hubble picture
Open clusters
Small Magellanic Cloud
Tucana
0265
18340411 | NGC 265 | [
"Astronomy"
] | 246 | [
"Tucana",
"Constellations"
] |
10,730,840 | https://en.wikipedia.org/wiki/Light%20echo | A light echo is a physical phenomenon caused by light reflected off surfaces distant from the source, and arriving at the observer with a delay relative to this distance. The phenomenon is analogous to an echo of sound, but due to the much faster speed of light, it mostly manifests itself only over astronomical distances.
For example, a light echo is produced when a sudden flash from a nova is reflected off a cosmic dust cloud, and arrives at the viewer after a longer duration than it otherwise would have taken with a direct path. Because of their geometries, light echoes can produce the illusion of superluminal motion.
Explanation
Light echoes are produced when the initial flash from a rapidly brightening object such as a nova is reflected off intervening interstellar dust which may or may not be in the immediate vicinity of the source of the light. Light from the initial flash arrives at the viewer first, while light reflected from dust or other objects between the source and the viewer begins to arrive shortly afterward. Because this light has only travelled forward as well as away from the star, it produces the illusion of an echo expanding faster than the speed of light.
In the first illustration above, light following path A is emitted from the original source and arrives at the observer first. Light which follows path B is reflected off a part of the gas cloud at a point between the source and the observer, and light following path C is reflected off a part of the gas cloud perpendicular to the direct path. Although light following paths B and C appear to come from the same point in the sky to the observer, B is actually significantly closer. As a result, the echo of the event in an evenly distributed (spherical) cloud for example will appear to the observer to expand at a rate approaching or faster than the speed of light, because the observer may assume the light from B is actually the light from C.
All reflected light rays that originate from the flash and arrive at Earth together will have traveled the same distance. When the rays of light are reflected, the possible paths between the source and Earth that arrive at the same time correspond to reflections on an ellipsoid, with the origin of the flash and Earth as its two foci (see animation to the right). This ellipsoid naturally expands over time.
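The geometry can be made quantitative in the usual thin-dust-sheet approximation: for dust lying a distance z in front of the source (toward the observer), the echo seen a time t after the flash forms a ring whose true radius ρ satisfies ρ² = 2zct + (ct)², so its mean apparent expansion speed ρ/t can far exceed c even though every photon travels at exactly c. The following Python sketch is an added illustration, not from the article, and the distance chosen is arbitrary.

```python
# Paraboloid light-echo geometry (thin dust sheet a distance z in front of
# the source): a ring of radius rho is lit up t years after the flash, with
#     rho**2 = 2*z*c*t + (c*t)**2
# Working in light-years and years makes c = 1.
c = 1.0          # light-year per year
z = 100.0        # dust sheet 100 ly in front of the source (arbitrary choice)

def ring_radius(t):
    """True radius (ly) of the echo ring seen t years after the flash."""
    return (2 * z * c * t + (c * t) ** 2) ** 0.5

for t in (0.5, 1, 2, 5):
    rho = ring_radius(t)
    print(f"t = {t:4} yr   ring radius = {rho:6.1f} ly   "
          f"mean apparent speed = {rho / t:5.1f} c")
# For t much smaller than z/c the ring appears to expand far faster than light.
```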
Examples
V838 Monocerotis
The variable star V838 Monocerotis experienced a significant outburst in 2002 as observed by the Hubble Space Telescope. The outburst proved surprising to observers when the object appeared to expand at a rate far exceeding the speed of light as it grew from an apparent visual size of 4 to 7 light years in a matter of months.
Supernovae
Using light echoes, it is sometimes possible to see the faint reflections of historical supernovae. Astronomers calculate the ellipsoid which has Earth and a supernova remnant at its focal points to locate clouds of dust and gas at its boundary. Identification can be done using laborious comparisons of photos taken months or years apart, and spotting changes in the light rippling across the interstellar medium. By analyzing the spectra of reflected light, astronomers can discern chemical signatures of supernovae whose light reached Earth long before the invention of the telescope and compare the explosion with its remnants, which may be centuries or millennia old. The first recorded instance of such an echo was in 1936, but it was not studied in detail.
An example is supernova SN 1987A, the closest supernova in modern times. Its light echoes have aided in mapping the morphology of the immediate vicinity as well as in characterizing dust clouds lying further away but close to the line of sight from Earth.
Another example is the SN 1572 supernova observed on Earth in 1572, where in 2008, faint light-echoes were seen on dust in the northern part of the Milky Way.
Light echoes have also been used to study the supernova that produced the supernova remnant Cassiopeia A. The light from Cassiopeia A would have been visible on Earth around 1660, but went unnoticed, probably because dust obscured the direct view. Reflections from different directions allow astronomers to determine if a supernova was asymmetrical and shone more brightly in some directions than in others. The progenitor of Cassiopeia A has been suspected as being asymmetric, and looking at the light echoes of Cassiopeia A allowed for the first detection of supernova asymmetry in 2010.
Yet other examples are supernovae SN 1993J and SN 2014J.
Light echoes from the 1838-1858 Great Eruption of Eta Carinae were used to study this supernova imposter. A study from 2012, which used light echo spectra from the Great Eruption, found that the eruption was colder compared to other supernova imposters.
Cepheids
Light echoes were used to determine the distance to the Cepheid variable RS Puppis to an accuracy of 1%. Pierre Kervella at the European Southern Observatory described this measurement as so far "the most accurate distance to a Cepheid".
Nova Persei 1901
In 1939, French astronomer Paul Couderc published a study entitled "Les Auréoles Lumineuses des Novae" (Luminous Haloes of the Novae). Within this study, Couderc published the derivation of echo locations and time delays in the paraboloid, rather than ellipsoid, approximation of infinite distance. However, in his 1961 study, Y.K. Gulak queried Couderc's theories: "It is shown that there is an essential error in the proof according to which Couderc assumed the possibility of expansion of the bright ring (nebula) around Nova Persei 1901 with a velocity exceeding that of light." He continues: "The comparison of the formulas obtained by the author, with the conclusions and formulas of Couderc, shows that the coincidence of the parallax calculated according to Couderc's scheme, with parallaxes derived by other methods, could have been accidental."
ShaSS 622-073 system
The ShaSS 622-073 system is composed of the larger galaxy ShaSS 073 (seen in yellow in the image on the right) and the smaller galaxy ShaSS 622 (seen in blue) that are at the very beginning of a merger. The bright core of ShaSS 073 has excited with its radiation a region of gas within the disc of ShaSS 622; even though the core has faded over the last 30,000 years, the region still glows brightly as it re-emits the light.
Quasar light and ionisation echoes
Since 2009 objects known either as quasar light echoes or quasar ionisation echoes have been investigated. A well studied example of a quasar light echo is the object known as Hanny's Voorwerp (HsV).
HsV is made entirely of gas so hot – about 10,000 degrees Celsius – that astronomers felt it had to be illuminated by something powerful. After several studies of light and ionisation echoes, it is thought they are likely caused by the 'echo' of a previously-active AGN that has shut down. Kevin Schawinski, a co-founder of the website Galaxy Zoo, stated: "We think that in the recent past the galaxy IC 2497 hosted an enormously bright quasar. Because of the vast scale of the galaxy and the Voorwerp, light from that past still lights up the nearby Voorwerp even though the quasar shut down sometime in the past 100,000 years, and the galaxy's black hole itself has gone quiet." Chris Lintott, also a co-founder of Galaxy Zoo, stated: "From the point of view of the Voorwerp, the galaxy looks as bright as it would have before the black hole turned off – it's this light echo that has been frozen in time for us to observe." The analysis of HsV in turn has led to the study of objects called Voorwerpjes and Green bean galaxies.
Gallery
See also
Cherenkov radiation
I Zwicky 1#Supermassive black hole, the first known example of a light echo coming from behind a black hole.
References
External links
Join the Hunt for Supernova Light Echoes
Animation of the reflection ellipsoid
SuperMACHO project
Astronomical events | Light echo | [
"Astronomy"
] | 1,710 | [
"Astronomical events"
] |
10,731,502 | https://en.wikipedia.org/wiki/Honeycomb%20structure | Honeycomb structures are natural or man-made structures that have the geometry of a honeycomb to allow the minimization of the amount of used material to reach minimal weight and minimal material cost. The geometry of honeycomb structures can vary widely but the common feature of all such structures is an array of hollow cells formed between thin vertical walls. The cells are often columnar and hexagonal in shape. A honeycomb-shaped structure provides a material with minimal density and relative high out-of-plane compression properties and out-of-plane shear properties.
Man-made honeycomb structural materials are commonly made by layering a honeycomb material between two thin layers that provide strength in tension. This forms a plate-like assembly. Honeycomb materials are widely used where flat or slightly curved surfaces are needed and their high specific strength is valuable. They are widely used in the aerospace industry for this reason, and honeycomb materials in aluminum, fibreglass and advanced composite materials have been featured in aircraft and rockets since the 1950s. They can also be found in many other fields, from packaging materials in the form of paper-based honeycomb cardboard, to sporting goods like skis and snowboards.
Introduction
Natural honeycomb structures include beehives, honeycomb weathering in rocks, tripe, and bone.
Man-made honeycomb structures include sandwich-structured composites with honeycomb cores. Man-made honeycomb structures are manufactured by using a variety of different materials, depending on the intended application and required characteristics, from paper or thermoplastics, used for low strength and stiffness in low-load applications, to aluminum or fiber-reinforced plastics, used for high strength and stiffness in high-performance applications. The strength of laminated or sandwich panels depends on the size of the panel, facing material used and the number or density of the honeycomb cells within it. Honeycomb composites are used widely in many industries, from aerospace industries, automotive and furniture to packaging and logistics.
The material takes its name from its visual resemblance to a bee's honeycomb – a hexagonal sheet structure.
History
The hexagonal comb of the honey bee has been admired and wondered about from ancient times. The first man-made honeycomb, according to Greek mythology, is said to have been manufactured by Daedalus from gold by lost wax casting more than 3000 years ago. Marcus Varro reports that the Greek geometers Euclid and Zenodorus found that the hexagon shape makes most efficient use of space and building materials. The interior ribbing and hidden chambers in the dome of the Pantheon in Rome are an early example of a honeycomb structure.
Galileo Galilei discusses in 1638 the resistance of hollow solids: "Art, and nature even more, makes use of these in thousands of operations in which robustness is increased without adding weight, as is seen in the bones of birds and in many stalks that are light and very resistant to bending and breaking”.
Robert Hooke discovers in 1665 that the natural cellular structure of cork is similar to the hexagonal honeybee comb, and Charles Darwin states in 1859 that "the comb of the hive-bee, as far as we can see, is absolutely perfect in economizing labour and wax".
The first paper honeycomb structures might have been made by the Chinese 2000 years ago for ornaments, but no reference for this has been found. Paper honeycombs and the expansion production process were invented in Halle/Saale in Germany by Hans Heilbrun in 1901 for decorative applications. The first honeycomb structures made from corrugated metal sheets were proposed for beekeeping in 1890. For the same purpose, as foundation sheets to harvest more honey, a honeycomb moulding process using a paper paste glue mixture was patented in 1878. The three basic techniques for honeycomb production that are still used today—expansion, corrugation and moulding—were already developed by 1901 for non-sandwich applications.
Hugo Junkers first explored the idea of a honeycomb core within a laminate structure. He proposed and patented the first honeycomb cores for aircraft application in 1915. He described in detail his concept to replace the fabric covered aircraft structures by metal sheets and reasoned that a metal sheet can also be loaded in compression if it is supported at very small intervals by arranging side by side a series of square or rectangular cells or triangular or hexagonal hollow bodies. The problem of bonding a continuous skin to cellular cores led Junkers later to the open corrugated structure, which could be riveted or welded together.
The first use of honeycomb structures for structural applications was independently proposed for building applications and published as early as 1914. In 1934 Edward G. Budd patented a welded steel honeycomb sandwich panel made from corrugated metal sheets, and in 1937 Claude Dornier aimed to solve the core-skin bonding problem by rolling or pressing a skin which is in a plastic state into the core cell walls. The first successful structural adhesive bonding of honeycomb sandwich structures was achieved by Norman de Bruyne of Aero Research Limited, who patented an adhesive with the right viscosity to form resin fillets on the honeycomb core in 1938. The North American XB-70 Valkyrie made extensive use of stainless steel honeycomb panels using a brazing process they developed.
A summary of the important developments in the history of honeycomb technology is given below:
60 BC Diodorus Siculus reports a golden honeycomb manufactured by Daedalus via lost wax casting.
36 BC Marcus Varro reports most efficient use of space and building materials by hexagonal shape.
126 The Pantheon was rebuilt in Rome using a coffer structure, sunken panel in the shape of a square structure, to support its dome.
1638 Galileo Galilei discusses hollow solids and their increase of resistance without adding weight.
1665 Robert Hooke discovers that the natural cellular structure of cork is similar to the hexagonal honeybee comb.
1859 Charles Darwin states that the comb of the hive-bee is absolutely perfect in economizing labour and wax.
1877 F. H. Küstermann invents a honeycomb moulding process using a paper paste glue mixture.
1890 Julius Steigel invents the honeycomb production process from corrugated metal sheets.
1901 Hans Heilbrun invents the hexagonal paper honeycombs and the expansion production process.
1914 R. Höfler and S. Renyi patent the first use of honeycomb structures for structural applications.
1915 Hugo Junkers patents the first honeycomb cores for aircraft application.
1931 George Thomson proposes to use decorative expanded paper honeycombs for lightweight plasterboard panels.
1934 Edward G. Budd patents welded steel honeycomb sandwich panel from corrugated metal sheets.
1937 Claude Dornier patents a honeycomb sandwich panel with skins pressed in a plastic state into the core cell walls.
1938 Norman de Bruyne patents the structural adhesive bonding of honeycomb sandwich structures.
1941 John D. Lincoln proposes the use of expanded paper honeycombs for aircraft radomes.
1948 Roger Steele applies the expansion production process using fiber reinforced composite sheets.
1969 Boeing 747 incorporates extensive fire-resistant honeycombs from Hexcel Composites using DuPont's Nomex aramid fiber paper.
1980s Thermoplastic honeycombs produced by extrusion processes are introduced.
Manufacture
The three traditional honeycomb production techniques, expansion, corrugation, and moulding, were all developed by 1901 for non-sandwich applications. For decorative applications the expanded honeycomb production reached a remarkable degree of automation in the first decade of the 20th century.
Today honeycomb cores are manufactured via the expansion process and the corrugation process from composite materials such as glass-reinforced plastic (also known as fiberglass), carbon fiber reinforced plastic, Nomex aramide paper reinforced plastic, or from a metal (usually aluminum).
Honeycombs from metals (like aluminum) are today produced by the expansion process. Continuous processes of folding honeycombs from a single aluminum sheet after cutting slits had been developed already around 1920.
Continuous in-line production of metal honeycomb can be done from metal rolls by cutting and bending.
Thermoplastic honeycomb cores (usually from polypropylene) are usually made by extrusion processed via a block of extruded profiles or extruded tubes from which the honeycomb sheets are sliced.
Recently a new, unique process to produce thermoplastic honeycombs has been implemented, allowing continuous production of a honeycomb core as well as in-line production of honeycombs with direct lamination of skins into a cost-efficient sandwich panel.
Applications
Composite honeycomb structures have been used in numerous engineering and scientific applications.
More recent developments show that honeycomb structures are also advantageous in applications involving nanohole arrays in anodized alumina, microporous arrays in polymer thin films, activated carbon honeycombs, and photonic band gap honeycomb structures.
Aerodynamics
A honeycomb mesh is often used in aerodynamics to reduce or to create wind turbulence. It is also used to obtain a standard profile in a wind tunnel (temperature, flow speed). A major factor in choosing the right mesh is the length ratio (length vs honeycomb cell diameter) L/d.
Length ratio < 1:
Honeycomb meshes of low length ratio can be used on a vehicle's front grille. Besides the aesthetic reasons, these meshes are used as screens to get a uniform profile and to reduce the intensity of turbulence.
Length ratio >> 1:
Honeycomb meshes of large length ratio reduce lateral turbulence and eddies of the flow. Early wind tunnels used them with no screens; unfortunately, this method introduced high turbulence intensity in the test section. Most modern tunnels use both honeycomb and screens.
While aluminium honeycombs are common use in the industry, other materials are offered for specific applications. People using metal structures should take care of removing burrs as they can introduce additional turbulences. Polycarbonate structures are a low-cost alternative.
The honeycombed, screened center of this open-circuit air intake for Langley's first wind tunnel ensured a steady, non-turbulent flow of air. Two mechanics pose near the entrance end of the actual tunnel, where air was pulled into the test section through a honeycomb arrangement to smooth the flow.
Honeycomb is not the only cross-section available in order to reduce eddies in an airflow. Square, rectangular, circular and hexagonal cross-sections are other choices available, although honeycomb is generally the preferred choice.
Properties
In combination with two skins applied on the honeycomb, the structure offers a sandwich panel with excellent rigidity at minimal weight. The behavior of the honeycomb structures is orthotropic, meaning the panels react differently depending on the orientation of the structure. It is therefore necessary to distinguish between the directions of symmetry, the so-called L and W-direction. The L-direction is the strongest and the stiffest direction. The weakest direction is at 60° from the L-direction (in the case of a regular hexagon) and the most compliant direction is the W-direction.
Another important property of honeycomb sandwich core is its compression strength. Due to the efficient hexagonal configuration, where walls support each other, compression strength of honeycomb cores is typically higher (at same weight) compared to other sandwich core structures such as, for instance, foam cores or corrugated cores.
The mechanical properties of honeycombs depend on their cell geometry, the properties of the material from which the honeycomb is constructed (often referred to as the solid), which include the Young's modulus, yield stress, and fracture stress of the material, and the relative density of the honeycomb (the density of the honeycomb normalized by that of the solid, ρ*/ρs). The ratios of the effective elastic moduli of a low-density honeycomb to the solid's Young's modulus, e.g., E*/Es and G*/Es, are independent of the solid. The mechanical properties of honeycombs will also vary based on the direction in which the load is applied.
In-plane loading: Under in-plane loading, it is often assumed that the wall thickness of the honeycomb is small compared to the length of the wall. For a regular honeycomb, the relative density is proportional to the wall thickness to wall length ratio (t/L) and the Young's modulus is proportional to (t/L)^3. Under high enough compressive load, the honeycomb reaches a critical stress and fails due to one of the following mechanisms – elastic buckling, plastic yielding, or brittle crushing. The mode of failure is dependent on the material of the solid which the honeycomb is made of. Elastic buckling of the cell walls is the mode of failure for elastomeric materials, ductile materials fail due to plastic yielding, and brittle crushing is the mode of failure when the solid is brittle. The elastic buckling stress is proportional to the relative density cubed, plastic collapse stress is proportional to relative density squared, and brittle crushing stress is proportional to relative density squared. Following the critical stress and failure of the material, a plateau stress is observed in the material, in which increases in strain are observed while the stress of the honeycomb remains roughly constant. Once a certain strain is reached, the material will begin to undergo densification as further compression pushes the cell walls together.
Out-of-plane loading: Under out-of-plane loading, the out-of-plane Young's modulus of a regular hexagonal honeycomb is proportional to the relative density of the honeycomb. The elastic buckling stress is proportional to (t/L)^3 while the plastic buckling stress is proportional to (t/L)^(5/3).
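These proportionalities can be turned into rough numbers with the commonly quoted Gibson–Ashby relations for a regular hexagonal honeycomb. The Python sketch below is an added illustration, not from the article; the prefactors (relative density ≈ (2/√3)·(t/L), in-plane modulus ratio ≈ 2.3·(t/L)³, out-of-plane ratio ≈ relative density) are the usual thin-wall textbook estimates, and the aluminium-like solid properties are arbitrary.

```python
import math

# Gibson-Ashby estimates for a regular hexagonal honeycomb with wall
# thickness t and wall length L (thin-wall limit).
E_solid = 70e9      # Pa, roughly aluminium (illustrative value)
rho_solid = 2700.0  # kg/m^3

def honeycomb_estimates(t_over_L):
    rel_density = (2 / math.sqrt(3)) * t_over_L
    return {
        "relative density": rel_density,
        "density (kg/m^3)": rel_density * rho_solid,
        "in-plane E* (Pa)": 2.3 * t_over_L ** 3 * E_solid,
        "out-of-plane E* (Pa)": rel_density * E_solid,
    }

for t_over_L in (0.01, 0.02, 0.05):
    est = honeycomb_estimates(t_over_L)
    print(f"t/L = {t_over_L}: " +
          ", ".join(f"{k} = {v:.3g}" for k, v in est.items()))
# Note the cubic penalty in-plane versus the linear scaling out-of-plane,
# which is why honeycomb cores are normally loaded through their thickness.
```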
The shape of the honeycomb cell is often varied to meet different engineering applications. Shapes that are commonly used besides the regular hexagonal cell include triangular cells, square cells, and circular-cored hexagonal cells, and circular-cored square cells. The relative densities of these cells will depend on their new geometry.
See also
Lightening holes
Metal foam
Hollow structural section
Composite material
Sandwich structured composite
Sandwich plate system
Timoshenko beam theory
Plate theory
Sandwich panel
Triangle structure
References
Buildings and structures by type
Composite materials
Aerospace materials
Pantheon, Rome | Honeycomb structure | [
"Physics",
"Engineering"
] | 2,923 | [
"Buildings and structures by type",
"Aerospace materials",
"Composite materials",
"Materials",
"Aerospace engineering",
"Matter",
"Architecture"
] |
10,732,685 | https://en.wikipedia.org/wiki/Phenylalanine%20racemase%20%28ATP-hydrolysing%29 | The enzyme phenylalanine racemase (, phenylalanine racemase, phenylalanine racemase (adenosine triphosphate-hydrolysing), gramicidin S synthetase I) is the enzyme that acts on amino acids and derivatives. It activates both the L & D stereo isomers of phenylalanine to form L-phenylalanyl adenylate and D-phenylalanyl adenylate, which are bound to the enzyme. These bound compounds are then transferred to the thiol group of the enzyme followed by conversion of its configuration, the D-isomer being the more favorable configuration of the two, with a 7 to 3 ratio between the two isomers. The racemisation reaction of phenylalanine is coupled with the highly favorable hydrolysis of adenosine triphosphate (ATP) to adenosine monophosphate (AMP) and pyrophosphate (PP), thermodynamically allowing it to proceed. This reaction is then drawn forward by further hydrolyzing PP to inorganic phosphate (Pi), via Le Chatelier's principle.
Other names
phenylalanine racemase
phenylalanine racemase (adenosine triphosphate-hydrolysing)
gramicidin S synthetase I
Pathway
Phenylalanine Metabolism
Substrate
L – Phenylalanine
Product
D - Phenylalanine
Cofactor
Pyridoxal-phosphate (active form of vitamin B6)
Links to disease
Problems in the digestion of phenylalanine (phe) to tyrosine (tyr) lead to the buildup of both phe and phenylpyruvate, in a disease called Phenylketonuria (PKU). These two compounds build up in the blood stream and cerebral spinal fluid, which can lead to mental retardation if left untreated. Treatment consists of a restricted diet of foods that contain phe or compounds that can breakdown into phe. Children in the US are routinely tested for this at birth. For more information see the Phenylketonuria page or the link below.
Quick facts
pH Range = 7.2 – 8.6
Equilibrium Ratio:L-Phe:D-Phe = 3:7
Specific Activity: 0.019
The reaction
See also
Phenylalanine
Racemase
Phenylketonuria
References
External links
Protein Data Bank 1amu
Metabolism
EC 5.1.1 | Phenylalanine racemase (ATP-hydrolysing) | [
"Chemistry",
"Biology"
] | 533 | [
"Biochemistry",
"Metabolism",
"Cellular processes"
] |
10,733,441 | https://en.wikipedia.org/wiki/Tyrosol | Tyrosol is an organic compound with the formula . Classified as a phenylethanoid, a derivative of phenethyl alcohol, it is found in a variety of natural sources. The compound is a colorless solid. The principal source in the human diet is olive oil.
Research
As an antioxidant, tyrosol may protect cells against injury due to oxidation in vitro. Although it is not as potent as other antioxidants present in olive oil (e.g., hydroxytyrosol), its higher concentration and good bioavailability indicate that it may have an important overall effect.
Tyrosol may also be cardioprotective. Tyrosol-treated animals showed significant increase in the phosphorylation of Akt, eNOS, and FOXO3a. In addition, tyrosol also induced the expression of the protein SIRT1 in the heart after myocardial infarction (MI) in a rat MI model.
Tyrosol forms esters with a variety of organic acids. For example, oleocanthal is the elenolic acid ester of tyrosol.
See also
Tyrosinol
Hydroxytyrosol
Salidroside, a glucoside of tyrosol
References
Phenylethanoids
Phenol antioxidants
4-Hydroxyphenyl compounds | Tyrosol | [
"Chemistry"
] | 286 | [
"Biomolecules by chemical classification",
"Phenylethanoids"
] |
10,733,467 | https://en.wikipedia.org/wiki/Central%20Equipment%20Identity%20Register | A Central Equipment Identity Register (CEIR) is a database of mobile equipment identifiers (IMEI – for networks of the GSM standard, MEID – for networks of the CDMA standard). Such an identifier is assigned to each SIM slot of the mobile device. An IMEI can be placed on one of several kinds of lists: White, for devices that are allowed to register on the cellular network; Black, for devices that are prohibited from registering on the cellular network; and Grey, for devices in an intermediate status (when it is not yet determined whether the device should be placed on the black or the white list).
Depending on the rules of mobile equipment registration in a country the CEIR database may contain other lists or fields beside IMEI. For example, the subscriber number (MSISDN), which is bound to the IMEI, the ID of the individual (passport data, National ID, etc.) who registered IMEI in the database, details of the importer who brought the device into the country, etc.
History
Originally, the abbreviation CEIR stood for the IMEI Database created and provided by the GSM Association. It was proposed to blacklist the IMEIs of stolen or lost phones. It was assumed that any MNO would be able to receive this list to block the registration of such devices on their network. Thus, a stolen phone, once blacklisted in the GSMA CEIR, cannot be used on a large number of cellular networks, which means that the theft of mobile devices becomes pointless. However, it soon became clear that the MNOs were not going to do this on their own initiative, because if many phones stop working in one network but work in another, it puts that operator at a disadvantage and can lead to an outflow of subscribers. It became clear that the blocking of stolen devices should be introduced simultaneously in all mobile networks of the country by legislative measures at the initiative of the communications regulator. In this case, as a rule, a national IMEI database is created, which contains general lists of blocked IMEIs. Since registration in the cellular operator's network is directly blocked by a network node called the EIR (Equipment Identity Register), the system that contains the national IMEI base became known as the Central EIR (CEIR). To avoid confusion, the database of the GSM Association was renamed to IMEI Database – IMEI DB (this happened in 2003–2008, see "Document History" in the IMEI Database File Format Specification). Also, a common IMEI database for several EIRs is sometimes called an SEIR (Shared EIR).
In each country, the CEIR can interact with IMEI DB differently. National CEIR may not communicate with IMEI DB at all. Firstly, it is separately decided whether CEIR will send information about its blacklist to IMEI DB (which IMEIs are placed in it or removed from there). Secondly, upon receipt of the blacklist from IMEI DB, the regulator decides from which countries it will receive it (IMEI DB stores the information exactly who blacklisted the IMEI). For example, you can get a list from neighboring countries, from countries in your region, from around the world.
In addition to the blacklist, the GSMA is developing a list of IMEIs allocated to manufacturers for use in their devices. The manufacturer for each new device model gets at least one TAC (Type Allocation Code) allocated by GSMA, consisting of 8 digits, to which he can add a 6-digit serial number to obtain the IMEI. Thus, with one TAC, a manufacturer can release up to 1 million devices with a unique IMEI. Usually, CEIR receives a list of allocated TACs from the GSMA, since if the first 8 digits of the IMEI of a device are not in this list, this is a sign that it is counterfeit.
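For illustration, the following Python sketch shows the kind of structural check a CEIR or EIR can apply before consulting its lists: it validates the Luhn check digit that forms the 15th digit of an IMEI and extracts the 8-digit TAC. This is an added example, not a description of any specific national system; the TAC whitelist and the sample IMEIs are made up, and a real deployment would use the GSMA-allocated TAC list.

```python
def luhn_valid(imei: str) -> bool:
    """True if the 15-digit string passes the Luhn check used for IMEIs."""
    if len(imei) != 15 or not imei.isdigit():
        return False
    total = 0
    for i, ch in enumerate(reversed(imei)):
        d = int(ch)
        if i % 2 == 1:              # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def with_check_digit(body14: str) -> str:
    """Append the Luhn check digit to an 8-digit TAC plus 6-digit serial."""
    return next(body14 + str(d) for d in range(10) if luhn_valid(body14 + str(d)))

ALLOCATED_TACS = {"35332211"}       # hypothetical GSMA-allocated TACs
BLACKLIST = set()                   # IMEIs of stolen or lost devices

def check_imei(imei: str) -> str:
    if not luhn_valid(imei):
        return "reject: malformed IMEI"
    if imei[:8] not in ALLOCATED_TACS:
        return "grey-list: unknown TAC (possible counterfeit)"
    if imei in BLACKLIST:
        return "reject: blacklisted"
    return "allow"

good = with_check_digit("35332211" + "000042")
print(good, "->", check_imei(good))
print("123456789012345", "->", check_imei("123456789012345"))
```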
If the central database of identifiers does not work with GSM networks, but with CDMA, then for the same purposes it is necessary to interact with another worldwide database that contains MEIDs – MEID Database.
The system that directly blocks the registration of a mobile device on a cellular network is the EIR. Each MNO must have at least one EIR, to which IMEI check requests (CheckIMEI) are sent when a device registers on the network. A typical EIR and CEIR interaction scheme:
The CEIR accumulates black, white, and grey lists using various data sources and verification methods.
These lists are periodically transmitted to all EIRs.
EIR uses them when processing every CheckIMEI request to determine whether to allow the device on the network or not.
EIR can transmit some data to the CEIR database too. Usually, changes in a grey list – new IMEIs on the network that are not in any list – are transmitted from EIR to CEIR.
In addition to synchronizing lists across multiple networks, the main function of the CEIR is to implement the scenarios for changing these lists. This usually requires interaction with various IT systems (databases) of other organizations and/or with subscribers. Examples of such scenarios:
Whitelisting the IMEI of devices imported by the legal entity
Whitelisting the IMEI of devices manufactured domestically
Whitelisting the IMEI of devices imported by individual
Blacklisting the IMEI of stolen/lost devices
Binding IMEI to the subscriber's number and, vice versa, unbinding IMEI from the subscriber
System implementation results
The goals and results of CEIR implementation in a country are usually:
Reducing mobile phone theft
Reducing the import of devices stolen in other countries
Reducing the presence of counterfeit devices on the market (null IMEI, incorrect IMEI, changed IMEI)
Reducing illegal imports of mobile devices (increase in the collection of customs duties)
Additionally, CEIR most often contributes to the solution of such problems:
Combating various mobile fraud schemes
Obtaining more accurate statistics on the state of the mobile communications market for the regulator
Fight against terrorism (the ability to block the device at once in all mobile networks of the country).
Known results achieved in some countries:
Great Britain – reducing mobile phone theft.
Turkey – reducing mobile phone theft, decreasing the current account deficit of Turkey and maximizing tax revenues.
Uzbekistan – reducing illegal ("black") imports of mobile devices by 98%, increase in revenues from the import of mobile devices by 700%.
Kenya – disposing the market of counterfeit mobile equipment.
Azerbaijan – disposing the market of counterfeit mobile equipment.
Ukraine – increase in legally imported mobile devices by 95%, increase in revenues from the import of mobile devices.
CEIR and EIR manufacturers
Some countries have used local developers to implement CEIR for their country (Great Britain, Turkey, India, and Azerbaijan).
EIR is a system that is standardized in 2G-5G networks. Such a system may be established in a mobile network even if it does not use a blacklist and there is no CEIR in the country. Some developers of MNO signalling cores include an EIR in their complex solution. However, its standard capabilities are usually lacking for the specific requirements that arise when implementing a CEIR.
See also
The IMEI article has this relatively detailed CEIR/blacklist section
References
Mobile phones
Mobile security
Wireless
GSM standard
Regulation
Import
Databases | Central Equipment Identity Register | [
"Technology",
"Engineering"
] | 1,489 | [
"Mobile security",
"Cybersecurity engineering",
"Wireless",
"Telecommunications engineering"
] |
10,735,807 | https://en.wikipedia.org/wiki/Joyce%20Kilmer-Slickrock%20Wilderness | Joyce Kilmer-Slickrock Wilderness, created in 1975, covers in the Nantahala National Forest in western North Carolina and the Cherokee National Forest in eastern Tennessee, in the watersheds of the Slickrock and Little Santeetlah Creeks. It is named after Joyce Kilmer, author of "Trees." The Little Santeetlah and Slickrock watersheds contain of old growth forest, one of the largest tracts in the United States east of the Mississippi River.
The Babcock Lumber Company logged roughly two-thirds of the Slickrock Creek watershed before the construction of Calderwood Dam in 1922 flooded the company's railroad access and put an end to logging operations in the area. In the 1930s, the U.S. Veterans of Foreign Wars asked the U.S. Forest Service to create a memorial forest for Kilmer, a poet and journalist who had been killed in World War I. After considering millions of acres of forest land throughout the U.S., the Forest Service chose an undisturbed patch along Little Santeetlah Creek, which it dedicated as the Joyce Kilmer Memorial Forest in 1936.
The sources of both Slickrock Creek and Little Santeetlah Creek are located high in the Unicoi Mountains, on opposite slopes of Bob Stratton Bald, a grassy bald overlooking the southwest corner of the Joyce Kilmer-Slickrock Wilderness. Slickrock Creek rises on Stratton's northwestern slope and flows northeastward to its mouth along the Little Tennessee River. Little Santeetlah rises on Stratton's southeastern slope and flows southeastward to its mouth along Santeetlah Creek.
The Joyce Kilmer Memorial Forest along Little Santeetlah Creek is a rare example of an old growth cove hardwood forest, an extremely diverse forest type unique to the Appalachian Mountains. Although there are many types of trees in Joyce Kilmer, dominant species include poplar, hemlock, red and white oak, basswood, beech, and sycamore. Many of the trees in Joyce Kilmer are over 400 years old. The largest rise to heights of over and have circumferences of up to . The Slickrock Creek basin is covered primarily by a mature second-growth cove hardwood forest, although a substantial old growth stand still exists in its upper watershed.
The Joyce Kilmer-Slickrock Wilderness borders the Citico Creek Wilderness, which lies within the Cherokee National Forest in Tennessee.
See also
Joyce Kilmer Memorial Forest
List of U.S. Wilderness Areas
List of old growth forests
Wilderness Act
References
External links
Joyce Kilmer-Slickrock Wilderness, Wilderness.net website
Joyce Kilmer Memorial Forest, Graham County, North Carolina
Webcam View of Joyce Kilmer - Slickrock Wilderness
Cherokee National Forest
Old-growth forests
Wilderness areas of North Carolina
Wilderness areas of Tennessee
Wilderness areas of the Appalachians
Protected areas of Graham County, North Carolina
Protected areas of Monroe County, Tennessee
Protected areas established in 1975
Nantahala National Forest
1975 establishments in North Carolina
1975 establishments in Tennessee | Joyce Kilmer-Slickrock Wilderness | [
"Biology"
] | 602 | [
"Old-growth forests",
"Ecosystems"
] |
10,736,521 | https://en.wikipedia.org/wiki/Succinylmonocholine | Succinylmonocholine is an ester of succinic acid and choline created by the metabolism of suxamethonium chloride.
See also
Succinic acid
Choline
References
Choline esters
Carboxylic acids
Quaternary ammonium compounds | Succinylmonocholine | [
"Chemistry"
] | 57 | [
"Carboxylic acids",
"Functional groups"
] |
10,737,185 | https://en.wikipedia.org/wiki/Preservation%20of%20magnetic%20audiotape | Preservation of magnetic audiotape comprises techniques for handling, cleaning and storage of magnetic audiotapes in an archival repository. Multiple types of magnetic media exist but are mainly in the form of open reels or enclosed cassettes. Although digitization of materials on fragile magnetic media in library and information science is a common practice, there remains a need for conserving the actual physical magnetic tape and playback equipment as artifacts.
Structure of magnetic tape
The first magnetic tapes were manufactured by BASF in Germany in 1932. They were designed with iron carbonyl as the magnetic pigment mixed into the cellulose acetate carrier. Production soon moved to iron oxide coated onto cellulose acetate rolls cut into uniform strips wound onto plastic or metal hubs. Recordists began recording sound on magnetic media in the twenties in the form of magnetic wire. After World War II, the advantages of tape in terms of sturdiness and the ability to edit by cutting and splicing made tape preferable to wire as the magnetic medium of choice. Tape consists of a coating of a magnetic pigment, typically iron oxide (Fe2O3), on a long strip of polyester (polyethylene terephthalate) base film. This base film has been used since the mid-sixties as a replacement for acetate-based film that was prone to chemical instability.
Sticky-shed syndrome
A new problem with chemical stability became notable in the mid-seventies when two significant tape manufacturers changed their dispersion formulations by introducing a polyurethane binder that, in time, turned hygroscopic and broke down as it absorbed water molecules into the long hydrocarbon molecular chains. The tape coatings became sticky and shed oxide onto all tape recorder parts in their path, including heads, guides, rollers, and capstans. This is commonly called sticky-shed syndrome. Although the problem was confined to two of the four major tape manufacturers (neither BASF nor 3M studio tapes suffer from the problem because neither manufacturer used the hygroscopic binder), the reputation of all magnetic tapes has been tainted by the defect.
Information can be recovered from the "sticky-shed" tapes by heating them at a very low temperature in order to drive the water out of the binders. The baking method is a one-time solution to the problem because the binder remains unstable. Tapes that do not show the breakdown syndrome do not need any special treatment.
Handling
It is advised that open reels are handled by the center hub area or by the outer edges of the reel flanges, if necessary, and that the actual tape is not touched. If the outer flanges must be used, do not squeeze the edges of the reel flanges together, as it will damage the edges of the tape. If possible, handle by the center hub only. Similarly, it is recommended that cassettes be handled by the existing outer plastic case and that fingers not be placed anywhere inside the cassette mechanism.
Cleaning
Magnetic tape must be kept clean in order to prevent scratching and deterioration. Dust on the surface of tape will cause friction between the tape and the tape heads on the playback equipment, which will scratch the oxide layer. The website for sound preservation hosted by the National Library of Canada classifies dirt in two classes: Foreign matter (fingerprints, dust) and alteration of the original state (chemical reactions caused due to grime and dirt deposited on the tape surface). In any event, the tapes must be properly cleaned.
Recommended methods for removing dust on tapes include using a small vacuum with a hose or wiping with 3M Tape Cleaning Fabric. Care must be taken when using a vacuum, even if a hose attachment is available: if the motor of the vacuum is powerful enough, it can demagnetize the tape and the recording will be compromised. Many of the professional companies for tape restoration recommend professional help for proper care. They are generally correct to recommend this, as it is a delicate process that requires training if one plans to undertake serious chemical or physical repair. Vidipax, a professional tape restoration company, recommends using Pellon fabric or cloth as the safest and most efficient way to clean tapes. They warn against using solvents unless the tapes have already been submerged in water or another solvent (in the case of a flood). They also remind tape owners and collections managers that baking tapes to reverse hydrolysis is rarely a permanent fix and permanently alters the make-up of the tape.
Storage
As is the case with any collection, proper storage is extremely important. The general environment, including temperature and relative humidity, is key. The proper levels vary depending on how long the materials need to be stored. The Library of Congress recommends that any tapes needing preservation for a minimum of 10 years should be stored at 45–50% relative humidity (RH). Large fluctuations in either of these factors should be avoided at all costs. If the tapes need permanent preservation, they should be stored at 20–30% relative humidity. In the case of magnetic tapes, contrary to traditional preservation storage rules for books and photographic film, colder is certainly not better. If the collections are stored at too low a temperature, the tape lubricant can separate from the base, ruining the recording. The most important thing is to keep conditions consistent once desirable conditions are achieved.
The National Library of Canada recommends that one and a half rounds of a previously unused tape should be cut off, so as to remove any adhesive at the end, which could later be transferred to the tape or machinery. They also recommend not storing any paper labels in the box with reel-to-reel tapes to prevent chemical transfer from the paper and/or printing processes used on the paper to the tape.
The Library of Congress recommends that tapes with water repellent plastic containers be stored vertically on edge, not flat, and that reel-to-reel boxes need not be separated, but should be stored vertically with bookends, so as not to fall. Also, it is always important to remember that these collections will be very heavy and should be shelved on strong, non-acidic shelving.
Tapes should only be rewound just before the next play. When rewinding, if possible, use a slower archival wind technique. Although super-speed rewinders may seem convenient, they will warp and damage tapes over time. Professional media librarians at the National Library of Canada suggest that the best way to achieve an archival wind for reel-to-reel tapes is to remove the heads on the player and play backwards at normal play speed. However, the tape tension may need to be adjusted after removing the heads.
Digitization
Sometimes, a tape may be so fragile that the only long-term method for preservation is to transfer the media to a digital format. However, all of the above precautions still must be taken with collections in order to achieve a proper transfer. The materials must be in good enough condition to play in order to be digitized; therefore, one should not count on digitization as a safety net.
See also
Sticky-shed syndrome
References
Additional Sources
Schüller, D. and Häfner, A. 2014. Handling and Storage of Audio and Video Carriers, International Association of Sound and Audiovisual Archives.
Stauderman, Sarah, Pictorial Guide to Sound Recording Media, Preservation: Sound Savings, preserving Audio Collections. Association of Research Libraries.
National Recording Preservation Board, Capturing Analog Sound for Digital Preservation: Report of a Roundtable Discussion of Best Practices for Transferring Analog Discs and Tapes. Washington DC: Library of Congress, 2006
https://richardhess.com/tape/history/
Engel, Fredrich, and Peter Hammer. A Selected History of Magnetic Recording. New York: 2006.
Cohen, Elizabeth. "Preservation of Audio." Folk Heritage Collections in Crisis (2001): 20-31.
Audio storage
Audiotape preservation
Sound recording
Tape recording | Preservation of magnetic audiotape | [
"Technology"
] | 1,606 | [
"Recording devices",
"Tape recording"
] |
10,737,493 | https://en.wikipedia.org/wiki/Benjamin%20Kendall%20Emerson | Benjamin Kendall Emerson (December 20, 1843 – April 7, 1932) was an American geologist and author.
Biography
Emerson attended Amherst College, where he joined the Alpha Delta Phi fraternity and from which he graduated in 1865 as valedictorian. He went on to study in Germany at the University of Berlin, and received his doctorate from the University of Göttingen in 1870. He returned to the United States where he joined the faculty at Amherst, where he was professor of geology and related sciences from 1872 to 1917 and simultaneously at Smith College from 1878 to 1912. He was also assistant geologist from 1890 to 1896, and later geologist from 1896 to 1920 for the United States Geological Survey. He helped found the Geological Society of America and was its president in 1899.
In 1893 he was seriously injured in a train wreck in Ohio, but he recovered. In 1897 he was elected vice president of the International Geological Congress, and attended the meeting of the Congress in St Petersburg, Russia. He followed this with an excursion through Siberia. In 1899 he accompanied the Harriman Alaska Expedition, where Mount Emerson bears his name.
His chief field of study was the geology of western Massachusetts, the Connecticut River valley, and Rhode Island.
Family
Benjamin Kendall Emerson was the son of Benjamin Frothingham Emerson and Eliza Kendall Emerson of Nashua, New Hampshire. He married twice; his first wife was Mary Annette Hopkins, daughter of Rev Erastus Hopkins of Northampton, Massachusetts. They were married on April 2, 1873, and had six children: Charlotte Freylinghuysen (b. 1874), Benjamin Kendall (b. 1875), Edward Hopkins (b. 1877), Annette Hopkins (b. 1879), Malleville Wheelock (b. 1887), and Caroline Dwight (1891-1973), who became a prolific author of over 20 children's books, including The Hat-Tub Tale, about the Bay of Fundy.
Mary Annette Hopkins Emerson died on July 3, 1897; Benjamin Emerson then married Anna James Seelye, daughter of Julius Seelye of Amherst on April 4, 1901. They had two children: Elizabeth James (b. 1903) and Henry Seelye (b. 1907).
Professional memberships
Emerson was a member of the following organizations:
The American Academy of Arts and Sciences (1895)
The American Association for the Advancement of Science (vice president, 1896)
The American Philosophical Society (1897)
The American Geographical Society
Deutsche Geologische Gesellschaft
Society of Naturalists of Eastern United States
Washington Academy of Sciences
American Philosophical Society
Geological Society of America (original fellow, 1889; second vice president, 1897; first vice president, 1898; president, 1899)
Phi Beta Kappa
Works
Among his works are Geology of Old Hampshire County, Massachusetts (1898), Geology of Massachusetts and Rhode Island (1917), and a report (1904) on the Harriman Expedition to Alaska. The Archives and Special Collections at Amherst College holds a collection of his papers.
References
External links
Benjamin Kendall Emerson (AC 1865) Papers from the Amherst College Archives & Special Collections
1843 births
1932 deaths
Amherst College alumni
Amherst College faculty
University of Göttingen alumni
Smith College faculty
American geologists
People involved with the periodic table
Presidents of the Geological Society of America
Members of the American Philosophical Society | Benjamin Kendall Emerson | [
"Chemistry"
] | 653 | [
"Periodic table",
"People involved with the periodic table"
] |
10,737,671 | https://en.wikipedia.org/wiki/Gaussian%20vault | The Gaussian vault is a reinforced masonry construction technique invented by Uruguayan engineer Eladio Dieste to efficiently and economically build thin-shell barrel vaults and wide curved roof spans that are resistant to buckling.
Gaussian vaults consist of a series of interlocking, curved, single-layer brick arches that can span long distances without the need for supporting columns. This allows the construction of lightweight, efficient and visually striking structures. These arches are characterized by the use of a double curvature form, along an inverted catenary, which allows for greater structural efficiency and a reduction in the amount of materials required for building wide-span roof structures.
The term "Gaussian", coined by Dieste himself, typically refers to the bell-shaped curve often used in statistics and probability theory. Dieste's new combination of bricks, steel reinforcement and mortar makes its one of the innovative construction system using reinforced ceramics, also called "" or structural ceramics.
History
David P. Billington coined the term "structural art" for works of structural engineering that achieve excellence in the three areas of efficiency, economy, and elegance. Engineers such as Gustave Eiffel and Robert Maillart worked with new materials and techniques to design elegant, economical and structurally efficient works. Many of them concentrated their designs on a single building material, for example wrought iron or prestressed concrete. Eugène Freyssinet, Félix Candela and Eduardo Torroja pioneered the construction of large thin-shell structures made of reinforced concrete.
The concept of metal-reinforced masonry was not invented by Dieste. In 1889 French engineer Paul Cottancin patented a system of reinforced concrete, which he called "ciment armé". The Cottancin system used wire-reinforced hollow bricks acting as a permanent formwork for a cement armature and thin cement shells, as shown in the 1904 Church of Saint-Jean de Montmartre. Vertical wires ran through the brick voids, while horizontal reinforcement was placed in the joints. The brick voids and joints were filled with cement mortar to prevent the metal from coming into contact with air. Cottancin's labor-intensive system was quickly replaced by Hennebique's reinforced concrete, which requires the erection of wooden formwork but less skilled operators. In 1910, Rafael Guastavino was granted a patent for reinforced brick shells, and Spanish engineer Torroja also developed his own system of reinforced ceramics in the 1920s. By the 1950s, the construction of thin concrete shells became more and more expensive due to the increased costs of formwork and labor and was progressively replaced by steel construction for long-span vaults.
Unaware of the developments in the rest of the world, Dieste developed his own system of reinforced masonry, which was little known and little used in his day in South America, into a prime example of structural art. He innovated in the use of bricks, which were affordable and widely available in South America. He developed many new cost-efficient techniques and elegant forms for the design of thin brick vaults. His construction techniques were derived from structural principles associated with the geometry of the inverted catenary. He gave the cross-section of his masonry vaults a double curvature to generate stiffness and strength to resist buckling failure. He designed characteristic undulating roofs with a typical span-to-rise ratio of 10.
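For illustration (the numbers below are chosen only to show the geometry, not taken from a specific Dieste building), the centre line of such a vault follows an inverted catenary, which under uniform self-weight works in pure compression:

```latex
% Catenary centre line (inverted for an arch), with parameter a:
%   y(x) = a \cosh(x/a)
% For a span L and a rise f between springing points and crown:
%   f = a ( cosh(L/(2a)) - 1 )
% A span-to-rise ratio of about 10 means f ~ L/10; e.g. for L = 40 m and
% f = 4 m, solving the relation numerically gives a of roughly 50 m.
\[
  y(x) = a\cosh\frac{x}{a},
  \qquad
  f = a\left(\cosh\frac{L}{2a} - 1\right),
  \qquad
  \frac{L}{f} \approx 10 .
\]
```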
In 1946, Dieste realized his first reinforced brick vault, working with architect Antoní Bonet i Castellana on the Berlingieri house in Punta Ballena, Uruguay. After his invention, Dieste did not use his new construction technique again until 1955.
In 1956, Dieste founded with Eugenio Montañez (1916–2001) the construction and design firm Dieste y Montañez S.A., which is still in operation today. With his company, he constructed more than 1.5 million square meters of buildings such as warehouses, factories, gymnasiums and workshops.
The discovery of this construction system, as well as its development, introduction and implementation, earned the engineer Dieste worldwide recognition from the international community and eventually from UNESCO.
Colombian engineer Guillermo González Zuleta and the Spanish engineer Ildefonso Sánchez del Río Pisón also developed different approaches to structural architecture to build large-span buildings using undulating reinforced ceramics.
Description
The construction technique of this type of reinforced masonry consists of placing steel reinforcing bars at the junctions of the brick courses. The behavior of the reinforced brick layer is similar to that of a reinforced concrete beam. The thin-shell, single-thickness brick structure derives its rigidity and strength from a double-curved catenary arch form that resists buckling failure. The structural masonry fulfills a structural function by supporting itself and the roof without beams or columns. This construction system allows the design of thin-shell, single-layer brick structures by combining bricks, iron and mortar, built on movable "encofrados" used as scaffolding for people and formwork for materials. These Gaussian vaults are structures that are able to withstand the loads placed on them thanks to their shape rather than their mass, resulting in a lower material requirement and in reduced construction times. The number of layers of bricks in which the reinforcing bar is placed depends on the span to be overcome. The reinforcement must be made of a corrosion-resistant alloy. Dieste used traditional locally-sourced hollow bricks, which are typically 25×25×10 cm. The total thickness of Gaussian vaults is usually between 18 and 25 cm, with spans of up to 45 meters.
Usage
Reinforced ceramics have been widely adopted because it allows for greater lightness, prefabrication and systematization in the repetition of its components, with competitive costs. They are particularly suited to the construction of churches, community centers and industrial buildings, as well as other structures that require large open spaces.
Dieste applied this construction technique to his first architectural work: the church of Christ the Worker and Our Lady of Lourdes (1958–1960), in the small village of Atlántida. It became a renowned architectural landmark, described as "a simple rectangle, with side walls rising up in undulating curves to the maximum amplitude of their arcs, these walls supporting a similarly undulating roof, composed of a sequence of reinforced brick Gaussian vaults". In 2021 the Church was declared a UNESCO World Heritage Site under the name "The work of engineer Eladio Dieste: Church of Atlántida".
In 1998, Dieste used the same construction techniques in the Church of Saint John of Ávila in a modern neighbourhood of Alcalá de Henares, Spain.
See also
Catalan vault
Guastavino tile
References
Further reading
Uruguayan inventions
Arches and vaults
Ceilings
Brick buildings and structures
Building engineering
Masonry
Structural system | Gaussian vault | [
"Technology",
"Engineering"
] | 1,361 | [
"Structural engineering",
"Building engineering",
"Structural system",
"Construction",
"Civil engineering",
"Ceilings",
"Masonry",
"Architecture"
] |
10,737,785 | https://en.wikipedia.org/wiki/Floating%20nuclear%20power%20plant | A floating nuclear power plant is a floating power station that derives its energy from a nuclear reactor. Instead of a stationary complex on land, they consist of a floating structure such as an offshore platform, barge or conventional ship.
Since the reactors employed are smaller in size and power than most commercial land-based reactors, mostly derived from nuclear ship and submarine power plants, the power output is generally a fraction of a conventional nuclear power plant, usually around 100 MWe, although some are planned to have as much as 800 MWe.
The advantage of such power plants is their relative mobility and their ability to deliver in-situ electric power "on demand" even to remote regions, since they can be moved or towed to position with relative ease within large water bodies, and then docked with coastal facilities to transfer the produced power and heat to a land power grid. However, environmental groups are concerned that floating nuclear power plants are more exposed to accidents than onshore power stations and also pose a threat to marine habitats.
History
20th century
The first floating nuclear power station was the MH-1A, a pressurized water reactor built into a converted Liberty ship, which achieved criticality in 1967. Proposals to build floating nuclear power plants off the coast of New Jersey and off Jacksonville, Florida, were considered in the 1970s but ultimately scrapped.
21st century
In the 21st century, Russia has led in the practical development of floating nuclear power stations. On 14 September 2019, Russia's first floating nuclear power plant, Akademik Lomonosov, arrived at its permanent location in the Chukotka region. It started operation on 19 December 2019.
In 2022, the United States Department of Energy funded a three-year research study of offshore floating nuclear power generation. In October 2022, NuScale Power and Canadian company Prodigy announced a joint project to bring a North American small modular reactor based floating plant to market.
Samsung and UK-based Core Power are also looking into using compact molten salt reactor technology in floating platforms, with the former aiming at a modular power barge of up to 800 MWe.
Advantages
Virtually no land or concrete is used.
Earthquake resistant.
Easily transported for relocation, refueling, refurbishment and decommissioning.
Surrounded by water that can be used for active or passive cooling.
Available to remote locations where a conventional power plant would be unfeasible.
See also
Floating solar
Floating wind turbine
Footnotes
External links
Sevmash, a leading Russian manufacturer of floating nuclear power plants
Floating nuclear power stations raise spectre of Chernobyl at sea
Nuclear power
Nuclear power stations
Floating nuclear power stations | Floating nuclear power plant | [
"Physics"
] | 526 | [
"Power (physics)",
"Physical quantities",
"Nuclear power"
] |
10,738,834 | https://en.wikipedia.org/wiki/Fatou%27s%20theorem | In mathematics, specifically in complex analysis, Fatou's theorem, named after Pierre Fatou, is a statement concerning holomorphic functions on the unit disk and their pointwise extension to the boundary of the disk.
Motivation and statement of theorem
If we have a holomorphic function f defined on the open unit disk D = {z : |z| < 1}, it is reasonable to ask under what conditions we can extend this function to the boundary of the unit disk. To do this, we can look at what the function looks like on each circle inside the disk centered at 0, each with some radius r, 0 < r < 1. This defines a new function:
f_r(e^{iθ}) = f(re^{iθ}),
where
S^1 := {e^{iθ} : θ ∈ [0, 2π]}
is the unit circle. Then it would be expected that the values of the extension of f onto the circle should be the limit of these functions, and so the question reduces to determining when f_r converges, and in what sense, as r → 1, and how well defined is this limit. In particular, if the L^p norms of these f_r are well behaved, we have an answer:
Theorem. Let f : D → ℂ be a holomorphic function such that
sup_{0<r<1} ‖f_r‖_{L^p(S^1)} < ∞ for some 1 ≤ p < ∞,
where the f_r are defined as above. Then f_r converges to some function f_1 ∈ L^p(S^1) pointwise almost everywhere and in L^p norm. That is,
|f_r(e^{iθ}) − f_1(e^{iθ})| → 0 for almost every θ, and ‖f_r − f_1‖_{L^p(S^1)} → 0, as r → 1.
Now, notice that this pointwise limit is a radial limit. That is, the limit being taken is along a straight line from the center of the disk to the boundary of the circle, and the statement above hence says that
f(re^{iθ}) → f_1(e^{iθ}) for almost every θ, as r → 1.
The natural question is, with this boundary function defined, will we converge pointwise to this function by taking a limit in any other way? That is, suppose instead of following a straight line to the boundary, we follow an arbitrary curve converging to some point on the boundary. Will converge to ? (Note that the above theorem is just the special case of ). It turns out that the curve needs to be non-tangential, meaning that the curve does not approach its target on the boundary in a way that makes it tangent to the boundary of the circle. In other words, the range of must be contained in a wedge emanating from the limit point. We summarize as follows:
Definition. Let be a continuous path such that . Define
That is, is the wedge inside the disk with angle whose axis passes between and zero. We say that converges non-tangentially to , or that it is a non-tangential limit, if there exists such that is contained in and .
Fatou's Theorem. Let Then for almost all
for every non-tangential limit converging to where is defined as above.
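In symbols, a standard formulation of the objects used above (the notation f_r, f_1 and Γ_α(θ) follows common usage and is supplied here for concreteness) reads:

```latex
% Slice functions and the H^p condition:
%   f_r(e^{i\theta}) = f(r e^{i\theta}),  0 < r < 1,
%   \sup_{0<r<1} \|f_r\|_{L^p(S^1)} < \infty  for some  1 \le p < \infty.
% Radial limit: f_1(e^{i\theta}) = \lim_{r\to 1} f(r e^{i\theta})  exists for a.e. \theta.
% Stolz wedge (non-tangential approach region) of half-angle \alpha at e^{i\theta}:
\[
  \Gamma_\alpha(\theta) \;=\; \bigl\{\, z \in D : \lvert \arg(1 - z e^{-i\theta}) \rvert < \alpha \,\bigr\},
  \qquad 0 < \alpha < \tfrac{\pi}{2},
\]
% and Fatou's theorem asserts that for almost every \theta,
\[
  \lim_{t \to 1} f(\gamma(t)) \;=\; f_1(e^{i\theta})
\]
% for every path \gamma converging to e^{i\theta} that stays inside some \Gamma_\alpha(\theta).
```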
Discussion
The proof utilizes the symmetry of the Poisson kernel using the Hardy–Littlewood maximal function for the circle.
The analogous theorem is frequently defined for the Hardy space over the upper-half plane and is proved in much the same way.
See also
Hardy space
References
John B. Garnett, Bounded Analytic Functions, (2006) Springer-Verlag, New York
Walter Rudin. Real and Complex Analysis (1987), 3rd Ed., McGraw Hill, New York.
Elias Stein, Singular integrals and differentiability properties of functions (1970), Princeton University Press, Princeton.
Theorems in complex analysis | Fatou's theorem | [
"Mathematics"
] | 631 | [
"Theorems in mathematical analysis",
"Theorems in complex analysis"
] |
10,739,089 | https://en.wikipedia.org/wiki/Sex%20pheromone | Sex pheromones are pheromones released by an organism to attract an individual of the same species, encourage them to mate with them, or perform some other function closely related with sexual reproduction.
Sex pheromones specifically focus on indicating females for breeding, attracting the opposite sex, and conveying information on species, age, sex and genotype. Non-volatile pheromones, or cuticular contact pheromones, are more closely related to social insects as they are usually detected by direct contact with chemoreceptors on the antennae or feet of insects.
Insect sex pheromones have found uses in monitoring and trapping of pest insects.
Evolution
Sex pheromones have evolved in many species. The many types of pheromones (i.e. alarm, aggregation, defense, sexual attraction) all have a common cause acting as chemical cues to trigger a response. However, sex pheromones are particularly associated with signaling mating behaviors or dominance. The odors released can be seen as a favorable trait selected by either the male or female leading to attraction and copulation. Chemical signaling is also used to find genetically different mates and thus avoid inbreeding. Females are often selective when deciding to mate, and chemical communication ensures that they find a high-quality mate that satisfies their reproductive needs.
Sexual selection
Odours may be a kind of male "ornament" selected for by female choice. They meet the criteria for such ornaments that Charles Darwin set out in The Descent of Man, and Selection in Relation to Sex. After many years of study the importance of such chemical communication is becoming clear.
Males usually compete for scarce females, which make adaptive choices based on male traits. The choice can benefit the female directly and/or genetically. In tiger moths (Utetheisa ornatrix), females choose the males that produce the most pheromone; an honest signal of the amount of protective alkaloids the male has, as well as an indicator of the size of female offspring (females fertilised by such males lay more eggs). Male cockroaches form dominance hierarchies based on pheromone "badges", while females use the same pheromone for male choice. In oriental beetles (Exomala orientalis), females release the pheromone and passively wait for a male to find them. The males with superior detection and flying abilities are most likely to reach the female beetle first, which leads to selection for genetically advantageous males.
In most species, pheromones are released by the non-limiting sex. Some female moths signal, but this is cheap and low risk; it means the male has to fly to her, taking a high risk. This mirrors communication with other sensory modalities, e.g. male frogs croak; male birds are usually colourful. Male long-range pheromone signals may be associated with patchy resources for the female. In some species, both sexes signal. Males can sometimes attract other males instead, the sex pheromone acting as an aggregation pheromone.
External fertilization and chemical duets
It is likely that most externally fertilizing species (e.g. marine worms, sea urchins) coordinate their sexual behaviour (release of sperm and eggs) using pheromones. This coordination is very important because sperm are diluted easily, and are short-lived. Coordination therefore provides a selective advantage to both males and females: individuals that do not coordinate are unlikely to achieve fertilisation and hence to leave offspring.
The main selective advantage of outcrossing is that it promotes the masking of deleterious recessive alleles, while inbreeding promotes their harmful expression.
In humans
No study has led to the isolation of true human sex pheromones. While humans are highly dependent upon visual cues, when in close proximity, smells also play a role in sociosexual behaviors. An inherent difficulty in studying human pheromones is the need for cleanliness and odorlessness in human participants.
Signalling
Different species use a wide variety of chemical substances to send sexual signals. The first to be described chemically was bombykol, the silkworm moth's sex pheromone, which is a complex alcohol, (E,Z)-10,12-hexadecadienol, discovered in 1959. It is detected in the antennae of the male moth by a pheromone-binding protein which carries the bombykol to a receptor bound to the membrane of a nerve cell.
The chemicals used by other moths are species-specific. For example, the Eastern spruce budworm Choristoneura fumiferana female pheromones contain a 95:5 mix of E- and Z 11-tetradecenal aldehydes, while the sex pheromones of other species of spruce budworm contain acetates and alcohols.
Sexual development in the freshwater green alga Volvox is initiated by a glycoprotein pheromone. It is one of the most potent known biological effector molecules, as it can trigger sexual development at a concentration as low as 10⁻¹⁶ moles per litre. Kirk and Kirk showed that sex-inducing pheromone production can be triggered experimentally in somatic cells by heat shock.
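For perspective, a rough conversion of that concentration using Avogadro's number gives:

```latex
\[
  10^{-16}\,\tfrac{\text{mol}}{\text{L}} \times 6.022\times 10^{23}\,\tfrac{\text{molecules}}{\text{mol}}
  \;\approx\; 6\times 10^{7}\ \text{molecules per litre}.
\]
```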
Uses
Sex pheromones have found applications in pest monitoring and pest control. For monitoring, pheromone traps are used to attract and catch a sample of pest insects to determine whether control measures are needed. For control, much larger quantities of a sex pheromone are released to disrupt the mating of a pest species. This can be either by releasing enough pheromone to prevent males from finding females, effectively drowning out their signals, or by mass trapping, attracting and removing pests directly. For example, research on the control of the spruce bud moth (Zeiraphera canadensis) has focused on the use of the pheromone E-9-tetradecenyl-acetate, a chemical the spruce bud moth releases during mating.
References
Sexual reproduction
Pheromones
Chemical ecology | Sex pheromone | [
"Chemistry",
"Biology"
] | 1,272 | [
"Behavior",
"Chemical ecology",
"Reproduction",
"Pheromones",
"Sexual reproduction",
"Biochemistry",
"Sexuality"
] |
10,739,141 | https://en.wikipedia.org/wiki/Trace%20monoid | In computer science, a trace is an equivalence class of strings, wherein certain letters in the string are allowed to commute, but others are not. Traces generalize the concept of strings by relaxing the requirement for all the letters to have a definite order, instead allowing for indefinite orderings in which certain reshufflings could take place. In an opposite way, traces generalize the concept of sets with multiplicities by allowing for specifying some incomplete ordering of the letters rather than requiring complete equivalence under all reorderings. The trace monoid or free partially commutative monoid is a monoid of traces.
Traces were introduced by Pierre Cartier and Dominique Foata in 1969 to give a combinatorial proof of MacMahon's master theorem. Traces are used in theories of concurrent computation, where commuting letters stand for portions of a job that can execute independently of one another, while non-commuting letters stand for locks, synchronization points or thread joins.
The trace monoid is constructed from the free monoid (the set of all strings of finite length) as follows. First, sets of commuting letters are given by an independency relation. These induce an equivalence relation of equivalent strings; the elements of the equivalence classes are the traces. The equivalence relation then partitions the elements of the free monoid into a set of equivalence classes; the result is still a monoid; it is a quotient monoid now called the trace monoid. The trace monoid is universal, in that all dependency-homomorphic (see below) monoids are in fact isomorphic.
Trace monoids are commonly used to model concurrent computation, forming the foundation for process calculi. They are the object of study in trace theory. The utility of trace monoids comes from the fact that they are isomorphic to the monoid of dependency graphs; thus allowing algebraic techniques to be applied to graphs, and vice versa. They are also isomorphic to history monoids, which model the history of computation of individual processes in the context of all scheduled processes on one or more computers.
Trace
Let Σ* denote the free monoid on a set of generators Σ, that is, the set of all strings written in the alphabet Σ. The asterisk is a standard notation for the Kleene star. An independency relation I on the alphabet Σ then induces a symmetric binary relation ~ on the set of strings Σ*: two strings u, v are related, u ~ v, if and only if there exist strings x, y ∈ Σ* and a pair (a, b) ∈ I such that u = xaby and v = xbay. Here, x and y are understood to be strings (elements of Σ*), while a and b are letters (elements of Σ).
The trace is defined as the reflexive transitive closure of . The trace is thus an equivalence relation on and is denoted by , where is the dependency relation corresponding to and Different independencies or dependencies will give different equivalence relations.
The transitive closure implies that if and only if there exists a sequence of strings such that and for all . The trace is stable under the monoid operation on , i.e., concatenation, and is therefore a congruence relation on
The trace monoid, commonly denoted as , is defined as the quotient monoid
The homomorphism
is commonly referred to as the natural homomorphism or canonical homomorphism. That the terms natural or canonical are deserved follows from the fact that this morphism embodies a universal property, as discussed in a later section.
One will also find the trace monoid denoted as where is the independency relation. One can also find the commutation relation used instead of the independency relation; it differs from the independency relation by also including all the diagonal elements of since letters "commute with themselves" in a free monoid of strings of those letters.
Examples
Consider the alphabet . A possible dependency relation is
The corresponding independency is
Therefore, the letters commute. Thus, for example, a trace equivalence class for the string would be
and the equivalence class would be an element of the trace monoid.
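For illustration, the following sketch enumerates a trace — the equivalence class of a string under swaps of adjacent independent letters — over a hypothetical alphabet {a, b, c} in which only b and c are independent (the alphabet and relation are chosen here for the example and are not the article's own):

```python
# Sketch: enumerate the trace (equivalence class) of a string, assuming a
# hypothetical alphabet {a, b, c} where only b and c commute; every other
# pair of distinct letters is dependent.
independency = {("b", "c"), ("c", "b")}   # symmetric and irreflexive, chosen for illustration

def trace(word: str) -> set:
    """Closure of {word} under swapping adjacent independent letters."""
    seen = {word}
    frontier = [word]
    while frontier:
        w = frontier.pop()
        for i in range(len(w) - 1):
            if (w[i], w[i + 1]) in independency:
                swapped = w[:i] + w[i + 1] + w[i] + w[i + 2:]
                if swapped not in seen:
                    seen.add(swapped)
                    frontier.append(swapped)
    return seen

print(sorted(trace("abcb")))   # ['abbc', 'abcb', 'acbb'] -- b and c commute
print(sorted(trace("ab")))     # ['ab'] -- a and b are dependent, so no swaps
```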
Properties
The cancellation property states that equivalence is maintained under right cancellation. That is, if , then . Here, the notation denotes right cancellation, the removal of the first occurrence of the letter a from the string w, starting from the right-hand side. Equivalence is also maintained by left-cancellation. Several corollaries follow:
Embedding: if and only if for strings x and y. Thus, the trace monoid is a syntactic monoid.
Independence: if and , then a is independent of b. That is, . Furthermore, there exists a string w such that and .
Projection rule: equivalence is maintained under string projection, so that if , then .
A strong form of Levi's lemma holds for traces. Specifically, if for strings u, v, x, y, then there exist strings and such that
for all letters and such that occurs in and occurs in , and
Universal property
A dependency morphism (with respect to a dependency D) is a morphism
to some monoid M, such that the "usual" trace properties hold, namely:
1. implies that
2. implies that
3. implies that
4. and imply that
Dependency morphisms are universal, in the sense that for a given, fixed dependency D, if is a dependency morphism to a monoid M, then M is isomorphic to the trace monoid . In particular, the natural homomorphism is a dependency morphism.
Normal forms
There are two well-known normal forms for words in trace monoids. One is the lexicographic normal form, due to Anatolij V. Anisimov and Donald Knuth, and the other is the Foata normal form due to Pierre Cartier and Dominique Foata who studied the trace monoid for its combinatorics in the 1960s.
Unicode's Normalization Form Canonical Decomposition (NFD) is an example of a lexicographic normal form - the ordering is to sort consecutive characters with non-zero canonical combining class by that class.
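A minimal, runnable illustration of that claim (the sample combining marks are chosen here as an assumption for the example):

```python
import unicodedata

# 'q' followed by COMBINING DOT ABOVE (canonical combining class 230) and
# COMBINING DOT BELOW (class 220). The two marks are canonically independent,
# so NFD reorders consecutive marks by ascending combining class -- a
# lexicographic normal form of the trace.
s1 = "q\u0307\u0323"   # dot above, then dot below
s2 = "q\u0323\u0307"   # dot below, then dot above

print([unicodedata.combining(ch) for ch in s1])                               # [0, 230, 220]
print(unicodedata.normalize("NFD", s1) == unicodedata.normalize("NFD", s2))   # True
print([unicodedata.combining(ch) for ch in unicodedata.normalize("NFD", s1)]) # [0, 220, 230]
```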
Trace languages
Just as a formal language can be regarded as a subset of , the set of all possible strings, so a trace language is defined as a subset of all possible traces.
Alternatively, but equivalently, a language is a trace language, or is said to be consistent with dependency D if
where
is the trace closure of a set of strings.
See also
Trace cache
Notes
References
General references
Antoni Mazurkiewicz, "Introduction to Trace Theory", pp 3–41, in The Book of Traces, V. Diekert, G. Rozenberg, eds. (1995) World Scientific, Singapore
Volker Diekert, Combinatorics on traces, LNCS 454, Springer, 1990, , pp. 9–29
Seminal publications
Pierre Cartier and Dominique Foata, Problèmes combinatoires de commutation et réarrangements, Lecture Notes in Mathematics 85, Springer-Verlag, Berlin, 1969, Free 2006 reprint with new appendixes
Antoni Mazurkiewicz, Concurrent program schemes and their interpretations, DAIMI Report PB 78, Aarhus University, 1977
Semigroup theory
Formal languages
Free algebraic structures
Combinatorics
Trace theory | Trace monoid | [
"Mathematics"
] | 1,487 | [
"Discrete mathematics",
"Mathematical structures",
"Formal languages",
"Mathematical logic",
"Combinatorics",
"Fields of abstract algebra",
"Algebraic structures",
"Category theory",
"Semigroup theory",
"Free algebraic structures"
] |
10,739,341 | https://en.wikipedia.org/wiki/Dependency%20relation | In computer science, in particular in concurrency theory, a dependency relation is a binary relation on a finite domain , symmetric, and reflexive; i.e. a finite tolerance relation. That is, it is a finite set of ordered pairs , such that
If then (symmetric)
If , then (reflexive)
In general, dependency relations are not transitive; thus, they generalize the notion of an equivalence relation by discarding transitivity.
is also called the alphabet on which is defined. The independency induced by is the binary relation
That is, the independency is the set of all ordered pairs that are not in . The independency relation is symmetric and irreflexive. Conversely, given any symmetric and irreflexive relation on a finite alphabet, the relation
is a dependency relation.
The pair is called the concurrent alphabet. The pair is called the independency alphabet or reliance alphabet, but this term may also refer to the triple (with induced by ). Elements are called dependent if holds, and independent, else (i.e. if holds).
Given a reliance alphabet , a symmetric and irreflexive relation can be defined on the free monoid of all possible strings of finite length by: for all strings and all independent symbols . The equivalence closure of is denoted or and called -equivalence. Informally, holds if the string can be transformed into by a finite sequence of swaps of adjacent independent symbols. The equivalence classes of are called traces, and are studied in trace theory.
Examples
Given the alphabet , a possible dependency relation is , see picture.
The corresponding independency is . Then e.g. the symbols are independent of one another, and e.g. are dependent. The string is equivalent to and to , but to no other string.
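For concreteness, one possible relation of this kind over a three-letter alphabet (the letters and pairs below are chosen for illustration and need not match the article's figure) is:

```latex
% Alphabet \Sigma = \{a, b, c\}; a dependency relation must be symmetric and reflexive, e.g.
\[
  D = \{(a,a),\,(b,b),\,(c,c),\,(a,b),\,(b,a)\},
  \qquad
  I = (\Sigma\times\Sigma)\setminus D = \{(a,c),\,(c,a),\,(b,c),\,(c,b)\}.
\]
% Here a and c, as well as b and c, are independent, so e.g. abc \equiv acb \equiv cab,
% while ab \not\equiv ba because (a,b) \in D.
```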
References
Properties of binary relations | Dependency relation | [
"Mathematics"
] | 379 | [
"Properties of binary relations",
"Mathematical relations",
"Binary relations"
] |
10,739,499 | https://en.wikipedia.org/wiki/Opengear | Opengear is a global computer network technology company headquartered in Edison, New Jersey, U.S., with engineering in Brisbane, Qld, Australia and production in Sandy, UT.
The company develops and manufactures "smart out-of-band infrastructure management" products aimed at allowing customers to securely access, control and automatically troubleshoot and repair their IT infrastructure remotely, including network and data-center management, for resilient operation.
Opengear solutions provide always-available wired and wireless secure remote access, with failover capabilities to automatically restore site connectivity. This enables technical staff to provision, maintain and repair infrastructure from anywhere at any time, as if they were physically present, thereby enabling both the operational costs and the risk of downtime to be reduced.
In December 2019, Opengear was acquired by Digi International.
Products
Opengear's management products include IM7200 advanced console servers that streamline management of network, server, and power infrastructure in data centers and colocation facilities; and ACM7000 remote management gateways that deliver secure remote monitoring, access and control of distributed networks and remote sites. The Lighthouse Centralized Management platform then provides a single point of scalable, secure management for these Opengear appliances and connected devices. The Opengear NetOps Console Server combine out-of-band management and NetOps tools in a single appliance, minimizing human intervention and simplifying repetitive tasks.
All Opengear products provide a secure alternate out-of-band path to the managed infrastructure, enabling accessibility even during system or network outage. They monitor, access, and control all critical infrastructure at all local and remote sites, from applications, computers and networking equipment, to security cameras, power supplies and door sensors - to proactively detect faults and remediate before they become failures.
Opengear's products are built on a Linux software base, and the company is an active supporter of the open-source community.
History
2004 Opengear founded by the founders of SnapGear
2005 Started okvm open source project, developing open source console and KVM management software and released CM4000 and SD4000 product lines (built on okvm technology)
2007 Embedded Nagios open source monitoring software.
2008 Embedded Network UPS Tools and PowerMan for UPS and PDU management and monitoring, EMD5000 Environmental Monitoring Products.
2009 Extended SNMP support for all mainstream UPS and PDU vendors for true vendor agnostic data center management.
2010 Develops VCMS virtual central management - built on Nagios
2010 Reports sales growth of 50% in 2010.
2011 Embeds ARMS in management gateways to give smart remote hands
2012 Releases extended ACM5000 with cellular and PoE and Lighthouse Central Management
2012 Reports revenue growth of 50% in North America and 78% in Europe.
2013 Releases IM7200 product line, with integrated fiber and 4GLTE
2014 Introduced the IM4200-2-DAC-X2-GS in its IM4200 remote infrastructure management line of products, certified by Sprint.
2014 Releases CM7100 Console Server
2014 Opengear releases a new version of Lighthouse with a Console Gateway
2014 Integrated Failover To Cellular functionality to all cellular-enabled ACM remote-site management and IM infrastructure management devices
2015 Releases Resilience Gateway (ACM7004) product line, with Smart Out-Of-Band management and Failover to Cellular integration.
2017 Lighthouse 5 Centralized Management software platform released
2018 Operations Manager OM2200 appliance for NetOps management released
2018 NetOps Automation Platform launched, to streamline NetOps workflows
2019 NetOps Console Server launched, combining NetOps tools and Out-of-Band in a single appliance
2019 Opengear purchased by Digi International
References
External links
Official website
2004 establishments in New Jersey
Software companies based in New Jersey
Multinational companies headquartered in the United States
Networking hardware
Networking hardware companies
Out-of-band management
Piscataway, New Jersey
Software companies of the United States
Computer companies of the United States
Computer hardware companies | Opengear | [
"Technology",
"Engineering"
] | 827 | [
"Computer hardware companies",
"Computers",
"Networking hardware",
"Computer networks engineering"
] |
10,740,963 | https://en.wikipedia.org/wiki/NGC%205315 | NGC 5315 is a planetary nebula in the southern constellation Circinus. Of apparent magnitude 9.8 around a central star of magnitude 14.2, it is located 5.2 degrees west-southwest of Alpha Circini. It is only visible as a disc at magnifications over 200-fold. The nebula was discovered by astronomer Ralph Copeland in 1883. The central star has a stellar class of WC4 and is hydrogen deficient with an effective temperature of 76-. The distance to this nebula is not known accurately, but is estimated to be around .
This planetary nebula has a slightly elliptical form, a complex structure, and a ring that is somewhat broken. It shows a typical abundance of carbon and a slightly enhanced nitrogen abundance. Radial velocity studies indicate that the star may be a member of a binary system. The nebula does not show enrichment of s-process elements. This suggests that the star's asymptotic giant branch stage may have been truncated by interaction with the companion. Alternatively, the star may be low in mass and may not have undergone third dredge-up, or the star's s-process elements were heavily diluted by the envelope during the AGB phase.
References
External links
ESA Hubble site: Hubble picture and information on NGC 5315 (1997) (2007)
Planetary nebulae
Circinus
5315 | NGC 5315 | [
"Astronomy"
] | 277 | [
"Circinus",
"Constellations"
] |
10,741,406 | https://en.wikipedia.org/wiki/NGC%206027b | NGC 6027b is an interacting lenticular galaxy that is part of Seyfert's Sextet, a compact group of galaxies currently in the process of colliding and merging, which is located in the constellation Serpens.
See also
NGC 6027
NGC 6027a
NGC 6027c
NGC 6027d
NGC 6027e
Seyfert's Sextet
References
External links
HubbleSite NewsCenter: Pictures and description
Serpens
Lenticular galaxies
6027b
56584
10116 NED03 | NGC 6027b | [
"Astronomy"
] | 110 | [
"Constellations",
"Serpens"
] |
10,742,185 | https://en.wikipedia.org/wiki/NGC%206027c | NGC 6027c is a barred spiral galaxy that is part of Seyfert's Sextet, a compact group of galaxies, which is located in the constellation Serpens.
See also
NGC 6027
NGC 6027a
NGC 6027b
NGC 6027d
NGC 6027e
Seyfert's Sextet
References
External links
HubbleSite NewsCenter: Pictures and description
Serpens
Barred spiral galaxies
6027c
56578
10116 NED04 | NGC 6027c | [
"Astronomy"
] | 99 | [
"Constellations",
"Serpens"
] |
10,743,212 | https://en.wikipedia.org/wiki/Gross%20processing | Gross processing, "grossing" or "gross pathology" is the process by which pathology specimens undergo examination with the bare eye to obtain diagnostic information, as well as cutting and tissue sampling in order to prepare material for subsequent microscopic examination.
Responsibility
Gross examination of surgical specimens is typically performed by a pathologist, or by a pathologists' assistant working within a pathology practice. Individuals trained in these fields are often able to gather diagnostically critical information in this stage of processing, including the stage and margin status of surgically removed tumors.
Steps
The initial step in any examination of a clinical specimen is confirmation of the identity of the patient and the anatomical site from which the specimen was obtained. Sufficient clinical data should be communicated by the clinical team to the pathology team in order to guide the appropriate diagnostic examination and interpretation of the specimen - if such information is not provided, it must be obtained by the examiner prior to processing the specimen.
There are usually two end products of the gross processing of a surgical specimen. The first is the gross description, a document which serves as the written record of the examiner's findings, and is included in the final pathology report. The second product is a set of tissue blocks, typically postage stamp-sized portions of tissue sealed in plastic cassettes, which will be processed into slides for microscopic examination. Since only a minority of the tissue from a large specimen can reasonably be subject to microscopic examination, the success of the final histological diagnosis is highly dependent on the skill of the professional performing the gross examination. The gross examiner may sample portions of the specimen for other types of ancillary tests as diagnostically indicated; these include microbiological culture, flow cytometry, cytogenetics, or electron microscopy.
Perpendicular versus en face sections
Two major types of sections in gross processing are perpendicular and en face sections:
Perpendicular sections allow for measurement of the distance between a lesion and the surgical margin.
En face means that the section is tangential to the region of interest (such as a lesion) of a specimen. It does not in itself specify whether subsequent microtomy of the slice should be performed on the peripheral or proximal surface of the slice (the peripheral surface of an en face section is closer to being the true margin, whereas the proximal surface generally displays more area and therefore generally has greater sensitivity in showing pathology, including in comparison with perpendicular sections).
A shaved section is a superficial en face slice that contains the entire surface of the segment.
See also
Timeline of myocardial infarction pathology, including gross examination findings
References
Anatomical pathology
Gross processing
Pathology | Gross processing | [
"Biology"
] | 526 | [
"Pathology"
] |
10,743,905 | https://en.wikipedia.org/wiki/Ethnoherpetology | Ethnoherpetology is the study of the past and present interrelationships between human cultures and reptiles and amphibians. It is a sub-field of ethnozoology, which in turn is a sub-field of ethnobiology.
Snakes and amphibians have been considered chthonic creatures in many cultures. Richly represented in mythology, culture, art, and literature, they often evoke revulsion, fear, suspicion and awe, sometimes even hysteria. Frogs and toads were believed to announce the rains with their choruses.
See also
Colorado River toad
Frogs in culture
Herpetology
Legendary salamander in popular culture
Nāga
Serpent (symbolism)
Bibliography
Bulmer, Ralph N.H. and Michael Tyler. 1968. Karam classification of frogs. Journal of the Polynesian Society 77(4): 621–639.
Indraneil Das – The Serpent's Tongue: A contribution to the ethnoherpetology of India and adjacent countries (Frankfurt am Main: Edition Chimaira, 1998)
Walsh, M.T. – Snakes and Other Reptiles in Mtanga: preliminary notes on ethnoherpetology in a village bordering Gombe Stream National Park, western Tanzania. (1997)
Bertrand, H. – Contribution à l'étude de l'herpétologie et de l'ethnoherpétologie en Anjou (A study on the herpetology and ethnoherpetology of Anjou province)
Lee, J. C. – Ethnoherpetology in the Yucatán Peninsula. In Amphibians and Reptiles of the Yucatán Peninsula, by J. C. Lee. Ithaca, NY: Cornell University Press, 1996.
An example of indigenous ethnoherpetological knowledge – notes written by a Bukusu-speaking research assistant from western Kenya:
Wepukhulu, D. M. 1992. Bukusu Ethnozoology (Reptiles and Amphibians). Unpublished manuscript notes on Bukusu ethnozoology.
Ethnobiology
Herpetology | Ethnoherpetology | [
"Biology",
"Environmental_science"
] | 428 | [
"Environmental social science",
"Ethnobiology"
] |
10,743,942 | https://en.wikipedia.org/wiki/Cray%20X2 | The Cray X2 is a vector processing node for the Cray XT5h supercomputer, developed and sold by Cray Inc. and launched in 2007.
The X2, developed under the code name Black Widow, was originally expected to be a standalone supercomputer system, superseding the Cray X1 parallel vector supercomputer. However, the X2 was eventually launched as one of the four processor "blade" options for the XT5h system.
An X2 blade comprises two nodes, each with four symmetric multiprocessing vector processors and 32 or 64 GB of shared memory. Each node has a peak performance of more than 100 gigaflops. X2 processors are connected using a radix-64 "fat-tree" interconnect implemented by the YARC router ASIC. X2 blades also link into the XT5h system via its SeaStar2+ processor interconnect.
Up to 256 X2 blades can be installed in an XT5h system. The X2 processor nodes integrate with the Cray XT5h's UNICOS/lc OS, user environment, and storage subsystem, as part of the Rainier project.
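Taken together, these figures imply a rough upper bound for a fully populated X2 partition (a back-of-the-envelope estimate derived from the numbers above, not a published specification):

```latex
% 256 blades x 2 nodes per blade x (more than 100 GFLOPS per node)
\[
  256 \times 2 \times 100\ \text{GFLOPS} \;=\; 51{,}200\ \text{GFLOPS} \;\approx\; 51\ \text{TFLOPS peak}.
\]
```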
External links
Cray XT5h Supercomputer
Cray Introduces Next-Generation Supercomputers
Thinking Ahead: Future Architectures from Cray
The BlackWidow High-Radix Clos Network
Cray X2 Vector Processing Blade
X2
Vector supercomputers | Cray X2 | [
"Technology"
] | 317 | [
"Computing stubs",
"Computer hardware stubs"
] |
10,744,100 | https://en.wikipedia.org/wiki/Dihydrolipoic%20acid | Dihydrolipoic acid is an organic compound that is the reduced form of lipoic acid. This carboxylic acid features a pair of thiol groups, and therefore is a dithiol. It is optically active, but only the R-enantiomer is biochemically significant. The lipoic acid/dihydrolipoic acid pair participate in a variety of biochemical transformations.
See also
Dihydrolipoamide
Lipoamide
References
Carboxylic acids
Thiols | Dihydrolipoic acid | [
"Chemistry"
] | 106 | [
"Organic compounds",
"Carboxylic acids",
"Thiols",
"Functional groups"
] |
10,744,611 | https://en.wikipedia.org/wiki/Oecologia | Oecologia is an international peer-reviewed English-language journal published by Springer since 1968 (some articles were published in German or French until 1976). The journal publishes original research in a range of topics related to plant and animal ecology.
Oecologia has an international focus and presents original papers, methods, reviews and special topics. Papers focus on population ecology, plant-animal interactions, ecosystem ecology, community ecology, global change ecology, conservation ecology, behavioral ecology and physiological ecology.
Oecologia had an impact factor of 3.298 (2021) and is ranked 37 out of 136 in the subject category "ecology".
Editorial Board
As of December 2022, the journal has six editors in chief:
Carlos L. Ballaré (plant-microbe/plant-animal interactions), University of Buenos Aires, Argentina
Nina Farwig (terrestrial invertebrate ecology), University of Marburg, Germany
Indrikis Krams (terrestrial vertebrate ecology), University of Latvia, Latvia
Russell K. Monson (plant physiological/ecosystem ecology), University of Colorado Boulder, US
Melinda Smith (plant population/community ecology), Colorado State University, US
Joel Trexler (aquatic ecology), Florida State University, US
References
External links
Ecology journals
English-language journals
Publications with year of establishment missing
Journals published between 13 and 25 times per year | Oecologia | [
"Environmental_science"
] | 279 | [
"Environmental science journals",
"Ecology journals"
] |
10,745,656 | https://en.wikipedia.org/wiki/Adamantine%20%28veneer%29 | Adamantine is a veneer developed by The Celluloid Manufacturing Company of New York City, covered by U.S. Patent number 232,037, dated September 7, 1880, for the process of cementing a celluloid veneer or coating to a substrate such as a wood case. Adamantine veneer was made in black and white, and in colored patterns that simulated wood grain, onyx and marble.
Expensive French mantel clocks in slate, onyx or marble cases were popular in the United States in the 1860s. American clock manufacturers produced similar looking cases made of iron or wood, known as "Black Mantel Clocks", which were popular from 1880 to 1931.
Seth Thomas Clock Company purchased the right to use the adamantine veneer in 1881, which they called Marbaline. Their "Adamantine" black mantel clocks were made starting in 1882.
References
Thermoplastics | Adamantine (veneer) | [
"Physics"
] | 186 | [
"Materials stubs",
"Materials",
"Matter"
] |
5,435,664 | https://en.wikipedia.org/wiki/Bush%20tomato | Bush tomatoes are the fruit or entire plants of certain nightshade (Solanum) species native to the more arid parts of Australia. While they are quite closely related to tomatoes (Solanum lycopersicum), they might be even closer relatives of the eggplant (S. melongena), which they resemble in many details. There are 94 (mostly perennial) natives and 31 (mostly annual) introduced species in Australia.
Bush tomato plants are small shrubs whose growth is encouraged by fire and disturbance.
The fruit of a number of species have been used as food sources by Aboriginal people in the drier areas of Australia.
A number of Solanum species contain significant levels of solanine and as such are highly poisonous. It is strongly recommended that people unfamiliar with the plant do not experiment with the different species, as differentiating between them can often be difficult.
Some of the edible species are:
Solanum aviculare kangaroo apple
Solanum centrale, also known as desert raisin, bush raisin or bush sultana, or by the native name kutjera
Solanum chippendalei bush tomato, named after taxonomic botanist George Chippendale
Solanum diversiflorum bush tomato, karlumbu, pilirta, wamurla
Solanum ellipticum potato bush, very similar to Solanum quadriloculatum which is poisonous.
Solanum laciniatum kangaroo apple.
Solanum orbiculatum round-leaved solanum
Solanum phlomoides wild tomato.
In 1859, Aboriginal people were observed burning off the outer skin of S. aviculare, as eating it raw would blister their mouths. S. chippendalei is consumed by first splitting the fruit, scraping out the centre and eating the outer flesh, as the seeds and surrounding placenta are bitter. S. diversiflorum is roasted before being eaten or dried. Fruit of S. orbiculatum is edible, but the fruit of the large-leafed form may be bitter. Fruit of S. phlomoides appears to be edible after the removal of seeds and roasting or sun-drying.
Solanum aviculare contains solasodine, a steroid used in the manufacture of oral contraceptives. Solanum plastisexum, a rare species first described in 2019, is distinguished among plants for exhibiting "breeding system fluidity" – that is, it has no stable sexual expression.
References
Solanales of Australia
Solanum
Bushfood
Australian Aboriginal bushcraft
Edible fruits
Edible Solanaceae
Plant common names
Fruits originating in Australia | Bush tomato | [
"Biology"
] | 523 | [
"Plant common names",
"Common names of organisms",
"Plants"
] |
5,435,674 | https://en.wikipedia.org/wiki/Acoustic%20shadow | An acoustic shadow or sound shadow is an area through which sound waves fail to propagate, due to topographical obstructions or disruption of the waves via phenomena such as wind currents, buildings, or sound barriers.
Short-distance acoustic shadow
A short-distance acoustic shadow occurs behind a building or a sound barrier. The sound from a source is shielded by the obstruction. Due to diffraction around the object, it will not be completely silent in the sound shadow. The amplitude of the sound can be reduced considerably, however, depending on the additional distance the sound has to travel between source and receiver.
Long-distance acoustic shadow
Anomalous sound propagation in the atmosphere can occur in certain conditions of wind, temperature and pressure. Such conditions enable sound to travel in refraction channels over long distances until returning to the Earth's surface, and it thus may not be heard in intervening locations. As one website refers to it, "an acoustic shadow is to sound what a mirage is to light". For example, at the Battle of Iuka, a northerly wind prevented General Ulysses S. Grant from hearing the sounds of battle and thus from sending more troops. Many other instances of acoustic shadowing occurred during the American Civil War, including the Battles of Seven Pines, Gaines' Mill, Perryville and Five Forks. Indeed, this is addressed in Ken Burns's documentary The Civil War, which aired on PBS in September 1990. Observers of nearby battles would sometimes see the smoke and flashes of light from cannon but not hear the corresponding roar of battle, while those in more distant locations would hear the sounds distinctly.
Two diarists, John Evelyn and Samuel Pepys, heard from London the naval guns of the Four Days' Battle, which ranged over the southern North Sea between England and the Flanders coast. However, the guns were not heard at all in towns on the coast nearer to the action.
See also
for a fuller explanation of the phenomenon.
Gobo (recording)
References
Notes
Further reading
Garrison Jr., Webb. Strange Battles of the Civil War. Cumberland House, 2001.
Ross, Charles D. Civil War Acoustic Shadows. Shippensburg, PA: White Mane Publishing, 2001.
External links
Acoustic Shadows - What is an acoustic shadow and how does it work? - Lisa
Acoustic Shadow
Hearing
Waves
Acoustics | Acoustic shadow | [
"Physics"
] | 465 | [
"Physical phenomena",
"Classical mechanics",
"Acoustics",
"Waves",
"Motion (physics)"
] |
5,435,686 | https://en.wikipedia.org/wiki/List%20of%20companion%20plants | This is a list of companion plants, traditionally planted together. Many more are in the list of beneficial weeds. Companion planting is thought by its practitioners to assist in the growth of one or both plants involved in the association. Possible mechanisms include attracting beneficial insects, repelling pests, or providing nutrients such as by fixing nitrogen, shade, or support. Companion plantings can be part of a biological pest control program. A large number of companion plant associations have been proposed; only a few of these have been subjected to scientific testing. Thus where a table column for example states "Helps" or "Helped by", this is to be read as meaning that traditional companion planting involves putting the named plants in that column into an association with the plant named at the left of the row, with the intention of causing the one plant to help or be helped by the other. Mechanisms that have been scientifically verified include using strongly aromatic plants to deter pests; using companions to hide crops from pests; providing plants as nurseries for beneficial insects including predators and parasitoids; trap cropping; and allelopathy, where a plant inhibits the growth of other species.
Vegetables
Fruit
Herbs
Flowers
Other
See also
Push–pull agricultural pest management
Sustainable agriculture
Sustainable landscaping
Sustainable gardening
References
Further reading
Cunningham, Sally Jean. Great Garden Companions: A companion planting system for a beautiful, chemical-free vegetable garden. 1998.
Hylton, W. The Rodale Herb Book, Eighth Printing. Rodale Press. 1974.
External links
Bohnsack, U. Companion Planting Guide.
Companion plants by Professor Stuart B. Hill Department of Entomology Macdonald College
Cass County Extension Companion Planting List
Companion Planting Infographic
Lists of plants
Gardening lists
Sustainable agriculture
Sustainable gardening | List of companion plants | [
"Biology"
] | 351 | [
"Lists of biota",
"Lists of plants",
"Plants"
] |
5,435,721 | https://en.wikipedia.org/wiki/Distichlis%20palmeri | Distichlis palmeri is an obligate emergent (it has aerenchyma) perennial rhizomatous dioecious halophytic C4 grass in the Poaceae (Gramineae) family. D. palmeri is a saltwater marsh grass endemic to the tidal marshes of the northern part of the Gulf of California and Islands section of the Sonoran Desert. D. palmeri is not drought tolerant. It does withstand surface drying between supra tidal events because roots extend downward to more than 1 meter (3 feet) where coastal substrata is still moist.
Culms (stalks) are generally rigid and upright to about 60 cm (2 feet) and have short internodes. Longer culms become recumbent (lay down) developing young vertical culms from the nodes. These young culms may root. Acicular to linear leaves are upright and positioned alternate along the culm at nodes. Leaves excrete salts through specialized salt glands that are a component of D. palmeri leaf anatomy. These excreted surface salts are wicked away by breezes. Insects of the grasshopper family visit the plant. When maintained in a greenhouse, it is susceptible to aphid infestation.
Anemophilous flowers emerge in late winter. At anthesis, males liberate light chartreuse-colored pollen in breezes. Female flowers are panicles of alternate spikelets that present lavender-colored styles and stigmas. Kernels (seeds) are mature in early spring. Each panicle produces 20–30 mature caryopses. Kernels are similar to those of farro in color and size. Kernels of Distichlis palmeri have an indigenous history as a wild harvest grain (Nipa) consumed by the Cocopah. Nipa grain has size, nutritional value and flavor qualities similar to other cropped grains.
In the last four decades, Nipa grain production through saline agriculture (agriculture that uses saline resources to farm halophytic cash crops) of D. palmeri has been the subject of domestication studies.
In addition to research studies working to domesticate D. palmeri, the species has been used to manage farm drainage and has been proposed as a constructive use plant in remediation of saline and biosaline wastewaters and land.
Distichlis palmeri can grow in open hot full sun on saline irrigation in subtropic zones; hence, it can be cropped along warming and rising coastlines and is an active candidate for (bio)saline agriculture and cash crop development of Nipa grain.
References
External links
Herbarium photos at SERNEC
Sonoran Desert map at Desert Museum
palmeri
Chloridoideae
Halophytes
Cereals
Grasses of Mexico
Flora of the Sonoran Deserts
Flora of Northwestern Mexico
Plants used in Native American cuisine
Plants described in 1889
Taxa named by George Vasey | Distichlis palmeri | [
"Chemistry"
] | 587 | [
"Halophytes",
"Salts"
] |
5,435,743 | https://en.wikipedia.org/wiki/Zirconium%20disilicide | Zirconium disilicide is an inorganic chemical compound with the chemical formula ZrSi2, consisting of zirconium and silicon atoms. It is a ceramic, but not very hard and very brittle.
References
Transition metal silicides
Zirconium(II) compounds | Zirconium disilicide | [
"Chemistry"
] | 60 | [
"Inorganic compounds",
"Inorganic compound stubs"
] |
5,435,783 | https://en.wikipedia.org/wiki/Zirconium%20disulfide | Zirconium(IV) sulfide is the inorganic compound with the formula ZrS2. It is a violet-brown solid. It adopts a layered structure similar to that of cadmium iodide.
Like the closely related titanium disulfide, ZrS2 is prepared by heating sulfur and zirconium metal. It can be purified by vapor transport using iodine.
References
Disulfides
Zirconium(IV) compounds
Transition metal dichalcogenides | Zirconium disulfide | [
"Chemistry"
] | 102 | [
"Inorganic compounds",
"Inorganic compound stubs"
] |
5,436,316 | https://en.wikipedia.org/wiki/INTBAU | The International Network for Traditional Building, Architecture & Urbanism (INTBAU) is an international organization established in 2001. The organization arose from a research project initiated in 2000 at The Prince's Foundation for the Built Environment and undertaken by Dr Matthew Hardy, an architect and architectural historian. INTBAU is "dedicated to the support of traditional building, the maintenance of local character and the creation of better places to live", and has a Central Office located with three related charities in The Prince's Foundation for the Built Environment building in Shoreditch, London, United Kingdom.
History
Since April 2004 it has been an independent registered educational charity, first as Charity no. 1103068 and more recently as Charity no. 1132362. INTBAU remains under the patronage of the Prince of Wales, though it has now become a subsidiary company of The Prince's Foundation.
Charter
INTBAU's work is guided by its charter, the founding document of the organization:
"The International Network for Traditional Building, Architecture & Urbanism is an active network of individuals and institutions dedicated to the creation of humane and harmonious buildings and places which respect local traditions.
Traditions allow us to recognize the lessons of history, enrich our lives and offer our inheritance to the future. Local, regional and national traditions provide the opportunity for communities to retain their individuality with the advance of globalization. Through tradition we can preserve our sense of identity and counteract social alienation. People must have the freedom to maintain their traditions.
Traditional buildings and places maintain a balance with nature and society that has been developed over many generations. They enhance our quality of life and are a proper reflection of modern society. Traditional buildings and places can offer a profound modernity beyond novelty and look forward to a better future.
INTBAU brings together those who design, make, maintain, study or enjoy traditional building, architecture and places. We will gain strength, significance and scholarship by association, action and the dissemination of our principles."
Chapters
The organization now has chapters (regional sub-groups) in Albania, Afghanistan, Australia, Bangladesh, Canada, China, Costa Rica, Cuba, Cyprus, Czechia, Estonia, Ethiopia, Finland, Germany, India, Iran, Ireland, Italy, Malaysia, Mexico, Montenegro, the Netherlands, New Zealand, Nigeria, Pakistan, the Philippines, Poland, Portugal, Qatar, Romania, Russia, Serbia, Spain, Sweden, Turkey, Ukraine and the USA. Each chapter signs a "chapter agreement" - a kind of franchise document - with the College of Chapters, the central decision-making body of the international organization. Chapters are then free to undertake their own projects subject to an allocation of central office resources and time, approved by the College of Chapters. Projects are generally initiated by Chapters or members and resourced locally in line with the organization's overall environmental and social focus.
Education
INTBAU and its chapters are involved in the organization of a series of educational initiatives that promote traditional methods of designing and building, including:
ICTP
INTBAU operates the INTBAU College of Traditional Practitioners (ICTP), a peak peer-reviewed professional organization for architects, artists, academics and others working in traditional styles. Members must have produced at least 5 years of work of the "highest standard" and pass an entry examination by portfolio. There are currently 50 members of the ICTP.
Journal of Traditional Building, Architecture and Urbanism
The Journal of Traditional Building, Architecture and Urbanism is a magazine aimed at providing a better knowledge of the traditional constructive cultures of the various regions of the world. It includes original academic articles, peer-review publications and follows all the usual practices of scientific journals. It is organized by the Spanish Chapter of INTBAU, together with the Rafael Manzano Prize through the financial support of the Richard H. Driehaus Charitable Trust, and is a trilingual publication, published in English, Spanish and Portuguese.
See also
Driehaus Architecture Prize
Traditional architecture
Architectural Uprising
References
External links
INTBAU homepage
INTBAU archive 2001-2010
Architecture organisations based in the United Kingdom
Vernacular architecture
New Urbanism
Cultural heritage organizations
Year of establishment missing
New Classical architecture
Architecture groups
Architecture organizations | INTBAU | [
"Engineering"
] | 833 | [
"Architecture organizations",
"Architecture"
] |
5,436,452 | https://en.wikipedia.org/wiki/Cerebro%27s%20X-Men | Cerebro's X-Men are a team of supervillains appearing in American comic books published by Marvel Comics. They are a nanotechnology version of the X-Men created by Cerebro when the supercomputer briefly goes rogue.
This team was created and designed by the Spanish artist Carlos Pacheco, who also drew them for the cover of Uncanny X-Men No. 360 (1998). The characters appeared in two issues of the Uncanny X-Men series and one issue of the X-Men series. The team's primary purpose is to help Cerebro catalog all mutants on Earth, but Cerebro intends to cryogenically preserve the mutants it captures and its team kidnaps and fights other mutants.
Publication history
Cerebro's X-Men featured in three issues:
Uncanny X-Men #360 (October 1998)
This issue features the introduction of Cerebro Prime disguised as Professor X and follows the creation of the fake X-Men team. It also features their kidnapping of Kitty Pryde and the team's first fight with the real X-Men, who they almost defeat.
X-Men II #80 (October 1998)
This issue follows Shadowcat's escape from Cerebro's X-Men and another fight with the real X-Men. Cerebro's X-Men take over the Cape Citadel base to try to stop a rocket with anti-mutant technology on board so they can take it for Cerebro's use; meanwhile, the real X-Men are trying to stop both the rocket and this theft. When Cerebro's X-Men lose the fight, Cerebro turns them all into energy and teleports away. When the team admits they aren't sure of their purpose anymore, Cerebro assimilates them into itself.
Uncanny X-Men #364 (January 1999)
This issue follows Cerebro's destruction of his X-Men team and their Florida base after they have been detected by human agencies.
Fictional team history
Cerebro, a device created by X-Men founder Charles Xavier to help locate mutants with the X-Gene, is confiscated by the mysterious Bastion during Operation: Zero Tolerance. Bastion attempts to access secret files and operate Cerebro, but the supercomputer activates a computer virus to erase this information rather than letting it be stolen. However, the combination of Cerebro's power with Bastion's nanotechnology gives the supercomputer sentience. Cerebro creates a body for itself, escapes Bastion's headquarters, and tries to follow its original programming literally: find, catalog, and register mutants. However, a large part of its plan to catalog mutants is to capture and store them in cryogenic chambers for further study. Cerebro begins its new mission by creating its own version of the X-Men, Professor X's team. It manages this by using Bastion's nanotechnology to combine the profiles and powers of several mutants in Professor X's database to create new mutants. Then Cerebro takes on Xavier's appearance, posing as the renowned mutant leader to invite each new mutant to join its team under the guise of "The Founder", and sets them a mission to kidnap Peter Corbeau, a scientist working on mutant defense technology for the US government. After Corbeau is captured, Cerebro's X-Men are then sent to find Kitty Pryde/Shadowcat, whom the disguised Cerebro asks to "cure" him. Shadowcat manages to phase out Bastion's virus, though she doesn't know exactly what she's done because she thinks Cerebro is the real Professor X. Cerebro then orders its X-Men team to place her in cryogenic storage, so her DNA will be preserved for future study.
Shadowcat manages to escape and finds Wolverine, Rogue, Storm, Colossus, Nightcrawler, and Marrow, who had been searching for her. Eventually they encounter Cerebro and its X-Men, who are attempting to destroy the government's mutant tracking satellite, regardless of the potential threat to human life once its radioactive core is breached. Wolverine's enhanced senses confirm Shadowcat's suspicions that this "Xavier" is an impostor, and the real X-Men realize that if Corbeau's satellite is launched, then Cerebro won't be able to collect mutants before humans find them. The real X-Men fight and defeat Cerebro's X-Men, preventing the satellite from exploding.
Cerebro escapes the lost battle and reveals their true origins to its X-Men team before deeming them failures and absorbing them into its own body. By doing this, it becomes an even more powerful cybernetic monster, and only the real Charles Xavier is able to subdue Cerebro, purging its systems and destroying the superpowered robotic body it had created.
Roster
Cerebro/The Founder – The X-Men's mutant detecting computer, given physical form when a nano-tech computer virus corrupted its systems.
Crux – Cristal Lemieux is a French ice skater with a cocky attitude and the ability to fire blasts of flame or ice. When Crux uses her powers, the right half of her body turns into fire while the left turns into ice. Crux was patterned after the powers of Iceman and Sunfire, and the personality of Jubilee. Carlos Pacheco wanted to call her Geisher, a pun on geyser and geisha, because she was originally conceived as Japanese, not French. She was designed after Sunfire, Iceman, Storm, and Avalanche. Geisher was supposed to have the ability to manipulate the four elements, but her look referenced only fire and ice, like Equinox, a Marvel villain created by Len Wein and Gil Kane.
Grey King – Addison Falk is a super-intelligent telepath with the ability to psionically neutralize mutant powers. He also possesses telekinetic abilities and is able to move, lift, and manipulate matter with his thoughts. When he utilizes his telekinesis, he was surrounded by a corona of psychic fire in the shape of a bird (like the Phoenix Force). Grey King also wore a Phoenix-like costume and had red hair. It was revealed Cerebro patterned the Grey King after the mutant power template of Jean Grey / Phoenix, (formerly the Black Queen of the Hellfire Club's inner circle), and the personality template of Magneto (who had once forged an alliance with that same organization, becoming the White King). Carlos Pacheco has said that Grey King was designed after Jean Grey and Sebastian Shaw, the Black King, and his name is a pun on theirs.
Landslide – Lee Broder is a Southern redneck with ape-like strength and agility, sporting big hands and big feet. Carlos Pacheco designed this character by combining Blob and Beast. Landslide also acquired Sabretooth's powers, as he closely resembles him.
Mercury – Not to be confused with Cessily Kincaid of the New X-Men, Mercury is a hitman who can turn his skin into metal and has razor-sharp claws on his fingers. Mercury is patterned after the powers and personality of Colossus and Wolverine. Carlos Pacheco initially combined Magneto and Colossus to create Silverface who could mold his own body. The character finally appeared as Mercury in the comics.
Rapture – Sister Joy is a blue-skinned nun with large dove-like wings, flight, and sword-fighting skills. Rapture is shown to be in love with Grey King. Rapture had red hair and a skull charm on her costume, showing that she was created from the power template of Archangel, the appearance of Mystique, and the personality of Nightcrawler. Carlos Pacheco created the character mixing Mystique, Archangel, Nightcrawler, and Shadowcat; Carlos first called her Spook but finally she appeared as Rapture.
Chaos/Kaos – Dan Dash is an autistic man believed to be the brother of a flight show pilot named Dwayne with the power to fire concentric waves of explosive plasma from his glowing red left eye. Chaos was patterned after the powers of both Cyclops and Havok. Carlos Pacheco named him after Kaos, the Spanish translation for Havok. The correct spelling of Kaos is caos, which means chaos in English.
References
External links
Fiction about nanotechnology
Comics characters introduced in 1998 | Cerebro's X-Men | [
"Materials_science"
] | 1,767 | [
"Fiction about nanotechnology",
"Nanotechnology"
] |
5,436,866 | https://en.wikipedia.org/wiki/Hamaker%20theory | After the explanation of van der Waals forces by Fritz London, several scientists soon realised that his definition could be extended from the interaction of two molecules with induced dipoles to macro-scale objects by summing all of the forces between the molecules in each of the bodies involved. The theory is named after H. C. Hamaker, who derived the interaction between two spheres, a sphere and a wall, and presented a general discussion in a heavily cited 1937 paper.
The interaction of two bodies is then treated as the pairwise interaction of a set of N molecules at positions R_i (i = 1, 2, ..., N). The distance between molecules i and j is then r_ij = |R_i − R_j|.
The interaction energy of the system is taken to be the sum over all pairs of molecules, E = Σ_{i<j} E_ij(r_ij),
where E_ij is the interaction of molecules i and j in the absence of the influence of other molecules.
The theory is, however, only an approximation, since it assumes that the pair interactions can be treated independently; the theory must also be adjusted to take quantum perturbation theory into account.
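The pairwise summation above is straightforward to sketch numerically. The following is a minimal illustration rather than a reproduction of Hamaker's calculation: it assumes a simple London dispersion pair potential E_ij = −C/r_ij^6, and the coordinates and the constant C are arbitrary placeholder values.

```python
# Minimal sketch of Hamaker-style pairwise summation (illustrative only).
# Assumes a London dispersion pair potential E_ij = -C / r_ij**6; the
# coordinates and the constant C below are made-up placeholder values.
import numpy as np

def pairwise_interaction_energy(positions_a, positions_b, C=1.0e-77):
    """Sum the pair interactions between every molecule in body A and body B."""
    energy = 0.0
    for r_i in positions_a:
        for r_j in positions_b:
            r_ij = np.linalg.norm(r_i - r_j)   # distance |R_i - R_j|
            energy += -C / r_ij**6             # London dispersion term
    return energy

# Two small "bodies", each a cluster of molecules (positions in metres),
# separated by roughly 5 nm along the z axis.
body_a = np.random.rand(50, 3) * 1e-9
body_b = np.random.rand(50, 3) * 1e-9 + np.array([0.0, 0.0, 5e-9])
print(pairwise_interaction_energy(body_a, body_b))
```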
References
Physical chemistry
Intermolecular forces | Hamaker theory | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 217 | [
"Molecular physics",
"Applied and interdisciplinary physics",
"Materials science",
"Intermolecular forces",
"nan",
"Physical chemistry",
"Physical chemistry stubs"
] |
5,437,698 | https://en.wikipedia.org/wiki/James%20K.%20Coyne%20III | James Kitchenman Coyne III (born November 17, 1946) is an American businessman and former politician. From 1981 to 1983, he served one term as a Republican member of the U.S. House of Representatives from Pennsylvania.
Biography
Coyne was born in Farmville, Virginia, and raised in Abington, Pennsylvania, the son of James Kitchenman Coyne Jr. and Pearl Beatrice Black. He graduated from Yale University in 1968 and received an M.B.A. from Harvard Business School in 1970. He was a lecturer at the Wharton School at the University of Pennsylvania from 1974 to 1979 and was president of the George S. Coyne Chemical Corp., Inc., from 1971 to 1981. Coyne was the supervisor of Upper Makefield Township in 1980.
Congress
He was elected in 1980 as a Republican to the 97th Congress. He was an unsuccessful candidate for reelection in 1982.
Later career
After his term in Congress, he served from 1983 to 1985 as a special assistant to President Ronald Reagan and as director of the White House Office of Private Sector Initiatives, in 1985–1986 as chief executive officer of the American Consulting Engineers Council, and as president of the American Tort Reform Association from 1986 to 1988. In 1987, he founded Americans to Limit Congressional Terms.
Coyne co-authored (with John Fund) "Cleaning House," which promoted state referendums to limit the terms of Members of Congress. In 1994 he was chosen president of the National Air Transportation Association, where he served until 2012.
He married Helen Biddle Mercer on October 24, 1970. They have three children, Alexander Black Coyne (born 1977), Katherine Mercer Coyne (born 1980) and Michael Atkinson Coyne (born 1982). He is a great-great-grandson of Philadelphia manufacturer James Kitchenman.
Sources
External links
1946 births
Living people
Businesspeople from Pennsylvania
Harvard Business School alumni
People from Abington Township, Montgomery County, Pennsylvania
Politicians from Bucks County, Pennsylvania
People in the chemical industry
Republican Party members of the United States House of Representatives from Pennsylvania
Yale University alumni
Members of Congress who became lobbyists
20th-century members of the United States House of Representatives | James K. Coyne III | [
"Chemistry"
] | 435 | [
"People in the chemical industry"
] |
5,437,798 | https://en.wikipedia.org/wiki/Zinc%20antimonide | Zinc antimonide (ZnSb), (Zn3Sb2), (Zn4Sb3) is an inorganic chemical compound. The Zn-Sb system contains six intermetallics. Like indium antimonide, aluminium antimonide, and gallium antimonide, it is a semiconducting intermetallic compound. It is used in transistors, infrared detectors and thermal imagers, as well as magnetoresistive devices.
History of zinc–antimony alloys and zinc antimonide
The first reported use of zinc-antimony alloys was in the original work of T. J. Seebeck on thermoelectricity, a scientist who would then give his name to the Seebeck effect. By the 1860s, Moses G. Farmer, an American inventor, had developed the first high powered thermoelectric generator based on using a zinc-antimony alloy with a composition very close to stoichiometric ZnSb. He showed this generator at the 1867 Paris Exposition where it was carefully studied and copied (with minor modifications) by a number of people including Clamond. Farmer finally received the patent on his generator in 1870. George H. Cove patented a thermoelectric generator based on a Zn-Sb alloy in the early 1900s. His patent claimed that the voltage and current for six "joints" was 3V at 3A. This was a far higher output than would be expected from a thermoelectric couple, and was possibly the first demonstration of the thermophotovoltaic effect, as the bandgap for ZnSb is 0.56eV, which under ideal conditions could yield close to 0.5V per diode. The next researcher to work with the material was Mária Telkes while she was at Westinghouse in Pittsburgh during the 1930s. Interest was revived again with the discovery of the higher bandgap Zn4Sb3 material in the 1990s.
References
zinc
antimonide
II-V semiconductors
II-V compounds | Zinc antimonide | [
"Chemistry"
] | 421 | [
"Inorganic compounds",
"II-V semiconductors",
"II-V compounds",
"Semiconductor materials",
"Inorganic compound stubs"
] |
5,438,016 | https://en.wikipedia.org/wiki/Zinc%20chlorate | Zinc chlorate (Zn(ClO3)2) is an inorganic chemical compound.
References
zinc
chlorate | Zinc chlorate | [
"Chemistry"
] | 27 | [
"Chlorates",
"Inorganic compounds",
"Inorganic compound stubs",
"Salts"
] |
5,438,079 | https://en.wikipedia.org/wiki/Zinc%20molybdate | Zinc molybdate is an inorganic compound with the formula ZnMoO4. It is used as a white pigment, which is also a corrosion inhibitor. A related pigment is sodium zinc molybdate, Na2Zn(MoO4)2. The material has also been investigated as an electrode material.
In terms of its structure, the Mo(VI) centers are tetrahedral and the Zn(II) centers are octahedral.
Safety
The LD50 (oral, rats) is 11,500 mg/kg. While highly soluble molybdates such as sodium molybdate are toxic in higher doses, zinc molybdate is essentially non-toxic because of its insolubility in water. Molybdates possess a lower toxicity than chromates or lead salts and are therefore seen as an alternative to these salts for corrosion inhibition.
References
External links
Improved corrosion-inhibiting pigments
zinc
molybdate | Zinc molybdate | [
"Chemistry"
] | 202 | [
"Inorganic compounds",
"Inorganic compound stubs"
] |
5,438,215 | https://en.wikipedia.org/wiki/Zinc%20phosphate | Zinc phosphate is an inorganic compound with the formula Zn3(PO4)2. This white powder is widely used as a corrosion resistant coating on metal surfaces either as part of an electroplating process or applied as a primer pigment (see also red lead). It has largely displaced toxic materials based on lead or chromium, and by 2006 it had become the most commonly used corrosion inhibitor. Zinc phosphate coats better on a crystalline structure than bare metal, so a seeding agent is often used as a pre-treatment. One common agent is sodium pyrophosphate.
Minerals
Natural forms of zinc phosphate include minerals hopeite and parahopeite. A somewhat similar mineral is natural hydrous zinc phosphate called tarbuttite, Zn2(PO4)(OH). Both are known from oxidation zones of Zn ore beds and were formed through oxidation of sphalerite by the presence of phosphate-rich solutions. The anhydrous form has not yet been found naturally.
Use
Dentistry
Zinc phosphate cement is the classic dental cement par excellence. It is commonly used for luting permanent metal and zirconium dioxide restorations and as a base for dental restorations. Zinc phosphate cement is used for cementation of inlays, crowns, bridges, and orthodontic appliances and occasionally as a temporary restoration.
It is prepared by mixing zinc oxide (ZnO) and magnesium oxide (MgO) powders with a liquid consisting principally of phosphoric acid, water, and buffers. It is the standard cement to measure against. It has the longest track record of use in dentistry.
In recent years, newer adhesive cements on a different chemical basis have been added (e.g. glass ionomer cement), but they have not displaced the classic phosphate cement, which continues to hold its own in the dental market with its simple and safe processing and good price-performance ratio. Zinc phosphate cement has only a low flexural strength and it does not stick to the dentin (it is a cement and not an adhesive).
Zinc phosphate cement has high compressive strength, low film thickness, minimal setting shrinkage and thermal expansion and is biocompatible. Compared to other luting materials such as glass ionomer cement or composites, zinc phosphate cement is less sensitive to moisture. The excess produced during the cementation of dental restorations can be easily removed.
Zinc phosphate cement has a high adhesive capacity to the tooth, metal, or even zirconium oxide.
Despite its strong acidity, zinc phosphate cement does not damage the pulp (or the tooth nerve) during the setting phase. It is therefore used as liner to protect the pulp under composite fillings.
Well-known dental brands in Germany and the world for zinc phosphate cement are Harvard cement and Hoffmann's cement. Otto Hoffmann invented this cement in 1892 and had it patented. Until the beginning of the First World War, he had a worldwide monopoly position with his cement.
References
External links
Phosphates
phosphate
Inorganic pigments
Corrosion inhibitors
Dental materials | Zinc phosphate | [
"Physics",
"Chemistry"
] | 624 | [
"Dental materials",
"Inorganic compounds",
"Process chemicals",
"Inorganic compound stubs",
"Salts",
"Inorganic pigments",
"Materials",
"Phosphates",
"Corrosion inhibitors",
"Matter"
] |
5,438,449 | https://en.wikipedia.org/wiki/ScaLAPACK | The ScaLAPACK (or Scalable LAPACK) library includes a subset of LAPACK routines redesigned for distributed memory MIMD parallel computers. It is currently written in a Single-Program-Multiple-Data style using explicit message passing for interprocessor communication. It assumes matrices are laid out in a two-dimensional block cyclic decomposition.
ScaLAPACK is designed for heterogeneous computing and is portable on any computer that supports MPI or PVM.
ScaLAPACK depends on PBLAS operations in the same way LAPACK depends on BLAS.
As of version 2.0, the code base directly includes PBLAS and BLACS and has dropped support for PVM.
After two decades of operation, a new library was created to replace ScaLAPACK, which was not suitable for modern accelerated architectures. Slate is written in C++ and was designed primarily to serve as a dense linear algebra library to the United States Department of Energy and to the high-performance computing community at large.
Examples
Programming with Big Data in R, an extension to R, fully utilizes ScaLAPACK and two-dimensional block cyclic decomposition for big-data statistical analysis.
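To make the layout concrete, here is a minimal sketch of how a two-dimensional block-cyclic decomposition assigns matrix elements to a process grid. The block size and the 2×3 process grid are arbitrary example values, and the code does not call ScaLAPACK, PBLAS, or BLACS; it only illustrates the ownership rule.

```python
# Sketch of a two-dimensional block-cyclic decomposition, the matrix layout
# ScaLAPACK assumes. Block size and process-grid shape are arbitrary example
# values; no ScaLAPACK/PBLAS/BLACS library calls are made here.
def owner(i, j, block=2, prow=2, pcol=3):
    """Return the (process-row, process-col) owning global matrix element (i, j)."""
    return ((i // block) % prow, (j // block) % pcol)

n = 8
grid = [[owner(i, j) for j in range(n)] for i in range(n)]
for row in grid:
    print(row)   # each entry is the process coordinate that stores that element
```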
References
External links
The ScaLAPACK Project on Netlib.org
Numerical software
Computer libraries | ScaLAPACK | [
"Mathematics",
"Technology"
] | 248 | [
"IT infrastructure",
"Numerical software",
"Computer libraries",
"Mathematical software"
] |
5,438,469 | https://en.wikipedia.org/wiki/Desire%20path | A desire path, often referred to as a desire line in transportation planning and also known by various other names, is an unplanned small trail created as a consequence of mechanical erosion caused by human or animal traffic. The path usually represents the shortest or the most easily navigated route between an origin and destination, and the width and severity of its surface erosion are often indicators of the traffic level it receives.
An early documented example is Broadway in New York City, which follows the Wecquaesgeek trail which predates American colonization.
Desire paths typically emerge as convenient shortcuts where more deliberately constructed paths take a longer or more circuitous route, have gaps, or are non-existent. Once a path has been trodden out through the natural vegetation, subsequent traffic tends to follow that visibly existing route (as it is more convenient than carving out a new path by oneself), and the repeated trampling will further erode away both the remaining groundcover and the soil quality that allows easy revegetation. Eventually, a clearly visible and easily passable path emerges that humans and animals alike tend to prefer.
Parks and nature areas
Desire paths sometimes cut through sensitive habitats and exclusion zones, threatening wildlife and park security. However, they also provide park management with an indicator of activity concentration. In Yosemite National Park, the National Park Service uses these indicators to help guide its management plan.
Trampling studies have consistently documented that impacts on soil and vegetation occur rapidly with initial use of desire paths. As few as 15 passages over a site can be enough to create a distinct trail, the existence of which then attracts further use. This finding contributed to the creation of the Leave No Trace education program, which instructs travelers in nature areas to either stay on designated trails or, when off trail, distribute their travel lines so as to not inadvertently create new trails in unsustainable locations.
Land managers have devised a variety of techniques to block the creation of desire paths, including fences, dense vegetation, and signage, though none are foolproof. Modern trail design attempts to avoid the need for barriers and restrictions, by aligning trail layout and user desire through physical design and persuasive outreach.
Accommodation
Landscapers sometimes accommodate desire paths by paving them, thereby integrating them into the official path network rather than blocking them. Sometimes, land planners have deliberately left land fully or partially unpathed, waiting to see what desire paths are created, and then paving those. In Finland, planners are known to visit parks immediately after the first snowfall, when the existing paths are not visible. The naturally chosen desire paths, marked by footprints, can then be used to guide the routing of new purpose-built paths.
Other uses of the concept
Images of desire paths have been employed as a metaphor for anarchism, intuitive design, individual creativity, and the wisdom of crowds.
In urban planning, desire paths have been used to analyze traffic patterns for a given mode of travel. For example, the 1959 Chicago Area Transportation Study used desire paths to illustrate commuter choices regarding railroad and subway trips.
In software design, the term is used to describe users' wide adoption of the same methods to overcome limitations in the software. For example, X (Twitter) "paved" a number of desire paths by integrating them into the service, including @ replies, hashtags, and group discussions.
See also
Sneckdown
Wayfinding
Notes
References
External links
Wordspy: Desire Line
Desire Paths
Desire Path subreddit
Tom Hulme's TED Talk on using desire paths for better design and user experience
Cycling infrastructure
Footpaths
Garden features
Landscape architecture
Parks
Pedestrian infrastructure
Psychogeography
Trails
Transportation planning
Types of thoroughfares
Urban design | Desire path | [
"Engineering"
] | 748 | [
"Landscape architecture",
"Architecture"
] |
5,438,882 | https://en.wikipedia.org/wiki/Ytterbium%28II%29%20chloride | Ytterbium(II) chloride (YbCl2) is an inorganic chemical compound. It was first prepared in 1929 by W. K. Klemm and W. Schuth, by reduction of ytterbium(III) chloride, YbCl3, using hydrogen.
2 YbCl3 + H2 → 2 YbCl2 + 2 HCl
Like other Yb(II) compounds and other low-valence rare earth compounds, it is a strong reducing agent. It is unstable in aqueous solution, reducing water to hydrogen gas.
References
Chlorides
Lanthanide halides
Ytterbium(II) compounds | Ytterbium(II) chloride | [
"Chemistry"
] | 137 | [
"Salts",
"Chlorides",
"Inorganic compounds",
"Inorganic compound stubs"
] |
5,438,924 | https://en.wikipedia.org/wiki/Algebraic%20operation | In mathematics, a basic algebraic operation is any one of the common operations of elementary algebra, which include addition, subtraction, multiplication, division, raising to a whole number power, and taking roots (fractional power). These operations may be performed on numbers, in which case they are often called arithmetic operations. They may also be performed, in a similar way, on variables, algebraic expressions, and more generally, on elements of algebraic structures, such as groups and fields. An algebraic operation may also be defined more generally as a function from a Cartesian power of a given set to the same set.
The term algebraic operation may also be used for operations that may be defined by compounding basic algebraic operations, such as the dot product. In calculus and mathematical analysis, algebraic operation is also used for the operations that may be defined by purely algebraic methods. For example, exponentiation with an integer or rational exponent is an algebraic operation, but not the general exponentiation with a real or complex exponent. Also, the derivative is an operation that is not algebraic.
Notation
Multiplication symbols are usually omitted, and implied, when there is no operator between two variables or terms, or when a coefficient is used. For example, 3 × x2 is written as 3x2, and 2 × x × y is written as 2xy. Sometimes, multiplication symbols are replaced with either a dot or center-dot, so that x × y is written as either x . y or x · y. Plain text, programming languages, and calculators also use a single asterisk to represent the multiplication symbol, and it must be explicitly used; for example, 3x is written as 3 * x.
Rather than using the ambiguous division sign (÷), division is usually represented with a vinculum, a horizontal fraction bar, with the dividend written above the line and the divisor below it. In plain text and programming languages, a slash (also called a solidus) is used, e.g. 3 / (x + 1).
Exponents are usually formatted using superscripts, as in x2. In plain text, the TeX mark-up language, and some programming languages such as MATLAB and Julia, the caret symbol, ^, represents exponents, so x2 is written as x ^ 2. In programming languages such as Ada, Fortran, Perl, Python and Ruby, a double asterisk is used, so x2 is written as x ** 2.
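As a concrete illustration of the conventions above, here is how these expressions might be written in Python, one of the languages mentioned; the value assigned to x is arbitrary.

```python
# The notations described above, written in Python syntax.
x = 4
print(3 * x)        # multiplication written explicitly with an asterisk
print(3 / (x + 1))  # division written with a slash
print(x ** 2)       # exponentiation written with a double asterisk (x squared)
```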
The plus–minus sign, ±, is used as a shorthand notation for two expressions written as one, representing one expression with a plus sign, the other with a minus sign. For example, y = x ± 1 represents the two equations y = x + 1 and y = x − 1. Sometimes, it is used for denoting a positive-or-negative term such as ±x.
Arithmetic vs algebraic operations
Algebraic operations work in the same way as arithmetic operations, as can be seen in the table below.
Note: the choice of letters used in the examples is arbitrary; the examples would have been equally valid if other letters had been used.
Properties of arithmetic and algebraic operations
See also
Algebraic expression
Algebraic function
Elementary algebra
Factoring a quadratic expression
Order of operations
Notes
References
Elementary algebra
Elementary mathematics | Algebraic operation | [
"Mathematics"
] | 662 | [
"Elementary mathematics",
"Algebra",
"Elementary algebra"
] |
5,438,926 | https://en.wikipedia.org/wiki/Programming%20language%20specification | In computer programming, a programming language specification (or standard or definition) is a documentation artifact that defines a programming language so that users and implementors can agree on what programs in that language mean. Specifications are typically detailed and formal, and primarily used by implementors, with users referring to them in case of ambiguity; the C++ specification is frequently cited by users, for instance, due to the complexity. Related documentation includes a programming language reference, which is intended expressly for users, and a programming language rationale, which explains why the specification is written as it is; these are typically more informal than a specification.
Standardization
Not all major programming languages have specifications, and languages can exist and be popular for decades without a specification. A language may have one or more implementations, whose behavior acts as a de facto standard, without this behavior being documented in a specification. Perl (through Perl 5) is a notable example of a language without a specification, while PHP was only specified in 2014, after being in use for 20 years. A language may be implemented and then specified, or specified and then implemented, or these may develop together, which is usual practice today. This is because implementations and specifications provide checks on each other: writing a specification requires precisely stating the behavior of an implementation, and an implementation checks that a specification is possible, practical and consistent. Writing a specification before an implementation has largely been avoided since ALGOL 68 (1968), due to the unexpected implementation difficulties that arise when implementation is deferred. However, languages are still occasionally implemented and gain popularity without a formal specification: an implementation is essential for use, while a specification is desirable but not essential (informally, "code talks").
Forms
A programming language specification can take several forms, including the following:
An explicit definition of the syntax and semantics of the language. While syntax is commonly specified using a formal grammar, semantic definitions may be written in natural language (e.g., the approach taken for the C language), or a formal semantics (e.g., the Standard ML and Scheme specifications). A notable example is the C language, which gained popularity without a formal specification, instead being described as part of a book, The C Programming Language (1978), and only much later being formally standardized in ANSI C (1989).
A description of the behavior of a compiler (sometimes called "translator") for the language (e.g., the C++ language and Fortran). The syntax and semantics of the language has to be inferred from this description, which may be written in natural or a formal language.
A model implementation, sometimes written in the language being specified (e.g., Prolog). The syntax and semantics of the language are explicit in the behavior of the model implementation.
Syntax
The syntax of a programming language represents the definition of acceptable words, i.e., formal parameters and rules upon which to decide whether a given code is valid in respect to the language. On that note, the language syntax usually consists of a combination of the following three construction components:
A specific character set (non-empty, finite set of symbols)
Regular expressions describing its lexemes (for alphabet-wise tokenisation)
A Context-free grammar which describes how the lexemes may be combined in order to form a correct program
A syntax specification generally includes a natural-language description to aid comprehensibility. However, a formal representation of the components outlined above is usually included as well, since it supports the implementation and validation of the language and its concepts.
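As a toy illustration of these three components, the sketch below defines a hypothetical mini-language of integer additions; the character set, token pattern, and grammar are invented for illustration and do not correspond to any real language specification.

```python
# Toy illustration of the three syntax components for a hypothetical
# mini-language of integer additions such as "1+20+3".
import re

CHARSET = set("0123456789+")                 # 1. finite character set
TOKEN_RE = re.compile(r"\d+|\+")             # 2. regular expression for lexemes

# 3. context-free grammar (informally):  Expr -> NUM | NUM "+" Expr
def parse_expr(tokens, pos=0):
    assert tokens[pos].isdigit(), "expected a number"
    if pos + 1 < len(tokens) and tokens[pos + 1] == "+":
        return ("+", int(tokens[pos]), parse_expr(tokens, pos + 2))
    return int(tokens[pos])

source = "1+20+3"
assert set(source) <= CHARSET                # the program uses only allowed characters
tokens = TOKEN_RE.findall(source)            # tokenisation: ['1', '+', '20', '+', '3']
print(parse_expr(tokens))                    # ('+', 1, ('+', 20, 3))
```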
Semantics
Formulating a rigorous semantics of a large, complex, practical programming language is a daunting task even for experienced specialists, and the resulting specification can be difficult for anyone but experts to understand. The following are some of the ways in which programming language semantics can be described; all languages use at least one of these description methods, and some languages combine more than one.
Natural language: Description by human natural language.
Formal semantics: Description by mathematics.
Reference implementations: Description by computer program.
Test suites: Description by examples of programs and their expected behaviors. While few language specifications start off in this form, the evolution of some language specifications has been influenced by the semantics of a test suite (e.g., in the past the specification of Ada has been modified to match the behavior of the Ada Conformity Assessment Test Suite).
Natural language
Most widely used languages are specified using natural language descriptions of their semantics. This description usually takes the form of a reference manual for the language. These manuals can run to hundreds of pages, e.g., the print version of The Java Language Specification, 3rd Ed. is 596 pages long.
The imprecision of natural language as a vehicle for describing programming language semantics can lead to problems with interpreting the specification. For example, the semantics of Java threads were specified in English, and it was later discovered that the specification did not provide adequate guidance for implementors.
Formal semantics
Formal semantics are grounded in mathematics. As a result, they can be more precise and less ambiguous than semantics given in natural language. However, supplemental natural language descriptions of the semantics are often included to aid understanding of the formal definitions. For example, The ISO Standard for Modula-2 contains both a formal and a natural language definition on opposing pages.
Programming languages whose semantics are described formally can reap many benefits. For example:
Formal semantics enable mathematical proofs of program correctness;
Formal semantics facilitate the design of type systems, and proofs about the soundness of those type systems;
Formal semantics can establish unambiguous and uniform standards for implementations of a language.
Automatic tool support can help to realize some of these benefits. For example, an automated theorem prover or theorem checker can increase a programmer's (or language designer's) confidence in the correctness of proofs about programs (or the language itself). The power and scalability of these tools varies widely: full formal verification is computationally intensive, rarely scales beyond programs containing a few hundred lines and may require considerable manual assistance from a programmer; more lightweight tools such as model checkers require fewer resources and have been used on programs containing tens of thousands of lines; many compilers apply static type checks to any program they compile.
Reference implementation
A reference implementation is a single implementation of a programming language that is designated as authoritative. The behavior of this implementation is held to define the proper behavior of a program written in the language. This approach has several attractive properties. First, it is precise, and requires no human interpretation: disputes as to the meaning of a program can be settled simply by executing the program on the reference implementation (provided that the implementation behaves deterministically for that program).
On the other hand, defining language semantics through a reference implementation also has several potential drawbacks. Chief among them is that it conflates limitations of the reference implementation with properties of the language. For example, if the reference implementation has a bug, then that bug must be considered to be an authoritative behavior. Another drawback is that programs written in this language may rely on quirks in the reference implementation, hindering portability across different implementations.
Nevertheless, several languages have successfully used the reference implementation approach. For example, the Perl interpreter is considered to define the authoritative behavior of Perl programs. In the case of Perl, the open-source model of software distribution has contributed to the fact that nobody has ever produced another implementation of the language, so the issues involved in using a reference implementation to define the language semantics are moot.
Test suite
Defining the semantics of a programming language in terms of a test suite involves writing a number of example programs in the language, and then describing how those programs ought to behave—perhaps by writing down their correct outputs. The programs, plus their outputs, are called the "test suite" of the language. Any correct language implementation must then produce exactly the correct outputs on the test suite programs.
The chief advantage of this approach to semantic description is that it is easy to determine whether a language implementation passes a test suite. The user can simply execute all the programs in the test suite, and compare the outputs to the desired outputs. However, when used by itself, the test suite approach has major drawbacks as well. For example, users want to run their own programs, which are not part of the test suite; indeed, a language implementation that could only run the programs in its test suite would be largely useless. But a test suite does not, by itself, describe how the language implementation should behave on any program not in the test suite; determining that behavior requires some extrapolation on the implementor's part, and different implementors may disagree. In addition, it is difficult to use a test suite to test behavior that is intended or allowed to be nondeterministic.
Therefore, in common practice, test suites are used only in combination with one of the other language specification techniques, such as a natural language description or a reference implementation.
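As a minimal sketch of the test-suite approach, the snippet below pairs each example program with its expected output and checks an implementation against them; the interpreter name ("mylang") and the example programs are hypothetical.

```python
# Minimal sketch of specifying behaviour by test suite: each entry pairs a
# source program with its expected output. The interpreter command
# ("mylang") and the programs themselves are hypothetical.
import subprocess

TEST_SUITE = [
    ("print 1 + 2", "3\n"),
    ("print 'hi'",  "hi\n"),
]

def implementation_conforms(interpreter="mylang"):
    """Return True if the interpreter produces the expected output for every test."""
    for program, expected in TEST_SUITE:
        result = subprocess.run([interpreter], input=program,
                                capture_output=True, text=True)
        if result.stdout != expected:
            return False
    return True
```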
See also
Programming language reference
External links
Language specifications
A few examples of official or draft language specifications:
Specifications written primarily in formal mathematics:
The Definition of Standard ML, revised edition – a formal definition in an operational semantics style.
Scheme R5RS – a formal definition in a denotational semantics style
Specifications written primarily in natural language:
Algol 60 report
Ada 95 reference manual
Java language specification
Draft C++ standard
Specifications via test suite:
Ruby's de facto community-driven specification
Notes
Specification | Programming language specification | [
"Engineering"
] | 1,935 | [
"Software engineering",
"Programming language topics"
] |
5,438,948 | https://en.wikipedia.org/wiki/Substitution%20%28logic%29 | A substitution is a syntactic transformation on formal expressions.
To apply a substitution to an expression means to consistently replace its variable, or placeholder, symbols with other expressions.
The resulting expression is called a substitution instance, or instance for short, of the original expression.
Propositional logic
Definition
Where ψ and φ represent formulas of propositional logic, ψ is a substitution instance of φ if and only if ψ may be obtained from φ by substituting formulas for propositional variables in φ, replacing each occurrence of the same variable by an occurrence of the same formula. For example:
ψ: (R → S) & (T → S)
is a substitution instance of
φ: P & Q
That is, ψ can be obtained by replacing P and Q in φ with (R → S) and (T → S) respectively. Similarly:
ψ: (A ↔ A) ↔ (A ↔ A)
is a substitution instance of:
φ: (A ↔ A)
since ψ can be obtained by replacing each A in φ with (A ↔ A).
In some deduction systems for propositional logic, a new expression (a proposition) may be entered on a line of a derivation if it is a substitution instance of a previous line of the derivation. This is how new lines are introduced in some axiomatic systems. In systems that use rules of transformation, a rule may include the use of a substitution instance for the purpose of introducing certain variables into a derivation.
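A minimal sketch of uniform substitution in Python, assuming a nested-tuple representation of formulas (the representation and helper name are illustrative):

```python
# A formula is either a propositional variable (a string) or a tuple of a
# connective followed by its subformulas, e.g. ("&", "P", "Q").

def substitute(formula, mapping):
    """Uniformly replace every occurrence of each variable in `mapping`
    by its associated formula."""
    if isinstance(formula, str):                       # a propositional variable
        return mapping.get(formula, formula)
    connective, *subformulas = formula
    return (connective, *(substitute(sub, mapping) for sub in subformulas))

# phi = P & Q; replacing P by (R -> S) and Q by (T -> S) gives the instance psi.
phi = ("&", "P", "Q")
psi = substitute(phi, {"P": ("->", "R", "S"), "Q": ("->", "T", "S")})
assert psi == ("&", ("->", "R", "S"), ("->", "T", "S"))
```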
Tautologies
A propositional formula is a tautology if it is true under every valuation (or interpretation) of its propositional variables. If Φ is a tautology, and Θ is a substitution instance of Φ, then Θ is again a tautology. This fact implies the soundness of the deduction rule described in the previous section.
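One brute-force way to check the tautology property is to enumerate all valuations; a minimal sketch, assuming the same nested-tuple representation as above and only the connectives ~, &, ->, and <->:

```python
from itertools import product

def variables(formula):
    """Collect the propositional variables occurring in a formula."""
    if isinstance(formula, str):
        return {formula}
    return set().union(*(variables(sub) for sub in formula[1:]))

def evaluate(formula, valuation):
    """Truth value of a formula under one assignment of variables to booleans."""
    if isinstance(formula, str):
        return valuation[formula]
    op, *subs = formula
    vals = [evaluate(sub, valuation) for sub in subs]
    if op == "~":   return not vals[0]
    if op == "&":   return vals[0] and vals[1]
    if op == "->":  return (not vals[0]) or vals[1]
    if op == "<->": return vals[0] == vals[1]
    raise ValueError(f"unknown connective {op}")

def is_tautology(formula):
    names = sorted(variables(formula))
    return all(evaluate(formula, dict(zip(names, assignment)))
               for assignment in product([False, True], repeat=len(names)))

# (A <-> A) is a tautology, and so is its substitution instance (A<->A) <-> (A<->A).
assert is_tautology(("<->", "A", "A"))
assert is_tautology(("<->", ("<->", "A", "A"), ("<->", "A", "A")))
```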
First-order logic
In first-order logic, a substitution is a total mapping from variables to terms; many, but not all authors additionally require σ(x) = x for all but finitely many variables x. The notation { x1 ↦ t1, …, xk ↦ tk }
refers to a substitution mapping each variable xi to the corresponding term ti, for i=1,…,k, and every other variable to itself; the xi must be pairwise distinct. Most authors additionally require each term ti to be syntactically different from xi, to avoid infinitely many distinct notations for the same substitution. Applying that substitution to a term t is written in postfix notation as t { x1 ↦ t1, ..., xk ↦ tk }; it means to (simultaneously) replace every occurrence of each xi in t by ti. The result tσ of applying a substitution σ to a term t is called an instance of that term t.
For example, applying the substitution { x ↦ z, z ↦ h(a,y) } to the term f(z, a, g(x), y) yields the term f(h(a,y), a, g(z), y).
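A sketch of simultaneous application in code, assuming terms are nested tuples whose first element is a function symbol (constants are 0-ary tuples) and variables are strings; the representation is an assumption of this example:

```python
def apply_substitution(term, sigma):
    """Simultaneously replace every occurrence of each variable in dom(sigma)
    by its image; variables outside the domain are left unchanged."""
    if isinstance(term, str):                      # a variable
        return sigma.get(term, term)
    symbol, *args = term
    return (symbol, *(apply_substitution(arg, sigma) for arg in args))

# Applying { x -> z, z -> h(a,y) } to f(z, a, g(x), y) yields f(h(a,y), a, g(z), y).
sigma = {"x": "z", "z": ("h", ("a",), "y")}
term = ("f", "z", ("a",), ("g", "x"), "y")
assert apply_substitution(term, sigma) == ("f", ("h", ("a",), "y"), ("a",), ("g", "z"), "y")
```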
The domain dom(σ) of a substitution σ is commonly defined as the set of variables actually replaced, i.e. dom(σ) = { x ∈ V | xσ ≠ x }.
A substitution is called a ground substitution if it maps all variables of its domain to ground, i.e. variable-free, terms.
The substitution instance tσ of a ground substitution is a ground term if all of t's variables are in σ's domain, i.e. if vars(t) ⊆ dom(σ).
A substitution σ is called a linear substitution if tσ is a linear term for some (and hence every) linear term t containing precisely the variables of σ's domain, i.e. with vars(t) = dom(σ).
A substitution σ is called a flat substitution if xσ is a variable for every variable x.
A substitution σ is called a renaming substitution if it is a permutation on the set of all variables. Like every permutation, a renaming substitution σ always has an inverse substitution σ−1, such that tσσ−1 = t = tσ−1σ for every term t. However, it is not possible to define an inverse for an arbitrary substitution.
For example, { x ↦ 2, y ↦ 3+4 } is a ground substitution, { x ↦ x1, y ↦ y2+4 } is non-ground and non-flat, but linear,
{ x ↦ y2, y ↦ y2+4 } is non-linear and non-flat, { x ↦ y2, y ↦ y2 } is flat, but non-linear, { x ↦ x1, y ↦ y2 } is both linear and flat, but not a renaming, since it maps both y and y2 to y2; each of these substitutions has the set {x,y} as its domain. An example for a renaming substitution is { x ↦ x1, x1 ↦ y, y ↦ y2, y2 ↦ x }; it has the inverse { x ↦ y2, y2 ↦ y, y ↦ x1, x1 ↦ x }. The flat substitution { x ↦ z, y ↦ z } cannot have an inverse, since e.g. (x+y) { x ↦ z, y ↦ z } = z+z, and the latter term cannot be transformed back to x+y, as the information about which variable a given z stems from is lost. The ground substitution { x ↦ 2 } cannot have an inverse due to a similar loss of origin information, e.g. in (x+2) { x ↦ 2 } = 2+2, even if replacing constants by variables were allowed by some fictitious kind of "generalized substitutions".
Two substitutions are considered equal if they map each variable to syntactically equal result terms, formally: σ = τ if xσ = xτ for each variable x ∈ V.
The composition of two substitutions σ = { x1 ↦ t1, …, xk ↦ tk } and τ = { y1 ↦ u1, …, yl ↦ ul } is obtained by removing from the substitution { x1 ↦ t1τ, …, xk ↦ tkτ, y1 ↦ u1, …, yl ↦ ul } those pairs yi ↦ ui for which yi ∈ { x1, …, xk }.
The composition of σ and τ is denoted by στ. Composition is an associative operation, and is compatible with substitution application, i.e. (ρσ)τ = ρ(στ), and (tσ)τ = t(στ), respectively, for every substitutions ρ, σ, τ, and every term t.
The identity substitution, which maps every variable to itself, is the neutral element of substitution composition. A substitution σ is called idempotent if σσ = σ, and hence tσσ = tσ for every term t. When xi≠ti for all i, the substitution { x1 ↦ t1, …, xk ↦ tk } is idempotent if and only if none of the variables xi occurs in any tj. Substitution composition is not commutative, that is, στ may be different from τσ, even if σ and τ are idempotent.
For example, { x ↦ 2, y ↦ 3+4 } is equal to { y ↦ 3+4, x ↦ 2 }, but different from { x ↦ 2, y ↦ 7 }. The substitution { x ↦ y+y } is idempotent, e.g. ((x+y) {x↦y+y}) {x↦y+y} = ((y+y)+y) {x↦y+y} = (y+y)+y, while the substitution { x ↦ x+y } is non-idempotent, e.g. ((x+y) {x↦x+y}) {x↦x+y} = ((x+y)+y) {x↦x+y} = ((x+y)+y)+y. An example for non-commuting substitutions is { x ↦ y } { y ↦ z } = { x ↦ z, y ↦ z }, but { y ↦ z} { x ↦ y} = { x ↦ y, y ↦ z }.
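The removal rule for composition can be written out directly; a minimal sketch, reusing the illustrative term representation from the earlier example and representing substitutions as Python dicts:

```python
def apply_substitution(term, sigma):
    """Apply a substitution to a term (variables are strings, terms are tuples)."""
    if isinstance(term, str):
        return sigma.get(term, term)
    symbol, *args = term
    return (symbol, *(apply_substitution(arg, sigma) for arg in args))

def compose(sigma, tau):
    """Return the composition στ, i.e. the substitution with t(στ) == (tσ)τ for every term t."""
    # Apply tau to every image of sigma ...
    composed = {x: apply_substitution(t, tau) for x, t in sigma.items()}
    # ... add the pairs of tau whose variables are not already in sigma's domain ...
    composed.update({y: u for y, u in tau.items() if y not in sigma})
    # ... and drop trivial pairs x -> x to keep the mapping in normal form.
    return {x: t for x, t in composed.items() if t != x}

# { x -> y } { y -> z } = { x -> z, y -> z }, but { y -> z } { x -> y } = { x -> y, y -> z }.
assert compose({"x": "y"}, {"y": "z"}) == {"x": "z", "y": "z"}
assert compose({"y": "z"}, {"x": "y"}) == {"x": "y", "y": "z"}
```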
Mathematics
In mathematics, there are two common uses of substitution: substitution of variables for constants (also called assignment for that variable), and the substitution property of equality, also called Leibniz's Law.
Considering mathematics as a formal language, a variable is a symbol from an alphabet, usually a letter like x, y, or z, which denotes a range of possible values. If a variable is free in a given expression or formula, then it can be replaced with any of the values in its range. Certain kinds of bound variables can be substituted too, for instance the parameters of an expression (like the coefficients of a polynomial) or the argument of a function. Moreover, a universally quantified variable can be replaced with any of the values in its range, and the result will be a true statement. (This is called universal instantiation.)
For a non-formalized language, that is, in most mathematical texts outside of mathematical logic, for an individual expression it is not always possible to identify which variables are free and bound. For example, in , depending on the context, the variable can be free and bound, or vice-versa, but they cannot both be free. Determining which value is assumed to be free depends on context and semantics.
The substitution property of equality, or Leibniz's Law (though the latter term is usually reserved for philosophical contexts), generally states that, if two things are equal, then any property of one must be a property of the other. It can be formally stated in logical notation as: for every a and b, and any well-formed formula φ(x) (with a free variable x), if a = b, then φ(a) implies φ(b). For example: for all real numbers a and b, if a = b, then a ≥ 0 implies b ≥ 0 (here, φ(x) is x ≥ 0). This is a property which is most often used in algebra, especially in solving systems of equations, but it is applied in nearly every area of math that uses equality. This, taken together with the reflexive property of equality, forms the axioms of equality in first-order logic.
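Rendered as a schema in first-order notation (a sketch; φ ranges over well-formed formulas with one free variable):

```latex
\forall a \,\forall b \,\bigl( a = b \rightarrow (\varphi(a) \rightarrow \varphi(b)) \bigr)
```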
Substitution is related to, but not identical to, function composition; it is closely related to β-reduction in lambda calculus. In contrast to these notions, however, the accent in algebra is on the preservation of algebraic structure by the substitution operation, the fact that substitution gives a homomorphism for the structure at hand (in the case of polynomials, the ring structure).
Substitution is a basic operation in algebra, in particular in computer algebra.
A common case of substitution involves polynomials, where substitution of a numerical value (or another expression) for the indeterminate of a univariate polynomial amounts to evaluating the polynomial at that value. Indeed, this operation occurs so frequently that the notation for polynomials is often adapted to it; instead of designating a polynomial by a name like P, as one would do for other mathematical objects, one could define
so that substitution for X can be designated by replacement inside "P(X)", say
or
Substitution can also be applied to other kinds of formal objects built from symbols, for instance elements of free groups. In order for substitution to be defined, one needs an algebraic structure with an appropriate universal property, that asserts the existence of unique homomorphisms that send indeterminates to specific values; the substitution then amounts to finding the image of an element under such a homomorphism.
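A concrete instance of the preceding discussion is substituting a value for the indeterminate of a univariate polynomial; a minimal sketch, assuming a coefficient-list representation (the names are illustrative):

```python
def substitute_into_polynomial(coefficients, value):
    """Evaluate P(X) = c0 + c1*X + c2*X**2 + ... at X = value,
    i.e. perform the substitution X -> value (Horner's scheme)."""
    result = 0
    for c in reversed(coefficients):
        result = result * value + c
    return result

# P(X) = 1 + 2*X + 3*X**2; the substitution X -> 2 gives P(2) = 1 + 4 + 12 = 17.
assert substitute_into_polynomial([1, 2, 3], 2) == 17
```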
See also
Integration by substitution
String interpolation
Substitution property of Equality
Trigonometric substitution
Universal instantiation
Principal equation form
Notes
Citations
References
Crabbé, M. (2004). On the Notion of Substitution. Logic Journal of the IGPL, 12, 111–124.
Curry, H. B. (1952) On the definition of substitution, replacement and allied notions in an abstract formal system. Revue philosophique de Louvain 50, 251–269.
Kleene, S. C. (1967). Mathematical Logic. Reprinted 2002, Dover.
Robinson, Alan J. A.; Voronkov, Andrei (2001-06-22). Handbook of Automated Reasoning. Elsevier.
External links
Propositional calculus
Concepts in logic
Logical truth
Automated theorem proving
Logic programming | Substitution (logic) | [
"Mathematics"
] | 2,624 | [
"Automated theorem proving",
"Mathematical logic",
"Computational mathematics",
"Substitution (logic)",
"Logical truth"
] |
5,438,967 | https://en.wikipedia.org/wiki/Yttrium%28III%29%20antimonide | Yttrium(III) antimonide (YSb) is an inorganic chemical compound.
Yttrium antimonide is an intermetallic compound with the chemical formula YSb. It has a NaCl-type structure and is stable in air. Its thermal expansion coefficient is α = 11.1 × 10−6 K−1.
It can be produced by the high-temperature reaction of sodium antimonide and anhydrous yttrium chloride:
References
Antimonides
Yttrium compounds
Rock salt crystal structure | Yttrium(III) antimonide | [
"Chemistry"
] | 111 | [
"Inorganic compounds",
"Inorganic compound stubs"
] |
5,439,007 | https://en.wikipedia.org/wiki/Yttrium%28III%29%20arsenide | Yttrium arsenide is an inorganic compound of yttrium and arsenic with the chemical formula YAs. It can be prepared by reacting yttrium and arsenic at high temperature. Some literature has done research on the eutectic system of it and zinc arsenide.
It reacts with iron, iron(III) arsenide, iron(III) oxide and yttrium(III) fluoride (as a dopant) at high temperature to give the superconducting material YFeAsO0.9F0.1 (Tc = 10.2 K).
References
External reading
Arsenides
Yttrium compounds
Rock salt crystal structure | Yttrium(III) arsenide | [
"Chemistry"
] | 135 | [
"Inorganic compounds",
"Inorganic compound stubs"
] |
5,439,008 | https://en.wikipedia.org/wiki/Georges%20Sagnac | Georges Sagnac (; 14 October 1869 – 26 February 1928) was a French physicist who lent his name to the Sagnac effect, a phenomenon which is at the basis of interferometers and ring laser gyroscopes developed since the 1970s.
Life and work
Sagnac was born at Périgueux and entered the École Normale Supérieure in 1889. While a lab assistant at the Sorbonne, he was one of the first in France to study X-rays, following Wilhelm Conrad Röntgen. He belonged to a group of friends and scientists that notably included Pierre and Marie Curie, Paul Langevin, Jean Perrin, and the mathematician Émile Borel. Marie Curie says that she and her husband had traded ideas with Sagnac around the time of the discovery of radioactivity. Sagnac died at Meudon-Bellevue.
Sagnac effect
In 1913, Georges Sagnac showed that if a beam of light is split and sent in two opposite directions around a closed path on a revolving platform with mirrors on its perimeter, and then the beams are recombined, they will exhibit interference effects. From this result Sagnac concluded that light propagates at a speed independent of the speed of the source. The motion of the earth through space had no apparent effect on the speed of the light beam, no matter how the platform was turned. The effect had been observed earlier (by Harress in 1911), but Sagnac was the first to correctly identify the cause.
This Sagnac effect (in vacuum) had been theoretically predicted by Max von Laue in 1911. He showed that such an effect is consistent with stationary ether theories (such as the Lorentz ether theory) as well as with Einstein's theory of relativity. It is generally taken to be inconsistent with a complete ether drag; and also inconsistent with emission theories of light, according to which the speed of light depends on the speed of the source.
Sagnac was a staunch opponent of the theory of relativity, despite the Sagnac effect being consistent with it.
See also
Sagnac effect
History of special relativity#Experiments by Fizeau and Sagnac
References
Further reading
Paul Langevin, Sur la théorie de la relativité et l'expérience de Georges Sagnac (1921)
Paul Langevin, Sur l'expérience de Georges Sagnac (1937)
1869 births
1928 deaths
French physicists
Optical physicists
People from Périgueux
Relativity critics
University of Paris alumni | Georges Sagnac | [
"Physics"
] | 515 | [
"Relativity critics",
"Theory of relativity"
] |
5,439,095 | https://en.wikipedia.org/wiki/Yttrium%28III%29%20sulfide | Yttrium(III) sulfide (Y2S3) is an inorganic chemical compound. It is a compound of yttrium and sulfur.
References
Sesquisulfides
Yttrium compounds | Yttrium(III) sulfide | [
"Chemistry"
] | 44 | [
"Inorganic compounds",
"Inorganic compound stubs"
] |
5,439,261 | https://en.wikipedia.org/wiki/Tungsten%28IV%29%20oxide | Tungsten(IV) oxide is the chemical compound with the formula WO2. The bronze-colored solid crystallizes in a monoclinic cell. The rutile-like structure features distorted octahedral WO6 centers with alternate short W–W bonds (248 pm). Each tungsten center has the d2 configuration, which gives the material a high electrical conductivity.
WO2 is prepared by reduction of WO3 with tungsten powder over the course of 40 hours at 900 °C. An intermediate in this reaction is the partially reduced, mixed valence species W18O49.
2 WO3 + W → 3 WO2
The molybdenum analogue MoO2 is prepared similarly. Single crystals are obtained by chemical transport technique using iodine. Iodine transports the WO2 in the form of the volatile species WO2I2.
References
Tungsten(IV) compounds
Transition metal oxides | Tungsten(IV) oxide | [
"Chemistry"
] | 187 | [
"Inorganic compounds",
"Inorganic compound stubs"
] |
5,439,284 | https://en.wikipedia.org/wiki/Programming%20language%20implementation | In computer programming, a programming language implementation is a system for executing computer programs. There are two general approaches to programming language implementation:
Interpretation: The program is read as input by an interpreter, which performs the actions written in the program.
Compilation: The program is read by a compiler, which translates it into some other language, such as bytecode or machine code. The translated code may either be directly executed by hardware or serve as input to another interpreter or another compiler.
Interpreter
An interpreter is composed of two parts: a parser and an evaluator. After a program is read as input by an interpreter, it is processed by the parser. The parser breaks the program into language components to form a parse tree. The evaluator then uses the parse tree to execute the program.
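A toy illustration of the parser/evaluator split, for a minimal prefix-notation arithmetic language (the language and all names are invented for this sketch):

```python
import math

def parse(source: str):
    """Parser: break the program text into tokens and build a parse tree."""
    tokens = source.replace("(", " ( ").replace(")", " ) ").split()

    def read(pos):
        if tokens[pos] == "(":
            node, pos = [], pos + 1
            while tokens[pos] != ")":
                child, pos = read(pos)
                node.append(child)
            return node, pos + 1          # skip the closing ")"
        token = tokens[pos]
        try:
            return float(token), pos + 1  # a number literal
        except ValueError:
            return token, pos + 1         # an operator symbol

    tree, _ = read(0)
    return tree

def evaluate(node):
    """Evaluator: walk the parse tree and carry out the actions it describes."""
    if isinstance(node, float):
        return node
    operator, *operands = node
    values = [evaluate(operand) for operand in operands]
    return {"+": sum, "*": math.prod}[operator](values)

print(evaluate(parse("(+ 1 2 (* 3 4))")))  # prints 15.0
```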
Virtual machine
A virtual machine is a special type of interpreter that interprets bytecode. Bytecode is a portable low-level code similar to machine code, though it is generally executed on a virtual machine instead of a physical machine. To improve their efficiencies, many programming languages such as Java, Python, and C# are compiled to bytecode before being interpreted.
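A minimal sketch of a stack-based virtual machine executing hand-written bytecode; the instruction set here is invented for illustration and unrelated to any real VM:

```python
def run(bytecode):
    """Interpret a list of (opcode, argument) pairs on a simple operand stack."""
    stack = []
    for opcode, argument in bytecode:
        if opcode == "PUSH":
            stack.append(argument)
        elif opcode == "ADD":
            right, left = stack.pop(), stack.pop()
            stack.append(left + right)
        elif opcode == "MUL":
            right, left = stack.pop(), stack.pop()
            stack.append(left * right)
        elif opcode == "PRINT":
            print(stack.pop())
        else:
            raise ValueError(f"unknown opcode {opcode!r}")

# Bytecode a front end might emit for "print(2 + 3 * 4)".
run([("PUSH", 2), ("PUSH", 3), ("PUSH", 4), ("MUL", None), ("ADD", None), ("PRINT", None)])  # prints 14
```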
Just-in-time compiler
Some virtual machines include a just-in-time (JIT) compiler to improve the efficiency of bytecode execution. While the bytecode is being executed by the virtual machine, if the JIT compiler determines that a portion of the bytecode will be used repeatedly, it compiles that particular portion to machine code. The JIT compiler then stores the machine code in memory so that it can be used by the virtual machine. JIT compilers try to strike a balance between longer compilation time and faster execution time.
Compiler
A compiler translates programs written in one language into another language. Most compilers are organized into three stages: a front end, an optimizer, and a back end. The front end is responsible for understanding the program. It makes sure a program is valid and transforms it into an intermediate representation, a data structure used by the compiler to represent the program. The optimizer improves the intermediate representation to increase the speed or reduce the size of the executable which is ultimately produced by the compiler. The back end converts the optimized intermediate representation into the output language of the compiler.
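Continuing the toy examples above, a sketch of the three-stage organization: a front end is assumed to have produced the intermediate representation, a trivial optimizer folds constants, and a back end emits bytecode for the stack machine in the previous sketch (all names are illustrative):

```python
import math

def optimize(node):
    """Optimizer: fold constant subexpressions in the intermediate representation (IR)."""
    if not isinstance(node, list):
        return node
    operator, *operands = node
    operands = [optimize(operand) for operand in operands]
    if all(isinstance(operand, (int, float)) for operand in operands):
        return sum(operands) if operator == "+" else math.prod(operands)
    return [operator, *operands]

def emit(node, code=None):
    """Back end: translate the optimized IR into bytecode for the stack machine above."""
    code = [] if code is None else code
    if isinstance(node, (int, float)):
        code.append(("PUSH", node))
        return code
    operator, first, *rest = node
    emit(first, code)
    for operand in rest:
        emit(operand, code)
        code.append(("ADD" if operator == "+" else "MUL", None))
    return code

ir = ["+", 1, ["*", 3, 4]]   # what a front end might produce for "1 + 3 * 4"
print(emit(optimize(ir)))    # [('PUSH', 13)] -- constant folding did all the work
```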
If a compiler of a given high level language produces another high level language, it is called a transpiler. Transpilers can be used to extend existing languages or to simplify compiler development by exploiting portable and well-optimized implementations of other languages (such as C).
Many combinations of interpretation and compilation are possible, and many modern programming language implementations include elements of both. For example, the Smalltalk programming language is conventionally implemented by compilation into bytecode, which is then either interpreted or compiled by a virtual machine. Since Smalltalk bytecode is run on a virtual machine, it is portable across different hardware platforms.
Multiple implementations
Programming languages can have multiple implementations. Different implementations can be written in different languages and can use different methods to compile or interpret code. For example, implementations of Python include:
CPython, the reference implementation of Python
IronPython, an implementation targeting the .NET Framework (written in C#)
Jython, an implementation targeting the Java virtual machine
PyPy, an implementation designed for speed (written in RPython)
References
External links
Implementation | Programming language implementation | [
"Engineering"
] | 695 | [
"Software engineering",
"Programming language topics"
] |
5,439,302 | https://en.wikipedia.org/wiki/Tungsten%20ditelluride | Tungsten ditelluride (WTe2) is an inorganic semimetallic chemical compound. In October 2014, tungsten ditelluride was discovered to exhibit an extremely large magnetoresistance: 13 million percent resistance increase in a magnetic field of 60 tesla at 0.5 kelvin. The resistance is proportional to the square of the magnetic field and shows no saturation. This may be due to the material being the first example of a compensated semimetal, in which the number of mobile holes is the same as the number of electrons. Tungsten ditelluride has layered structure, similar to many other transition metal dichalcogenides, but its layers are so distorted that the honeycomb lattice many of them have in common is in WTe2 hard to recognize. The tungsten atoms instead form zigzag chains, which are thought to behave as one-dimensional conductors. Unlike electrons in other two-dimensional semiconductors, the electrons in WTe2 can easily move between the layers.
When subjected to pressure, the magnetoresistance effect in WTe2 is reduced. Above the pressure of 10.5 GPa magnetoresistance disappears and the material becomes a superconductor. At 13.0 GPa the transition to superconductivity happens below 6.5 K.
WTe2 was predicted to be a Weyl semimetal and, in particular, to be the first example of a Type II Weyl semimetal, where the Weyl nodes exist at the intersection of the electron and hole pockets.
It has also been reported that terahertz-frequency light pulses can switch the crystal structure of WTe2 between orthorhombic and monoclinic by altering the material's atomic lattice.
Tungsten ditelluride can be exfoliated into thin sheets down to single layers. Monolayer WTe2 was initially predicted to remain a Weyl semimetal in the 1T' crystal phase. It was later shown with transport measurements that, below 50K, a single layer of WTe2 instead acts like an insulator but with an offset current independent of doping by a local electrostatic gate. When using a contact geometry that shorted out conduction along the device edges, this offset current vanished, demonstrating that this nearly quantized conduction was localized to the edge—behavior consistent with monolayer WTe2 being a two-dimensional topological insulator. Identical measurements with two- and three-layer thick samples showed the expected semimetallic response. Subsequent studies using other techniques have been consistent with the transport results, including those using angle-resolved photoemission spectroscopy and microwave-impedance microscopy. Monolayer WTe2 has also been observed to superconduct at moderate doping, with a critical temperature tunable by doping level.
Two- and three-layer thick WTe2 have also been observed to be polar metals, simultaneously hosting metallic behavior and switchable electric polarization. The polarization was theorized to originate from vertical charge transfer between the layers, which is switched by interlayer sliding.
References
Tellurides
Tungsten(IV) compounds
Transition metal dichalcogenides
Monolayers | Tungsten ditelluride | [
"Physics"
] | 650 | [
"Monolayers",
"Atoms",
"Matter"
] |
5,439,344 | https://en.wikipedia.org/wiki/Tungsten%28V%29%20bromide | Tungsten(V) bromide is the inorganic compound with the empirical formula WBr5. The compound consists of bioctahedral structure, with two bridging bromide ligands, so its molecular formula is W2Br10.
Preparation and structure
Tungsten(V) bromide is prepared by treating tungsten powder with bromine in the temperature range 650-1000 °C. The product is often contaminated with tungsten hexabromide.
According to X-ray diffraction, the structure for tungsten pentabromide consists of an edge-shared bioctahedron.
Reactions
Tungsten(V) bromide is the precursor to other tungsten compounds by reduction reactions. For example, tungsten(IV) bromide can be prepared by reduction with aluminium or tungsten. The WBr4 can be purified by chemical vapor transport.
3 WBr5 + Al → 3 WBr4 + AlBr3
Excess tungsten pentabromide and aluminum tribromide are then removed by sublimation at 240 °C.
Tungsten(II) bromide can then be obtained by heating the tetrabromide. At 450-500 °C, gaseous pentabromide is evolved, leaving a yellow-green residue of WBr2. An analogous method can also be applied to the synthesis of tungsten(II) chloride.
Reductive substitution reactions
Because it is relatively easy to reduce tungsten pentahalides, they can be used as alternative synthetic routes to tungsten(IV) halide adducts. For example, reaction of WBr5 with pyridine gives WBr4(py)2.
2 WBr5 + 7 C5H5N → 2 WBr4(C5H5N)2 + bipyridine + C5H5NHBr
References
Bromides
Tungsten halides
Tungsten(V) compounds | Tungsten(V) bromide | [
"Chemistry"
] | 384 | [
"Bromides",
"Salts"
] |
5,439,408 | https://en.wikipedia.org/wiki/Tungsten%28V%29%20chloride | Tungsten(V) chloride is an inorganic compound with the formula W2Cl10. This compound is analogous in many ways to the more familiar molybdenum pentachloride.
Synthesis
The material is prepared by reduction of tungsten hexachloride. One method involves the use of tetrachloroethylene as the reductant
2 WCl6 + C2Cl4 → W2Cl10 + C2Cl6
The blue-green solid is volatile under vacuum and slightly soluble in nonpolar solvents. The compound is oxophilic and is highly reactive toward Lewis bases.
When the same reduction is conducted in the presence of tetraphenylarsonium chloride, one obtains instead the hexachlorotungstate(V) salt:
Structure
The compound exists as a dimer, with a pair of octahedral tungsten(V) centres bridged by two chloride ligands. The W---W separation is 3.814 Å, which is non-bonding. The compound is isostructural with Nb2Cl10 and Mo2Cl10. The compound evaporates to give trigonal bipyramidal WCl5 monomers.
References
Chlorides
Tungsten halides
Tungsten(V) compounds | Tungsten(V) chloride | [
"Chemistry"
] | 267 | [
"Chlorides",
"Inorganic compounds",
"Salts"
] |
5,439,545 | https://en.wikipedia.org/wiki/Tungsten%28VI%29%20oxytetrabromide | Tungsten(VI) oxytetrabromide is the inorganic compound with the formula WOBr4. This a red-brown, hygroscopic solid sublimes at elevated temperatures. It forms adducts with Lewis bases. The solid consists of weakly associated square pyramidal monomers. The related tungsten(VI) oxytetrachloride has been more heavily studied. The compound is usually classified as an oxyhalide.
References
Tungsten(VI) compounds
Oxobromides | Tungsten(VI) oxytetrabromide | [
"Chemistry"
] | 104 | [
"Inorganic compounds",
"Inorganic compound stubs"
] |
5,439,627 | https://en.wikipedia.org/wiki/Tungsten%28VI%29%20oxytetrachloride | Tungsten(VI) oxytetrachloride is the inorganic compound with the formula WOCl4. This diamagnetic solid is used to prepare other complexes of tungsten. The red crystalline compound is soluble in nonpolar solvents but it reacts with alcohols and water and forms adducts with Lewis bases.
Structure
The solid consists of weakly associated square pyramidal monomers. The compound is classified as an oxyhalide.
Synthesis and reactions
WOCl4 is prepared from tungsten trioxide:
WO3 + 2 SOCl2 → WOCl4 + 2 SO2
WCl6 + (Me3Si)2O → WOCl4 + 2 Me3SiCl
It is "difficult to prepare by other means."
WOCl4 is Lewis acidic. It is a precursor to catalysts used for polymerization of alkynes.
References
Chlorides
Metal halides
Oxychlorides
Tungsten(VI) compounds | Tungsten(VI) oxytetrachloride | [
"Chemistry"
] | 193 | [
"Chlorides",
"Inorganic compounds",
"Metal halides",
"Salts"
] |
5,439,710 | https://en.wikipedia.org/wiki/Water%20hole%20%28radio%29 | The waterhole, or water hole, is an especially quiet band of the electromagnetic spectrum between 1420 and 1662 megahertz, corresponding to wavelengths of 18–21 centimeters. It is a popular observing frequency used by radio telescopes in radio astronomy.
The strongest hydroxyl radical spectral line radiates at 18 centimeters, and atomic hydrogen at 21 centimeters (the hydrogen line). These two molecules, which combine to form water, are widespread in interstellar gas, which means this gas tends to absorb radio noise at these frequencies. Therefore, the spectrum between these frequencies forms a relatively "quiet" channel in the interstellar radio noise background.
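The frequency/wavelength correspondence quoted above follows from λ = c/f; a quick numerical check (values approximate, helper name illustrative):

```python
SPEED_OF_LIGHT = 299_792_458  # metres per second

def wavelength_cm(frequency_hz: float) -> float:
    """Free-space wavelength in centimetres for a given frequency."""
    return SPEED_OF_LIGHT / frequency_hz * 100

print(round(wavelength_cm(1.420e9), 1))  # ~21.1 cm (hydrogen line)
print(round(wavelength_cm(1.662e9), 1))  # ~18.0 cm (hydroxyl line)
```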
Bernard M. Oliver, who coined the term in 1971, theorized that the waterhole would be an obvious band for communication with extraterrestrial intelligence, hence the name, which is a pun: in English, a watering hole is a vernacular reference to a common place to meet and talk. Several programs involved in the search for extraterrestrial intelligence, including SETI@home, search in the waterhole radio frequencies.
See also
BLC1
Wow! signal
Radio source SHGb02+14a
Schelling point
References
External links
SETI: The Radio Search (page 2)
"What Is the Water Hole" (has a cleaner diagram)
Planetary.org: A Blueprint for SETI
How SETI Works Discusses the water hole.
"waterhole" entry in The Encyclopedia of Astrobiology, Astronomy, and Spaceflight'
"The ABCs of SETI: the search for extraterrestrial intelligence"
"SETI: The water hole" from Astronomy Now
"SETI Observations" from SETI Institute
Electromagnetic spectrum
Search for extraterrestrial intelligence | Water hole (radio) | [
"Physics",
"Astronomy"
] | 347 | [
"Spectrum (physical sciences)",
"Astronomy stubs",
"Astrophysics",
"Electromagnetic spectrum",
"Astrophysics stubs"
] |
5,439,721 | https://en.wikipedia.org/wiki/Tungsten%20oxytetrafluoride | Tungsten oxytetrafluoride is an inorganic compound with the formula WOF4. It is a colorless diamagnetic solid. The compound is one of many oxides of tungsten. It is usually encountered as product of the partial hydrolysis of tungsten hexafluoride.
Structure
As confirmed by X-ray crystallography, WOF4 crystallizes as a tetramer. The oxides are terminal, and four of the fluorides are bridging. Its structure is similar to those for niobium pentafluoride and tantalum pentafluoride. In contrast, molybdenum oxytetrafluoride adopts a polymeric structure, although again the fluorides bridge and the oxides are terminal.
In the gas state, this molecule is a monomer. It can form complexes with acetonitrile and other compounds.
Preparation
Tungsten(VI) oxytetrafluoride can be synthesized by the reaction of fluorine and tungsten trioxide.
It can also be obtained by treating tungsten with a mixture of oxygen and fluorine at high temperatures. Partial hydrolysis of tungsten hexafluoride will also produce WOF4.
The reaction of tungsten(VI) oxytetrachloride and hydrogen fluoride will also produce WOF4.
WOF4 can also be prepared by the reaction of lead(II) fluoride and tungsten trioxide at 700 °C.
Tungsten(VI) oxytetrafluoride hydrolyzes into tungstic acid.
References
Metal halides
Tungsten(VI) compounds
Oxyfluorides | Tungsten oxytetrafluoride | [
"Chemistry"
] | 346 | [
"Inorganic compounds",
"Metal halides",
"Salts"
] |
5,439,755 | https://en.wikipedia.org/wiki/List%20of%20Canadian%20plants%20by%20genus%20D | Below is a list of Canadian plants by genus. Due to the vastness of Canada's biodiversity, this page is divided.
This is a (partial) list of the plant species considered native to Canada. Many of the plants seen in Canada are introduced, either intentionally or accidentally. For these plants, see List of introduced species to Canada.
A | B | C | D | E | F | G | H | I J K | L | M | N | O | P Q | R | S | T | U V W | X Y Z
Da
Dalea — prairie clovers
Dalea purpurea — prairie clover
Dalibarda — dewdrops
Dalibarda repens — dewdrop, false violet, robin-run-away, star violet
Danthonia — oatgrasses
Danthonia compressa — flattened oatgrass, flat-stemmed danthonia
Danthonia spicata — poverty oatgrass
De
Decodon — willowherbs
Decodon verticillatus — swamp willowherb, water oleander, water willow, hairy swamp loosestrife
Dennstaedtia — hay-scented ferns
Dennstaedtia punctilobula — eastern hay-scented fern
Deparia — glade ferns
Deparia acrostichoides — silvery glade fern, silvery spleenwort
Deschampsia — hairgrasses
Deschampsia atropurpurea — mountain hairgrass
Deschampsia cespitosa subsp. cespitosa — tufted hairgrass, tussock grass
Deschampsia flexuosa — wavy hairgrass, crinkled hairgrass
Descurainia — tansy-mustards
Descurainia pinnata — western tansy-mustard, green tansy-mustard, shortfruit tansy-mustard
Descurainia richardsonii — Richardson's tansy-mustard
Desmodium — tick-trefoils
Desmodium canadense — Canadian tick-trefoil, showy tick-trefoil
Desmodium canescens
Desmodium cuspidatum — toothed tick-trefoil
Desmodium glutinosum — pointy-leaved tick-trefoil, large tick-trefoil
Desmodium illinoense — Illinois tick-trefoil Extirpated
Desmodium nudiflorum — bare-stemmed tick-trefoil, naked-flowered tick-trefoil
Desmodium paniculatum var. dillenii
Desmodium paniculatum var. paniculatum
Desmodium rotundifolium — roundleaf tick-trefoil, prostrate tick-trefoil, dollar leaf
Di
Diarrhena — beak grasses
Diarrhena obovata — beak grass
Dicentra — dicentras
Dicentra canadensis — squirrel corn
Dicentra cucullaria — Dutchman's-breeches, soldier's cap
Diervilla — bush honeysuckles
Diervilla lonicera — northern bush honeysuckle
Digitaria — witchgrasses
Digitaria cognata — fall witchgrass
Dioscorea —
Dioscorea quaternata — fourleaf wild-yam
Diphasiastrum — clubmosses
Diphasiastrum complanatum — trailing clubmoss, northern ground-cedar, northern running-pine, flat-branched clubmoss, trailing evergreen, Christmas green, ground-pine
Diphasiastrum digitatum — fan clubmoss, southern running-pine, southern ground-cedar, fan ground-pine, crowfoot clubmoss, trailing ground-pine
Diphasiastrum sabinifolium — ground-fir, savinleaf clubmoss, heath-cypress
Diphasiastrum sitchense — Sitka clubmoss, tufted ground-cedar, Alaskan clubmoss
Diphasiastrum tristachyum — three-spiked clubmoss, blue ground-cedar, northern ground-pine
Diplazium — glade ferns
Diplazium pycnocarpon — narrowleaf glade fern, narrowleaf spleenwort
Dirca — leatherwoods
Dirca palustris — leatherwoods, ropebark, moosewood, wicopy
Do
Doellingeria — flattop white aster
Doellingeria umbellata var. pubens — hairy flattop white aster
Doellingeria umbellata var. umbellata — tall flattop white aster, parasol whitetop
Dr
Draba — whitlowgrasses
Draba alpina — alpine whitlowgrass
Draba arabisans — rock whitlowgrass
Draba aurea — golden whitlowgrass, golden draba
Draba cana — canescent whitlowgrass, hairyfruit whitlowgrass
Draba cinerea — ashy whitlowgrass, greyleaf whitlowgrass
Draba glabella — smooth whitlowgrass
Draba incana — hoary whitlowgrass
Draba lactea — milky whitlowgrass
Draba nemorosa — woodland whitlowgrass
Draba nivalis — snow whitlowgrass, snow draba, yellow arctic whitlowgrass
Draba norvegica — Norwegian whitlowgrass
Draba reptans — Carolina whitlowgrass
Dracocephalum — dragonheads
Dracocephalum parviflorum — American dragonhead, dragonhead mint
Drosera — sundews
Drosera anglica — English sundew
Drosera intermedia — spoonleaf sundew, spatulate-leaf sundew, floating sundew, narrowleaf sundew
Drosera linearis — slenderleaf sundew, linear-leaf sundew
Drosera rotundifolia — roundleaf sundew, dewplant
Dryas — mountain avens
Dryas drummondii — yellow mountain avens, Drummond's dryad
Dryas integrifolia — white mountain avens
Dryopteris — woodferns
Dryopteris carthusiana — spinulose woodfern, spinulose shieldfern, toothed woodfern, narrow Buckler fern
Dryopteris clintoniana — Clinton's woodfern
Dryopteris cristata — crested woodfern
Dryopteris expansa — spreading woodfern, northern Buckler fern, northern woodfern
Dryopteris filix-mas — male fern,
Dryopteris fragrans — fragrant woodfern
Dryopteris goldieana — Goldie's woodfern
Dryopteris intermedia — evergreen woodfern
Dryopteris marginalis — marginal woodfern, leather woodfern
Du
Dulichium — threeway sedges
Dulichium arundinaceum — threeway sedge
Dupontia — tundra grasses
Dupontia fisheri — Fischer's tundra grass, Fischer's dupontia
Dy
Dyssodia — dogweeds
Dyssodia papposa — fœtid dogweed
References
See: Flora of Canada#References
Canada,genus,D | List of Canadian plants by genus D | [
"Biology"
] | 1,470 | [
"Lists of biota",
"Lists of plants",
"Plants"
] |
5,439,832 | https://en.wikipedia.org/wiki/Frequency%20drift | In electrical engineering, and particularly in telecommunications, frequency drift is an unintended and generally arbitrary offset of an oscillator from its nominal frequency. Causes may include component aging, changes in temperature that alter the piezoelectric effect in a crystal oscillator, or problems with a voltage regulator which controls the bias voltage to the oscillator. Frequency drift is traditionally measured in Hz/s. Frequency stability can be regarded as the absence (or a very low level) of frequency drift.
On a radio transmitter, frequency drift can cause a radio station to drift into an adjacent channel, causing illegal interference. Because of this, frequency allocation regulations specify the allowed tolerance for such oscillators in a type-accepted device. A temperature-compensated, voltage-controlled crystal oscillator (TCVCXO) is normally used for frequency modulation.
On the receiver side, frequency drift was mainly a problem in early tuners, particularly for analog dial tuning, and especially on FM, which exhibits a capture effect. However, the use of a phase-locked loop (PLL) essentially eliminates the drift issue. For transmitters, a numerically controlled oscillator (NCO) also does not have problems with drift.
Drift differs from Doppler shift, which is a perceived difference in frequency due to motion of the source or receiver, even though the source is still producing the same wavelength. It also differs from frequency deviation, which is the inherent and necessary result of modulation in both FM and phase modulation.
See also
Allan variance
Clock drift
Phase noise
Automatic frequency control (AFC)
Phase-locked loop (PLL)
References
Communication circuits
Broadcast engineering | Frequency drift | [
"Engineering"
] | 340 | [
"Broadcast engineering",
"Electronic engineering",
"Telecommunications engineering",
"Communication circuits"
] |
5,439,870 | https://en.wikipedia.org/wiki/Vanadium%20nitride | Vanadium nitride, VN, is a chemical compound of vanadium and nitrogen.
Vanadium nitride is formed during the nitriding of steel and increases wear resistance. Another phase, V2N, also referred to as vanadium nitride, can be formed along with VN during nitriding. VN has a cubic, rock-salt structure. There is also a low-temperature form, which contains V4 clusters.
The low-temperature phase results from a dynamic instability, in which the energy of vibrational modes in the high-temperature NaCl-structure phase is reduced below zero.
It is a strong-coupled superconductor. Nanocrystalline vanadium nitride has been claimed to have potential for use in supercapacitors. The properties of vanadium nitride depend sensitively on the stoichiometry of the material.
References
Vanadium(III) compounds
Nitrides
Rock salt crystal structure | Vanadium nitride | [
"Chemistry"
] | 207 | [
"Inorganic compounds",
"Inorganic compound stubs"
] |
5,440,323 | https://en.wikipedia.org/wiki/Golf%20etiquette | Golf etiquette refers to a set of rules and practices designed to make the game of golf safer and more enjoyable for golfers and to minimize possible damage to golf equipment and courses. Although many of these practices are not part of the formal rules of golf, golfers are customarily expected to observe them. The R&A rule book states that "[t]he overriding principle is that consideration should be shown to others on the course at all times."
Care of the course
Divots
Divots should always be repaired, either by placing sand in the divot or replacing the grass. Some courses also place containers of divot repair mix on carts and at tees, which can be poured into the divot.
Pitch marks
A ball hitting the green often leaves an indentation, a pitch mark, where it strikes the ground. These need to be repaired to keep the green in good condition. After golfers have arrived at the green, they should make it a point to find their pitch marks and repair them to aid recovery of the turf.
Bunkers
After playing from a bunker, a player should smooth the sand to even out any footprints and divots, usually by means of a rake. Not all sand filled areas are classified as bunkers, e.g. coastal courses (e.g. Myrtle Beach) frequently feature designated waste areas; these areas need not be smoothed following play.
Walking
Golfers should avoid distracting fellow golfers. Golfers should not run during play, but instead walk quickly but lightly during play and remain stationary while others play their shots. Players should be still and remain silent during a fellow player's pre-shot routine and subsequent shot.
Golf carts and equipment
Golf carts should not be used to annoy or distract other players. The cart should be parked on the cart path when at the tee box or putting green. Carts should normally stay only on the paths, and are required to do so on many courses. Golfing equipment (bags, clubs and carts) should never be placed in front of the green, as it is an annoyance to approaching players.
Should carts be permitted off the paths, golfers should observe the "90 degree rule": make a 90 degree turn off the path toward the fairway to a given ball, and return straight back to the path, not along the path of greatest convenience. Carts inflict wear and tear on the course, and can be accidentally driven over another player's ball. Golfers should keep the noise of backing up to a minimum and must always set the park brake before disembarking.
Honour
Traditionally, the player with the best gross score on the previous hole, or the winner of the hole in match play, has the honour of teeing off first; if there is no outright winner of a hole, then the order of play does not change from the previous tee. In informal games one can play "ready golf" and not wait for the best score on the hole to tee up first. With the update to the rules in 2019, ready golf is now encouraged in all stroke play formats.
Putting lines
Golfers should note each player's putting line, and avoid stepping on it as they play on the green or stand on a line of sight, that is, in the line of sight either ahead or behind a player who is attempting to putt. Players should not stand close to or directly behind the ball, or directly behind the hole, when a player is about to play. In the event that your ball is in another player's line, it is important to mark your ball's position, and only then remove it (pick it up) from the green. A golfer should also avoid stepping close to the hole.
Slower players
Slower players should allow following faster players to play through if there is substantial room in front of them. Golfers should try to follow closely the group ahead of them, and not to be "pushed" by the group behind them.
Tee boxes
A golfer should choose the correct tee box for their skill level, regardless of where the other members of the group are playing. Varying course lengths from different tees are one way to help even the playing field.
Dress
Many golf clubs have dress rules, commonly requiring men to wear collared shirts and explicitly banning jeans or denim.
References
External links
R&A Rules
Discussion of golf etiquette at the United States Golf Association website
Article on golf etiquette at About.com
Golf Etiquette at PGA
Etiquette
Golf terminology
Rules of golf | Golf etiquette | [
"Biology"
] | 915 | [
"Etiquette",
"Behavior",
"Human behavior"
] |
5,440,367 | https://en.wikipedia.org/wiki/Baire%20one%20star%20function | A Baire one star function is a type of function studied in real analysis. A function is in class Baire* one, written , and is called a Baire one star function if, for each perfect set , there is an open interval , such that is nonempty, and the restriction is continuous. The notion seems to have originated with B. Kirchheim in an article titled 'Baire one star functions' (Real Anal. Exch. 18 (1992/93), 385–399).
The terminology is actually due to Richard O'Malley, 'Baire* 1, Darboux functions' Proc. Amer. Math. Soc. 60 (1976), 187–192. The concept itself (under a different name) goes back at least to 1951. See H. W. Ellis, 'Darboux properties and applications to nonabsolutely convergent integrals' Canad. Math. J., 3 (1951), 471–484, where the same concept is labelled as [CG] (for generalized continuity).
References
Real analysis
Types of functions | Baire one star function | [
"Mathematics"
] | 232 | [
"Mathematical analysis",
"Functions and mappings",
"Mathematical analysis stubs",
"Mathematical objects",
"Mathematical relations",
"Types of functions"
] |
5,440,905 | https://en.wikipedia.org/wiki/Winlink | Winlink, or formally, Winlink Global Radio Email (registered US Service Mark), also known as the Winlink 2000 Network, is a worldwide radio messaging system that uses amateur-band radio frequencies and government frequencies to provide radio interconnection services that include email with attachments, position reporting, weather bulletins, emergency and relief communications, and message relay. The system is built and administered by volunteers and is financially supported by the Amateur Radio Safety Foundation.
Network
Winlink networking started by providing interconnection services for amateur radio (also known as ham radio). It is well known for its central role in emergency and contingency communications worldwide. The system used to employ multiple central message servers around the world for redundancy, but in 2017–2018 upgraded to Amazon Web Services that provides a geographically-redundant cluster of virtual servers with dynamic load balancers and global content-distribution. Gateway stations have operated on sub-bands of HF since 2013 as the Winlink Hybrid Network, offering message forwarding and delivery through a mesh-like smart network whenever Internet connections are damaged or inoperable. During the late 2000s, it increasingly became what is now the standard network system for radio email, worldwide. Additionally, in response to the need for better disaster response communications in the mid to later part of the 2000s, the network was expanded to provide separate parallel radio email networking systems for the US Department of Homeland Security SHARES Winlink Radio Email System, along with other governments (non-amateur radio) services, also to include Non-government Organizations such as the US American Red Cross, the Austrian International Red Cross, and other such critical infrastructure Non-Government Organizations. Although these services are separate, and for reasons of security may be unknown to each other, the capability to cross services with complete Interoperability is available. For example, a US ham using Winlink on the amateur radio spectrum may email a Winlink user on the DHS SHARES Winlink system (non-amateur) radio service, which may then be picked up on the DHS SHARES Winlink network system. Of course, the originator of any service must be familiar with the regulatory environment of the recipient's service should it be another Winlink service.
Amateur radio HF e-mail
E-mail via HF can be used nearly everywhere on the planet, and is made possible by connecting an HF single sideband (SSB) transceiver system to a computer, modem interface, and appropriate software. The HF modem technologies include PACTOR, Winmor (deprecated), ARDOP, Vara HF, and Automatic Link Establishment (ALE). VHF/UHF protocols include AX.25 Packet and Vara FM.
Guidelines
Operators in each country must, as a baseline, follow the appropriate regulatory guidelines for their license. Some countries may limit or regulate types of amateur messaging (such as e-mail) by content, origination location, end destination, or license class of the operator. Origination of third party messages (messages sent on behalf of, or sent to, an end destination who is not an amateur operator) may also be regulated in some countries; those that limit such third party messages normally have exceptions for emergency communications. In accordance with long standing amateur radio tradition, international guidelines and FCC rules section 97.113, hams using the Winlink system are advised that it is not appropriate to use it for business communications.
Users
The Winlink system is open to properly licensed amateur radio operators, worldwide. The system primarily serves radio users without normal access to the internet, government and non-government public service organizations, medical and humanitarian non-profits, and emergency communications organizations. As of July 2008, there were approximately 12,000 radio users and approximately 100,000 internet correspondents. Monthly traffic volume averages over 100,000 messages.
For offshore cruising yachts, Winlink is widely used as an alternative, or alongside, Sailmail, which is an HF PACTOR based-email system using marine HF frequencies rather than amateur, and unlike the amateur radio use of Winlink, allows business to be conducted over radio. In addition to email, Winlink uses a system called "Saildocs," and other file delivery methods, which allows properly licensed amateur radio cruisers to retrieve meteorological, maritime safety and other crucial files over Winlink email. As example, Winlink was found to be more useful in and around South Africa where best weather was provided by SAMNet (South African Mobile Maritime Net).
Supported radio technologies
802.11 wifi
ALE (Automatic Link Establishment)
APRS (Automatic Packet Reporting System)
AX.25 Packet Radio
D-Star
PACTOR
PACTOR-II
PACTOR-III
PACTOR-IV
WINMOR(Deprecated)
ARDOP
Vara HF
Vara FM
TCP/IP (Telnet and other Wireless Technologies)
Technical protocols
PACTOR-I, WINMOR(deprecated), ARDOP, HSMM (WiFi), AX.25 packet, D-Star, TCP/IP, and ALE are non-proprietary protocols used in various RF applications to access the Winlink network systems. Later versions of PACTOR are proprietary and supported only by commercially available modems from Special Communications Systems GmbH. In amateur radio service, AirMail, Winlink Express, and other email client programs used by the Winlink system, disable the proprietary compression technology for PACTOR-II, PACTOR-III, and PACTOR-IV modems and instead relies on the open FBB protocol, also widely used worldwide by packet radio BBS forwarding systems.
Controversies and US regulatory issues
In May 1995, the American Radio Relay League (ARRL) privately asked the FCC to change Part 97.309(a) to allow fully documented G-TOR, Clover, and original open source PacTOR (Pactor I) modes. The FCC granted this request in DA-95-2106 based on the ARRL's representation that it had worked with developers to ensure complete technical documentation of these codes were available to all amateur radio operators. However, subsequent versions of Pactor contained proprietary compression algorithms that prevent over-the-air interception. As of July 9th, 2024, the Winlink Development Team has stated that their software only uses an open compressed binary format called Open B2F, which is publicly listed on the Winlink website, and replaces proprietary compression used by some manufacturers of protocols used.
In 2007, a US amateur radio operator filed a formal petition with the Federal Communications Commission (FCC) aimed at reducing the signal bandwidth in automatic operation subbands; but, in May 2008 FCC ruled against the petition. In the Official Order, FCC said, "Additionally, we believe that amending the amateur service rules to limit the ability of amateur stations to experiment with various communications technologies or otherwise impeding their ability to advance the radio art would be inconsistent with the definition and purpose of the amateur service. Moreover, we do not believe that changing the rules to prohibit a communications technology currently in use is in the public interest."
In 2013, the FCC ruled in Report and Order 13-1918 against the use of encryption in the US amateur radio bands for any purpose, including emergency communications. The FCC cited the need for all amateur radio communications to be open and unobscured, to uphold the Commission's long-standing requirement that the service be able to police itself.
Winlink itself uses point-to-point protocols that may be copied by a third party through methods provided by the authors of these protocols as well as from independent sources. Because the content of data is not obstructed on the amateur spectrum, those government agencies who do use Winlink for Continuity of Government and public safety emergency communications requested (or in some cases, mandated) that they be allowed to encrypt their messages.
On non-amateur radio frequencies worldwide, Winlink provides for encryption via AES-256 for its most used protocols, Pactor and VARA. Such transmission encryption, once set up properly, is seamless to the end-user and requires no additional effort, but is left up to the individual operator or government agency to setup.
In addition to "readers" being made available for protocols used by the Winlink system, in the US, all messages passing through licensed US amateur radio stations by radio are freely accessible by other licensed amateurs via the WinLink Open Message Viewer on the Winlink WebSite. Amateurs concerned about encryption are encouraged to help the US amateur radio community police itself by search and viewing such messages, and reporting messages if they spot a violation (https://winlink.org/content/us_amateur_radio_message_viewer).
Deletion of the Symbol Rate Rule RM-11708
This change was requested in 2013 by ARRL, and the FCC released a notice of proposed rulemaking in 2016. In November 2023, the FCC finally removed the symbol rate limit of 300 baud in favor of an occupied bandwidth limit of 2.8 kHz (WT Docket No. 16-239). In the Report and Order, the FCC stated, "The amateur radio community can and does play a vital role in emergency response communications, but is often unnecessarily hindered by the baud rate limitations in the rules."
Supporting this change were a host of federal, state and local emergency management agencies, who continually wrote ex parte comments to the FCC regarding their concerns with the impact such a limitation had on emergency email communications via Winlink. In addition, Amateur Radio Relay League (ARRL) continued to push its efforts toward this change through Congressional pathways.
Because Winlink is a worldwide service, similar issues are the concern of other countries, who are also pushing for innovative changes that will positively impact their ability to provide a “no infrastructure” resilient system to bridge SMTP mail over radio, both over the amateur radio spectrum as well as for government service uses as an emergency service option.
See also
Amateur radio emergency communications
Automatic Link Establishment
PACTOR
Winmor
Footnotes
References
External links
The official Winlink Web Site
Winlink Research Project
Winlink Tutorial
Winlink wide-area HF MESH network
Introduction to RMS Express Winlink client program
Guida italiana completa per l'uso di RMS Express /-/ Winlink 2000
The Wiki for Pat - a cross platform Winlink client
Guia rápida en Español de introducción a la Red WL2K, Winmor y uso del RMS Express, (Spanish White Paper)
Packet radio | Winlink | [
"Technology"
] | 2,148 | [
"Wireless networking",
"Packet radio"
] |
5,441,003 | https://en.wikipedia.org/wiki/Petar%20V.%20Kokotovic | Petar V. Kokotovic (; born 1934) is professor emeritus in the College of Engineering at the University of California, Santa Barbara, USA. He has made contributions in the areas of adaptive control, singular perturbation techniques, and nonlinear control especially the backstepping stabilization method.
Biography
Kokotovic was born in Belgrade in 1934. He received his B.S. (1958) and M.S. (1963) degrees from the University of Belgrade Faculty of Electrical Engineering, and his Ph.D. (1965) from the USSR Academy of Sciences (Institute of Automation and Remote Control), Moscow.
He came to the United States in 1965 and was professor at the University of Illinois for 25 years. He joined the University of California, Santa Barbara, in 1991, where he was the founding and long-serving director of the Center for Control, Dynamical Systems and Computation. This center has become a role model of cross-disciplinary research and education. One of the center's achievements is a fully integrated cross-disciplinary graduate program for electrical and computer, mechanical and environmental, and chemical engineering fields.
At UC Santa Barbara his group developed constructive nonlinear control methods and applied them, with colleagues from MIT, Caltech and United Technologies Research Center, to new jet engine designs. As a long-term industrial consultant, he has contributed to computer controls at Ford and to power system stability at General Electric.
For his control systems contributions, Professor Kokotovic has been recognized with the triennial Quazza Medal from the International Federation of Automatic Control (IFAC), the Control Systems Field Award from the Institute of Electrical and Electronics Engineers (IEEE), and the 2002 Richard E. Bellman Control Heritage Award from the American Automatic Control Council, with the citation "for pioneering contributions to control theory and engineering, and for inspirational leadership as mentor, advisor, and lecturer over a period spanning four decades."
Kokotovic was elected a member of the National Academy of Engineering in 1996 for the development and applications of large-scale systems analysis and adaptive control theory. He is a foreign member of the Russian Academy of Sciences, and Fellow of IEEE. His honors also include the D.C. Drucker Eminent Faculty Award and two IEEE Transactions on Automatic Control outstanding paper awards.
Dr. Kokotovic has co-authored numerous papers and ten books.
His former students include Joe Chow, Charles Robert Hadlock, Petros A. Ioannou, Hassan Khalil, and Miroslav Krstić.
Recognitions
Fellow of the IEEE, 1980
Lecturer at the French National Seminar (CNRS) on "New Tools for Control," Paris, 1982
Outstanding Paper Award, IEEE Transactions on Automatic Control, 1984
D.C. Drucker Eminent Faculty Award, University of Illinois, Urbana, 1987
Grainger Endowed Chair, University of Illinois, Urbana, 1990
Quazza Medal, Highest Triennial Award, International Federation of Automatic Control, 1990
IEEE Bode Prize Lecture, 1991
Foreign Expert to Evaluate French National Institute (INRIA), 1992
The 1993 IEEE Outstanding Transactions Paper Award
The 1995 IEEE Control Systems Award
Member, National Academy of Engineering, 1996
IEEE James H. Mulligan, Jr. Education Medal, IEEE, 2001
Richard E. Bellman Control Heritage Award, American Automatic Control Council, 2002.
Foreign member of the Russian Academy of Sciences, 2011
See also
Backstepping
References
External links
1934 births
Living people
Engineers from Belgrade
21st-century American engineers
Control theorists
Yugoslav emigrants to the United States
Richard E. Bellman Control Heritage Award recipients
University of Belgrade School of Electrical Engineering alumni
Members of the United States National Academy of Engineering
Foreign members of the Russian Academy of Sciences
21st-century Serbian engineers | Petar V. Kokotovic | [
"Engineering"
] | 745 | [
"Control engineering",
"Control theorists"
] |
5,441,307 | https://en.wikipedia.org/wiki/Stephen%20Lee%20%28chemist%29 | Stephen Lee (; born 25 October 1955) is an American chemist. He is the son of Tsung-Dao Lee, the winner of the 1957 Nobel Prize in Physics. He is currently a professor at Cornell University.
Education
Lee attended the International School of Geneva, Switzerland and Yale University, from which he graduated with a BA in 1978. He later received his PhD from the University of Chicago in 1985.
Career
In 1993, Lee received the MacArthur Award for his work in the field of physics and chemistry. In addition, he has received an award from the Alfred P. Sloan Foundation for his continued research.
In 1999, Lee joined Cornell University as a professor of solid state chemistry in the chemistry and chemical biology department from the University of Michigan, where he had been associate professor of chemistry since 1993.
He continues his teaching career at Cornell, where he instructs students in (honors) general chemistry and introduction-to-chemistry courses. During the past 10 years, Lee has devoted his summers to helping incoming freshmen learn basic chemistry to prepare them for the academic year. This has been considered part of Lee's philanthropic work, as he teaches these summer courses pro bono.
His current research involves developing stronger porous solids in which all the host porous bonds are covalent in character. Lee is also researching ways to introduce cross-linkable guests (such as di-isocyanides or disilyltriflates) which will react with nucleophilic groups, leading to a fully covalent organic porous solid. He also hopes to develop long-range order in intermetallic phases, examining noble metal alloys whose unit cell dimensions range from just a few to almost 10⁴ Å.
Personal life
Stephen Lee was born to 1957 Nobel Prize winner in Physics Tsung-Dao Lee and Hui-Chun Jeannette Chin (), who died in 1996. Lee has one brother, James Lee (; born 1952), who is the dean of the School of Humanities and Social Science at the Hong Kong University of Science and Technology and chair professor of the Division of Social Science at the same university.
References
1955 births
Living people
American physical chemists
American people of Chinese descent
Yale College alumni
Cornell University faculty
MacArthur Fellows
Scientists from New York City
Sloan Research Fellows
University of Chicago alumni
University of Michigan faculty
International School of Geneva alumni
Solid state chemists
Tsung-Dao Lee | Stephen Lee (chemist) | [
"Chemistry"
] | 481 | [
"Solid state chemists"
] |
5,442,007 | https://en.wikipedia.org/wiki/Edmund%20Hlawka | Edmund Hlawka (5 November 1916, Bruck an der Mur, Styria – 19 February 2009) was an Austrian mathematician. He was a leading number theorist. Hlawka did most of his work at the Vienna University of Technology. He was also a visiting professor at Princeton University and the Sorbonne. Hlawka died on 19 February 2009 in Vienna.
Education and career
Hlawka studied at the University of Vienna from 1934 to 1938, when he gained his doctorate under Nikolaus Hofreiter. Among his PhD students were Rainer Burkard, later to become president of the Austrian Society for Operations Research, graph theorist Gert Sabidussi, Cole Prize winner Wolfgang M. Schmidt, Walter Knödel who became one of the first German computer science professors, and Hermann Maurer, also a computer scientist. Through these and other students, Hlawka has nearly 1500 academic descendants. Hlawka was awarded the Decoration for Services to the Republic of Austria in 2007.
Honours and awards
Decoration for Science and Art (Austria, 1963)
City of Vienna Prize for the Humanities (1969)
Decoration for Services to the Republic of Austria, Grand Decoration of Honour in Gold with Star (2007); Grand Decoration of Honour in Gold (1987)
Wilhelm Exner Medal (1982).
Joseph Johann Ritter von Prechtl Medal (1989)
Erwin Schrödinger Prize
See also
Minkowski–Hlawka theorem
Koksma–Hlawka inequality
10763 Hlawka, an asteroid named after Edmund Hlawka
References
1916 births
2009 deaths
People from Bruck an der Mur
Austrian mathematicians
Number theorists
Princeton University faculty
Academic staff of the University of Paris
Academic staff of TU Wien
Academic staff of the University of Vienna
University of Vienna alumni
Recipients of the Grand Decoration with Star for Services to the Republic of Austria
Recipients of the Austrian Decoration for Science and Art | Edmund Hlawka | [
"Mathematics"
] | 378 | [
"Number theorists",
"Number theory"
] |
5,442,380 | https://en.wikipedia.org/wiki/Sensory%20cue | In perceptual psychology, a sensory cue is a statistic or signal that can be extracted from the sensory input by a perceiver, that indicates the state of some property of the world that the perceiver is interested in perceiving.
A cue is some organization of the data present in the signal which allows for meaningful extrapolation. For example, sensory cues include visual cues, auditory cues, haptic cues, olfactory cues and environmental cues. Sensory cues are a fundamental part of theories of perception, especially theories of appearance (how things look).
Concept
There are two primary theory sets used to describe the roles of sensory cues in perception. One set of theories are based on the constructivist theory of perception, while the others are based on the ecological theory.
Basing his views on the constructivist theory of perception, Helmholtz (1821–1894) held that the visual system constructs visual percepts through a process of unconscious inference, in which cues are used to make probabilistic inferences about the state of the world. These inferences are based on prior experience, assuming that the most commonly correct interpretation of a cue will continue to hold true. A visual percept is the final manifestation of this process. Brunswik (1903-1955) later went on to formalize these concepts with the lens model, which breaks the system's use of a cue into two parts: the ecological validity of the cue, which is its likelihood of correlating with a property of the world, and the system's utilization of the cue. In these theories, accurate perception requires both the existence of cues with sufficiently high ecological validity to make inference possible, and that the system actually utilizes these cues in an appropriate fashion during the construction of percepts.
A second set of theories was posited by Gibson (1904–1979), based on the ecological theory of perception. These theories held that no inferences are necessary to accomplish accurate perception. Rather, the visual system is able to take in sufficient cues related to objects and their surroundings. This means that a one-to-one mapping between the incoming cues and the environment they represent can be made. These mappings are shaped by certain computational constraints: traits known to be common in an organism's environment. The ultimate result is the same: a visual percept is manifested by the process.
Cue combination is an active area of research in perception, that seeks to understand how information from multiple sources is combined by the brain to create a single perceptual experience or response. Recent cue recruitment experiments have shown that the adult human visual system can learn to utilize new cues through classical (Pavlovian) conditioning.
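One widely used model in the cue-combination literature, though not attributed to any particular source in this article, treats each cue as an independent noisy estimate and combines them by reliability-weighted (inverse-variance) averaging. The sketch below is a minimal illustration of that idea; the cue values and noise levels are hypothetical.

```python
# Minimal sketch of reliability-weighted (maximum-likelihood) cue combination.
# All numbers are hypothetical; this illustrates the general model, not a specific study.

def combine_cues(estimates, variances):
    """Combine independent noisy cue estimates by inverse-variance weighting."""
    weights = [1.0 / v for v in variances]
    total = sum(weights)
    combined = sum(w * e for w, e in zip(weights, estimates)) / total
    combined_variance = 1.0 / total  # the fused estimate is more reliable than either cue alone
    return combined, combined_variance

# Example: a visual cue and a haptic cue to an object's size (arbitrary units).
size, var = combine_cues(estimates=[10.0, 12.0], variances=[1.0, 4.0])
print(size, var)  # weighted toward the more reliable (lower-variance) visual cue
```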
Visual cues
Visual cues are sensory cues received by the eye in the form of light and processed by the visual system during visual perception. Since the visual system is dominant in many species, especially humans, visual cues are a large source of information in how the world is perceived.
Types of cues
Depth
The ability to perceive the world in three dimensions and estimate the size and distance to an object depends heavily on depth cues. The two major depth cues, stereopsis and motion parallax, both rely on parallax, which is the difference between the perceived position of an object given two different viewpoints. In stereopsis the distance between the eyes is the source of the two different viewpoints, resulting in binocular disparity. Motion parallax relies on head and body movement to produce the necessary viewpoints.
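As a rough illustration of how binocular disparity relates to depth, a simple pinhole stereo geometry gives depth proportional to the interocular baseline times focal length divided by disparity. The baseline and focal length below are hypothetical placeholder values, not figures from this article.

```python
# Hedged sketch: depth from binocular disparity under a simple pinhole stereo model.
# Z = f * B / d, where B is the interocular baseline, f the focal length (in pixels),
# and d the disparity (pixel offset between the two views). All numbers are hypothetical.

def depth_from_disparity(disparity_px, baseline_m=0.065, focal_px=800.0):
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth estimate")
    return focal_px * baseline_m / disparity_px  # depth in metres

for d in (5.0, 20.0, 80.0):
    print(d, round(depth_from_disparity(d), 2))  # larger disparity -> nearer object
```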
Motion
The visual system can detect motion both through a simple mechanism based on information from multiple clusters of neurons and by integrating multiple cues, including contrast, form, and texture. One major source of visual information when determining self-motion is optic flow. Optic flow not only indicates whether an agent is moving, but also in which direction and at what relative speed.
Biological motion
Humans have evolved a particularly keen ability to detect whether motion is being generated by biological sources, even with point-light displays where dots represent the joints of an animal. Recent research suggests that this mechanism can also reveal the gender, emotional state, and action of a given human point-light model.
Color
The ability to distinguish between colors allows an organism to quickly and easily recognize danger since many brightly colored plants and animals pose some kind of threat, usually harboring some kind of toxin. Color also serves as an inferential cue that can prime both the motor action and interpretation of a persuasive message.
Contrast
Contrast, or the difference in luminance and/or color that helps make an object distinguishable, is important in edge detection and serves as a cue.
Auditory cues
An auditory cue is a sound signal that represents an incoming signal received through the ears, causing the brain to hear. The results of receiving and processing these cues are collectively known as the sense of hearing and are the subject of research within the fields of psychology, cognitive science, and neurobiology.
Auditory system
The auditory system of humans and animals allows individuals to assimilate information from the surroundings, represented as sound waves. Sound waves first pass through the pinnae and the auditory canal, the parts of the ear that comprise the outer ear. Sound then reaches the tympanic membrane in the middle ear (also known as the eardrum). The tympanic membrane sets the malleus, incus, and stapes into vibration. The stapes transmits these vibrations to the inner ear by pushing on the membrane covering the oval window, which separates the middle and inner ear. The inner ear contains the cochlea, the liquid-filled structure containing the hair cells. These cells serve to transform the incoming vibration to electrical signals, which can then be transmitted to the brain.
The auditory nerve carries the signal generated by the hair cells away from the inner ear and towards the auditory receiving area in the cortex. The signal then travels through fibers to several subcortical structures and on to the primary auditory receiving area in the temporal lobe.
Cues for locating sound
Humans use several cues to determine the location of a given stimuli, mainly by using the timing difference between ears. These cues allow individuals to identify both the elevation, the height of the stimuli relative to the individual, and the azimuth, or the angle of the sound relative to the direction the individual is facing.
Interaural time and level difference
Unless a sound is directly in front of or behind the individual, the sound stimulus will have a slightly different distance to travel to reach each ear. This difference in distance causes a slight delay in the time the signal is perceived by each ear. The magnitude of the interaural time difference is greater the more the signal comes from the side of the head. Thus, this time delay allows humans to accurately predict the location of incoming sound cues. Interaural level difference is caused by the difference in sound pressure level reaching the two ears. This is because the head blocks the sound waves for the farther ear, causing less intense sound to reach it. This level difference between the two ears allows humans to accurately predict the azimuth of an auditory signal. This effect only occurs for high-frequency sounds.
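The interaural time difference is often approximated with Woodworth's spherical-head model, ITD ≈ (r/c)(θ + sin θ), where r is the head radius, c the speed of sound, and θ the source azimuth. The values below are typical textbook numbers used purely for illustration, not data from this article.

```python
import math

# Woodworth's spherical-head approximation of the interaural time difference (ITD).
# r: head radius (m), c: speed of sound (m/s), theta: source azimuth in radians.
# Head radius and speed of sound are typical illustrative values only.

def itd_seconds(azimuth_deg, head_radius_m=0.0875, speed_of_sound=343.0):
    theta = math.radians(azimuth_deg)
    return (head_radius_m / speed_of_sound) * (theta + math.sin(theta))

for az in (0, 30, 60, 90):
    print(az, round(itd_seconds(az) * 1e6, 1), "microseconds")
# ITD is zero straight ahead and grows as the source moves toward the side of the head.
```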
Spectral cue
A spectral cue is a monaural (single ear) cue for locating incoming sounds based on the distribution of the incoming signal. The differences in distribution (or spectrum) of the sound waves are caused by interactions of the sounds with the head and the outer ear before entering the ear canal.
Principles of auditory cue grouping
The auditory system uses several heuristics to make sense of incoming cues, based on the properties of auditory stimuli that usually occur in the environment. Cue grouping refers to how humans naturally perceive incoming stimuli as organized patterns, based on certain rules.
Onset time
If two sounds start at different times, they are likely to have originated from different sources. Sounds that occur simultaneously likely originate from the same source.
Location
Cues originating at the same or slowly changing positions usually have the same source. When two sounds are separated in space, the cue of location (see: sound localization) helps an individual to separate them perceptually. If a sound is moving, it will move continuously. Erratically jumping sound is unlikely to come from the same source.
Similarity of timbre
Timbre is the tone quality or tone character of a sound, independent of pitch. This helps us distinguish between musical instruments playing the same notes. When hearing multiple sounds, the timbre of each sound will be unchanging (regardless of pitch), and thus we can differentiate between sounds from different sources over time.
Similarity of pitch
Pitch refers to the frequency of the sound wave reaching us. Although a single object could produce a variety of pitches over time, it is more likely that it would produce sounds in a similar range. Erratic changes in pitch are more likely to be perceived as originating from different sources.
Auditory continuity
Similar to the Gestalt principle of good continuation (see: principles of grouping), sounds that change smoothly or remain constant are often produced by the same source. Sound with the same frequency, even when interrupted by other noise, is perceived as continuous. Highly variable sound that is interrupted is perceived as separate.
Factors affecting auditory cue perception
The precedence effect
When one sound is presented for a long interval before the introduction of a second one originating from a different location, individuals will hear them as two distinct sounds, each originating from the correct location. However, when the delay between the onset of the first and second sound is shortened, listeners are unable to distinguish between the two sounds. Instead, they perceive them as both coming from the location of the lead sound. This effect counteracts the small disparity between the perception of sound caused by the difference in distance between each ear and the source of the auditory stimuli.
The interaction between auditory and visual cues
There are strong interactions between visual and auditory stimuli. Since both auditory and visual cues provide an accurate source of information about the location of an object, in most cases there will be minimal discrepancy between the two. However, it is possible to have a disparity in the information provided by the two sets of cues. An example of visual capture is the ventriloquism effect, which occurs when an individual's visual system locates the source of an auditory stimulus at a different position than where the auditory system locates it. When this occurs, the visual cues will override the auditory ones. The individual will perceive the sound as coming from the location where the object is seen. Audition can also affect visual perception. Research has demonstrated this effect by showing two objects on a screen, one moving diagonally from top-right to bottom-left and the other from top-left to bottom-right, intersecting in the middle. The paths of these identical objects could have been interpreted as crossing over each other, or as bouncing off each other. Without any auditory cue, a vast majority of subjects saw the objects crossing paths and continuing in their original trajectory. But with the addition of a small "click" sound, a majority of subjects perceived the objects as bouncing off each other. In this case, auditory cues help interpret visual cues.
Haptic cues
A haptic cue is either a tactile sensation that represents an incoming signal received by the somatic system, or a relationship between tactile sensations which can be used to infer a higher level of information. The results of receiving and processing these cues are collectively known as the sense of touch, and are the subject of research in the fields of psychology, cognitive science, and neurobiology.
The word "haptic" can refer explicitly to active exploration of an environment (particularly in experimental psychology and physiology), but it is often used to refer to the whole of the somesthetic experience.
Somatosensory system
The somatosensory system assimilates many kinds of information from the environment: temperature, texture, pressure, proprioception, and pain. The signals vary for each of these perceptions, and the receptor systems reflect this: thermoreceptors, mechanoreceptors, nociceptors, and chemoreceptors.
Haptic cues in research
The interaction between haptic and visual cues
In addition to the interplay of haptic communication and nonverbal communication, haptic cues as primers have been looked at as a means of decreasing reaction time for identifying a visual stimulus. Subjects were placed in a chair fitted with a back which provided haptic cues indicating where the stimulus would appear on a screen. Valid haptic cues significantly decreased reaction time while invalid cues increased reaction time.
Use in technology for the visually impaired
Haptic cues are used frequently to allow those who have impaired vision to have access to a greater wealth of information. Braille is a tactile written language which is read via touch, brushing the fingers over the raised patterns. Braille technology is the attempt to extend Braille to digital media and developing new tools to aid in the reading of web pages and other electronic devices often involves a combination of haptic and auditory cues.
A major issue that different technologies in this area attempt to overcome is sensory overload. The amount of information that can be quickly related via touch is less than that of vision and is limited by current technology. As a result, multi-modal approaches, converting the visual information into both haptic and auditory outputs, often have the best results. For example, an electronic pen can be drawn across a tablet mapped to the screen and produce different vibrations and sounds depending on what is at that location.
Olfactory cues
An olfactory cue is a chemical signal received by the olfactory system that represents an incoming signal received through the nose. This allows humans and animals to smell the chemical signal given off by a physical object. Olfactory cues are extremely important for sexual reproduction, as they trigger mating behavior in many species, as well as maternal bonding and survival techniques such as detecting spoiled food. The results of receiving and processing this information is known as the sense of smell.
Olfactory system
The process of smelling begins when chemical molecules enter the nose and reach the olfactory mucosa, a dime-sized region located in the nasal cavity that contains olfactory receptor neurons. There are 350 types of olfactory receptors, each sensitive to a narrow range of odorants. These neurons send signals to the glomeruli within the olfactory bulb. Each glomerulus collects information from a specific olfactory receptor neuron. The olfactory signal is then conducted to the piriform cortex and the amygdala, and then to the orbitofrontal cortex, where higher-level processing of the odor occurs.
Olfactory memory
Olfactory memory is the recollection of a given smell. Research has found that odor memory is highly persistent and has a high resistance to interference, meaning these memories remain with an individual for long periods despite possible interference from other olfactory memories. These memories are mostly explicit, though implicit forms of odor memory do provide some understanding of memory. Mammalian olfactory cues play an important role in the coordination of the mother–infant bond and the subsequent normal development of the offspring.
Olfactory memory is especially important for maternal behavior. Studies have shown that the fetus becomes familiar with olfactory cues within the uterus. This is demonstrated by research that suggests that newborns respond positively to the smell of their own amniotic fluid, meaning that fetuses learn from these cues in the womb.
Environmental cues
Environmental cues are all of the sensory cues that exist in the environment.
With directed attention, an environmental cue becomes an attended cue. However, most environmental cues are assimilated subconsciously, as in visual contextual cueing.
Environmental cues serve as the primary context that shapes how the world is perceived and as such they can prime prior experience to influence memory recall and decision making. This has applied use in marketing as there is evidence to suggest a store's atmosphere and layout can influence purchasing behavior.
Environmental cues play a direct role in mediating the behavior of both plants and animals. For example, environmental cues, such as temperature change or food availability, affect the spawning behavior of fish. In addition to cues generated by the environment itself, cues generated by other agents, such as ant pheromone trails, can influence behavior to indirectly coordinate actions between those agents.
In the study of perception, environmental cues play a large role in experimental design since these mechanisms evolved within a natural environment which gives rise to scene statistics and the desire to create a natural scene. If the experimental environment is too artificial, it can damage external validity in an ideal observer experiment that makes use of natural scene statistics.
Cueing in Parkinson's disease
Among the many problems associated with Parkinson's disease are disturbances with gait, or issues related to walking. One example of this is freezing of gait where a person with Parkinson's disease will stop walking abruptly and struggle with the inability to walk forward for a brief period. Research has shown that auditory cues associated with walking, such as the sound of footsteps in gravel, can improve conditions regarding disturbances in gait in people with Parkinson's disease. Specifically, the two aspects of cue-continuity (pace) and action-relevance (sounds commonly associated with walking) together can help reduce gait variability.
The use of sensory cues has also aided in improving motor functions for people with Parkinson's disease. Research has indicated that sensory cues are beneficial in helping people with Parkinson's disease complete their ADLs (activities of daily living). Although the research showed that these individuals still did not meet standard expectations for motor functions and post-evaluations revealed a slight relapse in motor impairment, the overall results confirm that sensory cues are a beneficial resource in physical therapy and improving motor development in combating Parkinson's disease symptoms.
See also
Environmental Context Dependent Memory
Stimulus (psychology)
Virtual Reality cue reactivity
References
Perception
Sensory systems
Cognitive psychology
Visual perception | Sensory cue | [
"Biology"
] | 3,619 | [
"Behavioural sciences",
"Behavior",
"Cognitive psychology"
] |
5,442,383 | https://en.wikipedia.org/wiki/Magnetophon | Magnetophon was the brand or model name of the pioneering reel-to-reel tape recorder developed by engineers of the German electronics company AEG in the 1930s, based on the magnetic tape invention by Fritz Pfleumer. AEG created the world's first practical tape recorder, the K1, first demonstrated in Germany in 1935 at the Berlin Radio Show.
Later models introduced the concept of AC tape bias, which improved the sound quality by largely eliminating background hiss. The resulting reproduction was so great an advance on any existing recording method that even those well acquainted with the industry could not tell the recordings from live play. Adolf Hitler used these machines to perform what appeared to be live broadcasts from one city while he was in another. A cache of 350 of these tapes was released years later when they were found in Koblenz.
Two later-model Magnetophons were taken to the United States at the end of the war, having been found in Bad Nauheim. These included both the newer oxide-coated PVC tape developed by I.G. Farben (BASF division) and the AC bias system. The Army officer who tracked them down, Jack Mullin, would use these machines as the basis of his own designs, which he demonstrated to the San Francisco chapter of the Institute of Radio Engineers in May 1946, and later at the MGM Studios in Hollywood in October of that year. Attending the SF demo were Ampex engineers Harold Lindsey and Myron Stolaroff, who were inspired to design their own reel-to-reel recorder based on Mullin's modified Magnetophon. Mullin's friend, Richard Ranger, had also designed his own take on the Mullinized Magnetophon called the Rangertone; however, a demonstration of that machine to Bing Crosby did not go well. Mullin then arranged for Crosby to experience a demonstration of the machine designed by Lindsey and Stolaroff: the Ampex Model 200A. Although an initial showcase of the Ampex machine was unable to demonstrate recording, the audio quality of its playback was good enough to get Crosby to agree to work with them. With Bing Crosby arranging financial support for start-up manufacturing, the Ampex 200A went into production and within three years most major recording studios had purchased one.
History
The Magnetophon tape recorder was one of the first recording machines to use magnetic tape in preserving voice and music. At first, early Magnetophons gave disappointing results. One of the first concerts to be recorded on a Magnetophon was Mozart's 39th Symphony played by the London Philharmonic Orchestra, conducted by Sir Thomas Beecham, during their 1936 concert tour. The recording was made on an AEG K2 Magnetophon running at 100 cm/s. The tape used was the early black iron oxide Fe3O4 type. When Beecham and the musicians heard the playback, they were greatly disappointed with the distortion and noise on the recording. Although the original tape is now lost, the recording survived until the 1990s and has been transferred. Some other surviving tapes show a tendency toward overmodulation.
Later, in 1939, the Fe3O4 oxide was replaced by the Fe2O3 type, which gave significantly better recording quality, so much so that the formula became a worldwide standard until the 1970s, when chromium dioxide tapes appeared.
Adding a direct-current bias to the record head gave some improvement, but in 1941, Hans Joachim von Braunmühl and Dr. Walter Weber, both engineers at the German national broadcasting organisation RRG (Reichs-Rundfunk-Gesellschaft), accidentally discovered the technique of high-frequency bias in which the simple addition of a high level (about 10X the maximum audio level) inaudible high-frequency tone resulted in a striking improvement in sound quality by effectively smoothing the magnetization of unused portions of the audio band. The discovery was made when a Magnetophon producing recordings of extraordinary quality was sent 'for repair'. The machine was found to have an oscillating DC bias amplifier. Magnetic media are inherently non-linear, but AC bias was the means whereby the magnetisation of the recording tape was made linearly proportional to the electrical signal which represents the audio component. The Magnetophon became a 'high fidelity' recording system because in so many respects, it outperformed gramophone recording (which was the 78 rpm system of the time).
Many speeches, concerts, and operatic performances were recorded. Since many of the recordings survived World War II they were later issued on LPs and compact discs. One of the more remarkable series of recordings took place at the Vienna State Opera House, also known as Wiener Staatsoper, in 1944, when the German composer Richard Strauss recorded many of his famous symphonic poems, including Don Juan, Till Eulenspiegel, and Also sprach Zarathustra, with the Vienna Philharmonic Orchestra.
AEG engineers made rapid strides in perfecting the system and had practical stereo recorders by 1943. By 1945, about 250 stereophonic tape recordings were known to exist, including some of Richard Strauss and Furtwängler. Only three of those recordings are known to still exist. This includes a performance of Beethoven's "Emperor" Concerto with pianist Walter Gieseking and the Berlin Reichssenders Orchestra conducted by Artur Rother. This remarkable performance was later issued on LP by Varèse Sarabande. Later, in 1993, the Audio Engineering Society (AES) issued a special CD for the 50th birthday of stereo recording. This CD not only includes the "Emperor" Concerto, but also the two other stereo recordings known to exist: a Brahms serenade and the last movement of Bruckner's 8th Symphony conducted by Herbert von Karajan. Piano Library also issued the Emperor concerto, and Iron Needle issued the Bruckner recordings (catalog IN 1407). ArkivMusic released a CD of the concerto, as well as a later recording Gieseking made of Beethoven's first piano concerto with Rafael Kubelik and the Philharmonic Orchestra.
Magnetophon recorders were widely used in German radio broadcasts during World War II, although they were a closely guarded secret at the time. The Allies were aware of the existence of the pre-war Magnetophon recorders, but not of the introduction of high-frequency bias and PVC-backed tape. Their intelligence experts knew that the Germans had some new form of recording system but they did not know the full details of its construction and operation until working models of the Magnetophon were discovered during the Allied invasion of Germany during 1944-45.
Influence and legacy
American audio engineer Jack Mullin acquired two Magnetophon recorders and fifty reels of magnetic tape from a German radio station at Bad Nauheim near Frankfurt in 1945. The Allied forces were traveling through Germany during WWII when they first discovered the device; the Allies then handed the Magnetophon over to Mullin. Over the next two years Mullin modified and developed these machines, hoping to create a commercial recording system that could be used by movie studios. American popular vocalist Bing Crosby, dissatisfied with the quality of existing radio network recordings, was prevailed upon to invest in this development and would use the technology, as modified by Mullin and the fledgling Ampex company, to record his radio broadcasts in the more relaxed atmosphere of the recording studio, which was a significant break from the then-norm of live studio audience broadcasts. In 2008, at the 50th Annual Grammy Awards Ceremony, Ampex received the company's first Grammy Award for Technical Achievement, to honor their contribution sixty years earlier of the Ampex 200, which "revolutionized the radio and recording industries". Ampex 200 co-designer Myron Stolaroff was among the company's employees representing Ampex who accepted the award.
In 2004, the AEG K-1 Magnetophon was inducted into the TECnology Hall of Fame, an honor given to "products and innovations that have had an enduring impact on the development of audio technology."
As a generic noun
Magnetophon became the generic word for the tape recorder in some languages including German ("Magnetophon"), Swedish ("magnetofon"), Czech, Polish (magnetofon), French (magnétophone), Italian (magnetofono - only for reel-to-reel), Romanian, Serbian, Croatian (magnetofon - only for reel-to-reel), Greek (μαγνητόφωνο - magnitofono), Russian (магнитофон - magnitofon), Bulgarian (магнетофон - magnetofon), Slovak, Spanish (magnetófono or magnetofón), Hungarian (magnetofon - commonly shortened to magnó), Finnish (magnetofoni - commonly shortened to mankka), Estonian (magnetofon - commonly shortened to makk), Lithuanian (magnetofonas), Latvian (magnetofons) and Ukrainian (магнітофон - magnitofon).
See also
History of multitrack recording
Wire recording
British Tape Recorder
References
Sources
Friedrich K. Engel, "Chapter 5: The Introduction of the Magnetophon". In
External links
AEG Allgemeine Elektricitäts-Gesellschaft & Magnetophon
Products introduced in 1935
Audio storage
Consumer electronics brands
Sound recording technology
Tape recording
German inventions of the Nazi period | Magnetophon | [
"Technology"
] | 1,930 | [
"Recording devices",
"Sound recording technology",
"Tape recording"
] |
5,442,545 | https://en.wikipedia.org/wiki/Induction%20hardening | Induction hardening is a type of surface hardening in which a metal part is induction-heated and then quenched. The quenched metal undergoes a martensitic transformation, increasing the hardness and brittleness of the part. Induction hardening is used to selectively harden areas of a part or assembly without affecting the properties of the part as a whole.
Process
Induction heating is a non-contact heating process which uses the principle of electromagnetic induction to produce heat inside the surface layer of a work-piece. By placing a conductive material into a strong alternating magnetic field, electric current can be made to flow in the material, thereby creating heat due to the I²R losses in the material. In magnetic materials, further heat is generated below the Curie point due to hysteresis losses. The current generated flows predominantly in the surface layer, the depth of this layer being dictated by the frequency of the alternating field, the surface power density, the permeability of the material, the heat time and the diameter of the bar or material thickness. By quenching this heated layer in water, oil, or a polymer-based quench, the surface layer is altered to form a martensitic structure which is harder than the base metal.
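The depth of this heated layer is commonly estimated with the electromagnetic skin-depth relation δ = √(ρ / (π f μ0 μr)). The sketch below only evaluates that formula; the resistivity and permeability values are rough, hypothetical figures for steel above the Curie point and are not taken from this article.

```python
import math

# Hedged sketch: electromagnetic skin depth, delta = sqrt(rho / (pi * f * mu0 * mu_r)).
# rho: resistivity (ohm·m), f: frequency (Hz), mu_r: relative permeability.
# The constants below are rough illustrative values, not figures from the article.

MU0 = 4 * math.pi * 1e-7  # permeability of free space (H/m)

def skin_depth_mm(freq_hz, resistivity=1.0e-6, mu_r=1.0):
    # Above the Curie point steel is effectively non-magnetic (mu_r ~ 1),
    # which is why current penetration deepens as the part heats up.
    return math.sqrt(resistivity / (math.pi * freq_hz * MU0 * mu_r)) * 1000.0

for f in (1e3, 10e3, 100e3):
    print(int(f), round(skin_depth_mm(f), 2), "mm")
# Lower frequency -> deeper current penetration, hence a deeper hardened case.
```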
Definition
A widely used process for the surface hardening of steel. The components are heated by means of an alternating magnetic field to a temperature within or above the transformation range followed by immediate quenching. The core of the component remains unaffected by the treatment and its physical properties are those of the bar from which it was machined, whilst the hardness of the case can be within the range 37–58 HRC. Carbon and alloy steels with an equivalent carbon content in the range 0.40–0.45% are most suitable for this process.
A large alternating current is driven through a coil, generating a very intense and rapidly changing magnetic field in the space within. The workpiece to be heated is placed within this alternating magnetic field where eddy currents are generated within the workpiece and resistance leads to Joule heating of the metal.
Many mechanical parts, such as shafts, gears, and springs, are subjected to surface treatments after machining in order to improve wear behavior. The effectiveness of these treatments depends both on the modification of surface material properties and on the introduction of residual stress. Among these treatments, induction hardening is one of the most widely employed to improve component durability. It produces in the workpiece a tough core with tensile residual stresses and a hard surface layer under compressive stress, which have proved to be very effective in extending component fatigue life and wear resistance.
Induction surface hardened low alloyed medium carbon steels are widely used for critical automotive and machine applications which require high wear resistance. Wear resistance behavior of induction hardened parts depends on hardening depth and the magnitude and distribution of residual compressive stress in the surface layer.
History
The basis of all induction heating systems was discovered in 1831 by Michael Faraday. Faraday proved that by winding two coils of wire around a common magnetic core it was possible to create a momentary electromotive force in the second winding by switching the electric current in the first winding on and off. He further observed that if the current was kept constant, no EMF was induced in the second winding and that this current flowed in opposite directions subject to whether the current was increasing or decreasing in the circuit.
Faraday concluded that an electric current can be produced by a changing magnetic field. As there was no physical connection between the primary and secondary windings, the emf in the secondary coil was said to be induced and so Faraday's law of induction was born. Once discovered, these principles were employed over the next century or so in the design of dynamos (electrical generators and electric motors, which are variants of the same thing) and in forms of electrical transformers. In these applications, any heat generated in either the electrical or magnetic circuits was felt to be undesirable. Engineers went to great lengths and used laminated cores and other methods to minimise the effects.
Early last century the principles were explored as a means to melt steel, and the motor generator was developed to provide the power required for the induction furnace. After general acceptance of the methodology for melting steel, engineers began to explore other possibilities for the use of the process. It was already understood that the depth of current penetration in steel was a function of its magnetic permeability, resistivity and the frequency of the applied field. Engineers at Midvale Steel and The Ohio Crankshaft Company drew on this knowledge to develop the first surface hardening induction heating systems using motor generators.
The need for rapid easily automated systems led to massive advances in the understanding and use of the induction hardening process and by the late 1950s many systems using motor generators and thermionic emission triode oscillators were in regular use in a vast array of industries. Modern day induction heating units use the latest in semiconductor technology and digital control systems to develop a range of powers from 1 kW to many megawatts.
Principal methods
Single shot hardening
In single-shot systems the component is held statically or rotated in the coil, and the whole area to be treated is heated simultaneously for a pre-set time, followed by either a flood quench or a drop-quench system. Single-shot hardening is often used in cases where no other method will achieve the desired result, for example flat-face hardening of hammers, edge hardening of complex-shaped tools, or the production of small gears.
In the case of shaft hardening a further advantage of the single shot methodology is the production time compared with progressive traverse hardening methods. In addition the ability to use coils which can create longitudinal current flow in the component rather than diametric flow can be an advantage with certain complex geometry.
There are disadvantages with the single-shot approach. The coil design can be an extremely complex and involved process. Often the use of ferrite or laminated loading materials is required to influence the magnetic field concentrations in given areas, thereby refining the heat pattern produced. Another drawback is that much more power is required due to the increased surface area being heated compared with a traverse approach.
Traverse hardening
In traverse hardening systems the workpiece is passed through the induction coil progressively and a following quench spray or ring is used. Traverse hardening is used extensively in the production of shaft-type components such as axle shafts, excavator bucket pins, steering components, power tool shafts and drive shafts. The component is fed through a ring-type inductor which normally features a single turn. The width of the turn is dictated by the traverse speed, the available power and the frequency of the generator. This creates a moving band of heat which, when quenched, creates the hardened surface layer. The quench ring can be either integral, a following arrangement, or a combination of both, subject to the requirements of the application. By varying speed and power it is possible to create a shaft which is hardened along its whole length or just in specific areas, and also to harden shafts with steps in diameter or splines. It is normal when hardening round shafts to rotate the part during the process to ensure any variations due to concentricity of the coil and the component are removed.
Traverse methods also feature in the production of edge components, such as paper knives, leather knives, lawnmower bottom blades, and hacksaw blades. These types of application normally use a hairpin coil or a transverse flux coil which sits over the edge of the component. The component is progressed through the coil and a following spray quench consisting of nozzles or drilled blocks.
Many methods are used to provide the progressive movement through the coil and both vertical and horizontal systems are used. These normally employ a digital encoder and programmable logic controller for the positional control, switching, monitoring, and setting. In all cases the speed of traverse needs to be closely controlled and consistent as variation in speed will have an effect on the depth of hardness and the hardness value achieved.
Equipment
Power required
Power supplies for induction hardening vary in power from a few kilowatts to hundreds of kilowatts depending on the size of the component to be heated and the production method employed i.e. single shot hardening, traverse hardening or submerged hardening.
In order to select the correct power supply it is first necessary to calculate the surface area of the component to be heated. Once this has been established then a variety of methods can be used to calculate the power density required, heat time and generator operating frequency. Traditionally this was done using a series of graphs, complex empirical calculations and experience. Modern techniques typically use finite element analysis and computer-aided manufacturing techniques, however as with all such methods a thorough working knowledge of the induction heating process is still required.
For single-shot applications the total area to be heated needs to be calculated. In the case of traverse hardening, the circumference of the component is multiplied by the face width of the coil. Care must be exercised when selecting a coil face width: it must be practical to construct a coil of the chosen width, and the coil must survive at the power required for the application.
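As a crude illustration of the sizing arithmetic described above for traverse hardening, the heated band area is the shaft circumference times the coil face width, and multiplying by a chosen surface power density gives an estimate of the generator power. The power density figure below is a hypothetical placeholder, not a recommended value from this article.

```python
import math

# Hedged sketch of the traverse-hardening sizing arithmetic described above:
# heated area = circumference x coil face width; power = area x surface power density.
# The power density used here is a hypothetical placeholder, not a recommended value.

def traverse_power_kw(shaft_diameter_mm, coil_face_width_mm, power_density_w_per_cm2=1500.0):
    circumference_cm = math.pi * shaft_diameter_mm / 10.0
    face_width_cm = coil_face_width_mm / 10.0
    heated_area_cm2 = circumference_cm * face_width_cm
    return heated_area_cm2 * power_density_w_per_cm2 / 1000.0

print(round(traverse_power_kw(shaft_diameter_mm=40, coil_face_width_mm=20), 1), "kW")
```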
Frequency
Induction heating systems for hardening are available in a variety of operating frequencies, typically from 1 kHz to 400 kHz. Higher and lower frequencies are available, but these are typically used for specialist applications. The relationship between operating frequency and current penetration depth, and therefore hardness depth, is inverse: the lower the frequency, the deeper the case.
The above table is purely illustrative, good results can be obtained outside these ranges by balancing power densities, frequency and other practical considerations including cost which may influence the final selection, heat time and coil width. As well as the power density and frequency, the time the material is heated for will influence the depth to which the heat will flow by conduction. The time in the coil can be influenced by the traverse speed and the coil width, however this will also have an effect on the overall power requirement or the equipment throughput.
It can be seen from the above table that the selection of the correct equipment for any application can be extremely complex as more than one combination of power, frequency and speed can be used for a given result. However in practice many selections are immediately obvious based on previous experience and practicality.
Advantages
Fast process, no holding time is required, hence more production rate
No scaling or decarburizing
More case depth, up to 8 mm
Selective hardening
High wear and fatigue resistance
Applications
The process is applicable for electrically conductive magnetic materials such as steel.
Long work pieces such as axles can be processed.
See also
Case hardening
Induction forging
Induction heater
Induction shrink fitting
References
Notes
Bibliography
.
.
.
External links
Frequently Asked Questions About The Induction Hardening Process with examples of Induction Heating Applications
The National Metals Centre offering Design, Modeling & Simulation (DMS) technologies relating to Induction Hardening processes - NAMTEC
Metal heat treatments | Induction hardening | [
"Chemistry"
] | 2,227 | [
"Metallurgical processes",
"Metal heat treatments"
] |
5,442,632 | https://en.wikipedia.org/wiki/Phenibut | Phenibut, sold under the brand name Anvifen among others, is a central nervous system (CNS) depressant with anxiolytic effects, and is used to treat anxiety, insomnia, and for a variety of other indications. It is usually taken orally (swallowed by mouth), but may be given intravenously.
Side effects of phenibut can include sedation, sleepiness, nausea, irritability, agitation, dizziness, euphoria, and sometimes headache, among others. Overdose of phenibut can produce marked central nervous system depression including unconsciousness. The medication is structurally related to the neurotransmitter γ-aminobutyric acid (GABA), and hence is a GABA analogue. Phenibut is thought to act as a GABAB receptor agonist, similarly to baclofen and γ-hydroxybutyrate (GHB). However, at low concentrations, phenibut mildly increases the concentration of dopamine in the brain, providing stimulatory effects in addition to the anxiolysis.
Phenibut was developed in the Soviet Union and was introduced for medical use in the 1960s. Today, it is marketed for medical use in Russia, Ukraine, Belarus, Kazakhstan, and Latvia. The medication is not approved for clinical use in the United States and most of Europe, but it is sold on the Internet as a supplement and purported nootropic. Phenibut has been used recreationally and can produce euphoria as well as addiction, dependence, and withdrawal. It is a controlled substance in Australia, and it has been suggested that its legal status should be reconsidered in Europe as well. In Germany, phenibut is not approved as a drug and, as a food supplement, is controlled under the German New Psychoactive Substances Act.
In a 2023 assessment, the U.S. Food and Drug Administration (FDA) determined that phenibut does not meet the definition of a dietary ingredient, thereby making phenibut supplement products misbranded and illegal for marketing. FDA warning letters had been issued to supplement manufacturers marketing phenibut products as adulterated.
Medical uses
Phenibut is used in Russia, Ukraine, Belarus and Latvia as a pharmaceutical drug to treat anxiety and to improve sleep (e.g., in the treatment of insomnia). It is also used for various other indications, including the treatment of asthenia, depression, alcoholism, alcohol withdrawal syndrome, post-traumatic stress disorder, stuttering, tics, vestibular disorders, Ménière's disease, dizziness, for the prevention of motion sickness, and for the prevention of anxiety before or after surgical procedures or painful diagnostic tests.
Available forms
Phenibut is available as a medication in the form of 250 mg or 500 mg tablets for oral administration and as a solution at a concentration of 10 mg/mL for infusion. In the US, dietary supplements labeled as containing phenibut have been found to contain zero to greater than 1,100 mg of phenibut per serving.
Contraindications
Contraindications of phenibut include:
Intolerance to phenibut
Pregnancy and breastfeeding
Children who are younger than two years of age
Liver insufficiency or failure
Ulcerative lesions of the gastrointestinal tract
Phenibut should not be combined with alcohol.
Side effects
Phenibut is generally well-tolerated. Possible side effects may include sedation, somnolence, nausea, irritability, agitation, anxiety, dizziness, headache, and allergic reactions such as skin rash and itching. At high doses, motor incoordination, loss of balance, and hangovers may occur. Due to its CNS depressant effects, people taking phenibut should refrain from potentially dangerous activities such as operating heavy machinery. With prolonged use of phenibut, particularly at high doses, the liver and blood should be monitored, due to risk of fatty liver disease and eosinophilia.
Overdose
In overdose, phenibut can cause severe drowsiness, nausea, vomiting, eosinophilia, lowered blood pressure, renal impairment, and, above 7 grams, fatty liver degeneration. There are no specific antidotes for phenibut overdose. Lethargy, somnolence, agitation, delirium, tonic–clonic seizures, reduced consciousness or unconsciousness, and unresponsiveness have been reported in recreational users who have overdosed. Management of phenibut overdose includes activated charcoal, gastric lavage, induction of vomiting, and symptom-based treatment. There have been three associated deaths in which phenibut was found in the user's system, but in only one of these cases was phenibut the sole substance involved.
Dependency and withdrawal
Tolerance to phenibut develops easily with repeated use, leading to dependence. Withdrawal symptoms may occur upon discontinuation and, in recreational users taking high doses, have been reported to include severe rebound anxiety, insomnia, anger, irritability, agitation, visual and auditory hallucinations, and acute psychosis. Baclofen has successfully been used for treatment of phenibut dependence.
Interactions
Phenibut may mutually potentiate and extend the duration of the effects of other CNS depressants, including anxiolytics, antipsychotics, sedatives, opioids, anticonvulsants, and alcohol.
Pharmacology
Pharmacodynamics
Phenibut acts as a full agonist of the GABAB receptor, similarly to baclofen. It has between 30- and 68-fold lower affinity for the GABAB receptor than baclofen, and, in accordance, is used at far higher doses in comparison. (R)-Phenibut has more than 100-fold higher affinity for the GABAB receptor than does (S)-phenibut; hence, (R)-phenibut is the active enantiomer at the GABAB receptor.
Phenibut also binds to and blocks α2δ subunit-containing VDCCs, similarly to gabapentin and pregabalin, and hence is a gabapentinoid. Both (R)-phenibut and (S)-phenibut display this action with similar affinity (Ki = 23 and 39 μM, respectively).
It is often claimed on websites about nootropics and elsewhere on the internet that phenibut increases dopamine. Three papers published in Russian by Soviet scientists in 1979, 1986, and 1990 report that phenibut increases dopamine in the striatum of rats and in the mouse brain. The mechanism underlying this putative effect is unclear. Structurally, phenibut can also be considered a derivative of phenethylamine, and some research suggests that phenibut antagonizes the action of phenethylamine.
Pharmacokinetics
Little information thus far has been published on the clinical pharmacokinetics of phenibut. The drug is reported to be well-absorbed. It distributes widely throughout the body and across the blood–brain barrier. Approximately 0.1% of an administered dose of phenibut reportedly penetrates into the brain, with this said to occur to a much greater extent in young people and the elderly. Following a single 250 mg dose in healthy volunteers, its elimination half-life was approximately 5.3 hours and the drug was largely (63%) excreted in the urine unchanged.
Some limited information has been described on the pharmacokinetics of phenibut in recreational users taking much higher doses (e.g., 1–3 grams) than typical clinical doses. In these individuals, the onset of action of phenibut has been reported to be 2 to 4 hours orally and 20 to 30 minutes rectally, the peak effects are described as occurring 4 to 6 hours following oral ingestion, and the total duration for the oral route has been reported to be 15 to 24 hours (or about 3 to 5 terminal half-lives).
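For illustration only, first-order elimination with the reported ~5.3-hour half-life implies that the fraction of drug remaining after t hours is 0.5^(t / 5.3). The sketch below simply evaluates that arithmetic and is not dosing guidance.

```python
# Illustration of first-order elimination using the reported ~5.3 h half-life.
# fraction remaining = 0.5 ** (t / t_half); purely arithmetic, not dosing guidance.

def fraction_remaining(hours, half_life_h=5.3):
    return 0.5 ** (hours / half_life_h)

for t in (5.3, 15.9, 26.5):  # 1, 3, and 5 half-lives
    print(t, round(fraction_remaining(t) * 100, 1), "% remaining")
# After 3-5 half-lives (~16-27 h) only ~3-12% remains, consistent with the
# 15-24 hour total duration described above.
```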
Chemistry
Phenibut is a synthetic aromatic amino acid. It is a chiral molecule and thus has two potential configurations, as (R)- and (S)-enantiomers.
Structure and analogues
Phenibut is a derivative of the inhibitory neurotransmitter GABA. Hence, it is a GABA analogue. Phenibut is specifically the analogue of GABA with a phenyl ring substituted in at the β-position. As such, its chemical name is β-phenyl-γ-aminobutyric acid, which can be abbreviated as β-phenyl-GABA. The presence of the phenyl ring allows phenibut to cross the blood–brain barrier significantly, unlike GABA. Phenibut also contains the trace amine β-phenethylamine in its structure.
Phenibut is closely related to a variety of other GABA analogues including baclofen (β-(4-chlorophenyl)-GABA), 4-fluorophenibut (β-(4-fluorophenyl)-GABA), tolibut (β-(4-methylphenyl)-GABA), pregabalin ((S)-β-isobutyl-GABA), gabapentin (1-(aminomethyl)cyclohexane acetic acid), and GABOB (β-hydroxy-GABA). It has almost the same chemical structure as baclofen, differing from it only in having a hydrogen atom instead of a chlorine atom at the para position of the phenyl ring. Phenibut is also close in structure to pregabalin, which has an isobutyl group at the β position instead of phenibut's phenyl ring.
A glutamate-derivative analogue of phenibut is glufimet (dimethyl 3-phenylglutamate hydrochloride).
Synthesis
A chemical synthesis of phenibut has been published.
History
Phenibut was synthesized at the A. I. Herzen Leningrad Pedagogical Institute (USSR) by Professor Vsevolod Perekalin's team and tested at the Institute of Experimental Medicine, USSR Academy of Medical Sciences. It was introduced into clinical use in Russia in the 1960s.
Society and culture
Other names
Alternate spellings include fenibut and phenybut. It is also sometimes referred to as aminophenylbutyric acid. The word phenibut is a contraction of the chemical name of the drug, β-phenyl-γ-aminobutyric acid. In early publications, phenibut was referred to as fenigam and phenigama. The drug has not been assigned an INN (International Nonproprietary Name).
Brand names
Phenibut is marketed in Russia, Ukraine, Belarus, and Latvia under the brand names Anvifen, Fenibut, Bifren, and Noofen (Russian: Анвифен, Фенибут, Бифрен and Ноофен, respectively).
Availability
Phenibut is approved in Russia, Ukraine, Belarus, and Latvia for medical use. It is not approved or available as a medication in other countries in the European Union, the United States, or Australia. In countries where phenibut is not a licensed pharmaceutical drug, it is sold online without a prescription as a "nutritional supplement". It is often used as a form of self-medication for social anxiety.
Recreational use
Phenibut is used recreationally because it can produce euphoria, anxiolysis, and increased sociability, and because it goes undetected in routine urinalysis. Because of its delayed onset of effects, first-time users often mistakenly take an additional dose of phenibut in the belief that the initial dose did not work. Recreational users usually take the drug orally; there are a few case reports of rectal administration and one report of insufflation, which was described as "very painful" and as causing swollen nostrils.
Legal status
As of 2021, phenibut is a controlled substance in Australia, France, Hungary, Italy, Lithuania, and Germany, where it is nevertheless readily obtained online.
In 2015, it was suggested that the legal status of phenibut in Europe should be reconsidered due to its recreational potential. In February 2018, the Australian Therapeutic Goods Administration declared it a prohibited (schedule 9) substance, citing health concerns due to withdrawal and overdose.
As of 14 November 2018, Hungary added phenibut and 10 other items to its New Psychoactive Substances ban list, and, as of 26 August 2020, Italy added phenibut to its New Psychoactive Substances ban list. As of 18 September 2020, France added phenibut to the controlled psychoactive substances list, prohibiting production, sale, storage, and use.
In the United States, phenibut is an unapproved drug, but is often misleadingly marketed as a dietary supplement. It is readily available without a prescription.
In Alabama, phenibut was made a Schedule II substance at the state level in November 2021.
See also
Gabapentin
Pregabalin
List of Russian drugs
References
Analgesics
Anticonvulsants
Anxiolytics
Bodybuilding supplements
Calcium channel blockers
Drug culture
Drugs in the Soviet Union
Euphoriants
GABA analogues
GABAA receptor agonists
GABAB receptor agonists
Gamma-Amino acids
Hypnotics
Muscle relaxants
Nootropics
Phenethylamines
Russian drugs | Phenibut | [
"Biology"
] | 2,924 | [
"Hypnotics",
"Behavior",
"Sleep"
] |
5,442,715 | https://en.wikipedia.org/wiki/New%20Gulim | New Gulim (새굴림/SaeGulRim) is a sans-serif Unicode typeface designed especially for the Korean script by HanYang System Co., Limited (now Hanyang Information & Communications Co., Ltd). It is an expanded version of Hanyang Gulrim (한양 굴림).
The font is hinted at sizes of 0–13 points, and both hinted and smoothed at 14 points or higher.
It contains 49,284 glyphs in v3.10. This font was part of Old Korean support tools for MS Word 2000 and 2003.
It covers the following ranges: Basic Latin, Latin-1 Supplement, Latin Extended-A, Spacing Modifier Letters, Greek, Cyrillic, Hangul Jamo, General Punctuation, Letterlike Symbols, Number Forms, Arrows, Mathematical Operators, Enclosed Alphanumerics, Box Drawing, Geometric Shapes, Miscellaneous Symbols, CJK Symbols and Punctuation, Hiragana, Katakana, Hangul Compatibility Jamo, Enclosed CJK Letters and Months, CJK Compatibility, CJK Unified Ideographs Extension A, CJK Unified Ideographs, Hangul Syllables, CJK Compatibility Ideographs, and Halfwidth and Fullwidth Forms. It essentially extended the Gulim font to support all glyphs in CJK Unified Ideographs Extension A and CJK Unified Ideographs (up to Unicode 3.0), along with miscellaneous glyph updates and a slight change in font metrics.
In the Private Use Area (E000–F8FF), it includes about 5,000 precomposed Korean syllables in the pre-1933 orthography, small-form variants of Hangul Jamo, and some small hanja glyphs in regular script.
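As a minimal sketch (the Private Use Area range E000–F8FF is taken from the text above, while the Hangul Syllables range AC00–D7A3 is the standard Unicode block boundary and is not stated in this article), one can check which block a given code point falls into:

def classify(ch: str) -> str:
    # Return a rough block label for a single character.
    cp = ord(ch)
    if 0xE000 <= cp <= 0xF8FF:
        return "Private Use Area"
    if 0xAC00 <= cp <= 0xD7A3:
        return "Hangul Syllables"
    return "other block"

print(classify("\uE123"))  # an arbitrary PUA code point -> "Private Use Area"
print(classify("굴"))      # a modern Hangul syllable -> "Hangul Syllables"

Such a check only identifies where a code point lives; whether New Gulim actually supplies a glyph for it would have to be read from the font's own character map.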
The font was once available as part of Microsoft Old Hangul Support Pack.
Gulim Old Hangul Jamo
Included with New Gulim in the Old Hangul Support Pack is the Gulim Old Hangul Jamo (굴림 옛한글 자모/GulRim YesHanGeul JaMo) font, which contains only the Basic Latin, Hangul, and old Hangul glyphs found in the New Gulim font. The old Hangul glyphs, and the small-form variants of Hangul glyphs that are in the PUA of the New Gulim font, are moved to the CJK Unified Ideographs block. Only seven glyphs in the Hangul Syllables block of New Gulim are retained at their original code points in Gulim Old Hangul Jamo.
This font does not have hinting. It only supports code page 949.
See also
List of CJK fonts
List of typefaces
Unicode fonts
External links
Microsoft Office Assistance Home Page
Office XP Tool: Korean Language Pack (ie_ko.exe).
CJK typefaces
Sans-serif typefaces
Unicode typefaces | New Gulim | [
"Technology"
] | 586 | [
"Computing stubs",
"Digital typography stubs"
] |
5,442,749 | https://en.wikipedia.org/wiki/Shwartzman%20phenomenon | The Shwartzman phenomenon is a rare reaction of the body to particular types of toxins, called endotoxins, which cause thrombosis in the affected tissue. Clearing of the thrombosis results in a reticuloendothelial blockade, which prevents clearance of the thrombosis caused by a repeated introduction of the toxin; this leads to tissue necrosis. The Shwartzman phenomenon is usually observed during delivery or abortion, when foreign bodies are introduced into the tissues of the female reproductive system.
The Shwartzman phenomenon is named for Gregory Shwartzman, the doctor at Mount Sinai Hospital in New York City who first developed the concept of immune-system hypersensitivity in the 1920s. The reaction was demonstrated experimentally using Neisseria meningitidis endotoxin. A related observation was made by Giuseppe Sanarelli, leading to the term Sanarelli–Shwartzman phenomenon; however, many modern works use more generic terms such as disseminated intravascular coagulation.
This is notably seen with Neisseria meningitidis.
References
External links
Toxicology | Shwartzman phenomenon | [
"Environmental_science"
] | 228 | [
"Toxicology"
] |
5,442,846 | https://en.wikipedia.org/wiki/Multiple%20rule-based%20problems | Multiple rule-based problems are problems containing various conflicting rules and restrictions. Such problems typically have an "optimal" solution, found by striking a balance between the various restrictions, without directly defying any of the aforementioned restrictions.
Solutions to such problems can require either complex, non-linear thinking processes or mathematics-based approaches in which an optimal solution is found by expressing the various restrictions as equations or inequalities and finding an appropriate maximum value that satisfies all of them, as in the sketch below. These problems may thus require more working information than causal-relationship problem solving or single rule-based problem solving, and multiple rule-based problem solving is more likely to increase cognitive load than the other two types.
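As an illustration of the mathematical approach just described, the sketch below encodes a small set of invented restrictions as linear inequalities and finds the maximum of an objective that violates none of them; the specific numbers are made up purely for illustration, and the example assumes SciPy is available:

from scipy.optimize import linprog

# Maximize 3x + 2y subject to x + y <= 10, x <= 6, y <= 7, x >= 0, y >= 0.
# linprog minimizes, so the objective coefficients are negated.
c = [-3, -2]
A_ub = [[1, 1], [1, 0], [0, 1]]
b_ub = [10, 6, 7]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x, -res.fun)  # optimal (x, y) and the maximized objective value

Here the optimum (x = 6, y = 4, objective value 26) strikes a balance among the restrictions without directly defying any of them, which is the defining feature of such problems.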
References
Mathematical analysis | Multiple rule-based problems | [
"Mathematics"
] | 146 | [
"Mathematical analysis",
"Mathematical analysis stubs"
] |
5,443,653 | https://en.wikipedia.org/wiki/Five-year%20survival%20rate | The five-year survival rate is a type of survival rate for estimating the prognosis of a particular disease, normally calculated from the point of diagnosis. Lead time bias from earlier diagnosis can affect interpretation of the five-year survival rate.
There are absolute and relative survival rates, but the latter are more useful and commonly used.
Relative and absolute rates
Five-year relative survival rates are more commonly cited in cancer statistics. Five-year absolute survival rates may sometimes also be cited.
Five-year absolute survival rates describe the percentage of patients alive five years after the disease is diagnosed.
Five-year relative survival rates describe the percentage of patients with a disease who are alive five years after diagnosis, divided by the percentage of the general population of corresponding sex and age who are alive after five years. Typically, cancer five-year relative survival rates are well below 100%, reflecting excess mortality among cancer patients compared with the general population. In contrast to five-year absolute survival rates, five-year relative survival rates may also equal or even exceed 100% if cancer patients have the same or higher survival rates than the general population. This pattern may occur if the cancer can generally be cured, or if patients diagnosed with the cancer have greater socioeconomic wealth or better access to medical care than the general population.
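A small worked example with invented numbers may make the definition concrete: if 63% of a patient group is alive five years after diagnosis while 90% of an age- and sex-matched general population would be expected to survive the same period, the relative rate is 0.63 / 0.90 = 70%.

observed_survival = 0.63  # fraction of patients alive after five years (invented)
expected_survival = 0.90  # matched general-population survival (invented)
relative_survival = observed_survival / expected_survival
print(f"Five-year relative survival: {relative_survival:.1%}")  # 70.0%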
The fact that relative survival rates above 100% have been estimated for some groups of patients appears counterintuitive at first view. It is unlikely that the occurrence of prostate cancer would increase chances of survival compared with the general population. A more plausible explanation is that the pattern reflects a selection effect of PSA screening, as screening tests tend to be used less often by socially disadvantaged population groups, who, in general, also have higher mortality.
Uses
Five-year survival rates can be used to compare the effectiveness of treatments. Use of five-year survival statistics is more useful in aggressive diseases that have a shorter life expectancy following diagnosis, such as lung cancer, and less useful in cases with a long life expectancy, such as prostate cancer.
Improvements in rates are sometimes attributed to improvements in diagnosis rather than to improvements in prognosis.
To compare treatments independently from diagnostics, it may be better to consider survival from reaching a certain stage of the disease or its treatment.
Analysis performed against the Surveillance, Epidemiology, and End Results database (SEER) facilitates calculation of five-year survival rates.
References
Medical terminology
Epidemiology | Five-year survival rate | [
"Environmental_science"
] | 488 | [
"Epidemiology",
"Environmental social science"
] |
5,443,884 | https://en.wikipedia.org/wiki/Ornstein%20isomorphism%20theorem | In mathematics, the Ornstein isomorphism theorem is a deep result in ergodic theory. It states that if two Bernoulli schemes have the same Kolmogorov entropy, then they are isomorphic. The result, given by Donald Ornstein in 1970, is important because it states that many systems previously believed to be unrelated are in fact isomorphic; these include all finite stationary stochastic processes, including Markov chains and subshifts of finite type, Anosov flows and Sinai's billiards, ergodic automorphisms of the n-torus, and the continued fraction transform.
Discussion
The theorem is actually a collection of related theorems. The first theorem states that if two different Bernoulli shifts have the same Kolmogorov entropy, then they are isomorphic as dynamical systems. The third theorem extends this result to flows: namely, that there exists a flow $\phi_t$ such that its time-one map $\phi_1$ is a Bernoulli shift. The fourth theorem states that, for a given fixed entropy, this flow is unique, up to a constant rescaling of time. The fifth theorem states that there is a single, unique flow (up to a constant rescaling of time) that has infinite entropy. The phrase "up to a constant rescaling of time" means simply that if $\phi_t$ and $\psi_t$ are two Bernoulli flows with the same entropy, then $\psi_t = \phi_{ct}$ for some constant $c$. The developments also included proofs that factors of Bernoulli shifts are isomorphic to Bernoulli shifts, and gave criteria for a given measure-preserving dynamical system to be isomorphic to a Bernoulli shift.
A corollary of these results is a solution to the root problem for Bernoulli shifts: for example, given a shift $T$, there is another shift $\sqrt{T}$ whose square is isomorphic to $T$.
History
The question of isomorphism dates to von Neumann, who asked if the two Bernoulli schemes BS(1/2, 1/2) and BS(1/3, 1/3, 1/3) were isomorphic or not. In 1959, Ya. Sinai and Kolmogorov replied in the negative, showing that two different schemes cannot be isomorphic if they do not have the same entropy. Specifically, they showed that the entropy of a Bernoulli scheme BS(p1, p2,..., pn) is given by $H = -\sum_{i=1}^{n} p_i \log p_i$.
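As a quick check of this formula (using natural logarithms; the choice of logarithm base only rescales the entropy), the two schemes in von Neumann's question have different entropies, which is why they cannot be isomorphic:

from math import log

def bernoulli_entropy(probs):
    # Entropy of the Bernoulli scheme BS(p1, ..., pn).
    return -sum(p * log(p) for p in probs)

print(bernoulli_entropy([1/2, 1/2]))        # log 2, about 0.693
print(bernoulli_entropy([1/3, 1/3, 1/3]))   # log 3, about 1.099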
The Ornstein isomorphism theorem, proved by Donald Ornstein in 1970, states that two Bernoulli schemes with the same entropy are isomorphic. The result is sharp, in that very similar, non-scheme systems do not have this property; specifically, there exist Kolmogorov systems with the same entropy that are not isomorphic. Ornstein received the Bôcher prize for this work.
A simplified proof of the isomorphism theorem for symbolic Bernoulli schemes was given by Michael S. Keane and M. Smorodinsky in 1979.
References
Further reading
Steven Kalikow, Randall McCutcheon (2010) Outline of Ergodic Theory, Cambridge University Press
Donald Ornstein (2008), "Ornstein theory" Scholarpedia, 3(3):3957.
Daniel J. Rudolph (1990) Fundamentals of measurable dynamics: Ergodic theory on Lebesgue spaces, Oxford Science Publications. The Clarendon Press, Oxford University Press, New York, 1990.
Ergodic theory
Symbolic dynamics | Ornstein isomorphism theorem | [
"Mathematics"
] | 696 | [
"Symbolic dynamics",
"Ergodic theory",
"Dynamical systems"
] |
5,444,091 | https://en.wikipedia.org/wiki/Knob-and-tube%20wiring | Knob-and-tube wiring (sometimes abbreviated K&T) is an early standardized method of electrical wiring in buildings, in common use in North America from about 1880 to the 1930s. It consisted of single-insulated copper conductors run within wall or ceiling cavities, passing through joist and stud drill-holes via protective porcelain insulating tubes, and supported along their length on nailed-down porcelain knob insulators. Where conductors entered a wiring device such as a lamp or switch, or were pulled into a wall, they were protected by flexible cloth insulating sleeving called loom. The first insulation was asphalt-saturated cotton cloth, then rubber became common. Wire splices in such installations were twisted together for good mechanical strength, then soldered and wrapped with rubber insulating tape and friction tape (asphalt saturated cloth), or made inside metal junction boxes.
Knob and tube wiring was eventually displaced from interior wiring systems because of the high cost of installation compared with use of power cables, which combined both power conductors of a circuit in one run (and which later included grounding conductors).
At present, new concealed knob-and-tube installations are permitted in the U.S. only by special permission.
Elements
Ceramic knobs were cylindrical and generally nailed directly into the wall studs or floor joists. Most had a circular groove running around their circumference, although some were constructed in two pieces with pass-through grooves on each side of the nail in the middle. A leather washer often cushioned the ceramic, to reduce breakage during installation.
By wrapping electrical wires around the knob, and securing them with tie wires, the knob could be used to securely and permanently anchor the wire. The knobs separated the wire from potentially combustible framework, facilitated changes in direction, and ensured that wires were not subject to excessive tension. Because the wires were suspended in air, they could dissipate heat well.
Ceramic tubes were inserted into holes bored in wall studs or floor joists, and the wires were directed through them. This kept the wires from coming into contact with the wood framing members and from being compressed by the wood as the house settled. Ceramic tubes were sometimes also used when wires crossed over each other, for protection in case the upper wire were to break and fall on the lower conductor.
Ceramic cleats, which were block-shaped pieces, served a purpose similar to that of the knobs except that cleats were generally used in places where the wiring was surface mounted. Not all knob and tube installations utilized cleats.
Ceramic bushings protected each wire entering a metal device box, when such an enclosure was used.
Loom, a woven flexible insulating sleeve, was slipped over insulated wire to provide additional protection whenever a wire passed over or under another wire, when a wire entered a metal device enclosure, and in other situations prescribed by code.
Other ceramic pieces would typically be used as a junction point between the wiring system proper, and the more flexible cloth-clad wiring found in light fixtures or other permanent, hard-wired devices. When a generic power outlet was desired, the wiring could run directly into the junction box through a tube of protective loom and a ceramic bushing.
Wiring devices such as light switches, receptacle outlets, and lamp sockets were either surface-mounted, suspended, or flush-mounted within walls and ceilings. Only in the last case were metal boxes always used to enclose the wiring and device.
Unusual wiring layouts
In many older K&T installations, the supply and return wires were routed separately from each other, rather than being located parallel to and near each other. This direct routing method has the advantage of reduced cost by allowing use of the shortest possible lengths of wire, but the major disadvantage is that a detailed building wiring diagram is needed for other electricians to understand multiple interwoven circuits, especially if the wiring is not fully visible throughout its length. By contrast, modern electrical codes now require that all residential wiring connections be made only inside protective enclosures, such as junction boxes, and that all connections must remain accessible for inspection, troubleshooting, repair, or modification.
Under the US electrical code, Carter system wiring layouts have now been banned, even for permissible new installations of K&T wiring. However, electricians must be aware of this older system, which is still present in many existing older electrical installations.
Neutral fusing
Another practice that was common (or even originally required) in some older K&T designs was the installation of separate fuses in both the hot wire and the neutral (return) wire of an electrical circuit. The failure of a neutral fuse would cut off power flow through the affected circuit, but the hot conductor could still remain hot relative to ground, an unexpected and potentially hazardous situation. Because of the presence of a neutral fuse, and in the event that it blew, the neutral conductor could not be relied on to remain near ground potential; and, in fact, could be at full line potential (via transmission of voltage through a switched-on light bulb, for example).
Modern electrical codes generally do not require a neutral fuse. Instead, they explicitly forbid configurations that might break continuity of the neutral conductor, unless all associated hot conductors are also simultaneously disconnected (for example, by using ganged or "tied" circuit breakers). In retrofit situations, electricians may place a higher-value fuse on the neutral, so that this fuse blows last.
Advantages
In the early 1900s, K&T wiring was less expensive to install than other wiring methods. For several decades, electricians could choose between K&T wiring, conduit, armored cable, and metal junction boxes. The conduit methods were known to be of better quality, but cost significantly more than K&T. In 1909, flexible armored cable cost about twice as much as K&T, and conduit cost about three times the price of K&T. Knob and tube wiring persisted since it allowed owners to wire a building for electricity at lower cost.
Modern wiring methods assume that two or more load-carrying conductors will lie very near each other, as for instance in standard NM-2 cable. When installed correctly, the K&T wires are held away from the structural materials by ceramic insulators.
Over the K&T era, multiple wire types evolved. Early wiring was insulated with cotton cloth and soft rubber, while later wiring was much more robust. Although the actual wire covering may have degraded over the decades, the porcelain standoffs have a nearly unlimited lifespan and will keep any bare wires safely insulated. Today, porcelain standoffs are still commonly used with bare-wire electric fencing for livestock, where they withstand far higher voltage surges without risk of shorting to ground.
In summary, K&T wiring that was installed correctly, and not damaged or incorrectly modified since then, is fairly safe when used within the original current-carrying limits, typically about ten amperes per circuit.
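A back-of-the-envelope sketch shows why that roughly ten-ampere limit matters today; the 120 V line voltage and the appliance wattages below are assumed typical values chosen for illustration, not figures from this article:

voltage = 120.0                  # typical North American branch-circuit voltage (assumed)
circuit_limit_w = 10 * voltage   # ~10 A limit mentioned above, i.e. about 1200 W

loads_w = {"microwave oven": 1000, "space heater": 1500, "hair dryer": 1800}
for name, watts in loads_w.items():
    amps = watts / voltage
    verdict = "within" if watts <= circuit_limit_w else "exceeds"
    print(f"{name}: {amps:.1f} A, {verdict} the ~10 A circuit")

A single space heater or hair dryer already exceeds such a circuit on its own, which is one reason the original capacity is often inadequate for modern loads.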
Disadvantages
Historically, wiring installation requirements were less demanding in the age of knob-and-tube wiring than today. Compared to modern electrical wiring standards, these are the main technical shortcomings of knob-and-tube wiring methods:
never included a safety grounding conductor
did not confine switching to the hot conductor (the so-called Carter system, prohibited as of 1923, places electrical loads across the common terminals of a three-way switch pair)
permitted the use of in-line splices in walls without a junction box (however, this downside is offset by the strong nature of the soldered and taped junctions used at the time).
susceptible to mechanical damage in accessible areas
Over time, the price of electrician labor grew faster than the cost of materials. This removed the price advantage of K&T methods, especially since they required time-consuming skillful soldering of in-line splices and junctions, and careful hand-wrapping of connections in layers of insulating tape.
Knob-and-tube wiring can be made with high current carrying capacity. However, most existing residential knob-and-tube installations, dating to before 1940, have fewer branch circuits than is desired today. While these installations were adequate for the electrical loads at the time of installation, modern households use a range and intensity of electrical equipment unforeseen at the time. Household power use increased dramatically following World War II, due to the wide availability of new electrical appliances and devices.
Modern home buyers often find that existing K&T systems lack the capacity for today's levels of power use. First-generation wiring systems became susceptible to abuse by homeowners who would replace blown fuses with fuses rated for higher current. This overfusing of the circuits subjects wiring to higher levels of current and risks heat damage or fire.
Knob-and-tube wiring may also be damaged by building renovations. Its cloth and rubber insulation can dry out and turn brittle. It may also be damaged by rodents and careless activities such as hanging objects from wiring running in accessible areas like basements or attics.
Currently, the United States National Electrical Code forbids the use of loose, blown-in, or expanding-foam insulation over K&T wiring, because K&T is designed to let heat dissipate to the surrounding air. As a result, energy-efficiency upgrades that involve insulating previously uninsulated walls usually also require replacement of the wiring in affected homes. However, California, Washington, Nebraska, and Oregon have modified the NEC to conditionally allow insulation around K&T. These states did not find a single fire attributed to K&T wiring, and they permit insulation provided the home first passes inspection by an electrician.
As existing K&T wiring gets older, insurance companies may deny coverage due to a perception of increased risk. Several companies will not write new homeowners policies at all unless all K&T wiring is replaced, or an electrician certifies that the wiring is in good condition. Also, many institutional lenders are unwilling to finance a home with the relatively low-capacity service typical of K&T wiring, unless the electrical service is upgraded. Partial upgrades, where low demand lighting circuits are left intact, may be acceptable to some insurers.
See also
Rat-tail splice
T-splice
Western Union splice
References
Further reading
External links
Electrical wiring | Knob-and-tube wiring | [
"Physics",
"Engineering"
] | 2,118 | [
"Electrical systems",
"Building engineering",
"Physical systems",
"Electrical engineering",
"Electrical wiring"
] |