id | url | text | source | categories | token_count | subcategories |
|---|---|---|---|---|---|---|
337,444 | https://en.wikipedia.org/wiki/Coitus%20reservatus | Coitus reservatus (from Latin coitus, "sexual intercourse", and reservatus, "reserved"), also known as sexual continence, is a form of sexual intercourse in which a male does not attempt to ejaculate within his partner, avoiding the seminal emission. It is distinct from death-grip syndrome, in which a male's lack of emission is involuntary.
Alice Stockham coined the term karezza, derived from the Italian word carezza meaning "caress", to describe coitus reservatus, but the idea was already in practice at the Oneida Community. Alan Watts erroneously believed that karezza was a Persian word. The concept of karezza is loosely akin to maithuna in Hindu Tantra and sahaja in Hindu Yoga.
Ejaculation control was important for both genders in Chinese practice, called caiyin buyang ("collect yin and replenish yang") for men and caiyang buyin ("collect yang and replenish yin") for women, and was involved in Taoist sexual practices such as huanjing bunao, as well as in Indian Tantra (where it is known as "asidhārāvrata") and Hatha Yoga (see vajroli mudra), although conventional ejaculation is also endorsed.
Practice of karezza and sexual continence
Stockham writes, "Karezza signifies 'to express affection in both words and action', and while it fittingly denotes the union that is the outcome of deepest human affection, love's consummation, it is used technically throughout this work to designate a controlled sexual union." In practice, then, according to Stockham, it is more than individual self-control: it is mutual control, in which the penetrative partner helps the receptive partner and vice versa. According to Stockham, this mutuality is the key to overcoming many of the difficulties of controlling sexual expression individually.
Stockham's contribution was to apply this philosophy of orgasm control to women as much as to men. A form of birth control, the technique also prolongs sexual pleasure, reportedly to the point of mystical ecstasy. In this practice, orgasm is separated from ejaculation, making it possible to enjoy the pleasure of sexual intercourse, including orgasm, without seminal ejaculation.
Some would apply the principles of karezza to masturbation, whereby a person attempts to delay orgasm as long as possible to prolong pleasure, in a process known as "orgasmic brinkmanship", "surfing", or "edging"; this, however, is different from the practice of "karezza" in partnered intercourse.
One purpose of karezza is the maintenance and intensification of desire and enjoyment of sexual pleasure within relationships. According to Stockham, it takes from two weeks to a month for the body to recover from ejaculation: "Unless procreation is desired, let the final propagative orgasm be entirely avoided". Stockham advocated that the 'honeymoon period' of a relationship could be maintained in perpetuity by limiting the frequency of ejaculations or, preferably, avoiding them entirely.
Kalman Andras Oszlar writes, "Inasmuch as sexual togetherness is not limited into the physical world and does not mean quick wasting of sexual energies, by this we give free way to higher dimensions in the relationship." On this view, the couple may enter a state of flow, during which they charge up with energy while, with focused attention (Dhāraṇā), submerging themselves in that in which they take pleasure. This state of mind is connected with the purported beneficial effects of tantric restraint and transformation. Mihály Csíkszentmihályi first defined flow within positive psychology, and the concept has since been taken up beyond professional circles. According to Csíkszentmihályi, the flow experience is an entirely focused and motivationally intensified state in which people can fully concentrate and properly command their feelings for the best performance or learning.
There is a slight difference between karezza and coitus reservatus. In coitus reservatus, unlike karezza, a woman can enjoy a prolonged orgasm while a man exercises self-control.
Like coitus interruptus, coitus reservatus is not a reliable form of preventing a sexually transmitted infection, as the penis leaks pre-ejaculate before ejaculation, which may contain all of the same infectious viral particles and other microbes as the semen. Although studies have not found sperm in pre-ejaculate fluid, the method is also unreliable for contraception because of the difficulty of controlling ejaculation beyond the point of no return. Additionally, pre-ejaculate fluid can collect sperm from a previous ejaculation, leading to pregnancy even when performed correctly.
Controversy
Alice Stockham was taken to court and forced to give up teaching the practice of karezza in the United States. Like many other sex reformers, Dr. Stockham was arrested by Anthony Comstock, who prosecuted a variety of sexual freedom reformers. Ida Craddock committed suicide after being repeatedly jailed for peddling pornography. The press attacked the Oneida Community and Noyes fled to Canada due to a warrant being issued for his arrest on a statutory rape charge on June 22, 1879. From there, he advised others to follow St. Paul's plan of either marriage or celibacy, and to obey the commandments against adultery and fornication.
Views of the Catholic Church
Many theologians within the Catholic tradition approve of coitus reservatus with precise guidelines.
In section 6.918 of his Moral Theology, St. Alphonsus Liguori allows it in marriage if mutual (when both spouses restrain themselves from orgasm), according to a citation by author Peter Gardella.
In 1952, the Holy See issued a warning about the practice. John F. Harvey OSFS interpreted this warning to mean that "confessors and spiritual directors should not presume that there is nothing objectionable" with it, indicating that the practice could become sinful under certain circumstances, for example, if it ends in masturbation.
The strict definition of masturbation applied in this context is that given by Fr. John Hardon SJ.
In an article hosted on the US Conference of Catholic Bishops website, John F. Harvey OSFS endorsed coitus reservatus and explained its conditions: the spouses should avoid undue frustration; they must not intend orgasm for either of them; and if one of them unintentionally reaches climax, they must try to bring the other spouse to climax as well.
This differs from non-penetrative, rubbing-only sex (mutual masturbation or manual sex) and differs from coitus interruptus because coitus reservatus ends with no orgasm at all for both spouses if practiced within guidelines from John F. Harvey OSFS referenced above.
Oneida Community
The Oneida Community, founded in the 19th century by John Humphrey Noyes, experimented with coitus reservatus (then called male continence) in a religiously Christian communalist environment. The experiment lasted about a quarter of a century, after which Noyes created Oneida silverware and established the Oneida Silver Co., which grew into Oneida Limited. Noyes identified three functions of the sexual organs: the urinary, the propagative (reproductive), and the amative (sexual love). Noyes believed in separating the amative from the propagative, and he put amative sexual intercourse on the same footing as other ordinary forms of social interchange. Sexual intercourse, as Noyes defines it, is the insertion of the penis into the vagina; ejaculation is not a requirement for sexual intercourse.
Western esotericism
Ida Craddock, C. F. Russell, and Louis T. Culling
Inspired by Ida Craddock's work Heavenly Bridegrooms, American occultist C. F. Russell developed a curriculum of sex magick. In the 1960s, his disciple Louis T. Culling published these in two works entitled The Complete Magickal Curriculum of the Secret Order G.'.B.'.G.'. and Sex Magick. The first two degrees are "Alphaism and Dianism". Culling writes that Dianism is "sexual congress without bringing it to climax" and that each participant is to regard their partner, not as a "known earthly personality", but as a "visible manifestation of one's Holy Guardian Angel".
Rosicrucian groups
AMORC does not recommend engaging in sexual practices of an occult nature, a position that has held since its first Imperator, H. Spencer Lewis, Ph.D., made it public knowledge.
The Fraternitas Rosae Crucis led by Dr. R. Swinburne Clymer engages in sexual practices for the sake of race regeneration. Dr. Clymer is completely opposed to the practice of karezza or coitus reservatus and advocates instead a form of sexual intercourse in which the couple experiences the orgasm at the same time.
The Secretary of the FUDOSI instead approved the practice of karezza to establish harmony in the family and the world by preventing the waste and misuse of sex energy.
Dr. Arnold Krumm-Heller established the Fraternitas Rosicruciana Antiqua (FRA), a Rosicrucian school in Germany with branches in South America, having the following formula of sexual conduct: "Immissio Membri Virile In Vaginae Sine Ejaculatio Seminis" (Introduce the penis in the vagina without ejaculating the semen). Samael Aun Weor experimented with that formula and developed the doctrine of The Perfect Matrimony.
20th-century Western writers
English novelist Aldous Huxley, in his last novel Island, wrote that Maithuna, the Yoga of Love, is "the same as what Roman Catholicism means by coitus reservatus." Discussing coitus reservatus in Nature, Man and Woman, Alan W. Watts notes: "I would like to see someone make a case for the idea that the Apostles really did hand down an inner tradition to the Church, and that through all these centuries the Church has managed to guard it from the public eye. If so, it has remained far more secret and 'esoteric' than in any of the other great spiritual traditions of the world, so much so that its existence is highly doubtful". The Welsh writer Norman Lewis, in his celebrated account of life in Naples in 1944, claimed that San Rocco was the patron saint of coitus reservatus: "I recommended him to drink – as the locals did – marsala with the yolk of eggs stirred into it, and to wear a medal of San Rocco, patron of coitus reservatus, which could be had in any religious-supplies shop". The psychologist Havelock Ellis writes: "Coitus Reservatus, – in which intercourse is maintained even for very long periods, during which a woman may orgasm several times while the penetrative partner succeeds in holding back orgasm – so far from being injurious to a woman, is probably the form of coitus which gives her the maximum gratification and relief".
Modern practice of karezza
Marnia Robinson and her husband Gary Wilson promoted karezza through books, websites, and media interviews. Robinson's book, which she claimed was "researched" by Wilson, was Peace Between the Sheets, later renamed Cupid's Poisoned Arrow. When asked whether karezza was science-based or spiritual, Robinson claimed it was purely a spiritual practice for transferring sexual "energy". Robinson described karezza sex as superior to conventional sex, claiming that the belief that orgasm is an important part of sex reflects brainwashing. Wilson claimed the practice cured his lifelong alcoholism and depression. Another prominent modern karezza practitioner, Mary Sharpe, claimed the time after orgasm was a time of increased violence, even leading to terrorism, making it necessary to avoid orgasm.
Claims of the semen retention community and those of the NoFap community are among the least accurate concerning men's health.
See also
References
Sources
Further reading
External links
Sexual acts
Virtue | Coitus reservatus | [
"Biology"
] | 2,578 | [
"Sexual acts",
"Behavior",
"Sexuality",
"Mating"
] |
337,457 | https://en.wikipedia.org/wiki/Raoul%20Bott | Raoul Bott (September 24, 1923 – December 20, 2005) was a Hungarian-American mathematician known for numerous foundational contributions to geometry in its broad sense. He is best known for his Bott periodicity theorem, the Morse–Bott functions which he used in this context, and the Borel–Bott–Weil theorem.
Early life
Bott was born in Budapest, Hungary, the son of Margit Kovács and Rudolph Bott. His father was of Austrian descent, and his mother was of Hungarian Jewish descent; Bott was raised a Catholic by his mother and stepfather in Bratislava, Czechoslovakia, now the capital of Slovakia. Bott grew up in Czechoslovakia and spent his working life in the United States. His family emigrated to Canada in 1938, and subsequently he served in the Canadian Army in Europe during World War II.
Career
Bott later went to college at McGill University in Montreal, where he studied electrical engineering. He then earned a PhD in mathematics from Carnegie Mellon University in Pittsburgh in 1949. His thesis, titled Electrical Network Theory, was written under the direction of Richard Duffin. Afterward, he began teaching at the University of Michigan in Ann Arbor. Bott continued his studies at the Institute for Advanced Study in Princeton. He was a professor at Harvard University from 1959 to 1999. In 2005 Bott died of cancer in San Diego.
With Richard Duffin at Carnegie Mellon, Bott studied the existence of electronic filters corresponding to given positive-real functions. In 1949 they proved a fundamental theorem of filter synthesis, extending earlier work by Otto Brune: the requisite functions of complex frequency s could be realized by a passive network of inductors and capacitors. The proof relied on induction on the sum of the degrees of the polynomials in the numerator and denominator of the rational function.
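For reference, the class of functions involved has a standard textbook characterization not spelled out in the article: a rational function $Z(s)$ is positive-real exactly when

\[
Z(s) \in \mathbb{R} \ \text{for } s \in \mathbb{R},
\qquad\text{and}\qquad
\operatorname{Re} Z(s) \ge 0 \ \text{whenever } \operatorname{Re} s > 0 .
\]

The Bott–Duffin theorem shows that every such function arises as the driving-point impedance of a passive network, with the proof proceeding by the degree-sum induction just described.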
In his 2000 interview with Allyn Jackson of the American Mathematical Society, he explained that he sees "networks as discrete versions of harmonic theory", so his experience with network synthesis and electronic filter topology introduced him to algebraic topology.
Bott met Arnold S. Shapiro at the IAS and they worked together.
He studied the homotopy theory of Lie groups, using methods from Morse theory, leading to the Bott periodicity theorem (1957). In the course of this work, he introduced Morse–Bott functions, an important generalization of Morse functions.
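Stated concretely (a standard formulation, supplied here for reference rather than quoted from the article), Bott periodicity says that the stable homotopy groups of the unitary, orthogonal, and symplectic groups repeat:

\[
\pi_{k+2}(U) \cong \pi_k(U),
\qquad
\pi_{k+8}(O) \cong \pi_k(O),
\qquad
\pi_{k+8}(Sp) \cong \pi_k(Sp),
\]

so that, for example, $\pi_k(U)$ is $0$ for $k$ even and $\mathbb{Z}$ for $k$ odd.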
This led to his role as a collaborator over many years with Michael Atiyah, initially via the part played by periodicity in K-theory. Bott made important contributions towards the index theorem, especially in formulating related fixed-point theorems, in particular the so-called 'Woods Hole fixed-point theorem', a combination of the Riemann–Roch theorem and Lefschetz fixed-point theorem (it is named after Woods Hole, Massachusetts, the site of a conference at which collective discussion formulated it). The major Atiyah–Bott papers on what is now the Atiyah–Bott fixed-point theorem were written in the years up to 1968. They collaborated further in recovering, in contemporary language, Ivan Petrovsky's work on Petrovsky lacunas of hyperbolic partial differential equations, prompted by Lars Gårding. In the 1980s, Atiyah and Bott investigated gauge theory, using the Yang–Mills equations on a Riemann surface to obtain topological information about the moduli spaces of stable bundles on Riemann surfaces. In 1983 he spoke to the Canadian Mathematical Society in a talk he called "A topologist marvels at Physics".
He is also well known for the Borel–Bott–Weil theorem on representation theory of Lie groups via holomorphic sheaves and their cohomology groups, and for work on foliations. With Chern he worked on Nevanlinna theory and studied holomorphic vector bundles over complex analytic manifolds, introducing the Bott–Chern classes, which are useful in Arakelov geometry and in algebraic number theory.
He introduced Bott–Samelson varieties and the Bott residue formula for complex manifolds and the Bott cannibalistic class.
Awards
In 1964, he was awarded the Oswald Veblen Prize in Geometry by the American Mathematical Society. In 1983, he was awarded the Jeffery–Williams Prize by the Canadian Mathematical Society. In 1987, he was awarded the National Medal of Science.
In 2000, he received the Wolf Prize. In 2005, he was elected an Overseas Fellow of the Royal Society of London.
Students
Bott had 35 PhD students, including Stephen Smale, Lawrence Conlon, Daniel Quillen, Peter Landweber, Robert MacPherson, Robert W. Brooks, Robin Forman, Rama Kocherlakota, Susan Tolman, András Szenes, Kevin Corlette, and Eric Weinstein. Smale and Quillen won Fields Medals in 1966 and 1978 respectively.
Publications
1995: Collected Papers. Vol. 4. Mathematics Related to Physics. Edited by Robert MacPherson. Contemporary Mathematicians. Birkhäuser Boston, xx+485 pp.
1995: Collected Papers. Vol. 3. Foliations. Edited by Robert D. MacPherson. Contemporary Mathematicians. Birkhäuser, xxxii+610 pp.
1994: Collected Papers. Vol. 2. Differential Operators. Edited by Robert D. MacPherson. Contemporary Mathematicians. Birkhäuser, xxxiv+802 pp.
1994: Collected Papers. Vol. 1. Topology and Lie Groups. Edited by Robert D. MacPherson. Contemporary Mathematicians. Birkhäuser, xii+584 pp.
1982: (with Loring W. Tu) Differential Forms in Algebraic Topology. Graduate Texts in Mathematics #82. Springer-Verlag, New York-Berlin. xiv+331 pp.
1969: Lectures on K(X). Mathematics Lecture Note Series. W. A. Benjamin, New York-Amsterdam. x+203 pp.
See also
Bott–Duffin inverse
Parallelizable manifold
Thom's and Bott's proofs of the Lefschetz hyperplane theorem
References
External links
1923 births
2005 deaths
20th-century American mathematicians
21st-century American mathematicians
American people of Hungarian-Jewish descent
Hungarian Jews
20th-century Hungarian mathematicians
Topologists
Geometers
Differential geometers
Algebraic geometers
Harvard University Department of Mathematics faculty
University of Michigan faculty
McGill University Faculty of Engineering alumni
Carnegie Mellon University alumni
Foreign members of the Royal Society
National Medal of Science laureates
Wolf Prize in Mathematics laureates
Members of the French Academy of Sciences
Hungarian Roman Catholics
Hungarian emigrants to Canada
Canadian emigrants to the United States
Hungarian expatriates in Czechoslovakia | Raoul Bott | [
"Mathematics"
] | 1,387 | [
"Topologists",
"Topology",
"Geometers",
"Geometry"
] |
337,489 | https://en.wikipedia.org/wiki/Karpman%20drama%20triangle | The Karpman drama triangle is a social model of human interaction proposed by San Francisco psychiatrist, Stephen B. Karpman in 1968. The triangle maps a type of destructive interaction that can occur among people in conflict. The drama triangle model is a tool used in psychotherapy, specifically transactional analysis. The triangle of actors in the drama are persecutors, victims, and rescuers.
Karpman described how in some cases these roles were not undertaken in an honest manner to resolve the presenting problem, but rather were used fluidly and switched between by the actors in a way that achieved unconscious goals and agendas. The outcome in such cases was that the actors would be left feeling justified and entrenched, but there would often be little or no change to the presenting problem, while other, more fundamental problems giving rise to the situation remained unaddressed.
Use
Through popular usage, and the work of Stephen Karpman and others, Karpman's triangle has been adapted for use in structural analysis and transactional analysis.
Theory
Karpman used triangles to map conflicted or drama-intense relationship transactions. The Karpman Drama Triangle models the connection between personal responsibility and power in conflicts, and the destructive and shifting roles people play. He defined three roles in the conflict: Persecutor and Rescuer (the one-up positions) and Victim (the one-down position). Karpman placed these three roles on an inverted triangle and referred to them as being the three aspects, or faces, of drama.
The Victim: The Victim in this model is not intended to represent an actual victim, but rather someone feeling or acting like one. The Victim seeks to convince himself or herself and others that he or she cannot do anything, nothing can be done, all attempts are futile, despite trying hard. One payoff for this stance is avoiding real change or acknowledgement of one's true feelings, which may bring anxiety and risk, while feeling one is doing all one can to escape it. As such, the Victim's stance is "Poor me!" The Victim feels persecuted, oppressed, helpless, hopeless, powerless, ashamed, and seems unable to make decisions, solve problems, take pleasure in life or achieve insight. The Victim will remain with a Persecutor or, if not being persecuted, will set someone else up in the role of Persecutor. The Victim will also seek help, creating one or more Rescuers to save the day, who will in reality perpetuate the Victim's negative feelings and leave the situation broadly unchanged.
The Rescuer: The Rescuer's line is "Let me help you." A classic enabler, the Rescuer feels guilty if they do not go to the rescue, and ultimately becomes angry (and becomes a Persecutor) as their help fails to achieve change. Yet the Rescuer's rescuing has negative effects: it keeps the Victim dependent and doesn't allow the Victim permission to fail and experience the consequences of his or her choices. The rewards derived from this rescue role are that the focus is taken away from the Rescuer, who can also feel good for having tried, and justified in their negative feelings (to the other actor/s) upon failing. When one focuses one's energy on another, it enables one to ignore one's own anxiety and troubles. This rescue role is also pivotal because one's actual primary interest is really an avoidance of one's own problems disguised as concern for the Victim's needs.
The Persecutor: (a.k.a. Villain) The Persecutor insists, "It's all your fault." The Persecutor is controlling, blaming, critical, oppressive, angry, authoritarian, rigid and superior. But if blamed in turn, the Persecutor may become defensive and may switch roles to become a Victim if attacked forcefully by the Rescuer and/or Victim, in which case the Victim may also switch roles to become a Persecutor.
Initially a drama triangle arises when a person takes on the role of a victim or persecutor. This person then feels the need to enlist other players into the conflict. As often happens, a rescuer is encouraged to enter the situation. These enlisted players take on roles of their own that are not static, and therefore various scenarios can occur. The victim might turn on the rescuer, for example, while the rescuer then switches to persecution.
The reason that the situation persists is that each participant has their (frequently unconscious) psychological wishes/needs met without having to acknowledge the broader dysfunction or harm done in the situation as a whole. Each participant is acting upon his or her own selfish needs, rather than acting in a genuinely responsible or altruistic manner. Any character might "ordinarily come on like a plaintive victim; it is now clear that the one can switch into the role of Persecutor providing it is 'accidental' and the one apologizes for it".
The motivations of the rescuer are the least obvious. In the terms of the triangle, the rescuer has a mixed or covert motive and benefits egoically in some way from being "the one who rescues". The rescuer has a surface motive of resolving the problem and appears to make great efforts to solve it, but also has a hidden motive to not succeed, or to succeed in a way in which he or she benefits. The rescuer may get a self-esteem boost, for example, or receive respected rescue status, or derive enjoyment by having someone depend on or trust him or her and act in a way that ostensibly seems to be trying to help, but at a deeper level plays upon the victim in order to continue getting a payoff.
The relationship between the victim and the rescuer may be one of codependency. The rescuer keeps the victim dependent by encouraging his or her victimhood. The victim gets his or her needs met by being taken care of by the rescuer.
Participants generally tend to have a primary or habitual role (victim, rescuer, persecutor) when they enter into drama triangles. Participants first learn their habitual roles in their respective families of origin. Even though participants each have a role with which they most identify, once on the triangle, participants rotate through all three positions.
Each triangle has a "payoff" for those playing it. The "antithesis" of a drama triangle lies in discovering how to deprive the actors of their payoff.
Historical context
Family therapy movement
After World War II, therapists observed that while many battle-torn veteran patients readjusted well after returning to their families, some patients did not; some even regressed when they returned to their home environment. Researchers felt that they needed an explanation for this and began to explore the dynamics of family life – and thus began the family therapy movement. Prior to this time, psychiatrists and psychoanalysts focused on the patient's already-developed psyche and downplayed outside detractors. Intrinsic factors were addressed and extrinsic reactions were considered as emanating from forces within the person.
Transactional analysis
In the 1950s, Eric Berne developed transactional analysis, a method for studying interactions between individuals. This approach was profoundly different from that of Freud. While Freud relied on asking patients about themselves, Berne felt that a therapist could learn by observing what was communicated (words, body language, facial expressions) in a transaction. So instead of directly asking the patient questions, Berne would frequently observe the patient in a group setting, noting all of the transactions that occurred between the patient and other individuals.
Triangles/triangulation
The theory of triangulation was originally published in 1966 by Murray Bowen as one of eight parts of Bowen's family systems theory. Bowen, a pioneer in family systems theory, began his early work with schizophrenics at the Menninger Clinic, from 1946 to 1954. Triangulation is the "process whereby a two-party relationship that is experiencing tension will naturally involve third parties to reduce tension". Simply put, when people find themselves in conflict with another person, they will reach out to a third person. The resulting triangle is more comfortable, as it can hold much more tension, because the tension is being shifted around three people instead of two.
Bowen studied the dyad of the mother and her schizophrenic child while he had them both living in a research unit at the Menninger clinic. Bowen then moved to the National Institute of Mental Health (NIMH), where he resided from 1954 to 1959. At the NIMH Bowen extended his hypothesis to include the father-mother-child triad. Bowen considered differentiation and triangles the crux of his theory, Bowen Family Systems Theory. Bowen intentionally used the word triangle rather than triad. In Bowen Family Systems Theory, the triangle is an essential part of the relationship.
Couples left to their own resources oscillate between closeness and distance. Two people having this imbalance often have difficulty resolving it by themselves. To stabilize the relationship, the couple often seek the aid of a third party to help re-establish closeness. A triangle is the smallest possible relationship system that can restore balance in a time of stress. The third person assumes an outside position. In periods of stress, the outside position is the most comfortable and desired position. The inside position is plagued by anxiety, along with its emotional closeness. The outsider serves to preserve the inside couple's relationship. Bowen noted that not all triangles are constructive – some are destructive.
Pathological/perverse triangles
In 1968, Nathan Ackerman conceptualized a destructive triangle. Ackerman stated "we observe certain constellations of family interactions which we have epitomized as the pattern of family interdependence, roles those of destroyer or persecutor, the victim of the scapegoating attack, and the family healer or the family doctor." Ackerman also recognized the pattern of attack, defense, and counterattack, as shifting roles.
Karpman triangle and Eric Berne
In 1968, Stephen Karpman, who had an interest in acting and was a member of the Screen Actors Guild, chose "drama triangle" rather than "conflict triangle" as, here, the Victim in his model is not intended to represent an actual victim, but rather someone feeling or acting like one. He first published his theory in an article entitled "Fairy Tales and Script Drama Analysis". His article, in part, examined the fairy tale "Little Red Riding Hood" to illustrate its points. Karpman was, at the time, a recent graduate of Duke University School of Medicine and was doing postgraduate studies under Berne. Berne, who founded the field of transactional analysis, encouraged Karpman to publish what Berne referred to as "Karpman's triangle". Karpman's article was published in 1968. In 1972, Karpman received the Eric Berne Memorial Scientific Award for the work.
Transactional analysis
Eric Berne, a Canadian-born psychiatrist, created the theory of transactional analysis, in the middle of the 20th century, as a way of explaining human behavior. Berne's theory of transactional analysis was based on the ideas of Freud but was distinctly different. Freudian psychotherapists focused on talk therapy as a way of gaining insight to their patients' personalities. Berne believed that insight could be better discovered by analyzing patients’ social transactions.
Games in transactional analysis refers to a series of transactions that is complementary (reciprocal), ulterior, and proceeds towards a predictable outcome. In this context, the Karpman Drama Triangle is a "game".
Games are often characterized by a switch in roles of players towards the end. The number of players may vary. Games in this sense are devices used (often unconsciously) by people to create a circumstance where they can justifiably feel certain resulting feelings (such as anger or superiority) or justifiably take or avoid taking certain actions where their own inner wishes differ from societal expectations. They are always a substitute for a more genuine and full adult emotion and response which would be more appropriate. Three quantitative variables are often useful to consider for games:
Flexibility: "The ability of the players to change the currency of the game (that is, the tools they use to play it). Some games... can be played properly with only one kind of currency, while others, such as exhibitionistic games, are more flexible", so that the focus of power may shift from words, to money, to parts of the body.
Tenacity: "Some people give up their games easily, others are more persistent", referring to the way people stick to their games and their resistance to breaking away from them.
Intensity: "Some people play their games in a relaxed way, others are more tense and aggressive. Games so played are known as easy and hard games, respectively".
The consequences of games may vary from small paybacks to paybacks built up over a long period to a major level. Based on the degree of acceptability and potential harm, games are classified into three categories, representing first degree games, second degree games, and third degree games:
socially acceptable,
undesirable but not irreversibly damaging
may result in drastic harm.
The Karpman triangle was an adaptation of a model that was originally conceived to analyze the play-action pass and the draw play in American football and later adapted as a way to analyze movie scripts. Karpman is reported to have doodled thirty or more diagram types before settling on the triangle. Karpman credits the movie Valley of the Dolls as being a testbed for refining the model into what Berne coined as the Karpman Drama Triangle.
Karpman's fully developed theory now includes many variables of the triangle besides role switches. These include space switches (private-public, open-closed, near-far), which precede, cause, or follow role switches, and script velocity (the number of role switches in a given unit of time). Named variants include the Question Mark triangle, the False Perception triangle, the Double Bind triangle, the Indecision triangle, the Vicious Cycle triangle, the Trapping triangle, the Escape triangle, the Triangles of Oppression, the Triangles of Liberation, Switching in the triangle, and the Alcoholic Family triangle.
While transactional analysis is the method for studying interactions between individuals, one researcher postulates that drama-based leaders can instill an organizational culture of drama. Persecutors are more likely to be in leadership positions and a persecutor culture goes hand in hand with cutthroat competition, fear, blaming, manipulation, high turnover and an increased risk of lawsuits. There are also victim cultures which can lead to low morale and low engagement as well as an avoidance of conflict, and rescuer cultures which can be characterized as having a high dependence on the leader, low initiative and low innovation.
Therapeutic models
The Winner's Triangle was published by Acey Choy in 1990 as a therapeutic model for showing patients how to alter social transactions when entering a triangle at any of the three entry points. Choy recommends that anyone feeling like a victim think more in terms of being vulnerable and caring, that anyone cast as a persecutor adopt an assertive posture, and anyone recruited to be a rescuer should react by being "caring".
Vulnerable – a victim should be encouraged to accept their vulnerability, problem solve, and be more self-aware.
Assertive – a persecutor should be encouraged to ask for what they want, be assertive, and cultivate self-compassion.
Caring – a rescuer should be encouraged to show concern and be caring, but not over-reach and problem solve for others.
The Power of TED*, first published in 2009, recommends that the "victim" adopt the alternative role of creator, view the persecutor as a challenger, and enlist a coach instead of a rescuer.
Creator – victims are encouraged to be outcome-oriented as opposed to problem-oriented and take responsibility for choosing their response to life challenges. They should focus on resolving "dynamic tension" (the difference between current reality and the envisioned goal or outcome) by taking incremental steps toward the outcomes they are trying to achieve.
Challenger – a victim is encouraged to see a persecutor as a person (or situation) that forces the creator to clarify their needs, and focus on their learning and growth.
Coach – a rescuer should be encouraged to ask questions that are intended to help the individual to make informed choices. The key difference between a rescuer and a coach is that the coach sees the creator as capable of making choices and of solving their own problems. A coach asks questions that enable the creator to see the possibilities for positive action, and to focus on what they do want instead of what they don't want.
See also
References
Further reading
Books
Emerald, David (2016). The Power of TED* (*The Empowerment Dynamic). Bainbridge Island: Polaris Publishing Group.
Emerald, David (2019). 3 Vital Questions: Transforming Workplace Drama. Bainbridge Island: Polaris Publishing Group.
Karpman, Stephen (2014). A Game Free Life. Self published.
Zimberoff, Diane (1989). Breaking Free from the Victim Trap. Nazareth: Wellness Press.
Harris, Thomas (1969). I'm OK, You're OK. New York: Galahad Books.
Berne, Eric (1966). Games People Play. New York: Ballantine Books.
West, Chris (2020). The Karpman Drama Triangle Explained. London: CWTK Publishing.
Articles
Johnson, R. Skip (2015). Escaping Conflict and the Karpman Drama Triangle. BPDFamily
Forrest, Lynne (2008). The Three Faces of Victim — An Overview of the Drama Triangle. Transforming Victim Consciousness
Choy, Acey (1990). The Winner's Triangle Transactional Analysis Journal 20(1):40
Gurowitz, Edward (1978). Energy Considerations in Treatment of the Drama Triangle. Transactional Analysis Journal January 1978 vol. 8 no. 1: 16–18
Behavioral concepts
Transactional analysis
Triangles
Eponyms | Karpman drama triangle | [
"Biology"
] | 3,742 | [
"Behavior",
"Behavioral concepts",
"Behaviorism"
] |
337,522 | https://en.wikipedia.org/wiki/Eolas | Eolas (Irish for "knowledge"; backronym: "Embedded Objects Linked Across Systems") is a United States technology firm formed as a spin-off from the University of California, San Francisco (UCSF), in order to commercialize UCSF's patents for work done there by Eolas' co-founders as part of the Visible Embryo Project. The company was founded in 1994 by Dr. Michael Doyle, Rachelle Tunik, David Martin, and Cheong Ang from the UCSF Center for Knowledge Management (CKM). The company was created at the request of UCSF and was founded by the inventors of the university's patents.
In addition to the work done while at UCSF, Dr. Doyle has led work at Eolas to create new technologies ranging from spatial genomics/spatial transcriptomics, code signing, transient-key cryptography, and blockchain to mobile AI assistants and automated audio conversation annotation.
The University of California, San Francisco CKM team created an advanced early web browser that supported plugins, streaming media, and cloud computing, which could provide seamless access to potentially unlimited remote high-performance computational capabilities. They demonstrated it at Xerox PARC in November 1993, at the second Bay Area SIGWEB meeting. The claim that the plug-in/applet functionality was an innovation, advanced to justify their patent application, has been contested by Pei-Yuan Wei, who developed the earlier Viola browser, which added scripted-app capabilities in 1992; Wei's claim of prior art is supported by Sir Tim Berners-Lee, inventor of the WWW, and other Web pioneers. Given only a short time to prepare, Wei was able to demonstrate Viola's equivalent capabilities only for local rather than remote files at the 2003 Eolas v. Microsoft trial, and thus fell short of proving prior art to the trial court's satisfaction. The case with Microsoft over patent 5,838,906 was settled in 2007 for a confidential amount of money after an initial $565 million judgment was stayed on appeal, but the University of California disclosed its piece of the final settlement as $30.4 million. In 2009 Eolas sued numerous other companies over patent number 7,599,985 in the United States District Court for the Eastern District of Texas. As of June 2011, a number of these companies, including Texas Instruments, Oracle and JPMorgan Chase, had signed licensing deals with Eolas, while the company continued to seek licenses from others.
In February 2012, an eight-person jury in the Eastern District of Texas invalidated some of the claims in the ’906 and ’985 patents, and in July 2012, Judge Leonard Davis ruled against Eolas. One year later, moreover, the US Court of Appeals for the Federal Circuit sustained that ruling.
However, after a new patent covering cloud computing on the Web was granted to Eolas in November 2015, Eolas filed a new lawsuit against Google, Amazon and Walmart, which is currently underway in the Northern District of California.
Products
In September 1995, the founders of Eolas released WebRouser, an advanced Web browser based on Mosaic that implemented plugins, client-side image maps, web-page-defined browser buttons and menus, embedded streaming media, and cloud computing capabilities. In 2005, Eolas released 'Muse', a "multimedia doodling application," which it later licensed to Iconicast LLC, to use as the basis for an iOS app called 'HueTunes'. The HueTunes app was featured at the DEMO conference in 2013. In 2012, Eolas developed the Einstein Brain Atlas iPad app for the National Museum of Health and Medicine Chicago, which was named the Gizmodo App of the Day. According to the Eolas Web site, their current products include two health-education systems: the AnatLab Visible Human, used to teach gross anatomy to medical students, and AnatLab Histology, an iOS and Android app that provides mobile access to a complete collection of ultra-high-resolution histology microscopic slide images.
Patents
US patent 5,838,906, titled "Distributed hypermedia method for automatically invoking external application providing interaction and display of embedded objects within a hypermedia document," was filed on October 17, 1994 and granted on November 17, 1998.
In autumn 2003, Tim Berners-Lee, inventor of the World Wide Web and director of the W3C, wrote to the Under Secretary of Commerce asking for this patent to be invalidated, in order to "eliminate this major impediment to the operation of the Web". Leaders of the open-source community sided with Microsoft in fighting the patent, due to its threat to the free nature of the Web and to the basic established HTML standards. Specific concerns about having one company (Eolas) control a critical piece of the Web framework were cited.
In March 2004, the United States Patent and Trademark Office (USPTO) re-examined and initially rejected the patent. Eolas submitted a rebuttal in May 2004. On September 27, 2005, the USPTO upheld the validity of the patent. The PTO ruling rejected the relevance of Pei Wei's Viola code to the Eolas patent. According to the University of California press release, "In its 'Reasons for Patentability/Confirmation' notice, the patent examiner rejected the arguments for voiding UC's previously approved patent claims for the Web-browser technology as well as the evidence presented to suggest that the technology had been developed prior to the UC innovation. The examiner considered the Viola reference the primary reference asserted by Microsoft at trial as a prior art publication and found that Viola does 'not teach nor fairly suggest that instant 906 invention, as claimed.'"
Eolas was granted a second patent in October 2009 related to the same technology.
After considering the evidence asserted at the 2012 trial, including Viola, the US Patent Office granted Eolas a new patent in November 2015 with claims which generally cover cloud computing on the Web.
All of the above patents had expired by September 2017.
Litigation
Microsoft declined to license the technology when it was offered to them (and others) in 1994.
In 1999, Eolas filed suit in the US District Court for the Northern District of Illinois against Microsoft over the validity and use of the patent. Eolas won the initial case in August 2003 and was awarded damages from Microsoft for infringement, in a judgment that amounted to $565 million. The District Court reaffirmed the jury's decision in January 2004.
In June 2004, Microsoft appealed the case to the Court of Appeals for the Federal Circuit. In March 2005, the District Court judgment was remanded, but the infringement and damages parts of the case were upheld. The appeals court ruled that the two Viola-related exhibits that had been thrown out of the original trial needed to be shown to a jury in a retrial. Microsoft quickly filed for a rehearing.
In October 2005, the Supreme Court of the United States refused to hear Microsoft's appeal, leaving intact the Federal Circuit Court of Appeals ruling in favor of Eolas with respect to foreign sales of Microsoft Windows. However, the remand to District Court had not been heard yet.
In May 2007, the USPTO agreed to allow Microsoft to argue ownership of the patent, after reissuing a Microsoft patent that covers the same concepts as outlined in the Eolas patent and contains wording that mirrors it.
The USPTO ruled in favor of Eolas on that matter in September 2007.
Microsoft and Eolas agreed in July 2007 to delay a pending re-trial, in order to negotiate a settlement. On August 27, 2007, Eolas reported to its shareholders that a settlement had been reached and that Eolas expected to pay a substantial dividend as a result; the exact amount and terms of the settlement were not disclosed.
In October 2009, Eolas sued a number of large corporations for infringement of the same patent. The 22 sued corporations include Adobe, Amazon.com, Apple, Argosy Publishing, Blockbuster, CDW Corp., Citigroup, eBay, Frito-Lay, The Go Daddy Group, Google, J.C. Penney Co. Inc., JPMorgan Chase & Co., New Frontier Media Inc., Office Depot Inc., Perot Systems Corp., Playboy Enterprises International Inc., Rent-A-Center Inc., Staples Inc., Sun Microsystems Inc., Texas Instruments Inc., Yahoo! Inc., and YouTube LLC. Steven J. Vaughan-Nichols, writing in Computerworld's opinion section, called Eolas a patent troll after these lawsuits were initiated. As of June 2011, a number of these companies, including Texas Instruments, Oracle and JPMorgan Chase have signed licensing deals with Eolas, while others are still litigating.
In February 2012, a Texas jury found that some of the claims in two of Eolas' patents were invalid, after testimony from several defense witnesses including Tim Berners-Lee and Pei-Yuan Wei, credited as creator of the Viola browser. The testimony asserted that the Viola browser included Eolas' claimed plugin invention before the patents' conception date (September 7, 1993), and the court found "substantial evidence that Viola was publicly known and used" before the plaintiffs' alleged conception date. The ruling effectively ended a pending lawsuit against Yahoo, Google, Amazon and JC Penney.
On July 23, 2013, the US Court of Appeals for the Federal Circuit, which has nationwide jurisdiction, affirmed a Texas federal court which had ruled in July 2012 that several claims relating to the two patents in the suit were invalid, a ruling which Eolas had appealed.
After the US Patent Office considered the evidence asserted at the 2012 trial, including Viola, it granted Eolas a new patent in November 2015 with claims which generally cover cloud computing on the Web. Eolas then filed a new lawsuit against Google, Amazon and Walmart, which is currently underway in the Northern District of California.
Effects on other browsers
In February 2006, Microsoft modified its Internet Explorer web browser to attempt to side-step the Eolas patent. The change, first discussed in 2003, required users to click once on an ActiveX control to "activate" it before they could use its interface. The specific message was "Click to activate this control", shown as a tooltip when the cursor was held over the embedded object. However, following a November 2007 announcement that Microsoft had "licensed the technologies from Eolas", Microsoft released an update in April 2008 which removed the click-to-activate functionality, reverting the software to its original design.
In June 2006, Opera Software released version 9 of its Opera browser for Windows and other operating systems, with modifications similar to Microsoft's.
Doyle has stated that Eolas will offer royalty-free licenses to non-commercial entities, and a statement on Eolas' web site clarifies the company's policy with regard to such licenses. The Mozilla Foundation, which develops the open-source Mozilla Firefox browser, never announced that it had requested any license to the Eolas patents; the last of these patents expired prior to the end of 2017.
Proposed workarounds
Before some claims in the company's patents were invalidated in 2012, one proposed workaround was to create the HTML element containing the plug-in dynamically using JavaScript, rather than embedding it in the page. In this situation, Internet Explorer did not ask the user for an "activation" click, on the accused infringers' argument that the patent did not cover script-created embedded objects.
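A minimal sketch of that workaround in TypeScript against the standard DOM API follows; the plug-in type, resource name, and container id are hypothetical examples, and the essential detail, as the workaround was described at the time, is that the element is created by an externally loaded script rather than written into the page's static HTML.

```typescript
// workaround.ts -- loaded as an external script; the element below is created
// dynamically instead of being embedded in the page's static HTML.
function insertPlugin(containerId: string): void {
  const container = document.getElementById(containerId);
  if (!container) return;

  // Build the <object> element in script rather than in markup.
  const obj = document.createElement("object");
  obj.setAttribute("type", "application/x-shockwave-flash"); // hypothetical plug-in type
  obj.setAttribute("data", "movie.swf");                     // hypothetical resource
  obj.setAttribute("width", "640");
  obj.setAttribute("height", "480");

  container.appendChild(obj);
}

// Insert the plug-in once the document has been parsed.
document.addEventListener("DOMContentLoaded", () => insertPlugin("player"));
```

Helper libraries of the era wrapped essentially this pattern, so that an existing page could replace a static object tag with a single script call.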
Opera users could use User JavaScript functionality in the browser to attempt to work around this issue in a similar way with locally modified JavaScript.
See also
Software patent
List of software patents
Microsoft litigation
Inventor
References
External links
Letter of Sir Tim Berners-Lee to Under Secretary of Commerce
Butting Heads Over the '906 Rebuttal, Dale Dougherty
Eolas patent valid (Slashdot discussion).
Internet technology companies of the United States
Computer law
Patent monetization companies of the United States
United States patent case law
Companies based in Tyler, Texas
American companies established in 1994 | Eolas | [
"Technology"
] | 2,525 | [
"Computer law",
"Computing and society"
] |
337,542 | https://en.wikipedia.org/wiki/Information%20foraging | Information foraging is a theory that applies the ideas from optimal foraging theory to understand how human users search for information. The theory is based on the assumption that, when searching for information, humans use "built-in" foraging mechanisms that evolved to help our animal ancestors find food. Importantly, a better understanding of human search behavior can improve the usability of websites or any other user interface.
History of the theory
In the 1970s optimal foraging theory was developed by anthropologists and ecologists to explain how animals hunt for food. It suggested that the eating habits of animals revolve around maximizing energy intake over a given amount of time. For every predator, certain prey is worth pursuing, while others would result in a net loss of energy.
In the early 1990s, Peter Pirolli and Stuart Card from PARC noticed the similarities between users' information searching patterns and animal food foraging strategies. Working together with psychologists to analyze users' actions and the information landscape that they navigated (links, descriptions, and other data), they showed that information seekers use the same strategies as food foragers.
In the late 1990s, Ed H. Chi worked with Pirolli, Card, and others at PARC to further develop information scent ideas and algorithms to actually use these concepts in real interactive systems, including the modeling of web user browsing behavior, the inference of information needs from web visit log files, and the use of information scent concepts in reading and browsing interfaces.
Details of the theory
"Informavores" constantly make decisions on what kind of information to look for, whether to stay at the current site to try to find additional information or whether they should move on to another site, which path or link to follow to the next information site, and when to finally stop the search. Although human cognition is not a result of evolutionary pressure to improve Web use, survival-related traits to respond quickly on partial information and reduce energy expenditures force them to optimize their searching behavior and, simultaneously, to minimize the thinking required.
Information scent
The most important concept in the information foraging theory is information scent. As animals rely on scents to indicate the chances of finding prey in the current area and to guide them to other promising patches, so do humans rely on various cues in the information environment to obtain similar answers. Human users estimate how much useful information they are likely to get on a given path, and after seeking information compare the actual outcome with their predictions. When the information scent stops getting stronger (i.e., when users no longer expect to find useful additional information), the users move to a different information source.
Information diet
Some tendencies in the behaviour of web users are easily understood from the standpoint of information foraging theory. On the Web, each site is a patch and information is the prey. Leaving a site is easy, but finding good sites has not always been as easy. Advanced search engines have changed this by reliably providing relevant links, altering the foraging strategies of users. When users expect that sites with lots of information are easy to find, they have less incentive to stay in one place. The growing availability of broadband connections may have a similar effect: always-on connections encourage short online visits to get specific answers.
Models
Attempts have been made to develop computational cognitive models to characterize information foraging behavior on the Web.
These models assume that users perceive relevance of information based on some measures of information scent, which are usually derived based on statistical techniques that extract semantic relatedness of words from large text databases. Recently these information foraging models have been extended to explain social information behavior. See also models of collaborative tagging.
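As a hedged illustration of the kind of statistical technique referred to here (an editor's sketch of the general idea, not a reimplementation of any specific published model), pointwise mutual information (PMI) over document co-occurrence counts can stand in for semantic relatedness, and a link's scent for a goal can be scored by averaging PMI between the goal's words and the link's anchor-text words.

```typescript
// Toy "information scent" scoring via pointwise mutual information (PMI).
// Counts would come from some large text corpus; this data structure is an
// illustrative assumption, not part of any published foraging model.
type Counts = {
  word: Map<string, number>; // word -> number of documents containing it
  pair: Map<string, number>; // "a|b" (a < b) -> number of documents containing both
  docs: number;              // total number of documents in the corpus
};

function pmi(a: string, b: string, c: Counts): number {
  const key = a < b ? `${a}|${b}` : `${b}|${a}`;
  const pa = (c.word.get(a) ?? 0) / c.docs;
  const pb = (c.word.get(b) ?? 0) / c.docs;
  const pab = (c.pair.get(key) ?? 0) / c.docs;
  if (pa === 0 || pb === 0 || pab === 0) return 0; // unseen words carry no scent
  return Math.log(pab / (pa * pb));
}

// Scent of a link: mean PMI between each goal word and each anchor-text word.
function informationScent(goal: string[], anchor: string[], c: Counts): number {
  let total = 0;
  for (const g of goal) for (const w of anchor) total += pmi(g, w, c);
  return total / (goal.length * anchor.length);
}
```

A forager following this score would click the highest-scent link and leave for another site once the best available score drops below some threshold, which is the stopping behavior the theory describes.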
Notes
Sources
Information Foraging: Why Google Makes People Leave Your Site Faster by Jakob Nielsen, June 30, 2003, Alertbox.
High-tech quest for a user-friendly Web, June 2, 2002, USA Today.
Word Spy – information foraging, December 19, 2002.
Human–computer interaction | Information foraging | [
"Engineering"
] | 789 | [
"Human–computer interaction",
"Human–machine interaction"
] |
337,566 | https://en.wikipedia.org/wiki/Long-term%20effects%20of%20alcohol | The long-term effects of alcohol have been extensively researched. The health effects of long-term alcohol consumption vary depending on the amount consumed. Even light drinking poses health risks, but very small amounts of alcohol may have health benefits. Alcoholism causes severe health consequences which outweigh any potential benefits.
Long-term alcohol consumption is capable of damaging nearly every organ and system in the body. Risks include malnutrition, cirrhosis, chronic pancreatitis, erectile dysfunction, hypertension, coronary heart disease, ischemic stroke, heart failure, atrial fibrillation, gastritis, stomach ulcers, alcoholic liver disease, certain types of dementia, and several types of cancer, including oropharyngeal cancer, esophageal cancer, liver cancer, colorectal cancer, and female breast cancers. In addition, damage to the central nervous system and peripheral nervous system (e.g., painful peripheral neuropathy) can occur from chronic heavy alcohol consumption. There is also an increased risk for accidental injuries, for example, those sustained in traffic accidents and falls. Excessive alcohol consumption can have a negative impact on aging. The developing adolescent brain is particularly vulnerable to the toxic effects of alcohol. In addition, the developing fetal brain is also vulnerable, and fetal alcohol spectrum disorders (FASDs) may result if pregnant mothers consume alcohol. Some nations have introduced alcohol packaging warning messages that inform consumers about alcohol and cancer, and about risk of fetal alcohol syndrome for women who drink while pregnant.
Conversely, light intake of alcohol may have some beneficial effects. The association of alcohol intake with reduced cardiovascular risk has been noted since 1904 and remains even after adjusting for known confounders. Light alcohol intake is also associated with reduced risk of type 2 diabetes, gastritis, and cholelithiasis. However, these are only observational studies and high-quality evidence for the beneficial effects of alcohol is nonexistent. Alcohol does have psychosocial benefits such as stress reduction, mood elevation, increased sociability, and relaxation, but it is unclear if these outweigh the confirmed increase in the risk of cancer.
Overall effect
The level of ethanol consumption that minimizes the risk of disease, injury, and death is subject to some controversy. Several studies have found a J-shaped relationship between alcohol consumption and health, meaning that risk is minimized at a certain (non-zero) consumption level, and drinking below or above this level increases risk, with the risk level of drinking a large amount of alcohol greater than the risk level of abstinence. Other studies have found a dose-response relationship, with lifetime abstention from alcohol being the optimal strategy and more consumption incurring more risk. The studies use different data sets and statistical techniques so cannot be directly compared. Some older studies included former and occasional drinkers in the "abstainers" category, which obscures the benefits of lifetime abstention as former drinkers often are in poor health. However, the J-curve was reconfirmed by studies that took the mentioned confounders into account. Nonetheless, some authors remain suspicious that the apparent health benefits of light alcohol use are in large part due to various selection biases and competing risks. Mendelian randomization studies have been inconsistent regarding the risk curve, with 3 studies finding linear dose-response risks overall and 2 studies finding a J-shape for lipid profiles. The variance in alcohol consumption that is explained by genetics is small, requiring large sample sizes and potentially violating assumptions of the analysis.
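To make the competing shapes concrete, dose-response meta-analyses often model relative risk on a log scale; the specific quadratic form below is an illustrative choice by the editor, not one reported by the studies cited. Writing $x$ for daily ethanol intake in grams,

\[
\ln \mathrm{RR}(x) = \beta_1 x + \beta_2 x^2
\]

gives a J-shaped curve when $\beta_1 < 0 < \beta_2$, with the minimum-risk intake at $x^{*} = -\beta_1 / (2\beta_2)$ and $\mathrm{RR}(x^{*}) < 1$; a fit with $\beta_1 \ge 0$ instead yields the monotone dose-response relationship under which lifetime abstention is optimal.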
As one reviewer noted, "Despite the wealth of observational data, it is not absolutely clear that alcohol reduces risk, because no randomized controlled trials have been performed." The NIAAA announced a randomized controlled trial in 2017, but the NIH cancelled it in 2018 due to irregular interactions by the program staff with the alcohol industry. A trial in Spain is expected to complete in 2028. Fekjær compares the present situation to those of hormone replacement therapy (HRT), vitamin E, and β-carotene. Similarly to alcohol, observational studies for each of these treatments showed significantly reduced risk of coronary heart disease. However, initial randomized trials of these treatments failed to replicate the effect. For HRT, pooling multiple RCTs and stratifying the data by age and time since menopause showed the benefits were limited to treatment soon after menopause. For vitamin E, trials have shown that the benefits are limited to certain populations such as those with diabetes and a specific genotype. For β-carotene, the randomized trials have shown that β-carotene increases CVD risk when supplemented, with all beneficial effects due to other vitamins in foods providing β-carotene.
In light of the conflicting evidence, many have cautioned against recommendations for the use of alcohol for health benefits. At a symposium in 1997, Peter Anderson labeled such alcohol promotion as "ridiculous and dangerous". It has been argued that the health benefits from alcohol have been exaggerated by the alcohol industry, with industry participation in the wording of messages and warnings. The debate is not purely scientific, with groups such as ISFAR critiquing anti-alcohol studies as distorting the evidence, scientists in turn accusing these groups of bias due to industry funding, and members of the groups responding that these are false and misleading assertions. Studies with industry funding find less risk of stroke, and industry-linked systematic reviews consistently find cardioprotective effects, compared with only 54% of reviews without industry associations.
Considered as a treatment for cardiovascular disease, alcohol is addictive, has greater risk of adverse effects, and is less effective than other interventions such as heart medications, exercise, or good nutrition. The available evidence is in agreement that current drinking levels are too high. The World Health Organization has emphasized the need to revise alcohol control policies worldwide in order to reduce overall alcohol consumption.
The World
Globally, assuming the J-shaped curve is correct, the age-standardised, both-sexes consumption that minimizes risk is about 5 grams of ethanol per day, and an average individual would cause themselves harm by drinking more than 17 grams per day. However, the average intake among current drinkers in 2016 was approximately 40 grams of ethanol per day. 1.03 billion males (35.1% of the male population aged ≥15 years, ~2/3 of male drinkers) and 312 million females (10.5% of the female population aged ≥15 years, ~1/3 of female drinkers) consumed harmful amounts of alcohol. The proportion of the population consuming harmful amounts of alcohol has stayed at approximately the same level over the past three decades.
Estimates of the worldwide number of deaths per year caused by alcohol vary. The GBD 2016 study estimated 2.8 million, while the GBD 2020 study estimated 1.78 million. The WHO estimates 3 million deaths per year from harmful use of alcohol, representing 5.3% of all deaths across the globe. All of these numbers are net deaths, subtracting deaths prevented from deaths caused. Stockwell argues that alcohol may not prevent any deaths and guesses that as many as 6 million deaths may be caused by alcohol. Besides this, the World Health Organization attributes 5.1% of the global burden of disease and injury to alcohol, as measured in disability-adjusted life years (DALYs). The WHO does not list alcohol in its 2019 list of the top 20 leading causes of DALYs, but alcohol use disorder would rank around #39, combining AUD with alcohol-related cirrhosis and liver cancer would rank between malaria (#19) and refractive errors (#20), and all alcohol-attributed DALYs would rank between stroke (#3) and lower respiratory infections (#4). Similarly the number of alcohol-attributed deaths would rank between chronic obstructive pulmonary disease (#3) and lower respiratory infections (#4).
Research on Western cultures has consistently shown increased survival associated with light to moderate alcohol consumption. Australasia and Europe are also the locations with the highest levels of harmful alcohol consumption. Researchers have investigated cultures with different alcohol consumption norms and found conflicting results.
The risks of alcohol consumption are age-dependent. Risk is greatest among males aged 15–39 years, due to binge drinking which may result in violence or traffic accidents. It is less risky and potentially more beneficial for an older individual to consume a given amount of alcohol, compared to a similar younger individual, as they are less likely to develop cancer during their remaining lifespan, less likely to be involved in accidents, and more likely to benefit from alcohol's cardiovascular effects. Taking the lower bound of the confidence intervals, the GBD 2020 study suggests that there is no benefit to drinking before age 25, and in many regions the study did not find any significant benefit for drinking over abstinence even as late as ages 45 or 60. Other studies have found similar patterns.
India
A large study of 4465 subjects in India confirmed the possible harm of alcohol consumption on coronary risk in men. Compared to lifetime abstainers, alcohol users had higher blood sugar (2 mg/dl), blood pressure (2 mm Hg), and HDL-C (2 mg/dl) levels, and significantly higher tobacco use (63% vs. 21%). Asian Indians who consume alcohol had a 60% higher risk of heart attack, which was greater with local spirits (80%) than branded spirits (50%). The harm was observed in alcohol users classified as occasional as well as regular light, moderate, and heavy consumers. Five percent of all cancers diagnosed in India in 2021 were attributed to alcohol consumption, with cancers of the esophagus, liver, and breast accounting for the largest number of cases.
Russia
As of 2014, male life expectancy was lower in Russia than in other countries. For example, at 2005 mortality rates, only 7% of UK men but 37% of Russian men would die before the age of 55 years. A study by Zaridze et al. in 2009 found that "excessive alcohol consumption in Russia, particularly by men, has in recent years caused more than half of all the deaths at ages 15–54 years." The study used 43,802 deaths linked to alcohol or tobacco but only 5475 other deaths as controls. Further studies have confirmed that heavy drinking and smoking are the main causes of high death rates in Russia as of 2014. The high consumption of vodka in the context of binge drinking is a significant factor. For smokers aged 35–54, the 20-year risk of death was 35% for men who had reported drinking three or more bottles of vodka a week and 16% for men who had reported consuming less than one bottle a week.
South Asia
The landmark INTERHEART Study revealed that alcohol consumption was not protective against coronary artery disease (CAD) in South Asians, in sharp contrast to other populations in which it appeared protective.
United Kingdom
A governmental report from Britain found that "There were 8,724 alcohol-related deaths in 2007, lower than 2006, but more than double the 4,144 recorded in 1991. The alcohol-related death rate was 13.3 per 100,000 population in 2007, compared with 6.9 per 100,000 population in 1991." In Scotland, the NHS estimated that in 2003 one in every 20 deaths could be attributed to alcohol. A 2009 report noted that the annual death toll from alcohol-related disease was 9,000, a number three times that of 25 years previously.
A UK report came to the result that the effects of low-to-moderate alcohol consumption on mortality are age-dependent. Low-to-moderate alcohol use increases the risk of death for individuals aged 16–34 (due to increased risk of cancers, accidents, liver disease, and other factors), but decreases the risk of death for individuals ages 55+ (due to decreased risk of ischemic heart disease).
A study in the United Kingdom found that alcohol causes about 4% of cancer cases in the UK (12,500 cases per year).
United States
Excessive alcohol use was the third leading behavioral cause of death for people in the United States in the year 2000. In 2001, an estimated 75,766 deaths were attributable to alcohol. From 2006 through 2010, an average of approximately 87,798 deaths attributable to alcohol occurred in the United States each year. Alcohol-related deaths among Americans roughly doubled from 1999 to 2020. In 2020, alcohol was linked to nearly 50,000 deaths among adults aged 25 to 85, a sharp rise from just under 20,000 in 1999. All age groups experienced increases, with the most significant rise occurring in individuals aged 25 to 34, where death rates nearly quadrupled during this period. In 2025, the US Surgeon General advocated for cancer risk warnings on alcoholic beverages.
Cardiovascular system
Alcohol has been found to have anticoagulant properties. Thrombosis is lower among moderate drinkers than abstainers. A meta-analysis of randomized trials found that alcohol consumption in moderation decreases serum levels of fibrinogen, a protein that promotes clot formation, while it increases levels of tissue type plasminogen activator, an enzyme that helps dissolve clots. These changes were estimated to reduce coronary heart disease risk by about 24%. Another meta-analysis in 2011 found favorable changes in HDL cholesterol, adiponectin, and fibrinogen associated with moderate alcohol consumption. A systematic review based on 16,351 participants showed a J-shaped curve for the overall relationship between cardiovascular mortality and alcohol intake. The maximal protective effect was seen with 5–10 g of alcohol consumption per day, and the effect remained significant up to 26 g/day. Serum levels of C-reactive protein (CRP), a putative marker of inflammation and predictor of CHD (coronary heart disease) risk, are lower in moderate drinkers than in those who abstain from alcohol, suggesting that alcohol consumption in moderation might have anti-inflammatory effects. Data from one prospective study suggest that, among men with initially low alcohol consumption (≤1 drink per week), a subsequent moderate increase in alcohol consumption may lower their CVD risk.
Peripheral arterial disease
A prospective study published in 1997 found "moderate alcohol consumption appears to decrease the risk of PAD in apparently healthy men." In a large population-based study, moderate alcohol consumption was inversely associated with peripheral arterial disease in women but not in men. But when confounding by smoking was considered, the benefit extended to men. The study concluded "an inverse association between alcohol consumption and peripheral arterial disease was found in nonsmoking men and women."
Intermittent claudication
A study found that moderate consumption of alcohol had a protective effect against intermittent claudication. The lowest risk was seen in men who drank 1 to 2 drinks per day and in women who drank half to 1 drink per day.
Heart attack and stroke
Drinking in moderation has been found to help those who have had a heart attack survive it. However, excessive alcohol consumption leads to an increased risk of heart failure. At present there have been no randomised trials to confirm the evidence which suggests a protective role of low doses of alcohol against heart attacks. There is an increased risk of hypertriglyceridemia, cardiomyopathy, hypertension, and stroke if three or more standard drinks of alcohol are taken per day. A systematic review reported that reducing alcohol intake lowers blood pressure in a dose-dependent manner in heavy drinkers. There is no safe amount of alcohol without having a negative effect on blood pressure. Even individuals who consume only one drink per day show a link to higher blood pressure.
Cardiomyopathy
Large amounts of alcohol over the long term can lead to alcoholic cardiomyopathy. Alcoholic cardiomyopathy presents in a manner clinically identical to idiopathic dilated cardiomyopathy, involving hypertrophy of the musculature of the heart that can lead to congestive heart failure.
Hematologic diseases
Alcoholics may have anemia from several causes; they may also develop thrombocytopenia from direct toxic effect on megakaryocytes, or from hypersplenism.
Atrial fibrillation
Alcohol consumption increases the risk of atrial fibrillation, a type of abnormal heart rhythm that increases the risk of stroke and heart failure. This remains true even at moderate levels of consumption.
Nervous system
Chronic heavy alcohol consumption impairs brain development and causes alcohol dementia, brain shrinkage, physical dependence, alcoholic polyneuropathy (also known as 'alcohol leg'), increased neuropsychiatric and cognitive disorders, and distortion of brain chemistry. At present, due to poor study design and methodology, the literature is inconclusive on whether moderate alcohol consumption increases the risk of dementia or decreases it. Evidence for a protective effect of low to moderate alcohol consumption on age-related cognitive decline and dementia has been suggested by some research; however, other research has not found a protective effect of low to moderate alcohol consumption. Some evidence suggests that low to moderate alcohol consumption may speed up brain volume loss. Chronic consumption of alcohol may result in increased plasma levels of the toxic amino acid homocysteine, which may explain alcohol withdrawal seizures, alcohol-induced brain atrophy, and alcohol-related cognitive disturbances. Alcohol's impact on the nervous system can also include disruptions of memory and learning (see Effects of alcohol on memory), such as blackouts.
Strokes
Epidemiological studies of middle-aged populations generally find the relationship between alcohol intake and the risk of stroke to be either U- or J-shaped. There may be very different effects of alcohol based on the type of stroke studied. The predominant form of stroke in Western cultures is ischemic, whereas non-western cultures have more hemorrhagic stroke. In contrast to the beneficial effect of alcohol on ischemic stroke, consumption of more than two drinks per day increases the risk of hemorrhagic stroke. The National Stroke Association estimates this higher amount of alcohol increases stroke risk by 50%. "For stroke, the observed relationship between alcohol consumption and risk in a given population depends on the proportion of strokes that are hemorrhagic. Light-to-moderate alcohol intake is associated with a lower risk of ischemic stroke which is likely to be, in part, causal. Hemorrhagic stroke, on the other hand, displays a log-linear relationship with alcohol intake."
Brain
Alcohol misuse is associated with widespread and significant brain lesions. Alcohol related brain damage is not only due to the direct toxic effects of alcohol; alcohol withdrawal, nutritional deficiency, electrolyte disturbances, and liver damage are also believed to contribute to alcohol-related brain damage.
Cognition and dementia
Excessive alcohol intake is associated with impaired prospective memory. This impaired cognitive ability leads to increased failure to carry out an intended task at a later date, for example, forgetting to lock the door or to post a letter on time. The higher the volume of alcohol consumed and the longer the period of consumption, the more severe the impairments. The brain is one of the organs most sensitive to the toxic effects of chronic alcohol consumption. In the United States approximately 20% of admissions to mental health facilities are related to alcohol-related cognitive impairment, most notably alcohol-related dementia. Chronic excessive alcohol intake is also associated with serious cognitive decline and a range of neuropsychiatric complications. The elderly are the most sensitive to the toxic effects of alcohol on the brain. There is some inconclusive evidence that small amounts of alcohol taken in earlier adult life are protective in later life against cognitive decline and dementia. However, a study concluded, "Our findings suggest that, despite previous suggestions, moderate alcohol consumption does not protect older people from cognitive decline."
Wernicke–Korsakoff syndrome is a manifestation of thiamine deficiency, usually as a secondary effect of alcohol misuse. The syndrome is a combined manifestation of two eponymous disorders, Korsakoff's psychosis and Wernicke's encephalopathy. Wernicke's encephalopathy is the acute presentation of the syndrome and is characterised by a confusional state, while the main symptoms of Korsakoff's psychosis are amnesia and executive dysfunction. "Banana bags", intravenous fluid containers containing vitamins and minerals (bright yellow due to the vitamins), can be used to mitigate these outcomes.
Essential tremor
Essential tremors—or, in the case of essential tremors on a background of family history of essential tremors, familial tremors—can be temporarily relieved in up to two-thirds of patients by drinking small amounts of alcohol.
Ethanol is known to activate gamma-aminobutyric acid type A (GABAA) receptors and inhibit N-methyl-D-aspartate (NMDA) glutamate receptors, which are both implicated in essential tremor pathology and could underlie the ameliorative effects. Additionally, the effects of ethanol have been studied in different animal essential tremor models. (For more details on this topic, see Essential tremor).
Sleep
Chronic use of alcohol to induce sleep can lead to insomnia: frequent moving between sleep stages occurs, with awakenings due to headaches and diaphoresis. Stopping chronic alcohol misuse can also lead to profound disturbances of sleep with vivid dreams. Chronic alcohol misuse is associated with decreased NREM stage 3 and 4 sleep as well as suppression of REM sleep and REM sleep fragmentation. During withdrawal, REM sleep is typically exaggerated as part of a rebound effect.
Mental health effects
High rates of major depressive disorder occur in heavy drinkers. Whether major depressive disorder causes self-medicating alcohol use, or whether the increased incidence of the disorder in people with an alcohol use disorder is caused by the drinking, is not known, though some evidence suggests drinking causes the disorder. Alcohol misuse is associated with a number of mental health disorders, and alcoholics have a very high suicide rate. A study of people hospitalized for suicide attempts found that those who were alcoholics were 75 times more likely to go on to die by suicide than non-alcoholic suicide attempters. In the general alcoholic population, the risk of suicide is 5–20 times that of the general public. About 15 percent of alcoholics die by suicide, the most common methods being overdosing and cutting/scratching. There are high rates of suicide attempts, self-harm, suicidal ideation, and self-harm ideation in people with substance dependence who have been hospitalized. Use of other illicit drugs is also associated with an increased risk of suicide. About 33 percent of suicides among people under 35 are correlated with alcohol or other substance misuse.
Social skills are significantly impaired in people who have alcoholism, due to the neurotoxic effects of alcohol on the brain, especially the prefrontal cortex. The social skills that are impaired by alcohol use disorder include impairments in perceiving facial emotions, prosody perception problems, and theory of mind deficits; the ability to understand humor is also impaired in people with an alcohol use disorder.
Studies have shown that alcohol dependence relates directly to cravings and irritability. Another study has shown that alcohol use is a significant predisposing factor towards antisocial behavior in children. Depression, anxiety, and panic disorder are disorders commonly reported by alcohol dependent people. Alcoholism is associated with dampened activation in brain networks responsible for emotional processing (e.g. the amygdala and hippocampus). Evidence that these mental health disorders are often induced by alcohol misuse via distortion of brain neurochemistry is indicated by the improvement or disappearance of symptoms after prolonged abstinence, although problems may worsen in early withdrawal and recovery periods. Psychosis is secondary to several alcohol-related conditions including acute intoxication and withdrawal after significant exposure. Chronic alcohol misuse can cause psychotic-type symptoms to develop, more so than other illicit substances. Alcohol misuse has been shown to cause an 800% increased risk of psychotic disorders in men and a 300% increased risk in women, which is not related to pre-existing psychiatric disorders. This is significantly higher than the increased risk of psychotic disorders seen from cannabis use, making alcohol misuse a very significant cause of psychotic disorders. Approximately 3 percent of people who are alcohol dependent experience psychosis during acute intoxication or withdrawal. Alcohol-related psychosis may manifest itself through a kindling mechanism. The mechanism of alcohol-related psychosis involves distortions to neuronal membranes, gene expression, and thiamine deficiency. It is possible in some cases that excessive alcohol use, via a kindling mechanism, can cause the development of a chronic substance-induced psychotic disorder, i.e. schizophrenia. The effects of an alcohol-related psychosis include an increased risk of depression and suicide as well as psychosocial impairments. However, moderate wine drinking has been shown to lower the risk for depression.
While alcohol initially relieves social phobia or panic symptoms, longer-term alcohol misuse can often worsen social phobia symptoms and can cause panic disorder to develop or worsen during alcohol intoxication, and especially during the alcohol withdrawal syndrome. This effect is not unique to alcohol but can also occur with long-term use of drugs which have a similar mechanism of action, such as the benzodiazepines, which are sometimes prescribed as tranquilizers to people with alcohol problems. It has been noted that every individual has an individual sensitivity level to alcohol or sedative-hypnotic drugs, and that what one person can tolerate without ill health may cause another to suffer very poor health; even moderate drinking can cause rebound anxiety syndromes and sleep disorders. Approximately half of patients attending mental health services for conditions including anxiety disorders such as panic disorder or social phobia have alcohol or benzodiazepine dependence. A person who is experiencing the toxic effects of alcohol will not benefit from other therapies or medications, as these do not address the root cause of the symptoms.
Addiction to alcohol, as with any addictive substance tested so far, has been correlated with an enduring reduction in the expression of GLT1 (EAAT2) in the nucleus accumbens and is implicated in the drug-seeking behavior expressed nearly universally across all documented addiction syndromes. This long-term dysregulation of glutamate transmission is associated with an increase in vulnerability to both relapse-events after re-exposure to drug-use triggers as well as an overall increase in the likelihood of developing addiction to other reinforcing drugs. Drugs which help to re-stabilize the glutamate system such as N-acetylcysteine have been proposed for the treatment of addiction to cocaine, nicotine, and alcohol.
The relationship between depression and returning to drinking among individuals with alcohol dependence has always been controversial. A study of men and women hospitalized for alcohol dependence found that the likelihood of returning to drinking was extremely high among those with depression. A diagnosis of major depression at entry into inpatient treatment for alcohol dependence predicted shorter times to a first drink and to relapse in both women and men.
Digestive system and weight gain
The impact of alcohol on weight-gain is contentious: some studies find no effect, others find decreased or increased effect on weight gain.
Alcohol use increases the risk of chronic gastritis (stomach inflammation); it is one cause of cirrhosis, hepatitis, and pancreatitis in both its chronic and acute forms.
Metabolic syndrome
A national survey (NHANES) conducted in the U.S. concluded, "Mild to moderate alcohol consumption is associated with a lower prevalence of the metabolic syndrome, with a favorable influence on lipids, waist circumference, and fasting insulin. This association was strongest among whites and among beer and wine drinkers." Similarly, a national survey conducted in Korea reported a J-curve association between alcohol intake and metabolic syndrome: "The results of the present study suggest that the metabolic syndrome is negatively associated with light alcohol consumption (1–15 g alcohol/d) in Korean adults," but risk increased at higher alcohol consumption.
Gallbladder effects
Research has found that drinking reduces the risk of developing gallstones. Compared with alcohol abstainers, the relative risk of gallstone disease, controlling for age, sex, education, smoking, and body mass index, is 0.83 for occasional and regular moderate drinkers (< 25 ml of ethanol per day), 0.67 for intermediate drinkers (25–50 ml per day), and 0.58 for heavy drinkers. This inverse association was consistent across strata of age, sex, and body mass index. Frequency of drinking also appears to be a factor: "An increase in frequency of alcohol consumption also was related to decreased risk. Combining the reports of quantity and frequency of alcohol intake, a consumption pattern that reflected frequent intake (5–7 days/week) of any given amount of alcohol was associated with a decreased risk, as compared with nondrinkers. In contrast, infrequent alcohol intake (1–2 days/week) showed no significant association with risk."
A large self-reported study published in 1998 found no correlation between gallbladder disease and multiple factors including smoking, alcohol consumption, hypertension, and coffee consumption. A retrospective study from 1997 found vitamin C (ascorbic acid) supplement use in drinkers was associated with a lower prevalence of gallbladder disease, but this association was not seen in non-drinkers.
Liver disease
Alcoholic liver disease is a major public health problem. For example, in the United States up to two million people have alcohol-related liver disorders. Chronic heavy alcohol consumption can cause fatty liver, cirrhosis, and alcoholic hepatitis. Treatment options are limited and consist most importantly of discontinuing alcohol consumption. In cases of severe liver disease, the only treatment option may be a liver transplant in alcohol-abstinent patients. Research is being conducted into the effectiveness of anti-TNFs. Certain complementary medications, e.g., milk thistle and silymarin, appear to offer some benefit. Alcohol is a leading cause of liver cancer in the Western world, accounting for 32–45% of hepatic cancers. Up to half a million people in the United States develop alcohol-related liver cancer.
Pancreatitis
Alcohol misuse is a leading cause of both acute pancreatitis and chronic pancreatitis. Alcoholic pancreatitis can result in severe abdominal pain and may progress to pancreatic cancer.
Chronic pancreatitis often results in intestinal malabsorption, and can result in diabetes.
Body composition
Alcohol affects the nutritional state of chronic drinkers. It can decrease food consumption and lead to malabsorption. It can also create imbalances in skeletal muscle mass and cause muscle wasting. Chronic consumption of alcohol can also increase the breakdown of important proteins in the body which can affect gene expression.
Oral and dental implications
Oral cancer
The consumption of alcohol alone is not associated with an increased risk of oral squamous cell carcinoma (OSCC); however, the synergistic consumption of alcohol and tobacco is positively associated with the occurrence of OSCC and significantly increases an individual's risk. Studies confirm that alcohol dissolves the lipid component of the epithelium and increases its permeability, amplifying the toxicity of the carcinogenic components of tobacco. Limiting the overall consumption of the two has been shown to reduce the risk of OSCC by three-fourths. This distinction is useful for understanding how combined alcohol and tobacco consumption contributes to the development of OSCC.
Alcohol consumption has frequently been associated with an increased risk of oral cancer in the current literature. Studies have found that people who consume alcohol are two times more likely to develop oral cancer than people who do not. The mechanisms by which alcohol acts as a carcinogen within the oral cavity are currently not fully understood; the disease is thought to be multifactorial in origin. Many theories have emerged in research, including that alcohol is responsible for high estrogen and androgen levels, specifically in women, which may facilitate the alcohol-related immunodeficiency and/or immunosuppression that causes carcinogenesis. Cessation of alcohol consumption can therefore aid in decreasing the risk of oral cancer.
Alcohol-based mouthwashes used to be very common and can still be purchased today. A correlation between the alcohol in mouthwashes and the development of oral and pharyngeal cancer has not been established, due to a lack of evidence. However, it has been suggested that acetaldehyde, the first metabolite of ethanol, plays a role in the carcinogenesis of alcohol in oral cancer. Acetaldehyde has been found to increase in the salivary medium after an alcoholic beverage has been consumed, and this could possibly occur with alcohol-based mouthwashes as well, posing a possible risk factor for oral cancer. However, more research must be conducted regarding these theories.
Periodontitis
Alcohol consumption is associated with a higher risk of periodontitis, an inflammatory disease of the gums around the teeth. A dose-response relationship has also been found, in which the risk of periodontitis increased by 0.4% for each additional gram of daily alcohol consumption. Mechanisms explaining the relationship between the two are still unclear; however, several explanations have been suggested. One explanation is the weakening of neutrophil activity by alcohol consumption, which potentially leads to bacterial overgrowth and increased bacterial penetration, subsequently leading to periodontal inflammation and periodontal disease. Characteristics of the disease include shrinkage of gingival height and increased mobility of teeth, which may exfoliate if the disease continues to progress. A patient's consumption of alcohol should be monitored to estimate the risk of periodontitis, but further well-designed cohort studies are needed to reaffirm these results.
Other systems
Respiratory system
Chronic alcohol ingestion can impair multiple critical cellular functions in the lungs. These cellular impairments can lead to increased susceptibility to serious complications from lung disease. Recent research cites alcoholic lung disease as comparable to liver disease in alcohol-related mortality. Alcoholics have a higher risk of developing acute respiratory distress syndrome (ARDS) and experience higher rates of mortality from ARDS when compared to non-alcoholics. In contrast to these findings, a large prospective study has shown a protective effect of moderate alcohol consumption on respiratory mortality.
Kidney stones
Research indicates that drinking beer or wine is associated with a lower risk of developing kidney stones.
Sexual function in men
Low to moderate alcohol consumption has been shown to have a protective effect on men's erectile function. Several reviews and meta-analyses of the existing literature show that low to moderate alcohol consumption significantly decreases the risk of erectile dysfunction.
Men's sexual behaviors can be affected dramatically by high alcohol consumption. Both chronic and acute alcohol consumption have been shown in most studies (but not all) to inhibit testosterone production in the testes. This is believed to be caused by the metabolism of alcohol reducing the NAD+/NADH ratio both in the liver and the testes; since the synthesis of testosterone requires NAD+, this tends to reduce testosterone production.
Long-term excessive intake of alcohol can lead to damage to the central nervous system and the peripheral nervous system, resulting in loss of sexual desire and impotence in men. This is caused by a reduction of testosterone from ethanol-induced testicular atrophy, resulting in increased feminisation of males, and is a clinical feature of alcohol-abusing men who have cirrhosis of the liver.
Hormonal imbalance
Excessive alcohol intake can result in hyperoestrogenisation. It has been speculated that alcoholic beverages may contain estrogen-like compounds. In men, high levels of estrogen can lead to testicular failure and the development of feminine traits including development of male breasts, called gynecomastia. In women, increased levels of estrogen due to excessive alcohol intake have been related to an increased risk of breast cancer.
Increased cortisol
Alcohol and cortisol have a complex relationship. Cortisol is a stress hormone, and alcoholism can lead to chronically increased cortisol levels in the body. This can be problematic because cortisol can temporarily shut down other bodily functions, potentially causing physical damage.
Diabetes mellitus
A meta-analysis determined the dose-response relationships by sex and end point using lifetime abstainers as the reference group. A U-shaped relationship was found for both sexes. Compared with lifetime abstainers, the relative risk (RR) for type 2 diabetes among men was most protective when consuming 22 g/day alcohol and became deleterious at just over 60 g/day alcohol. Among women, consumption of 24 g/day alcohol was most protective, and became deleterious at about 50 g/day alcohol. A systematic review on intervention studies in women also supported this finding. It reported that alcohol consumption in moderation improved insulin sensitivity among women.
The way in which alcohol is consumed (i.e., with meals or binge drinking) affects various health outcomes. It may be the case that the risk of diabetes associated with heavy alcohol consumption is due to consumption mainly on the weekend as opposed to the same amount spread over a week. In the United Kingdom "advice on weekly consumption is avoided". A twenty-year twin study from Finland reported that moderate alcohol consumption may reduce the risk of type 2 diabetes in men and women. However, binge drinking and high alcohol consumption was found to increase the risk of type 2 diabetes in women.
Rheumatoid arthritis
Regular consumption of alcohol is associated with an increased risk of gouty arthritis and a decreased risk of rheumatoid arthritis. Two recent studies report that the more alcohol consumed, the lower the risk of developing rheumatoid arthritis. Among those who drank regularly, the one-quarter who drank the most were up to 50% less likely to develop the disease compared to the half who drank the least.
The researchers noted that moderate alcohol consumption also reduces the risk of other inflammatory processes such as cardiovascular disease. Several biological mechanisms have been proposed by which ethanol reduces the risk of destructive arthritis and prevents the loss of bone mineral density (BMD), which is part of the disease process.
A study concluded, "Alcohol either protects from RA or, subjects with RA curtail their drinking after the manifestation of RA". Another study found, "Postmenopausal women who averaged more than 14 alcoholic drinks per week had a reduced risk of rheumatoid arthritis..."
Osteoporosis
Moderate alcohol consumption is associated with higher bone mineral density in postmenopausal women. "...Alcohol consumption significantly decreased the likelihood [of osteoporosis]." "Moderate alcohol intake was associated with higher BMD in postmenopausal elderly women." "Social drinking is associated with higher bone mineral density in men and women [over 45]." However, heavy alcohol use is associated with bone loss.
Skin
Chronic excessive alcohol use is associated with a wide range of skin disorders including urticaria, porphyria cutanea tarda, flushing, cutaneous stigmata of cirrhosis, psoriasis, pruritus, seborrheic dermatitis, and rosacea.
A 2010 study concluded, "Nonlight beer intake is associated with an increased risk of developing psoriasis among women. Other alcoholic beverages did not increase the risk of psoriasis in this study."
Immune system
Bacterial infection
Excessive alcohol consumption seen in people with an alcohol use disorder is a known risk factor for developing pneumonia.
Common cold
A study on the common cold found that "Greater numbers of alcoholic drinks (up to three or four per day) were associated with decreased risk for developing colds because drinking was associated with decreased illness following infection. However, the benefits of drinking occurred only among nonsmokers. ... Although alcohol consumption did not influence risk of clinical illness for smokers, moderate alcohol consumption was associated with decreased risk for nonsmokers."
Another study concluded, "Findings suggest that wine intake, especially red wine, may have a protective effect against common cold. Beer, spirits, and total alcohol intakes do not seem to affect the incidence of common cold."
Cancer
In 1988, the International Agency for Research on Cancer (Centre International de Recherche sur le Cancer) of the World Health Organization classified alcohol as a Group 1 carcinogen, stating "There is sufficient evidence for the carcinogenicity of alcoholic beverages in humans.... Alcoholic beverages are carcinogenic to humans (Group 1)." The U.S. Department of Health & Human Services' National Toxicology Program in 2000 listed alcohol as a known carcinogen.
It was estimated in 2006 that "3.6% of all cancer cases worldwide are related to alcohol drinking, resulting in 3.5% of all cancer deaths." A European study from 2011 found that one in 10 of all cancers in men and one in 33 in women were caused by past or current alcohol intake. The World Cancer Research Fund panel report Food, Nutrition, Physical Activity and the Prevention of Cancer: a Global Perspective finds the evidence "convincing" that alcoholic drinks increase the risk of the following cancers: mouth, pharynx and larynx, oesophagus, colorectum (men), breast (pre- and postmenopause).
Even light and moderate alcohol consumption increases cancer risk in individuals, especially with respect to squamous cell carcinoma of the esophagus, oropharyngeal cancer, and breast cancer.
Acetaldehyde, a metabolic product of alcohol, is suspected to promote cancer. Typically the liver eliminates 99% of acetaldehyde produced. However, liver disease and certain genetic enzyme deficiencies result in high acetaldehyde levels. Heavy drinkers who are exposed to high acetaldehyde levels due to a genetic defect in alcohol dehydrogenase have been found to be at greater risk of developing cancers of the upper gastrointestinal tract and liver. A review in 2007 found "convincing evidence that acetaldehyde... is responsible for the carcinogenic effect of ethanol... owing to its multiple mutagenic effects on DNA." Acetaldehyde can react with DNA to create DNA adducts including the Cr-PdG adduct. This Cr-PdG adduct "is likely to play a central role in the mechanism of alcoholic beverage related carcinogenesis."
Alcohol's effect on the fetus
Fetal alcohol syndrome (FAS) is a birth defect that occurs in the offspring of women who drink alcohol during pregnancy. According to a survey of current knowledge, drinking during pregnancy carries more risks than benefits. Alcohol crosses the placental barrier and can stunt fetal growth or weight, create distinctive facial stigmata, damage neurons and brain structures, and cause other physical, mental, or behavioural problems. Fetal alcohol exposure is the leading known cause of intellectual disability in the Western world. Alcohol consumption during pregnancy is associated with brain insulin and insulin-like growth factor resistance.
Effects of alcoholism on family and children
Children raised in alcoholic families have the potential to suffer emotional distress as they move into their own committed relationships. These children are at a higher risk for divorce and separation, unstable marital conditions and fractured families. Feelings of depression and antisocial behaviors experienced in early childhood frequently contribute to marital conflict and domestic violence. Women are more likely than men to be victims of alcohol-related domestic violence.
Children of alcoholics often incorporate behaviors learned as children into their marital relationships. These behaviors lead to poor parenting practices. For example, adult children of alcoholics may simultaneously express love and rejection toward a child or spouse. This is known as insecure attachment. Insecure attachment contributes to trust and bonding issues with intimate partners and offspring. In addition, prior parental emotional unavailability contributes to poor conflict resolution skills in adult relationships. Evidence shows a correlation between alcoholic fathers who display harsh and ineffective parenting practices with adolescent and adult alcohol dependence.
Children of alcoholics are often unable to trust other adults due to fear of abandonment. Further, because children learn their bonding behaviors from watching their parents' interactions, daughters of alcoholic fathers may be unable to interact appropriately with men when they reach adulthood. Poor behavior modeling by alcoholic parents contributes to inadequate understanding of how to engage in opposite gender interactions.
Sons of alcoholics are at risk for poor self-regulation that is often displayed in the preschool years. This leads to blaming others for behavioral problems and difficulties with impulse control. Poor decision-making correlates to early alcohol use, especially in sons of alcoholics. Sons often demonstrate thrill-seeking behavior, harm avoidance, and exhibit a low level of frustration tolerance.
Economic impact from long-term consumption of alcohol
There is currently no consistent approach to measuring the economic impact of alcohol consumption. The economic burden such as direct, indirect, and intangible cost of diseases can be estimated through cost-of-illness studies. Direct costs are estimated through prevalence and incidence studies, while indirect costs are estimated through the human capital method, the demographic method, and the friction cost method. However, it is difficult to accurately measure the economic impact due to differences in methodologies, cost items related to alcohol consumption, and measurement techniques.
Alcohol dependence has a far reaching impact on health outcomes. A study conducted in Germany in 2016 found the economic burden for those dependent on alcohol was 50% higher than those who were not. In the study, over half of the economic cost was due to lost productivity, and only 6% was due to alcohol treatment programs. The economic cost was mostly borne by individuals between 30 and 49 years old. In another study conducted with data from eight European countries, 77% of alcohol dependent patients had psychiatric and somatic co-morbidity, which in turn increased systematic healthcare and economic cost. Alcohol consumption can also affect the immune system and produce complications in people with HIV, pneumonia, and tuberculosis.
Indirect costs due to alcohol dependence are significant. The biggest indirect cost comes from lost productivity, followed by premature mortality. Men with alcohol dependence in the U.S. have lower labor force participation by 2.5%, lower earnings by 5.0%, and higher absenteeism by 0.5–1.2 days. Female binge drinkers have higher absenteeism by 0.4–0.9 days. Premature mortality is another large contributor to indirect costs of alcohol dependence. In 2004, 3.8% of global deaths were attributable to alcohol (6.3% for men and 1.1% for women). Those under 60 years old have much higher prevalence in global deaths attributable to alcohol at 5.3%.
In general, indirect costs such as premature mortality due to alcohol dependence, loss of productivity due to absenteeism and presenteeism, and the cost of property damage and enforcement far exceed the direct health care and law enforcement costs. Aggregating the economic cost from all sources, the impact can range from 0.45 to 5.44% of a country's gross domestic product (GDP). The wide range is due to inconsistency in the measurement of economic burden, as some studies offset the costs against possible positive effects of long-term alcohol consumption.
See also
Short-term effects of alcohol consumption
Alcohol and suicide
Self-medication on CNS depressants (alcohol)
Self-medicated effectiveness on alcohol
Notes
References
External links
Alcohol, Other Drugs, and Health: Current Evidence. Boston University/National Institute on Alcohol Abuse and Alcoholism Journal
Alcohol is linked to high blood pressure, cancers and heart attack (NHS)
Long
Alc | Long-term effects of alcohol | [
"Environmental_science"
] | 9,809 | [
"Toxic effects of substances chiefly nonmedicinal as to source",
"Toxicology"
] |
337,595 | https://en.wikipedia.org/wiki/Voyagers%21 | Voyagers! is an American science-fiction television series about time travel that aired on NBC from October 3, 1982, to July 10, 1983, during the 1982–1983 season. The series starred Jon-Erik Hexum and Meeno Peluce.
Opening narration
Plot
Phineas Bogg (Jon-Erik Hexum) is one of a society of time travelers called Voyagers. With the help of Jeffrey Jones (played by Meeno Peluce), a young boy from 1982, he uses a hand-held device called an Omni, which looks like a large pocket watch and flashes red when history is wrong and green when the timeline is corrected, to travel in time and ensure that history unfolds correctly.
Bogg and Jeffrey first met when Bogg's Omni malfunctioned and took him to 1982 (the device was not supposed to reach any later than 1970), landing him in the skyscraper apartment of Jeffrey's aunt and uncle, who were caring for him after his parents' deaths. Bogg's guidebook, which contained a detailed description of how history was supposed to unfold, was grabbed by Jeffrey's dog Ralph, and in the struggle to retrieve it, Jeffrey accidentally fell out his bedroom window and Bogg jumped out to rescue him by activating the Omni. With his guidebook stuck in 1982, Bogg (who, being more interested in girls than in history, apparently never paid much attention in his Voyager training/history classes) had to rely on Jeffrey, whose father had been a history professor, to help him. Jeffrey's knowledge proved invaluable; for example, in the first episode, Jeffrey ensured that baby Moses' basket traveled down the Nile, where it was met by the Pharaoh's daughter.
Phineas is a great womanizer and manages to fall for a beautiful woman in almost every episode. Whenever Jeffrey's wisdom was paired up against Bogg's stubbornness, Jeffrey usually won out, to which Bogg would always mutter, "Smart kids give me a pain!" Another catchphrase used by Bogg as an expletive was "Bat's breath!" The two develop a strong relationship and become a formidable team. In the course of their adventures together, they sometimes encounter other Voyagers whose missions happen to overlap with theirs.
As revealed later in the series, despite Jeffrey's age and the accidental circumstances of his first encounter with Phineas, he was always destined to become a Voyager.
Over the closing credits of each episode, regular cast member Meeno Peluce said in voice-over: "If you want to learn more about [historical element from the episode], take a voyage down to your public library. It's all in books!"
Cast
Jon-Erik Hexum as Phineas Bogg
Meeno Peluce as Jeffrey Jones
Reception
Tom Shales of The Washington Post praised the series as "a live-action version of the Mr. Peabody and Sherman cartoons on the delightful old 'Bullwinkle' show" and "largely a joy ride from start to finish."
Voyagers! ran for one season of 20 episodes, broadcast opposite the top-rated 60 Minutes. The series averaged a 17 share. Voyagers! seemed likely to be renewed for a second season, but controversies in 60 Minutes reporting led executives to believe that 60 Minutes might successfully be challenged by a competing news program, instead. NBC cancelled Voyagers! and replaced it with the news magazine program Monitor, which averaged only a 7 share. David Letterman poked fun at NBC's cancellation of the series by airing a sketch on his Late Night program titled "They Took My Show Away", a parody of an after-school special in which the host comforts a boy who was a Voyagers! fan.
U.S. television ratings
Home media
Television film
In 1985, following the death of series lead Jon-Erik Hexum, Universal re-edited several episodes of the show into a television film. Entitled Voyager from the Unknown, the story combined the pilot episode and "Voyagers of the Titanic" into one feature-length film. This version incorporates new video special effects, voice-over dubbing for Hexum's and Peluce's characters that changed and added dialogue, and new footage to include a supercomputer directing Voyager missions.
The opening begins with a narration and painted illustrations of Bogg receiving his guidebook on "Planet Voyager" by artist Jerry Gebr.
"Far out in the cosmos there exists a planet known as Voyager, where the mystery of travel into space and through time has been solved. It is inhabited by a race who call themselves Voyagers. Their purpose is to keep constant surveillance on history. These people have a time machine device, the Omni, which will take them into the past, present or future. As each Voyager graduates he is given an omni and a guidebook. One such graduate Phineas Bogg, who was assigned as a field worker to operate in certain time zones."
VHS release
The re-edited telefilm was issued on VHS by MCA Home Video in 1985. It was the only official release of Voyagers! on home video in the US until the DVD release in 2007.
DVD release
On July 17, 2007, Universal Studios Home Entertainment released all 20 episodes of Voyagers! on DVD in Region 1. It was released in Region 2 on October 29, 2007.
Streaming
All 20 episodes are also available in the United States by streaming through Amazon Prime Video.
As of September 2024, the series was available on The Roku Channel.
Episodes
References
Bibliography
External links
Voyagers Guidebook
Voyagers Guidebook (Blog Site)
1982 American television series debuts
1983 American television series endings
1980s American science fiction television series
Alternate history television series
American English-language television shows
NBC television dramas
Television series by Universal Television
American time travel television series
1980s American time travel television series
Cultural depictions of Mark Twain
Cultural depictions of Theodore Roosevelt
Cultural depictions of Franklin D. Roosevelt
Cultural depictions of T. E. Lawrence
Cultural depictions of Spartacus
Cultural depictions of Billy the Kid
Cultural depictions of Jack the Ripper
Cultural depictions of the Wright brothers
Cultural depictions of Babe Ruth
Cultural depictions of Charles Dickens
Cultural depictions of Isaac Newton
Cultural depictions of Harry Houdini
Cultural depictions of Marco Polo
Cultural depictions of Kublai Khan
Cultural depictions of Albert Schweitzer
Cultural depictions of Alexander Graham Bell
Cultural depictions of Arthur Conan Doyle
Cultural depictions of Louis Pasteur
Cultural depictions of Buffalo Bill
Cultural depictions of Jimmy Carter
Cultural depictions of Thomas Edison
Cultural depictions of George Washington
Depictions of Abraham Lincoln on television
Cultural depictions of Albert Einstein
Cultural depictions of Andrew Jackson
Depictions of Cleopatra on television
Cultural depictions of Queen Victoria on television
Television series about RMS Titanic | Voyagers! | [
"Astronomy"
] | 1,374 | [
"Cultural depictions of Isaac Newton",
"Cultural depictions of astronomers"
] |
337,713 | https://en.wikipedia.org/wiki/Composite%20data%20type | In computer science, a composite data type or compound data type is a data type that consists of programming language scalar data types and other composite types that may be heterogeneous and hierarchical in nature. It is sometimes called a structure or by a language-specific keyword used to define one, such as struct. It falls into the aggregate type classification, which includes homogeneous collections such as the array and list.
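As a minimal illustration of the idea, the C sketch below defines a composite type whose fields are scalars, another composite type, and an array; the names point and circle are invented for the example rather than taken from the article.

#include <stdio.h>

/* A composite type built from two scalar fields. */
struct point {
    double x;
    double y;
};

/* Composite types can be hierarchical: this struct contains
 * another struct (point) and an array (a homogeneous aggregate). */
struct circle {
    struct point center;
    double radius;
    char label[16];
};

int main(void) {
    struct circle c = { { 1.0, 2.0 }, 0.5, "example" };
    printf("%s: center=(%.1f, %.1f) radius=%.2f\n",
           c.label, c.center.x, c.center.y, c.radius);
    return 0;
}

The nested struct point shows the hierarchical aspect, while the char array is a homogeneous aggregate member of the kind the classification above distinguishes from heterogeneous structures.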
See also
References
Data types
Type theory
Articles with example C code
Articles with example C++ code | Composite data type | [
"Mathematics"
] | 106 | [
"Type theory",
"Mathematical logic",
"Mathematical structures",
"Mathematical objects"
] |
337,775 | https://en.wikipedia.org/wiki/Nuclear%20weapons%20testing | Nuclear weapons tests are experiments carried out to determine the performance of nuclear weapons and the effects of their explosion. Nuclear testing is a sensitive political issue. Governments have often performed tests to signal strength. Because of their destruction and fallout, testing has seen opposition by civilians as well as governments, with international bans having been agreed on. Thousands of tests have been performed, with most in the second half of the 20th century.
The first nuclear device was detonated as a test by the United States at the Trinity site in New Mexico on July 16, 1945, with a yield approximately equivalent to 20 kilotons of TNT. The first thermonuclear weapon technology test of an engineered device, codenamed Ivy Mike, was tested at the Enewetak Atoll in the Marshall Islands on November 1, 1952 (local date), also by the United States. The largest nuclear weapon ever tested was the Tsar Bomba of the Soviet Union at Novaya Zemlya on October 30, 1961, with the largest yield ever seen, an estimated 50–58 megatons.
With the advent of nuclear technology and its increasingly global fallout, an anti-nuclear movement formed, and in 1963 three of the then four nuclear states (the UK, US, and Soviet Union) and many non-nuclear states signed the Limited Test Ban Treaty, pledging to refrain from testing nuclear weapons in the atmosphere, underwater, or in outer space. The treaty permitted underground nuclear testing. France continued atmospheric testing until 1974, and China continued until 1980. Neither has signed the treaty.
Underground tests conducted by the Soviet Union continued until 1990, the United Kingdom until 1991, the United States until 1992, and both China and France until 1996. In signing the Comprehensive Nuclear-Test-Ban Treaty in 1996, these countries pledged to discontinue all nuclear testing; the treaty has not yet entered into force because of its failure to be ratified by eight countries. Non-signatories India and Pakistan last tested nuclear weapons in 1998. North Korea conducted nuclear tests in 2006, 2009, 2013, January 2016, September 2016, and 2017. The most recent confirmed nuclear test was conducted by North Korea in September 2017.
Types
Nuclear weapons tests have historically been divided into four categories reflecting the medium or location of the test.
Atmospheric testing involves explosions that take place in the atmosphere. Generally, these have occurred as devices detonated on towers, balloons, barges, or islands, or dropped from airplanes, and also those only buried far enough to intentionally create a surface-breaking crater. The United States, the Soviet Union, and China have all conducted tests involving explosions of missile-launched bombs (See List of nuclear weapons tests#Tests of live warheads on rockets). Nuclear explosions close enough to the ground to draw dirt and debris into their mushroom cloud can generate large amounts of nuclear fallout due to irradiation of the debris (particularly with neutron radiation) as well as radioactive contamination of otherwise non-radioactive material. This definition of atmospheric is used in the Limited Test Ban Treaty, which banned this class of testing along with exoatmospheric and underwater.
Underground testing is conducted under the surface of the earth, at varying depths. Underground nuclear testing made up the majority of nuclear tests by the United States and the Soviet Union during the Cold War; other forms of nuclear testing were banned by the Limited Test Ban Treaty in 1963. True underground tests are intended to be fully contained and emit a negligible amount of fallout. These tests do occasionally "vent" to the surface, however, releasing anywhere from almost no radioactive debris to considerable amounts. Underground testing, almost by definition, causes seismic activity of a magnitude that depends on the yield of the nuclear device and the composition of the medium in which it is detonated, and generally creates a subsidence crater. In 1976, the United States and the USSR agreed to limit the maximum yield of underground tests to 150 kt with the Threshold Test Ban Treaty. Underground testing also falls into two physical categories: tunnel tests in generally horizontal tunnel drifts, and shaft tests in vertically drilled holes.
Exoatmospheric testing is conducted above the atmosphere. The test devices are lifted on rockets. These high-altitude nuclear explosions can generate a nuclear electromagnetic pulse (NEMP) when they occur in the ionosphere, and charged particles resulting from the blast can cross hemispheres following geomagnetic lines of force to create an auroral display.
Underwater testing involves nuclear devices being detonated underwater, usually moored to a ship or a barge (which is subsequently destroyed by the explosion). Tests of this nature have usually been conducted to evaluate the effects of nuclear weapons against naval vessels (such as in Operation Crossroads), or to evaluate potential sea-based nuclear weapons (such as nuclear torpedoes or depth charges). Underwater tests close to the surface can disperse large amounts of radioactive particles in water and steam, contaminating nearby ships or structures, though they generally do not create fallout other than very locally to the explosion.
Salvo tests
Another way to classify nuclear tests is by the number of explosions that constitute the test. The treaty definition of a salvo test is:
In conformity with treaties between the United States and the Soviet Union, a salvo is defined, for multiple explosions for peaceful purposes, as two or more separate explosions where a period of time between successive individual explosions does not exceed 5 seconds and where the burial points of all explosive devices can be connected by segments of straight lines, each of them connecting two burial points, and the total length does not exceed 40 kilometers. For nuclear weapon tests, a salvo is defined as two or more underground nuclear explosions conducted at a test site within an area delineated by a circle having a diameter of two kilometers and conducted within a total period of time of 0.1 seconds.
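As a simplified sketch (not from the treaty text; the helper name and flat planar coordinates are assumptions made here), the weapons-test criterion quoted above can be checked in C:

```c
#include <stdio.h>
#include <math.h>
#include <stdbool.h>

/* Hypothetical sketch: do two underground explosions satisfy the quoted
   weapons-test salvo definition? Both burial points must fit inside a
   circle of 2 km diameter (equivalently, lie at most 2 km apart), and
   the shots must be fired within 0.1 s of each other. Coordinates are
   simplified to a flat x/y plane measured in kilometres. */
bool is_weapons_salvo(double x1, double y1, double t1,
                      double x2, double y2, double t2) {
    double dist_km = hypot(x2 - x1, y2 - y1);  /* separation of burial points */
    double dt_s    = fabs(t2 - t1);            /* time between the shots */
    return dist_km <= 2.0 && dt_s <= 0.1;
}

int main(void) {
    /* Two shots 1.5 km apart, fired 0.05 s apart: a salvo by this check. */
    printf("%s\n", is_weapons_salvo(0.0, 0.0, 0.00, 1.5, 0.0, 0.05)
                       ? "salvo" : "separate tests");
    return 0;
}
```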
The USSR has exploded up to eight devices in a single salvo test; Pakistan's second and last official test exploded four different devices. Almost all lists in the literature are lists of tests; in the lists in Wikipedia (for example, Operation Cresset has separate items for Cremino and Caerphilly, which together constitute a single test), the lists are of explosions.
Purpose
Separately from these designations, nuclear tests are also often categorized by the purpose of the test itself.
Weapons-related tests are designed to garner information about how (and if) the weapons themselves work. Some serve to develop and validate a specific weapon type. Others test experimental concepts or are physics experiments meant to gain fundamental knowledge of the processes and materials involved in nuclear detonations.
Weapons effects tests are designed to gain information about the effects of the weapons on structures, equipment, organisms, and the environment. They are mainly used to assess and improve survivability to nuclear explosions in civilian and military contexts, tailor weapons to their targets, and develop the tactics of nuclear warfare.
Safety experiments are designed to study the behavior of weapons in simulated accident scenarios. In particular, they are used to verify that a (significant) nuclear detonation cannot happen by accident. They include one-point safety tests and simulations of storage and transportation accidents.
Nuclear test detection experiments are designed to improve the capabilities to detect, locate, and identify nuclear detonations, in particular, to monitor compliance with test-ban treaties. In the United States these tests are associated with Operation Vela Uniform before the Comprehensive Test Ban Treaty stopped all nuclear testing among signatories.
Peaceful nuclear explosions were conducted to investigate non-military applications of nuclear explosives. In the United States, these were performed under the umbrella name of Operation Plowshare.
Aside from these technical considerations, tests have been conducted for political and training purposes, and can often serve multiple purposes.
Alternatives to full-scale testing
Computer simulation is used extensively to provide as much information as possible without physical testing. Mathematical models for such simulations cover not only weapon performance but also shelf life and maintenance. A recurring theme is that even though simulations cannot fully replace physical testing, they can reduce the amount of testing that is necessary.
Hydronuclear tests study nuclear materials under the conditions of explosive shock compression. They can create subcritical conditions, or supercritical conditions with yields ranging from negligible all the way up to a substantial fraction of full weapon yield.
Critical mass experiments determine the quantity of fissile material required for criticality with a variety of fissile material compositions, densities, shapes, and reflectors. They can be subcritical or supercritical, in which case significant radiation fluxes can be produced. This type of test has resulted in several criticality accidents.
Subcritical (or cold) tests are any type of tests involving nuclear materials and possibly high explosives (like those mentioned above) that purposely result in no yield. The name refers to the lack of creation of a critical mass of fissile material. They are the only type of tests allowed under the interpretation of the Comprehensive Nuclear-Test-Ban Treaty tacitly agreed to by the major atomic powers. Subcritical tests continue to be performed by the United States, Russia, and the People's Republic of China, at least.
Subcritical tests executed by the United States include:
History
The first atomic weapons test was conducted near Alamogordo, New Mexico, on July 16, 1945, during the Manhattan Project, and given the codename "Trinity". The test was originally to confirm that the implosion-type nuclear weapon design was feasible, and to give an idea of what the actual size and effects of a nuclear explosion would be before they were used in combat against Japan. The test gave a good approximation of many of the explosion's effects, but did not give an appreciable understanding of nuclear fallout, which was not well understood by the project scientists until well after the atomic bombings of Hiroshima and Nagasaki.
The United States conducted six atomic tests before the Soviet Union developed their first atomic bomb (RDS-1) and tested it on August 29, 1949. Neither country had very many atomic weapons to spare at first, and so testing was relatively infrequent (when the US used two weapons for Operation Crossroads in 1946, they were detonating over 20% of their current arsenal). By the 1950s the United States had established a dedicated test site on its own territory (Nevada Test Site) and was also using a site in the Marshall Islands (Pacific Proving Grounds) for extensive atomic and nuclear testing.
The early tests were used primarily to discern the military effects of atomic weapons (Crossroads had involved the effect of atomic weapons on a navy, and how they functioned underwater) and to test new weapon designs. During the 1950s, these included new hydrogen bomb designs, which were tested in the Pacific, and also new and improved fission weapon designs. The Soviet Union also began testing on a limited scale, primarily in Kazakhstan. During the later phases of the Cold War, both countries developed accelerated testing programs, testing many hundreds of bombs over the last half of the 20th century.
Atomic and nuclear tests can involve many hazards. Some of these were illustrated in the US Castle Bravo test in 1954. The weapon design tested was a new form of hydrogen bomb, and the scientists underestimated how vigorously some of the weapon materials would react. As a result, the explosion—with a yield of 15 Mt—was over twice what was predicted. Aside from this problem, the weapon also generated a large amount of radioactive nuclear fallout, more than had been anticipated, and a change in the weather pattern caused the fallout to spread in a direction not cleared in advance. The fallout plume spread high levels of radiation over a vast area, contaminating populated islands in nearby atoll formations. Though they were soon evacuated, many of the islands' inhabitants suffered from radiation burns and later from other effects such as an increased cancer rate and birth defects, as did the crew of the Japanese fishing boat Daigo Fukuryū Maru. One crewman died from radiation sickness after returning to port, and it was feared that the radioactive fish they had been carrying had made it into the Japanese food supply.
Castle Bravo was the worst US nuclear accident, but many of its component problems—unpredictably large yields, changing weather patterns, unexpected fallout contamination of populations and the food supply—occurred during other atmospheric nuclear weapons tests by other countries as well. Concerns over worldwide fallout rates eventually led to the Partial Test Ban Treaty in 1963, which limited signatories to underground testing. Not all countries stopped atmospheric testing, but because the United States and the Soviet Union were responsible for roughly 86% of all nuclear tests, their compliance cut the overall level substantially. France continued atmospheric testing until 1974, and China until 1980.
A tacit moratorium on testing was in effect from 1958 to 1961 and ended with a series of Soviet tests in late 1961, including the Tsar Bomba, the largest nuclear weapon ever tested. The United States responded in 1962 with Operation Dominic, involving dozens of tests, including the explosion of a missile launched from a submarine.
Almost all new nuclear powers have announced their possession of nuclear weapons with a nuclear test. The only acknowledged nuclear power that claims never to have conducted a test is South Africa (although see the Vela incident), which has since dismantled all of its weapons. Israel is widely thought to possess a sizable nuclear arsenal, though it has never tested, unless it was involved in the Vela incident. Experts disagree on whether states can have reliable nuclear arsenals—especially ones using advanced warhead designs, such as hydrogen bombs and miniaturized weapons—without testing, though all agree that it is very unlikely that significant nuclear innovations can be developed without testing. One other approach is to use supercomputers to conduct "virtual" testing, but such codes need to be validated against test data.
There have been many attempts to limit the number and size of nuclear tests; the most far-reaching is the Comprehensive Test Ban Treaty of 1996, which has not yet been ratified by eight of the "Annex 2 countries" required for it to take effect, including the United States. Nuclear testing has since become a controversial issue in the United States, with a number of politicians saying that future testing might be necessary to maintain the aging warheads from the Cold War. Because nuclear testing is seen as furthering nuclear arms development, many are opposed to future testing as an acceleration of the arms race.
From 1945 to 1992, 520 atmospheric nuclear explosions (including eight underwater) were conducted, with a total yield of 545 megatons, peaking in 1961–1962 when 340 megatons were detonated in the atmosphere by the United States and the Soviet Union; the estimated number of underground nuclear tests conducted from 1957 to 1992 is 1,352 explosions, with a total yield of 90 Mt.
Yield
The yields of atomic and thermonuclear bombs are typically expressed in different units. Thermonuclear bombs can be hundreds or thousands of times more powerful than their atomic counterparts, so their yields are usually expressed in megatons, each equivalent to about 1,000,000 tons of TNT. In contrast, atomic bombs' yields are typically measured in kilotons, about 1,000 tons of TNT each.
In the US context, it was decided during the Manhattan Project that yield measured in tons of TNT equivalent could be imprecise. This stems from the range of experimental values for the energy content of TNT. There is also the issue of which ton to use, as short tons, long tons, and metric tonnes all have different values. It was therefore decided that one kiloton would be defined as 10^12 calories (about 4.184 terajoules).
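Assuming the convention just described — one kiloton defined as 10^12 calories, hence about 4.184 terajoules — a small C sketch converts test yields to joules (the yield figures are those given elsewhere in this article):

```c
#include <stdio.h>

/* One kiloton of TNT equivalent, by the convention described above:
   10^12 thermochemical calories at 4.184 joules per calorie. */
#define JOULES_PER_KILOTON (1.0e12 * 4.184)   /* = 4.184e12 J */

int main(void) {
    double trinity_kt = 20.0;      /* Trinity yield, ~20 kt */
    double tsar_kt    = 50000.0;   /* Tsar Bomba, ~50 Mt = 50,000 kt */
    printf("Trinity:    %.3e J\n", trinity_kt * JOULES_PER_KILOTON);
    printf("Tsar Bomba: %.3e J\n", tsar_kt * JOULES_PER_KILOTON);
    return 0;
}
```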
Nuclear testing by country
The nuclear powers have conducted more than 2,000 nuclear test explosions (numbers are approximate, as some test results have been disputed):
United States: 1,054 tests by official count (involving at least 1,149 devices), 219 of which were atmospheric tests as defined by the CTBT. These include 904 tests at the Nevada Test Site, 106 at the Pacific Proving Grounds and other locations in the Pacific, 3 in the South Atlantic Ocean, and 17 other tests taking place in Amchitka (Alaska), Colorado, Mississippi, New Mexico, and Nevada outside the NNSS (see Nuclear weapons and the United States for details). 24 tests are classified as British tests held at the NTS. There were 35 Plowshare detonations and 7 Vela Uniform tests; 88 tests were safety experiments and 4 were transportation/storage tests. Motion pictures were made of the explosions and later used to validate computer-simulation predictions of explosions. United States' table data.
Soviet Union: 715 tests (involving 969 devices) by official count, plus 13 unnumbered test failures. Most were at their Southern Test Area at Semipalatinsk Test Site and the Northern Test Area at Novaya Zemlya. Others include rocket tests and peaceful-use explosions at various sites in Russia, Kazakhstan, Turkmenistan, Uzbekistan and Ukraine. Soviet Union's table data.
United Kingdom: 45 tests, of which 12 were in Australian territory (three at the Montebello Islands and nine in mainland South Australia at Maralinga and Emu Field), 9 at Christmas Island (Kiritimati) in the Pacific Ocean, and 24 in the United States at the Nevada Test Site as part of joint test series. 43 safety tests (the Vixen series) are not included in that number, though safety experiments by other countries are. The United Kingdom's summary table.
France: 210 tests by official count (50 atmospheric, 160 underground): four atomic atmospheric tests at C.S.E.M. near Reggane and 13 atomic underground tests at C.E.M.O. near In Ekker, both in the French Algerian Sahara, plus nuclear atmospheric and underground tests at and around the Fangataufa and Moruroa Atolls in French Polynesia. Four of the In Ekker tests are counted as peaceful use, as they were reported as part of the CET's APEX (Application pacifique des expérimentations nucléaires, "Peaceful Application of Nuclear Experiments") and given alternate names. France's summary table.
China: 45 tests (23 atmospheric and 22 underground) at Lop Nur Nuclear Weapons Test Base in Malan, Xinjiang. There are two additional unnumbered failed tests. China's summary table.
India: Six underground explosions (including the first one in 1974), at Pokhran. India's summary table.
Pakistan: Six underground explosions at Ras Koh Hills and the Chagai District. Pakistan's summary table.
North Korea: North Korea is the only country in the world that still tests nuclear weapons, and its tests have caused escalating tensions between it and the United States. Its most recent nuclear test was on September 3, 2017. North Korea's summary table
There may also have been at least three alleged but unacknowledged nuclear explosions (see list of alleged nuclear tests) including the Vela incident.
From the first nuclear test in 1945 until tests by Pakistan in 1998, there was never a period of more than 22 months with no nuclear testing. June 1998 to October 2006 was the longest period since 1945 with no acknowledged nuclear tests.
A summary table of all the nuclear testing that has happened since 1945 is here: Worldwide nuclear testing counts and summary.
Global fallout
Nuclear weapons testing never produced a scenario like nuclear winter, which would require a concentrated number of nuclear explosions in a nuclear holocaust. Nevertheless, the thousands of tests, hundreds of them atmospheric, did produce global fallout, which peaked in 1963 (the bomb pulse) at about 0.15 mSv per year worldwide, or about 7% of the average background radiation dose from all sources, and has slowly decreased since; natural environmental radiation levels are around 1 mSv per year. This global fallout was one of the main drivers of the ban on nuclear weapons testing, particularly atmospheric testing. It has been estimated that by 2020 up to 2.4 million people had died as a result of nuclear weapons testing.
Criticism
Nuclear arms tests have been criticized for fueling the arms race and for their fallout, which can be global in reach.
Nuclear weapons tests have been criticized by anti-nuclear activists as nuclear imperialism, colonialism, ecocide, environmental racism and nuclear genocide.
The movement gained momentum particularly in the 1960s and again in the 1980s.
The annual International Day against Nuclear Tests raises critical awareness of nuclear testing.
Treaties against testing
There are many existing anti-nuclear explosion treaties, notably the Partial Nuclear Test Ban Treaty and the Comprehensive Nuclear Test Ban Treaty. These treaties were proposed in response to growing international concerns about environmental damage among other risks. Nuclear testing involving humans also contributed to the formation of these treaties. Examples can be seen in the following articles:
Desert Rock exercises
Totskoye range nuclear tests
The Partial Nuclear Test Ban Treaty makes it illegal to detonate any nuclear explosion anywhere except underground, in order to reduce atmospheric fallout. Most countries have signed and ratified the treaty, which went into effect in October 1963. Of the nuclear states, France, China, and North Korea have never signed it.
The 1996 Comprehensive Nuclear-Test-Ban Treaty (CTBT) bans all nuclear explosions everywhere, including underground. For that purpose, the Preparatory Commission of the Comprehensive Nuclear-Test-Ban Treaty Organization is building an international monitoring system with 337 facilities located all over the globe; 85% of these facilities are already operational. The CTBT has been signed by 183 states, of which 157 have also ratified it. For the treaty to enter into force, it must be ratified by 44 specific nuclear technology-holder countries. These "Annex 2 States" participated in the negotiations on the CTBT between 1994 and 1996 and possessed nuclear power or research reactors at that time. The ratification of eight Annex 2 states is still missing: China, Egypt, Iran, Israel and the United States have signed but not ratified the treaty; India, North Korea and Pakistan have not signed it.
The following is a list of the treaties applicable to nuclear testing:
Compensation for victims
Over 500 atmospheric nuclear weapons tests were conducted at various sites around the world from 1945 to 1980. As public awareness and concern mounted over the possible health hazards associated with exposure to nuclear fallout, various studies were done to assess the extent of the hazard. A Centers for Disease Control and Prevention / National Cancer Institute study claims that nuclear fallout might have led to approximately 11,000 excess deaths, most caused by thyroid cancer linked to exposure to iodine-131.
United States: Prior to March 2009, the US was the only nation to compensate nuclear test victims. Since the Radiation Exposure Compensation Act of 1990, more than $1.38 billion in compensation has been approved. The money has gone to people who took part in the tests, notably at the Nevada Test Site, and to others exposed to the radiation. As of 2017, the US government had refused to pay for the medical care of troops who attribute their health problems to the construction of the Runit Dome in the Marshall Islands.
France: In March 2009, the French Government offered to compensate victims for the first time and legislation is being drafted which would allow payments to people who suffered health problems related to the tests. The payouts would be available to victims' descendants and would include Algerians, who were exposed to nuclear testing in the Sahara in 1960. Victims say the eligibility requirements for compensation are too narrow.
United Kingdom: There is no formal British government compensation program. Nearly 1,000 veterans of the Christmas Island nuclear tests of the 1950s are engaged in legal action against the Ministry of Defence for negligence. They say they suffered health problems and were not warned of potential dangers before the experiments.
Russia: Decades later, Russia offered compensation to veterans who were part of the 1954 Totsk test. There was no compensation to civilians sickened by the Totsk test. Anti-nuclear groups say there has been no government compensation for other nuclear tests.
China: China has undertaken highly secretive atomic tests in remote deserts in a Central Asian border province. Anti-nuclear activists say there is no known government program for compensating victims.
Milestone nuclear explosions
The following list is of milestone nuclear explosions. In addition to the atomic bombings of Hiroshima and Nagasaki, the first nuclear test of a given weapon type for a country is included, as well as tests that were otherwise notable (such as the largest test ever). All yields (explosive power) are given in their estimated energy equivalents in kilotons of TNT (see TNT equivalent). Putative tests (like the Vela incident) have not been included.
Note
"Staged" refers to whether it was a "true" thermonuclear weapon of the so-called Teller–Ulam configuration or simply a form of a boosted fission weapon. For a more complete list of nuclear test series, see List of nuclear tests. Some exact yield estimates, such as that of the Tsar Bomba and the tests by India and Pakistan in 1998, are somewhat contested among specialists.
See also
(in Nevada in the US)
(including nuclear weapons accidents)
Nuclear test sites
(documentary about nuclear weapon testing)
Explanatory notes
Citations
General and cited references
Gusterson, Hugh. Nuclear Rites: A Weapons Laboratory at the End of the Cold War. Berkeley, CA: University of California Press, 1996.
Hacker, Barton C. Elements of Controversy: The Atomic Energy Commission and Radiation Safety in Nuclear Weapons Testing, 1947–1974. Berkeley, CA: University of California Press, 1994.
Rice, James. Downwind of the Atomic State: Atmospheric Testing and the Rise of the Risk Society. (New York University Press, 2023). https://nyupress.org/9781479815340/downwind-of-the-atomic-state/
Schwartz, Stephen I. Atomic Audit: The Costs and Consequences of U.S. Nuclear Weapons. Washington, D.C.: Brookings Institution Press, 1998.
Weart, Spencer R. Nuclear Fear: A History of Images. Cambridge, MA: Harvard University Press, 1985.
External links
Federation of American Scientists
Preparatory Commission for the Comprehensive Nuclear-Test-Ban-Treaty Organization
Nuclear Weapon Archive
NuclearFiles.org
What About Radiation on Bikini Atoll?
Bulletin of the Atomic Scientists
Alsos Digital Library for Nuclear Issues
Atomic Bomb website and nuclear weapon testing articles
The Woodrow Wilson Center's Nuclear Proliferation International History Project
Testing | Nuclear weapons testing | [
"Technology"
] | 5,364 | [
"Environmental impact of nuclear power",
"Nuclear weapons testing"
] |
337,862 | https://en.wikipedia.org/wiki/Table%20%28information%29 | A table is an arrangement of information or data, typically in rows and columns, or possibly in a more complex structure. Tables are widely used in communication, research, and data analysis. Tables appear in print media, handwritten notes, computer software, architectural ornamentation, traffic signs, and many other places. The precise conventions and terminology for describing tables vary depending on the context. Further, tables differ significantly in variety, structure, flexibility, notation, representation and use. Information or data conveyed in table form is said to be in tabular format (adjective). In books and technical articles, tables are typically presented apart from the main text in numbered and captioned floating blocks.
Basic description
A table consists of an ordered arrangement of rows and columns. This is a simplified description of the most basic kind of table. Certain considerations follow from this simplified description:
the term row has several common synonyms (e.g., record, k-tuple, n-tuple, vector);
the term column has several common synonyms (e.g., field, parameter, property, attribute, stanchion);
a column is usually identified by a name;
a column name can consist of a word, phrase or a numerical index;
the intersection of a row and a column is called a cell.
The elements of a table may be grouped, segmented, or arranged in many different ways, and even nested recursively. Additionally, a table may include metadata, annotations, a header, a footer or other ancillary features.
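To make the terminology concrete, here is a small C sketch (field names are hypothetical) in which each array element plays the role of a row, each named struct member a column, and member access on one element selects a cell:

```c
#include <stdio.h>

/* Each struct member plays the role of a named column. */
struct Row {
    int         id;      /* column "id"    */
    const char *name;    /* column "name"  */
    double      score;   /* column "score" */
};

int main(void) {
    /* The array is the table body: one element per row. */
    struct Row table[] = {
        { 1, "alpha", 9.5 },
        { 2, "beta",  7.2 },
    };
    /* table[1].score is a single cell: the intersection of
       row 1 and the "score" column. */
    printf("cell (row 1, score) = %.1f\n", table[1].score);
    return 0;
}
```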
Simple table
The following illustrates a simple table with four columns and nine rows. The first row is not counted, because it is only used to display the column names. This is called a "header row".
Multi-dimensional table
The concept of dimension is also a part of basic terminology. Any "simple" table can be represented as a "multi-dimensional" table by normalizing the data values into ordered hierarchies. A common example of such a table is a multiplication table.
In multi-dimensional tables, each cell in the body of the table (and the value of that cell) relates to the values at the beginnings of the column (i.e. the header), the row, and other structures in more complex tables. This is an injective relation: each combination of the values of the headers row (row 0, for lack of a better term) and the headers column (column 0, for lack of a better term) is related to a unique cell in the table:
Column 1 and row 1 will only correspond to cell (1,1);
Column 1 and row 2 will only correspond to cell (2,1) etc.
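For instance, a minimal C sketch of a multiplication table — where the header row and header column hold the factors and each body cell holds their product, so that each header pair addresses exactly one cell:

```c
#include <stdio.h>

/* Print a small multiplication table: headers in row 0 and column 0,
   each body cell (r, c) holding r * c. */
int main(void) {
    const int n = 5;
    printf("    ");
    for (int c = 1; c <= n; c++) printf("%4d", c);   /* header row */
    printf("\n");
    for (int r = 1; r <= n; r++) {
        printf("%4d", r);                            /* header column */
        for (int c = 1; c <= n; c++)
            printf("%4d", r * c);                    /* body cells */
        printf("\n");
    }
    return 0;
}
```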
The first column often presents a description of the information dimension by which the rest of the table is navigated; this column is called the "stub column". Tables may have three or more dimensions and can be classified by the number of dimensions. Multi-dimensional tables may have super-rows, rows that describe additional dimensions for the rows presented below them; these are usually grouped in a tree-like structure. This structure is typically presented visually with an appropriate number of white spaces in front of each stub label.
In the literature, tables often present numerical values, cumulative statistics, categorical values, and at times parallel descriptions in the form of text. Because they can condense a large amount of information into a limited space, they are popular in the scientific literature of many fields of study.
Generic representation
As a communication tool, a table allows a form of generalization of information from an unlimited number of different social or scientific contexts. It provides a familiar way to convey information that might otherwise not be obvious or readily understood.
For example, in the following diagram, two alternate representations of the same information are presented side by side. On the left is the NFPA 704 standard "fire diamond" with example values indicated and on the right is a simple table displaying the same values, along with additional information. Both representations convey essentially the same information, but the tabular representation is arguably more comprehensible to someone who is not familiar with the NFPA 704 standard. The tabular representation may not, however, be ideal for every circumstance (for example because of space limitations, or safety reasons).
Specific uses
There are several specific situations in which tables are routinely used as a matter of custom or formal convention.
Publishing
Cross-reference (Table of contents)
Mathematics
Arithmetic (Multiplication table)
Logic (Truth table)
Natural sciences
Chemistry (Periodic table)
Oceanography (tide table)
Information technology
Software applications
Modern software applications give users the ability to generate, format, and edit tables and tabular data for a wide variety of uses, for example:
word processing applications;
spreadsheet applications;
presentation software;
tables specified in HTML or another markup language
Software development
Tables have uses in software development for both high-level specification and low-level implementation.
Usage in software specification can encompass ad hoc inclusion of simple decision tables in textual documents through to the use of tabular specification methodologies, examples of which include Software Cost Reduction and Statestep.
Proponents of tabular techniques, among whom David Parnas is prominent, emphasize their understandability, as well as the quality and cost advantages of a format allowing systematic inspection, while corresponding shortcomings experienced with a graphical notation were cited in motivating the development of at least two tabular approaches.
At a programming level, software may be implemented using constructs generally represented or understood as tabular, whether to store data (perhaps to memoize earlier results), for example, in arrays or hash tables, or control tables determining the flow of program execution in response to various events or inputs.
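As a brief illustrative sketch of both uses in C (the names here are invented for illustration): an array memoizing earlier results, and a control table of function pointers directing program flow by lookup:

```c
#include <stdio.h>

/* Memoization: an array storing earlier results. */
static long fib_cache[64];
long fib(int n) {
    if (n < 2) return n;
    if (fib_cache[n] == 0)                /* not computed yet */
        fib_cache[n] = fib(n - 1) + fib(n - 2);
    return fib_cache[n];
}

/* Control table: program flow chosen by table lookup
   rather than by a chain of if/else statements. */
void on_start(void) { puts("starting"); }
void on_stop(void)  { puts("stopping"); }

struct Entry { char key; void (*action)(void); };
static const struct Entry control_table[] = {
    { 's', on_start },
    { 'q', on_stop  },
};

int main(void) {
    printf("fib(20) = %ld\n", fib(20));
    char input = 'q';                     /* simulated event/input */
    for (size_t i = 0; i < sizeof control_table / sizeof *control_table; i++)
        if (control_table[i].key == input)
            control_table[i].action();
    return 0;
}
```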
Databases
Database systems often store data in structures called tables, in which columns are data fields and rows represent data records.
Historical relationship to furniture
In medieval counting houses, the tables were covered with a piece of checkered cloth used to count money. Exchequer is an archaic term for the English institution that accounted for money owed to the monarch; the checkerboard tables of stacked coins were a concrete realization of this information.
See also
Chart
Diagram
Abstract data type
Column (database)
Information graphics
Periodic table
Reference table
Row (database)
Table (database)
Table (HTML)
Tensor
Dependent and independent variables
Zebra striping
References
External links
Infographics
Data modeling | Table (information) | [
"Engineering"
] | 1,294 | [
"Data modeling",
"Data engineering"
] |
337,864 | https://en.wikipedia.org/wiki/Fuzzball%20router | Fuzzball routers were the first modern routers on the Internet. They were DEC PDP-11 computers (usually LSI-11 personal workstations) loaded with the Fuzzball software written by David L. Mills (of the University of Delaware). The name "Fuzzball" was the colloquialism for Mills's routing software. The software evolved from the Distributed Computer Network (DCN) that started at the University of Maryland in 1973. It acquired the nickname sometime after it was rewritten in 1977.
Six Fuzzball routers provided the routing backbone of the first 56 kbit/s NSFNET, allowing the testing of many of the Internet's first protocols. It allowed the development of the first TCP/IP routing protocols, and the Network Time Protocol. They were the first routers to implement key refinements to TCP/IP such as variable-length subnet masks.
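As a loose illustration of the idea (not Fuzzball's actual code), a short C sketch showing how a variable-length subnet mask is derived from a prefix length and applied to an address:

```c
#include <stdio.h>
#include <stdint.h>

/* Build a 32-bit mask from a prefix length (e.g. /26 -> 255.255.255.192).
   With variable-length masks, different subnets of one network can use
   different prefix lengths instead of a single classful mask. */
uint32_t mask_from_prefix(int prefix) {
    return prefix == 0 ? 0 : 0xFFFFFFFFu << (32 - prefix);
}

int main(void) {
    uint32_t addr = (192u << 24) | (168u << 16) | (1u << 8) | 77u; /* 192.168.1.77 */
    int prefix = 26;
    uint32_t mask = mask_from_prefix(prefix);
    uint32_t net  = addr & mask;   /* network part of the address */
    printf("network = %u.%u.%u.%u/%d\n",
           (net >> 24) & 255, (net >> 16) & 255,
           (net >> 8) & 255, net & 255, prefix);
    return 0;
}
```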
See also
Interface Message Processor
References
External links
The Fuzzball, with photographs
Fuzzball source code, last update in 1992, 16 megabytes
American inventions
Hardware routers
History of telecommunications | Fuzzball router | [
"Technology"
] | 232 | [
"Computing stubs",
"Computer network stubs"
] |
337,876 | https://en.wikipedia.org/wiki/List%20of%20calculus%20topics | This is a list of calculus topics.
Limits
Limit (mathematics)
Limit of a function
One-sided limit
Limit of a sequence
Indeterminate form
Orders of approximation
(ε, δ)-definition of limit
Continuous function
Differential calculus
Derivative
Notation
Newton's notation for differentiation
Leibniz's notation for differentiation
Simplest rules
Derivative of a constant
Sum rule in differentiation
Constant factor rule in differentiation
Linearity of differentiation
Power rule
Chain rule
Local linearization
Product rule
Quotient rule
Inverse functions and differentiation
Implicit differentiation
Stationary point
Maxima and minima
First derivative test
Second derivative test
Extreme value theorem
Differential equation
Differential operator
Newton's method
Taylor's theorem
L'Hôpital's rule
General Leibniz rule
Mean value theorem
Logarithmic derivative
Differential (calculus)
Related rates
Regiomontanus' angle maximization problem
Rolle's theorem
Integral calculus
Antiderivative/Indefinite integral
Simplest rules
Sum rule in integration
Constant factor rule in integration
Linearity of integration
Arbitrary constant of integration
Cavalieri's quadrature formula
Fundamental theorem of calculus
Integration by parts
Inverse chain rule method
Integration by substitution
Tangent half-angle substitution
Differentiation under the integral sign
Trigonometric substitution
Partial fractions in integration
Quadratic integral
Proof that 22/7 exceeds π
Trapezium rule
Integral of the secant function
Integral of secant cubed
Arclength
Solid of revolution
Shell integration
Special functions and numbers
Natural logarithm
e (mathematical constant)
Exponential function
Hyperbolic angle
Hyperbolic function
Stirling's approximation
Bernoulli numbers
Absolute numerical
See also list of numerical analysis topics
Rectangle method
Trapezoidal rule
Simpson's rule
Newton–Cotes formulas
Gaussian quadrature
Lists and tables
Table of common limits
Table of derivatives
Table of integrals
Table of mathematical symbols
List of integrals
List of integrals of rational functions
List of integrals of irrational functions
List of integrals of trigonometric functions
List of integrals of inverse trigonometric functions
List of integrals of hyperbolic functions
List of integrals of exponential functions
List of integrals of logarithmic functions
List of integrals of area functions
Multivariable
Partial derivative
Disk integration
Gabriel's horn
Jacobian matrix
Hessian matrix
Curvature
Green's theorem
Divergence theorem
Stokes' theorem
Vector calculus
Series
Infinite series
Maclaurin series, Taylor series
Fourier series
Euler–Maclaurin formula
History
Adequality
Infinitesimal
Archimedes' use of infinitesimals
Gottfried Leibniz
Isaac Newton
Method of Fluxions
Infinitesimal calculus
Brook Taylor
Colin Maclaurin
Leonhard Euler
Gauss
Joseph Fourier
Law of continuity
History of calculus
Generality of algebra
Nonstandard calculus
Elementary Calculus: An Infinitesimal Approach
Nonstandard calculus
Infinitesimal
Archimedes' use of infinitesimals
For further developments: see list of real analysis topics, list of complex analysis topics, list of multivariable calculus topics.
Calculus | List of calculus topics | [
"Mathematics"
] | 583 | [
"Calculus"
] |
337,976 | https://en.wikipedia.org/wiki/Shirin%20Ebadi | Shirin Ebadi (; born 21 June 1947) is an Iranian Nobel laureate, lawyer, writer, teacher and a former judge and founder of the Defenders of Human Rights Center in Iran. In 2003, Ebadi was awarded the Nobel Peace Prize for her pioneering efforts for democracy and women's, children's, and refugee rights. She was the first Muslim woman and the first Iranian to receive the award.
She has lived in exile in London since 2009.
Life and early career as a judge
Ebadi was born in Hamadan into an educated Persian family. Her father, Mohammad Ali Ebadi, was the city's chief notary public and a professor of commercial law. Her mother, Minu Yamini, was a homemaker. She was of Jewish descent. When Ebadi was an infant, her family moved to Tehran. Before earning a law degree from the University of Tehran, Ebadi attended the Anoshiravan Dadgar and Reza Shah Kabir schools.
She was admitted to the law department of the University of Tehran in 1965, and upon graduation in 1969 she passed the qualification exams to become a judge. After a six-month internship period, she officially became a judge in March 1969. She continued her studies at the University of Tehran to pursue a doctorate in law; in 1971, one of her professors was Mahmoud Shehabi Khorassani. In 1975, she became the first female president of the Tehran city court, serving until the Iranian Revolution. She was one of the first female judges in Iran.
After the 1979 Revolution, women were no longer allowed to serve as judges, and she was dismissed and given a new job as a clerk in the court she had once presided over. Later, despite already having a law office permit, her applications were repeatedly rejected, and Ebadi was unable to practice law until 1993. She used this free time to write books and many articles in Iranian periodicals.
Ebadi as a lawyer
By 2004, Ebadi was lecturing law at the University of Tehran while practicing law in Iran. She is a campaigner for strengthening the legal status of children and women, and her work on women's rights played a key role in the May 1997 landslide presidential election of the reformist Mohammad Khatami.
As a lawyer, she is known for taking up pro bono cases of dissident figures who have fallen foul of the judiciary. Among her clients were the family of Dariush Forouhar, a dissident intellectual and politician who was found stabbed to death – along with his wife, Parvaneh Eskandari – in their home.
The couple was among several dissidents who died in a spate of gruesome murders that terrorized Iran's intellectual community. Suspicion fell on extremist hard-liners determined to stop the more liberal climate fostered by President Khatami, who championed freedom of speech. The murders were found to be committed by a team of employees of the Iranian Ministry of Intelligence, whose head, Saeed Emami, allegedly committed suicide in jail before being brought to court.
Ebadi also represented the family of Ezzat Ebrahim-Nejad, who was killed in the Iranian student protests of July 1999. In 2000, Ebadi was accused of manipulating the videotaped confession of Amir Farshad Ebrahimi, a former member of the Ansar-e Hezbollah. Ebrahimi confessed his involvement in attacks by the organization, carried out on the orders of high-level conservative authorities, including the killing of Ezzat Ebrahim-Nejad and attacks against members of President Khatami's cabinet. Ebadi claimed that she had only videotaped Amir Farshad Ebrahimi's confessions in order to present them to the court. This case was named "Tape makers" by hardliners, who questioned the credibility of his videotaped deposition and his motives. Ebadi and another lawyer, Rohami, were sentenced to five years in jail and suspension of their law licenses for sending Ebrahimi's videotaped deposition to President Khatami and the head of the Islamic judiciary. The Islamic judiciary's supreme court later vacated the sentences, but it did not forgive Ebrahimi's videotaped confession and sentenced him to 48 months in jail, including 16 months in solitary confinement. This case brought increased focus on Iran from human rights groups abroad.
Ebadi has also defended various child abuse cases, including the case of Arian Golshani, a child who was abused for years and then beaten to death by her father and stepbrother. This case gained international attention and caused controversy in Iran. Ebadi used this case to highlight Iran's problematic child custody laws, whereby custody of children in divorce is usually given to the father, even in the case of Arian, where her mother had told the court that the father was abusive and had begged for custody of her daughter. Ebadi also handled the case of Leila, a teenage girl who was gang-raped and murdered. Leila's family became homeless, trying to cover the costs of the execution of the perpetrators owed to the government because, in the Islamic Republic of Iran, it is the victim's family's responsibility to pay to restore their honor when a girl is raped by paying the government to execute the perpetrator. Ebadi was not able to achieve victory in this case. Still, she brought international attention to this problematic law. Ebadi also handled a few cases dealing with bans of periodicals (including the cases of Habibollah Peyman, Abbas Marufi, and Faraj Sarkouhi). She has also established two non-governmental organizations in Iran with Western funding, the Society for Protecting the Rights of the Child (SPRC) (1994) and the Defenders of Human Rights Center (DHRC) in 2001.
She also helped in the drafting of the original text of a law against physical abuse of children, which was passed by the Iranian parliament in 2002. Female members of Parliament also asked Ebadi to draft a law explaining how a woman's right to divorce her husband is in line with Sharia (Islamic Law). Ebadi presented the bill before the government, but the male members made her leave without considering the bill, according to Ebadi's memoir.
Political views
In her book Iran Awakening, Ebadi explains her political/religious views on Islam, democracy and gender equality:
In the last 23 years, from the day I was stripped of my judgeship to the years of doing battle in the revolutionary courts of Tehran, I had repeated one refrain: an interpretation of Islam that is in harmony with equality and democracy is an authentic expression of faith. It is not religion that binds women, but the selective dictates of those who wish them cloistered. That belief, and the conviction that change in Iran must come peacefully and from within, has underpinned my work.
At the same time, Ebadi expresses a nationalist love of Iran and has criticized the policies and actions of Western countries. She opposed the pro-Western Shah, initially supported the Islamic Revolution, and remembers the CIA's 1953 overthrow of prime minister Mohammad Mosaddeq with rage.
At a press conference shortly after the Peace Prize announcement, Ebadi explicitly rejected foreign interference in the country's affairs: "The fight for human rights is conducted in Iran by the Iranian people, and we are against any foreign intervention in Iran."
Subsequently, Ebadi openly defended the Islamic regime's nuclear development program:
Aside from being economically justified, it has become a cause of national pride for an old nation with a glorious history. No Iranian government, regardless of its ideology or democratic credentials, would dare to stop the program.
However, in a 2012 interview, Ebadi stated:
The [Iranian] people want to stop enrichment, but the government doesn't listen. Iran is situated on a fault line, and people are scared of a Fukushima type of situation happening. We want peace, security, and economic welfare, and we cannot forgo all of our other rights for nuclear energy. The government claims it is not making a bomb. But I am not a member of the government, so I cannot speak to this directly. The fear is that if they do, Israel will be wiped out. If the Iranian people are able to topple the government, this could improve the situation. [In 2009] the people of Iran rose up and were badly suppressed. Right now, Iran is the country with the most journalists in prison. This is the price people are paying.
Concerning the Israeli–Palestinian conflict, in 2010, Shirin Ebadi, was one of four Peace Prize laureates supporting legislation requiring the University of California to divest itself from any companies providing technology to the Israel Defense Forces, who (bill supporters declared) were engaged in war crimes. (The legislation was supported by the Associated Students of the University of California).
Since the victory of Hassan Rouhani in the 2013 Iranian presidential election, Shirin Ebadi has expressed her worry about growing human rights violations in her homeland. In her December 2013 speech at the Human Rights Day seminar at Leiden University, Ebadi angrily said: "I will shut up, but the problems of Iran will not be solved".
In April 2015, speaking on the subject of the Western campaign against the Sunni extremist group ISIL in Syria and Iraq, Ebadi expressed her desire that the Western world spend money on funding education and ending corruption rather than on fighting with guns and bombs. She reasoned that because the Islamic State stems from an ideology based on a "wrong interpretation of Islam", physical force will not end ISIS, because it will not end its beliefs.
In 2018, in an interview with Bloomberg, Ebadi stated her belief that the Islamic Republic has reached a point of which it is now un-reformable. Ebadi called for a referendum on the Islamic Republic.
Nobel Peace Prize
On 10 October 2003, Ebadi was awarded the Nobel Peace Prize for her efforts for democracy and human rights, especially for the rights of women and children. The selection committee praised her as a "courageous person" who "has never heeded the threat to her own safety". Now she travels abroad lecturing in the West. She is against a policy of forced regime change.
The decision of the Nobel committee surprised some observers worldwide. Pope John Paul II had been predicted to win the Peace Prize amid speculation that he was nearing death. The era in which her prize was granted has been called one "when there still seemed a chance of something resembling a détente" between the U.S. and Iran (according to the Associated Press).
She presented a book entitled Democracy, human rights, and Islam in modern Iran: Psychological, social and cultural perspectives to the Nobel Committee. The volume documents the historical and cultural basis of democracy and human rights from Cyrus and Darius, 2,500 years ago to Mohammad Mossadeq, the prime minister of modern Iran who nationalized the oil industry.
In her acceptance speech, Ebadi criticized repression in Iran and insisted that Islam was compatible with democracy, human rights and freedom of opinion. In the same speech she also criticized US foreign policy, particularly the War on terrorism. She was the first Iranian and the first Muslim woman to receive the prize.
Thousands greeted her at the airport when she returned from Paris after receiving the news that she had won the prize. The response to the award in Iran was mixed—enthusiastic supporters greeted her at the airport upon her return, the conservative media underplayed it, and then-Iranian President Mohammad Khatami criticized it as political. In Iran, officials of the Islamic Republic were either silent or critical of the selection of Ebadi, calling it a political act by a pro-Western institution, and were also critical when Ebadi did not cover her hair at the Nobel award ceremony. IRNA reported the Nobel committee's decision in a few lines, which the evening newspapers and the Iranian state media waited hours to report—and then only as the last item on the radio news update. Reformist officials are said to have "generally welcomed the award" but "come under attack for doing so". Reformist president Mohammad Khatami did not officially congratulate Ms. Ebadi and stated that although the scientific Nobels are important, the Peace Prize is "not very important" and was awarded to Ebadi on the basis of "totally political criteria". Vice President Mohammad Ali Abtahi, the only official to initially congratulate Ebadi, defended the president, saying "abusing the President's words about Ms. Ebadi is tantamount to abusing the prize bestowed on her for political considerations".
In 2009, Norway's Foreign Minister Jonas Gahr Støre, published a statement reporting that Ebadi's Nobel Peace Prize had been confiscated by Iranian authorities and that "This [was] the first time a Nobel Peace Prize ha[d] been confiscated by national authorities." Iran denied the charges.
Post-Nobel prize
Since receiving the Nobel Prize, Ebadi has lectured, taught and received awards in different countries, issued statements and defended people accused of political crimes in Iran. She has traveled to and spoken to audiences in India, the United States, and other countries; released her autobiography in an English translation. With five other Nobel laureates, she created the Nobel Women's Initiative to promote peace, justice, and equality for women. In 2019, Ebadi called for a treaty to end violence against women, in support of Every Woman Coalition.
Threats
In April 2008, she told Reuters news agency that Iran's human rights record had regressed in the past two years and agreed to defend Baháʼís arrested in Iran in May 2008.
In April 2008, Ebadi released a statement saying: "Threats against my life and security and those of my family, which began some time ago, have intensified", and that the threats warned her against making speeches abroad and to stop defending Iran's persecuted Baháʼí community. In August 2008, the IRNA news agency published an article attacking Ebadi's links to the Baháʼí Faith and accused her of seeking support from the West. It also criticized Ebadi for defending homosexuals, appearing without the Islamic headscarf abroad, questioning Islamic punishments, and "defending CIA agents". It accused her daughter, Nargess Tavassolian, of conversion to the Baháʼí Faith, a capital offense in the Islamic Republic. However, Shirin Ebadi has denied this, saying, "I am proud to say that my family and I are Shiites." Her daughter believes "the government wanted to scare my mother with this scenario." Ebadi believes the attacks are in retaliation for her agreeing to defend the families of the seven Baháʼís arrested in May.
In December 2008, Iranian police shut down the office of a human rights group led by her. Another human rights group, Human Rights Watch, said it was "extremely worried" about Ebadi's safety, and in December 2009 issued a statement demanding that the Islamic Republic "stop harassing" her. Among many other complaints, the group accused the IRI of detaining "Ebadi's husband and sister for questioning and threatened them with losing their jobs and eventual arrest if Ebadi continues her human rights advocacy."
Seizure
Ebadi said while in London in late November 2009 that her Nobel Peace Prize medal and diploma had been taken from their bank box alongside her and a ring she had received from Germany's association of journalists. She said they had been taken by the Revolutionary Court approximately three weeks previously. Ebadi also said her bank account was frozen by authorities. Norwegian Minister of Foreign Affairs Jonas Gahr Støre expressed his "shock and disbelief" at the incident. The Iranian foreign ministry subsequently denied the confiscation, and also criticized Norway for interfering in Iran's affairs.
Post-Nobel Prize timeline
2003 (November) – She declared that she would provide legal representation for the family of the murdered Canadian freelance photographer Zahra Kazemi. The trial was halted in July 2004, prompting Ebadi and her team to leave the court in protest that their witnesses had not been heard.
2004 (January) – During the World Social Forum in Bombay Ebadi, speaking at a small girls' school run by the NGO "Sahyog", proposed that 30 January (the day Mahatma Gandhi was assassinated) be observed as International Day of Non-Violence. This proposal was brought to her by school children in Paris by their Indian teacher Akshay Bakaya. Three years later, Sonia Gandhi and Archbishop Desmond Tutu relayed the idea at the Delhi Satyagraha Convention in January 2007, preferring however to propose Gandhi's birthday on 2 October. The UN General Assembly on 15 June 2007 adopted 2 October as the International Day of Non-Violence.
2004 – Ebadi was listed by Forbes magazine as one of the "100 most powerful women in the world". She is also included in a published list of the "100 most influential women of all time".
2005 Spring – Ebadi taught a course on "Islam and Human Rights" at the University of Arizona's James E. Rogers College of Law in Tucson, Arizona.
2005 (12 May) – Ebadi delivered an address on Senior Class Day at Vanderbilt University, Nashville, Tennessee. Vanderbilt Chancellor Gordon Gee presented Ebadi with the Chancellor's Medal for her human rights work.
2005 – Ebadi was voted the world's 12th leading public intellectual in The 2005 Global Intellectuals Poll by Prospect (UK).
2006 – Random House released her first book for a Western audience, Iran Awakening: A Memoir of Revolution and Hope, with Azadeh Moaveni. A reading of the book was serialized as BBC Radio 4's Book of the Week in September 2006. American novelist David Ebershoff was the book's editor.
2006 – Ebadi was one of the founders of The Nobel Women's Initiative along with sister Nobel Peace laureates Betty Williams, Mairead Corrigan Maguire, Wangari Maathai, Jody Williams and Rigoberta Menchú Tum. Six women representing North America and South America, Europe, the Middle East and Africa decided to bring together their experiences in a united effort for peace with justice and equality. The Nobel Women's Initiative aims to help strengthen work being done in support of women's rights worldwide.
2007 (17 May) – Ebadi announced that she would defend the Iranian American scholar Haleh Esfandiari, who is jailed in Tehran.
2008 (March) – Ebadi tells Reuters news agency that Iran's human rights record had regressed in the past two years.
2008 (14 April) – Ebadi released a statement saying, "Threats against my life and security and those of my family, which began some time ago, have intensified", and that the threats warned her against making speeches abroad and against defending Iran's persecuted Baháʼí community.
2008 (June) – Ebadi volunteered to be the lawyer for the arrested Baháʼí leadership of Iran in June.
2008 (7 August) – Ebadi announced via the Muslim Network for Baháʼí Rights that she would defend in court the seven Baháʼí leaders arrested in the spring.
2008 (1 September) – Ebadi published her book Refugee Rights in Iran exposing the lack of rights given to Afghan refugees living in Iran.
2008 (21 December) – Ebadi's office of the Center for the Defense of Human Rights was raided and closed.
2008 (29 December) – Islamic authorities close Ebadi's Center for Defenders of Human Rights, raiding her private office, seizing her computers and files. Worldwide condemnation of raid.
2009 (1 January) – Pro-regime "demonstrators" attack Ebadi's home and office.
2009 (12 June) – Ebadi was at a seminar in Spain at the time of Iranian presidential election. "[W]hen the crackdown began colleagues told her not to come home" and as of October 2009 she has not returned to Iran.
2009 (16 June) – In the midst of nationwide protests against the very surprising and highly suspect election results giving incumbent President Mahmoud Ahmadinejad a landslide victory, Ebadi calls for new elections in an interview with Radio Free Europe.
2009 (24 September) – Touring abroad to lobby international leaders and highlight the Islamic regime's human rights abuses since June, Ebadi criticizes the British government for putting talks on the Islamic regime's nuclear program ahead of protesting its brutal suppression of opposition. Noting the British Ambassador attended President Ahmadinejad's inauguration, she said, "That's when I felt that human rights were being neglected. ... Undemocratic countries are more dangerous than a nuclear bomb. It's undemocratic countries that jeopardize international peace." She calls for "the downgrading of Western embassies, the withdrawal of ambassadors and the freezing of the assets of Iran's leaders."
2009 (November) – The Iranian authorities seize Ebadi's Nobel medal together with other belongings from her safe-deposit box.
2009 (29 December) – Ebadi's sister Noushin Ebadi was detained, apparently to silence Ebadi, who was abroad. "She was neither politically active nor had a role in any rally. It's necessary to point out that in the past two months she had been summoned several times to the Intelligence Ministry, who told her to persuade me to give up my human rights activities. I have been arrested solely because of my activities in human rights," Ebadi said.
2010 (June) – Ebadi's husband denounced her on state television. According to Ebadi this was a coerced confession after his arrest and torture.
2012 (26 January) – In a statement released by the International Campaign for Human Rights in Iran, Ebadi called on "all freedom-loving people across the globe" to work for the release of three opposition leaders, Zahra Rahnavard, Mir Hossein Mousavi, and Mehdi Karroubi, who had been confined to house arrest for nearly a year.
Lawsuits
Lawsuit against the United States
In 2004, Ebadi filed a lawsuit against the U.S. Department of the Treasury over restrictions she faced in publishing her memoir in the United States. American trade laws at the time prohibited the publication of works by writers from embargoed countries, and also barred the American literary agent Wendy Strothman from working with Ebadi. Azar Nafisi wrote a letter in support of Ebadi, arguing that the law infringed on the First Amendment. After a lengthy legal battle, Ebadi won and was able to publish her memoir in the United States.
Other activities
Apne Aap Women Worldwide, Co-Chair of the International Advisory Board
Aurora Prize, Member of the Selection Committee (since 2015)
Business for Peace Award Committee, Member (2009)
Reporters Without Borders (RWB), Member of the Emeritus Board
Scholars at Risk (SAR), Member of the Ambassadors Council
Nuremberg International Human Rights Award, Member of the Jury (2004–2020)
Recognition
Awards
Awarded plate by Human Rights Watch, 1996
Official spectator of Human Rights Watch, 1996
Awarded Rafto Prize, Human Rights Prize in Norway, 2001
Nobel Peace Prize in October 2003
Women's eNews 21 Leaders for the 21st Century Award, 2004
International Democracy Award, 2004
James Parks Morton Interfaith Award from the Interfaith Center of New York, 2004
‘Lawyer of the Year’ award, 2004
UCI Citizen Peacebuilding Award, 2005
The Golden Plate Award by the American Academy of Achievement, 2005
Legion of Honor award, 2006
Toleranzpreis der Evangelischen Akademie Tutzing, 2008
Award for the Global Defence of Human Rights, International Service Human Rights Award, 2009
Wolfgang Friedmann Memorial Award, Columbia Journal of Transnational Law, 2013
Honorary degrees
Doctor of Laws, Williams College, 2004
Doctor of Laws, Brown University, 2004
Doctor of Laws, University of British Columbia, 2004
Honorary doctorate, University of Maryland, College Park, 2004
Honorary doctorate, University of Toronto, 2004
Honorary doctorate, Simon Fraser University, 2004
Honorary doctorate, University of Akureyri, 2004
Honorary doctorate, Australian Catholic University, 2005
Honorary doctorate, University of San Francisco, 2005
Honorary doctorate, Concordia University, 2005
Honorary doctorate, The University of York, 2005
Honorary doctorate, Université Jean Moulin in Lyon, 2005
Honorary doctorate, Loyola University Chicago, 2007
Honorary Doctorate The New School University, 2007
Honorary Doctor of Laws, Marquette University, 2009
Honorary Doctor of Law, University of Cambridge, 2011
Honorary Doctorate, School of Oriental and African Studies (SOAS) University of London, 2012
Honorary Doctor of Laws, Law Society of Upper Canada, 2012
Books published
Iran Awakening: One Woman's Journey to Reclaim Her Life and Country (2007)
Refugee Rights in Iran (2008)
The Golden Cage: Three brothers, Three choices, One destiny (2011)
Until We Are Free (2016)
See also
Iranian women
List of famous Persian women
List of peace activists
Intellectual movements in Iran
Persian women's movement
Islamic feminism
References
Further reading
Monshipouri, M. (2009). "Shirin Ebadi" in Encyclopedia of human rights. Volume 2. David Forsythe (Ed.). Oxford University Press.
External links
Shirin Ebadi's biography, Iowa State University
Interview With Iranian Nobel Prize Winner: Shirin Ebadi. PBS
Gruber Distinguished Lecture in Global Justice: Dr. Shirin Ebadi, Yale Law School
Nobel Women's Initiative
Quotes from Shirin Ebadi Speeches
TIME.com: 10 Questions for Shirin Ebadi
Shirin Ebadi, avocate pour les droits de l'homme en Iran Jean Albert, Ludivine Tomasso and edited by Jacqueline Duband, Emilie Dessens
Press interviews
Iranian elections – Nobel Peace Prize winner Shirin Ebadi talks to Euronews 2013 June 12
David Batty in conversation with Shirin Ebadi, "If you want to help Iran, don't attack", The Guardian, 13 June 2008
Nermeen Shaikh, AsiaSource Interview with Shirin Ebadi
"Iran's Quiet Revolution" Winter 2007 article from Ms. magazine about activism and feminism in Iran.
Video
Video: Shirin Ebadi on 'What's Ahead for Iran', Asia Society, New York, 3 March 2010
Shirin Ebadi Presses Iran on Human Rights and Warns Against International Sanctions – video by Democracy Now!
Pictures
Picture Gallery
1947 births
Living people
Iranian democracy activists
Iranian dissidents
Iranian human rights activists
Iranian women activists
Iranian women's rights activists
Iranian exiles
Children's rights activists
Iranian feminists
Iranian emigrants to the United Kingdom
Nobel Peace Prize laureates
Iranian Nobel laureates
Academic staff of the University of Tehran
University of Tehran alumni
People from Hamadan
Commanders of the Legion of Honour
Iranian women lawyers
Iranian women judges
Pacifist feminists
Women Nobel laureates
Iranian women writers
Iranian writers
Nonviolence advocates
Carnegie Council for Ethics in International Affairs
Members of the National Council for Peace | Shirin Ebadi | [
"Technology"
] | 5,590 | [
"Women Nobel laureates",
"Women in science and technology"
] |
338,046 | https://en.wikipedia.org/wiki/Dihedral%20angle | A dihedral angle is the angle between two intersecting planes or half-planes. It is a plane angle formed on a third plane, perpendicular to the line of intersection between the two planes or the common edge between the two half-planes. In higher dimensions, a dihedral angle represents the angle between two hyperplanes. In chemistry, it is the clockwise angle between half-planes through two sets of three atoms, having two atoms in common.
Mathematical background
When the two intersecting planes are described in terms of Cartesian coordinates by the two equations
$a_1 x + b_1 y + c_1 z + d_1 = 0$ and $a_2 x + b_2 y + c_2 z + d_2 = 0,$
the dihedral angle $\varphi$ between them is given by
$\cos \varphi = \frac{|a_1 a_2 + b_1 b_2 + c_1 c_2|}{\sqrt{a_1^2 + b_1^2 + c_1^2}\,\sqrt{a_2^2 + b_2^2 + c_2^2}},$
and satisfies $0 \le \varphi \le \pi/2$. It can easily be observed that the angle is independent of $d_1$ and $d_2$.
Alternatively, if $\mathbf{n}_1$ and $\mathbf{n}_2$ are normal vectors to the planes, one has
$\cos \varphi = \frac{|\mathbf{n}_1 \cdot \mathbf{n}_2|}{|\mathbf{n}_1|\,|\mathbf{n}_2|},$
where $\mathbf{n}_1 \cdot \mathbf{n}_2$ is the dot product of the vectors and $|\mathbf{n}_1|\,|\mathbf{n}_2|$ is the product of their lengths.
The absolute value is required in the above formulas, as the planes are not changed when changing all coefficient signs in one equation, or replacing one normal vector by its opposite.
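As a quick numerical check of the normal-vector formula, here is a minimal Python/NumPy sketch; the function name and the example planes are illustrative, not from the article:

import numpy as np

def plane_angle(n1, n2):
    """Dihedral angle in [0, pi/2] between planes with normals n1 and n2."""
    n1, n2 = np.asarray(n1, dtype=float), np.asarray(n2, dtype=float)
    cos_phi = abs(np.dot(n1, n2)) / (np.linalg.norm(n1) * np.linalg.norm(n2))
    return np.arccos(np.clip(cos_phi, 0.0, 1.0))  # clip guards against round-off

# The planes x = 0 and x = z have normals (1, 0, 0) and (1, 0, -1):
print(np.degrees(plane_angle((1, 0, 0), (1, 0, -1))))  # 45.0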
However, the absolute values can and should be avoided when considering the dihedral angle of two half-planes whose boundaries are the same line. In this case, the half-planes can be described by a point $P$ of their intersection, and three vectors $\mathbf{b}_0$, $\mathbf{b}_1$ and $\mathbf{b}_2$ such that $P + \mathbf{b}_0$, $P + \mathbf{b}_1$ and $P + \mathbf{b}_2$ belong respectively to the intersection line, the first half-plane, and the second half-plane. The dihedral angle of these two half-planes is defined by
$\cos \varphi = \frac{(\mathbf{b}_0 \times \mathbf{b}_1) \cdot (\mathbf{b}_0 \times \mathbf{b}_2)}{|\mathbf{b}_0 \times \mathbf{b}_1|\,|\mathbf{b}_0 \times \mathbf{b}_2|},$
and satisfies $0 \le \varphi < \pi$. In this case, switching the two half-planes gives the same result, and so does replacing $\mathbf{b}_0$ with $-\mathbf{b}_0$. In chemistry (see below), we define a dihedral angle such that replacing $\mathbf{b}_0$ with $-\mathbf{b}_0$ changes the sign of the angle, which can be between $-\pi$ and $\pi$.
In polymer physics
In some scientific areas such as polymer physics, one may consider a chain of points and links between consecutive points. If the points are sequentially numbered and located at positions $\mathbf{r}_1$, $\mathbf{r}_2$, $\mathbf{r}_3$, etc., then bond vectors are defined by $\mathbf{u}_1 = \mathbf{r}_2 - \mathbf{r}_1$, $\mathbf{u}_2 = \mathbf{r}_3 - \mathbf{r}_2$, and, more generally, $\mathbf{u}_i = \mathbf{r}_{i+1} - \mathbf{r}_i$. This is the case for kinematic chains or amino acids in a protein structure. In these cases, one is often interested in the half-planes defined by three consecutive points, and the dihedral angle between two consecutive such half-planes. If $\mathbf{u}_1$, $\mathbf{u}_2$ and $\mathbf{u}_3$ are three consecutive bond vectors, the intersection of the half-planes is oriented, which allows defining a dihedral angle that belongs to the interval $(-\pi, \pi]$. This dihedral angle is defined by
$\cos \varphi = \frac{(\mathbf{u}_1 \times \mathbf{u}_2) \cdot (\mathbf{u}_2 \times \mathbf{u}_3)}{|\mathbf{u}_1 \times \mathbf{u}_2|\,|\mathbf{u}_2 \times \mathbf{u}_3|}, \qquad \sin \varphi = \frac{\mathbf{u}_2 \cdot \left((\mathbf{u}_1 \times \mathbf{u}_2) \times (\mathbf{u}_2 \times \mathbf{u}_3)\right)}{|\mathbf{u}_2|\,|\mathbf{u}_1 \times \mathbf{u}_2|\,|\mathbf{u}_2 \times \mathbf{u}_3|},$
or, using the function atan2,
$\varphi = \operatorname{atan2}\left(\mathbf{u}_2 \cdot \left((\mathbf{u}_1 \times \mathbf{u}_2) \times (\mathbf{u}_2 \times \mathbf{u}_3)\right),\ |\mathbf{u}_2|\,(\mathbf{u}_1 \times \mathbf{u}_2) \cdot (\mathbf{u}_2 \times \mathbf{u}_3)\right).$
This dihedral angle does not depend on the orientation of the chain (the order in which the points are considered): reversing this ordering consists of replacing each vector by its opposite vector and exchanging the indices 1 and 3. Each operation leaves the cosine unchanged but changes the sign of the sine; applied together, they therefore do not change the angle.
A simpler formula for the same dihedral angle is the following (the proof is given below):
$\varphi = \operatorname{atan2}\left(|\mathbf{u}_2|\,\mathbf{u}_1 \cdot (\mathbf{u}_2 \times \mathbf{u}_3),\ (\mathbf{u}_1 \times \mathbf{u}_2) \cdot (\mathbf{u}_2 \times \mathbf{u}_3)\right),$
or equivalently,
$\tan \varphi = \frac{|\mathbf{u}_2|\,\mathbf{u}_1 \cdot (\mathbf{u}_2 \times \mathbf{u}_3)}{(\mathbf{u}_1 \times \mathbf{u}_2) \cdot (\mathbf{u}_2 \times \mathbf{u}_3)}.$
This can be deduced from the previous formulas by using the vector quadruple product formula, and the fact that a scalar triple product is zero if it contains the same vector twice:
$(\mathbf{u}_1 \times \mathbf{u}_2) \times (\mathbf{u}_2 \times \mathbf{u}_3) = \left[\mathbf{u}_1 \cdot (\mathbf{u}_2 \times \mathbf{u}_3)\right]\mathbf{u}_2.$
Given the definition of the cross product, this means that $\varphi$ is the angle in the clockwise direction of the fourth atom compared to the first atom, while looking down the axis from the second atom to the third. Special cases (one may say the usual cases) are $\varphi = 180°$, $\varphi = 60°$, and $\varphi = -60°$, which are called the trans, gauche+, and gauche− conformations.
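The atan2 form translates directly into code. Below is a minimal Python/NumPy sketch using the simpler formula above; the function name and the test coordinates are illustrative:

import numpy as np

def dihedral(p1, p2, p3, p4):
    """Dihedral angle in (-pi, pi] for four sequential chain points."""
    p1, p2, p3, p4 = (np.asarray(p, dtype=float) for p in (p1, p2, p3, p4))
    u1, u2, u3 = p2 - p1, p3 - p2, p4 - p3          # consecutive bond vectors
    v1, v2 = np.cross(u1, u2), np.cross(u2, u3)     # half-plane normals
    # atan2( |u2| * u1 . (u2 x u3), (u1 x u2) . (u2 x u3) )
    return np.arctan2(np.linalg.norm(u2) * np.dot(u1, v2), np.dot(v1, v2))

# Four points with a quarter-turn twist about the middle bond:
print(np.degrees(dihedral((1, 0, 0), (0, 0, 0), (0, 0, 1), (0, 1, 1))))  # 90.0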
In stereochemistry
In stereochemistry, a torsion angle is defined as a particular example of a dihedral angle, describing the geometric relation of two parts of a molecule joined by a chemical bond. Every set of three non-colinear atoms of a molecule defines a half-plane. As explained above, when two such half-planes intersect (i.e., a set of four consecutively-bonded atoms), the angle between them is a dihedral angle. Dihedral angles are used to specify the molecular conformation. Stereochemical arrangements corresponding to angles between 0° and ±90° are called syn (s), those corresponding to angles between ±90° and 180° anti (a). Similarly, arrangements corresponding to angles between 30° and 150° or between −30° and −150° are called clinal (c) and those between 0° and ±30° or ±150° and 180° are called periplanar (p).
The two types of terms can be combined so as to define four ranges of angle; 0° to ±30° synperiplanar (sp); 30° to 90° and −30° to −90° synclinal (sc); 90° to 150° and −90° to −150° anticlinal (ac); ±150° to 180° antiperiplanar (ap). The synperiplanar conformation is also known as the syn- or cis-conformation; antiperiplanar as anti or trans; and synclinal as gauche or skew.
For example, with n-butane two planes can be specified in terms of the two central carbon atoms and either of the methyl carbon atoms. The syn-conformation shown above, with a dihedral angle of 60°, is less stable than the anti-conformation with a dihedral angle of 180°.
For macromolecular usage the symbols T, C, G+, G−, A+ and A− are recommended (ap, sp, +sc, −sc, +ac and −ac respectively).
Proteins
A Ramachandran plot (also known as a Ramachandran diagram or a [φ,ψ] plot), originally developed in 1963 by G. N. Ramachandran, C. Ramakrishnan, and V. Sasisekharan, is a way to visualize energetically allowed regions for backbone dihedral angles ψ against φ of amino acid residues in protein structure.
In a protein chain three dihedral angles are defined:
ω (omega) is the angle in the chain Cα − C' − N − Cα,
φ (phi) is the angle in the chain C' − N − Cα − C'
ψ (psi) is the angle in the chain N − Cα − C' − N (called φ′ by Ramachandran)
The figure at right illustrates the location of each of these angles (but it does not show correctly the way they are defined).
The planarity of the peptide bond usually restricts ω to be 180° (the typical trans case) or 0° (the rare cis case). The distance between the Cα atoms in the trans and cis isomers is approximately 3.8 and 2.9 Å, respectively. The vast majority of the peptide bonds in proteins are trans, though the peptide bond to the nitrogen of proline has an increased prevalence of cis compared to other amino-acid pairs.
The side chain dihedral angles are designated with χn (chi-n). They tend to cluster near 180°, 60°, and −60°, which are called the trans, gauche−, and gauche+ conformations. The stability of certain sidechain dihedral angles is affected by the values φ and ψ. For instance, there are direct steric interactions between the Cγ of the side chain in the gauche+ rotamer and the backbone nitrogen of the next residue when ψ is near -60°. This is evident from statistical distributions in backbone-dependent rotamer libraries.
Geometry
Every polyhedron has a dihedral angle at every edge describing the relationship of the two faces that share that edge. This dihedral angle, also called the face angle, is measured as the internal angle with respect to the polyhedron. An angle of 0° means the face normal vectors are antiparallel and the faces overlap each other, which implies that it is part of a degenerate polyhedron. An angle of 180° means the faces are parallel, as in a tiling. An angle greater than 180° exists on concave portions of a polyhedron.
Every dihedral angle in an edge-transitive polyhedron has the same value. This includes the 5 Platonic solids, the 13 Catalan solids, the 4 Kepler–Poinsot polyhedra, the two quasiregular solids, and two quasiregular dual solids.
Law of cosines for dihedral angle
Given 3 faces of a polyhedron which meet at a common vertex P and have edges AP, BP and CP, the cosine of the dihedral angle between the faces containing APC and BPC is:
$\cos \varphi = \frac{\cos(\angle APB) - \cos(\angle APC)\cos(\angle BPC)}{\sin(\angle APC)\sin(\angle BPC)}.$
This can be deduced from the spherical law of cosines, but can also be found by other means.
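A small Python check of this formula, using the regular tetrahedron, where all three face angles at a vertex are 60° and the dihedral angle is known to be arccos(1/3) ≈ 70.53°; the function name is illustrative:

import math

def dihedral_from_face_angles(apb, apc, bpc):
    """Dihedral angle along edge PC from the three face angles at vertex P (radians)."""
    num = math.cos(apb) - math.cos(apc) * math.cos(bpc)
    den = math.sin(apc) * math.sin(bpc)
    return math.acos(num / den)

deg60 = math.radians(60)
print(math.degrees(dihedral_from_face_angles(deg60, deg60, deg60)))  # ~70.53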
See also
Atropisomer
References
External links
The Dihedral Angle in Woodworking at Tips.FM
Analysis of the 5 Regular Polyhedra gives a step-by-step derivation of these exact values.
Stereochemistry
Protein structure
Euclidean solid geometry
Angle
Planes (geometry) | Dihedral angle | [
"Physics",
"Chemistry",
"Mathematics"
] | 1,850 | [
"Geometric measurement",
"Scalar physical quantities",
"Planes (geometry)",
"Physical quantities",
"Euclidean solid geometry",
"Protein structure",
"Stereochemistry",
"Mathematical objects",
"Infinity",
"Space",
"Structural biology",
"nan",
"Spacetime",
"Wikipedia categories named after ph... |
338,076 | https://en.wikipedia.org/wiki/Engine%20turning | Engine turning is a form of ornamental turning. The finishing technique may use lathes or engines to produce a pattern. Aluminium is often the metal chosen to decorate. The technique has been used in various industries, including aircraft and document verification.
Description
Engine turning is a form of ornamental turning. The technique geometrically applies a single-point cutting tool to produce a decorative metal surface finish pattern.
Traditionally, engine turning referred to Guilloché engraving. In the 20th century, it also came to refer to the different process of perlage (also known as spotting or jewelling), which is a fine geometric pattern of overlapping circles abraded onto the surface.
Equipment
Guilloché engine turning may be done with various machines, including rose engines, straight-line engines, brocade engines, and ornamental turning lathes. Perlage uses an abrasive rotating disk or dowel.
Material
Aluminium is often the metal chosen to decorate with jewelling, but any appropriate surface can be finely machined to produce intricate repetitive patterns that offer reflective interest and fine detail.
Uses
Aircraft
Perlage-style engine turning was used on the sheet metal panels of the engine cowling (nose) of Charles Lindbergh's aircraft, the Spirit of St. Louis.
The sheet metal parts of the World War I Fokker Eindecker fighter aircraft series, especially around the engine cowl and associated sheet metal, are noted for having a "dragged" form of engine turning entirely covering them. The tool creating the "swirls" was repeatedly moved along a short, irregular path while pressed against the metal, to create the intricate appearance that was characteristic of the aircraft's sheet-metal parts. It is partly surmised to have been a mechanical method to "clad" a duralumin-alloy sheet-metal panel with a layer of pure aluminum, for corrosion protection.
Automobiles
In the 1920s and 1930s, automobile parts such as valve covers, which sit directly on top of the engine, were also decorated with perlage engine turning. Dashboards and instrument panels were often perlaged as well. Customizers would similarly decorate their vehicles with perlage engine-turned panels.
Documents
Engravings produced by engine turning are often incorporated into the design of bank notes, and other high-value documents, to make counterfeiting difficult. The resulting graphics are called guillochés.
Firearms
Perlage engine turning is also used on various firearm components; the pattern helps prevent corrosion by holding traces of oil and lubricants on the surface, and gives a polished surface that contributes to smooth operation.
Watchmaking
Guilloché and perlage are traditional techniques long used in watchmaking.
See also
Guilloché
Rose engine lathe
References
External links
Engine turning on YouTube by The Unemployed Prop Guy
Visual motifs
Corrosion prevention | Engine turning | [
"Chemistry",
"Mathematics"
] | 571 | [
"Corrosion prevention",
"Symbols",
"Corrosion",
"Visual motifs"
] |
338,082 | https://en.wikipedia.org/wiki/Gastric%20intubation | Nasogastric intubation is a medical process involving the insertion of a plastic tube (nasogastric tube or NG tube) through the nose, down the esophagus, and down into the stomach. Orogastric intubation is a similar process involving the insertion of a plastic tube (orogastric tube) through the mouth. Abraham Louis Levin invented the NG tube. Nasogastric tube is also known as Ryle's tube in Commonwealth countries, after John Alfred Ryle.
Uses
A nasogastric tube is used for feeding and administering drugs and other oral agents such as activated charcoal. For drugs and for minimal quantities of liquid, a syringe is used for injection into the tube. For continuous feeding, a gravity based system is employed, with the solution placed higher than the patient's stomach. If accrued supervision is required for the feeding, the tube is often connected to an electronic pump which can control and measure the patient's intake and signal any interruption in the feeding. Nasogastric tubes may also be used as an aid in the treatment of life-threatening eating disorders, especially if the patient is not compliant with eating. In such cases, a nasogastric tube may be inserted by force for feeding against the patient's will under restraint. Such a practice may be highly distressing for both patients and healthcare staff.
Nasogastric aspiration (suction) is the process of draining the stomach's contents via the tube. Nasogastric aspiration is mainly used to remove gastrointestinal secretions and swallowed air in patients with gastrointestinal obstructions. Nasogastric aspiration can also be used in poisoning situations when a potentially toxic liquid has been ingested, for preparation before surgery under anesthesia, and to extract samples of gastric liquid for analysis.
If the tube is to be used for continuous drainage, it is usually appended to a collector bag placed below the level of the patient's stomach; gravity empties the stomach's contents. It can also be appended to a suction system, however this method is often restricted to emergency situations, as the constant suction can easily damage the stomach's lining. In non-emergency situations, intermittent suction is often applied giving the benefits of suction without the untoward effects of damage to the stomach lining.
Suction drainage is also used for patients who have undergone a pneumonectomy in order to prevent anesthesia-related vomiting and possible aspiration of any stomach contents. Such aspiration would represent a serious risk of complications to patients recovering from this surgery.
Types
Types of nasogastric tubes include:
Levin catheter, which is a single lumen, small bore NG tube. It is more appropriate for administration of medication or nutrition. This type of catheter tends to be more prone to suctioning against the stomach lining, which can cause damage and interfere with future function of the tube.
Salem Sump catheter, which is a large bore NG tube with double lumen. This avails for aspiration in one lumen, and venting in the other to reduce negative pressure and prevent gastric mucosa from being drawn into the catheter.
Dobhoff tube, which is a small bore NG tube with a weight at the end intended to pull it by gravity during insertion. The name "Dobhoff" refers to its inventors, surgeons Dr. Robert Dobbie and Dr. James Hoffmeister, who invented the tube in 1975.
Materials
Nasogastric tubes are available in a variety of different materials, each with their own unique properties.
Polypropylene - This material is most common. It is less likely to kink, which can be beneficial for placement, but its rigidity makes it less suitable to be used for long term feeding.
Latex - These tubes tend to be thicker and can be difficult to place without proper lubrication. Latex tends to break down at faster rates compared to other materials. Allergies to latex are relatively common and latex tubes are more likely to be recognized as a foreign object by the body.
Silicone - Especially useful in patients with known latex allergies. Silicone tubes tend to be thinner and more pliable. This can be useful in some situations but can also be more prone to rupture under stress.
Technique
Before an NG tube is inserted, it must be measured from the tip of the patient's nose, looped around their ear, and then down to roughly below the xiphoid process. The tube is then marked at this level to ensure that it has been inserted far enough into the patient's stomach. Many commercially available stomach and duodenal tubes have several standard depth markings measured from the distal end; infant feeding tubes often come with 1 cm depth markings. The end of the plastic tube is lubricated (a local anesthetic, such as 2% xylocaine gel, may be used; in addition, a nasal vasoconstrictor and/or anesthetic spray may be applied before the insertion) and inserted into one of the patient's anterior nares. Treatment with 2.0 mg of IV midazolam greatly reduces patient stress. The tube should be directed straight towards the back of the patient as it moves through the nasal cavity and down into the throat. When the tube enters the oropharynx and glides down the posterior pharyngeal wall, the patient may gag; in this situation the patient, if awake and alert, is asked to mimic swallowing or is given some water to sip through a straw, and the tube continues to be inserted as the patient swallows. Once the tube is past the pharynx and enters the esophagus, it is easily inserted down into the stomach. The tube must then be secured in place to prevent it from moving. There are several ways to secure an NG placement; the least invasive is tape, which is positioned and wrapped around the NG tube and onto the patient's nose to prevent dislodgement.
Another securement device is a nasal bridle, a device that enters one nare, passes around the nasal septum, and exits the other nare, where it is secured in place around the nasogastric tube. There are two ways a bridle is put into place. One method, described in the Australian Journal of Otolaryngology, is performed by a physician, who pulls a material through the nares and ties it off, with the ends shortened to prevent removal of the tube. The other method uses a device called the Applied Medical Technology (AMT) bridle. This device uses a magnet inserted into both nares that connects at the nasal septum and is then pulled through to one side and tied. This technology allows nurses to safely apply bridles. Several studies have shown that the use of a nasal bridle prevents the loss of the NG placement that provides necessary nutrients or suctioning. A study conducted in the UK from 2014 through 2017 determined that 50% of feeding tubes secured with tape were lost inadvertently; the use of bridle securement decreased the percentage of NG tubes lost from 53% to 9%.
Great care must be taken to ensure that the tube has not passed through the larynx into the trachea and down into the bronchi. A reliable method is to aspirate some fluid from the tube with a syringe. This fluid is then tested with pH paper (note: not litmus paper) to determine the acidity of the fluid. If the pH is 4 or below, then the tube is in the correct position. If this is not possible, then correct verification of tube position is obtained with an X-ray of the chest/abdomen; this is the most reliable means of ensuring proper placement of an NG tube. The use of a chest X-ray to confirm position, with physician review and confirmation, is the expected standard in the UK. Future techniques may include measuring the concentration of enzymes such as trypsin, pepsin, and bilirubin to confirm the correct placement of the NG tube. As enzyme testing becomes more practical, allowing measurements to be taken quickly and cheaply at the bedside, this technique may be used in combination with pH testing as an effective, less harmful replacement for X-ray confirmation. If the tube is to remain in place, then a tube position check is recommended before each feed and at least once per day.
Only smaller diameter (12 Fr or less in adults) nasogastric tubes are appropriate for long-term feeding, so as to avoid irritation and erosion of the nasal mucosa. These tubes often have guidewires to facilitate insertion. If feeding is required for a longer period of time, other options, such as placement of a PEG tube, should be considered.
Function of an NG tube properly placed and used for suction is maintained by flushing. This may be done by flushing small amounts of saline and air using a syringe or by flushing larger amounts of saline or water, and air, and then assessing for the air to circulate through one lumen of the tube, into the stomach, and out the other lumen. When these two techniques of flushing were compared, the latter was more effective.
Contraindications
The use of nasogastric intubation is contraindicated in patients with moderate-to-severe neck and facial fractures due to the increased risk of airway obstruction or improper tube placement. Special attention is necessary during insertion under these circumstances in order to avoid undue trauma to the esophagus. There is also a greater risk to patients with bleeding disorders, particularly those resulting from distended sub-mucosal veins in the lower third of the esophagus, known as esophageal varices, which may be easily ruptured due to their friability, and in patients with GERD (gastroesophageal reflux disease).
Alternative measures, such as an orogastric intubation, should be considered under these circumstances, or if the patient will be incapable of meeting their nutritional and caloric needs for an extended time period (usually >24 hours).
Complications
Complications with nasogastric intubation can occur due to incorrect initial placement of the nasogastric tube or due to changes in tube position that go unrecognized. Nasogastric tubes mistakenly placed in the trachea or lungs can lead to aspiration of enteral feeds or medications administered through the NG tube. This can also lead to pneumothorax or pleural effusion, which often requires a chest tube to drain. Nasogastric tubes can also be mistakenly placed within the intracranial space; this is more likely to occur in patient who already have specific types of skull fractures.
Other complications include clogged or nonfunctional tubes, premature removal of the tube, erosion of the nasal mucosa, esophageal perforation, esophageal reflux, nosebleeds, sinusitis, sore throat and gagging.
Fox News Digital reported on a voluntary field correction notice, dated March 21, 2022, that referenced 60 injuries and 23 deaths related to misplacement of a nasogastric tube. Avanos Medical's recall of its Cortrak2 EAS, which followed these reports, has been classified as a Class I recall by the FDA.
See also
Force feeding
Feeding tube
References
Medical equipment
Enteral feeding
Medical treatments | Gastric intubation | [
"Biology"
] | 2,369 | [
"Medical equipment",
"Medical technology"
] |
338,129 | https://en.wikipedia.org/wiki/Plateau%27s%20problem | In mathematics, Plateau's problem is to show the existence of a minimal surface with a given boundary, a problem raised by Joseph-Louis Lagrange in 1760. However, it is named after Joseph Plateau who experimented with soap films. The problem is considered part of the calculus of variations. The existence and regularity problems are part of geometric measure theory.
History
Various specialized forms of the problem were solved, but it was only in 1930 that general solutions were found in the context of mappings (immersions) independently by Jesse Douglas and Tibor Radó. Their methods were quite different; Radó's work built on the previous work of René Garnier and held only for rectifiable simple closed curves, whereas Douglas used completely new ideas with his result holding for an arbitrary simple closed curve. Both relied on setting up minimization problems; Douglas minimized the now-named Douglas integral while Radó minimized the "energy". Douglas went on to be awarded the Fields Medal in 1936 for his efforts.
In higher dimensions
The extension of the problem to higher dimensions (that is, for $k$-dimensional surfaces in $n$-dimensional space) turns out to be much more difficult to study. Moreover, while the solutions to the original problem are always regular, it turns out that the solutions to the extended problem may have singularities if $k \le n - 2$. In the hypersurface case where $k = n - 1$, singularities occur only for $n \ge 8$. An example of such a singular solution of the Plateau problem is the Simons cone, a cone over $S^3 \times S^3$ in $\mathbb{R}^8$ that was first described by Jim Simons and was shown to be an area minimizer by Bombieri, De Giorgi and Giusti. To solve the extended problem in certain special cases, the theory of perimeters (De Giorgi) for codimension 1 and the theory of rectifiable currents (Federer and Fleming) for higher codimension have been developed. The theory guarantees existence of codimension-1 solutions that are smooth away from a closed set of Hausdorff dimension $n - 8$. In the case of higher codimension, Almgren proved existence of solutions with singular set of dimension at most $k - 2$ in his regularity theorem. S. X. Chang, a student of Almgren, built upon Almgren's work to show that the singularities of 2-dimensional area-minimizing integral currents (in arbitrary codimension) form a finite discrete set.
The axiomatic approach of Jenny Harrison and Harrison Pugh treats a wide variety of special cases. In particular, they solve the anisotropic Plateau problem in arbitrary dimension and codimension for any collection of rectifiable sets satisfying a combination of general homological, cohomological or homotopical spanning conditions. A different proof of Harrison and Pugh's results was obtained by Camillo De Lellis, Francesco Ghiraldin and Francesco Maggi.
Physical applications
Physical soap films are more accurately modeled by the $(\mathbf{M}, 0, \delta)$-minimal sets of Frederick Almgren, but the lack of a compactness theorem makes it difficult to prove the existence of an area minimizer. In this context, a persistent open question has been the existence of a least-area soap film. Ernst Robert Reifenberg solved such a "universal Plateau's problem" for boundaries which are homeomorphic to single embedded spheres.
See also
Double Bubble conjecture
Dirichlet principle
Plateau's laws
Stretched grid method
Bernstein's problem
References
Calculus of variations
Minimal surfaces
Mathematical problems | Plateau's problem | [
"Chemistry",
"Mathematics"
] | 703 | [
"Foams",
"Mathematical problems",
"Minimal surfaces"
] |
338,161 | https://en.wikipedia.org/wiki/Null%20character | The null character (also null terminator) is a control character with the value zero.
It is present in many character sets, including those defined by the Baudot and ITA2 codes, ISO/IEC 646 (or ASCII), the C0 control code, the Universal Coded Character Set (or Unicode), and EBCDIC. It is available in nearly all mainstream programming languages. It is often abbreviated as NUL (or NULL, though in some contexts that term is used for the null pointer). In 8-bit codes, it is known as a null byte.
The original meaning of this character was like NOP—when sent to a printer or a terminal, it has no effect (some terminals, however, incorrectly display it as space). When electromechanical teleprinters were used as computer output devices, one or more null characters were sent at the end of each printed line to allow time for the mechanism to return to the first printing position on the next line. On punched tape, the character is represented with no holes at all, so a new unpunched tape is initially filled with null characters, and often text could be inserted at a reserved space of null characters by punching the new characters into the tape over the nulls.
Today the character has much more significance in the programming language C and its derivatives and in many data formats, where it serves as a reserved character used to signify the end of a string, often called a null-terminated string. This allows the string to be any length with only the overhead of one byte; the alternative of storing a count requires either a string length limit of 255 or an overhead of more than one byte (there are other advantages/disadvantages described in the null-terminated string article).
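A short Python sketch makes the convention concrete; the standard-library ctypes module is used only to read the same buffer the way a C char pointer would (the byte values are arbitrary examples):

import ctypes

data = b"abc\x00def"                 # 7 bytes; Python stores the length explicitly
print(len(data))                     # 7: the embedded NUL is just another byte

# A C char* interprets the same bytes as a null-terminated string:
print(ctypes.c_char_p(data).value)   # b'abc': reading stops at the first NUL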
Representation
In source code, the null character is often represented as the escape sequence \0 in string literals (for example, "abc\0def") or in character constants ('\0'); the latter may also be written instead simply as 0 (without quotes nor slash). In many languages (such as C, which introduced this notation), this is not a separate escape sequence, but an octal escape sequence with a single octal digit 0; as a consequence, \0 must not be followed by any of the digits 0 through 7; otherwise it is interpreted as the start of a longer octal escape sequence. Other escape sequences that are found in use in various languages are \000, \x00, \z, or \u0000. A null character can be placed in a URL with the percent code %00.
The ability to represent a null character does not always mean the resulting string will be correctly interpreted, as many programs will consider the null to be the end of the string. Thus the ability to type it (in case of unchecked user input) creates a vulnerability known as null byte injection and can lead to security exploits.
In caret notation the null character is ^@. On some keyboards, one can enter a null character by holding down Ctrl and pressing @ (on US layouts just Ctrl+2 will often work, there being no need for Shift to get the @ sign).
The hexadecimal notation for null is 00. Decoding the Base64 string AA== also yields the null character.
In documentation, the null character is sometimes represented as a single-em-width symbol containing the letters "NUL". In Unicode, there is a character for this: U+2400 ␀ SYMBOL FOR NULL.
Encoding
In all modern character sets, the null character has a code point value of zero. In most encodings, this is translated to a single code unit with a zero value. For instance, in UTF-8 it is a single zero byte. However, in Modified UTF-8 the null character is encoded as the two bytes 0xC0, 0x80. This allows the byte with the value of zero, which is now not used for any character, to be used as a string terminator.
References
External links
Null Byte Injection WASC Threat Classification Null Byte Attack section
Poison Null Byte Introduction Introduction to Null Byte Attack
Apple null byte injection QR code vulnerability
Control characters
Computer security exploits | Null character | [
"Technology"
] | 851 | [
"Computer security exploits"
] |
338,191 | https://en.wikipedia.org/wiki/FLOX | FLOX is a flameless combustion process developed by WS Wärmeprozesstechnik GmbH.
History
In experiments with industrial gasoline engines conducted in April 1990, Joachim Alfred Wünning found that when combustion occurred at a temperature greater than 850 °C, the flames were blown away. Although this observation was initially thought to be an error, it turned out to be a discovery which led to the invention of what he called FLOX-Technology, a name derived from the German expression "flammenlose Oxidation" (flameless oxidation). The advantages of this technology attracted funding for a project at Stuttgart University called FloxCoal, a programme aiming to engineer a flameless atomizing coal burner. The reduced pollutant emission in FLOX combustion has been considered a promising candidate for use in coal pollution mitigation and the higher efficiency combustion in FLOX received increased interest as a result of the 1990 oil price shock. FLOX burners have since been used within furnaces in the steel and metallurgical industries.
Technology
FLOX requires the air and fuel components to be mixed in an environment in which exhaust gases are recirculated back into the combustion chamber. Flameless combustion also does not display the same high energy peaks as the traditional combustion observed within a swirl burner, resulting in a smoother and more stable combustion process.
When combustion occurs, NOx is formed at the front of the flame: suppression of peak flame offers the theoretical possibility of reducing NOx production to zero. Experiments with FLOX-Technology have established that it can reduce the amount of NOx generated by 20% in the case of Rhenisch brown coal, and by 65% in the case of Polish black coal.
The role of combustion temperature in NOx formation has been understood for some time. Reduction of the combustion temperature in gasoline engines, by reducing the compression ratio, was among the first steps taken to comply with the U.S. clean air act in the 1970s. This lowered the NOx emissions by lowering the temperature at the flame front.
References
External links
list of article at WS Wärmetechnik
Combustion | FLOX | [
"Chemistry"
] | 429 | [
"Combustion"
] |
338,192 | https://en.wikipedia.org/wiki/Common-ion%20effect | In chemistry, the common-ion effect refers to the decrease in solubility of an ionic precipitate by the addition to the solution of a soluble compound with an ion in common with the precipitate. This behaviour is a consequence of Le Chatelier's principle for the equilibrium reaction of the ionic association/dissociation. The effect is commonly seen as an effect on the solubility of salts and other weak electrolytes. Adding an additional amount of one of the ions of the salt generally leads to increased precipitation of the salt, which reduces the concentration of both ions of the salt until the solubility equilibrium is reached. The effect is based on the fact that both the original salt and the other added chemical have one ion in common with each other.
Examples of the common-ion effect
Dissociation of hydrogen sulfide in presence of hydrochloric acid
Hydrogen sulfide (H2S) is a weak electrolyte. It is partially ionized when in aqueous solution, therefore there exists an equilibrium between un-ionized molecules and constituent ions in an aqueous medium as follows:
H2S ⇌ H+ + HS−
By applying the law of mass action, we have
Ka = [H+][HS−] / [H2S]
Hydrochloric acid (HCl) is a strong electrolyte, which nearly completely ionizes as
HCl → H+ + Cl−
If HCl is added to the H2S solution, H+ is a common ion and creates a common ion effect. Due to the increase in concentration of H+ ions from the added HCl, the equilibrium of the dissociation of H2S shifts to the left and keeps the value of Ka constant. Thus the dissociation of H2S decreases, the concentration of un-ionized H2S increases, and as a result, the concentration of sulfide ions decreases.
Solubility of barium iodate in presence of barium nitrate
Barium iodate, Ba(IO3)2, has a solubility product Ksp = [Ba2+][IO3−]2 = 1.57 × 10−9. Its solubility in pure water is 7.32 × 10−4 M. However, in a solution that is 0.0200 M in barium nitrate, Ba(NO3)2, the increase in the common ion barium leads to a decrease in iodate ion concentration. The solubility is therefore reduced to 1.40 × 10−4 M, about five times smaller.
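The numbers above follow directly from the solubility-product expression. Here is a minimal Python sketch reproducing them, under the usual assumption that the added 0.0200 M barium dominates the Ba2+ contributed by the dissolving salt:

Ksp = 1.57e-9                       # Ksp = [Ba2+][IO3-]^2 for Ba(IO3)2

# Pure water: [Ba2+] = s and [IO3-] = 2s, so 4*s**3 = Ksp.
s_pure = (Ksp / 4) ** (1 / 3)
print(f"{s_pure:.2e} M")            # ~7.32e-04 M

# With 0.0200 M Ba(NO3)2: (0.0200)*(2*s)**2 is approximately Ksp.
s_common = (Ksp / (4 * 0.0200)) ** 0.5
print(f"{s_common:.2e} M")          # ~1.40e-04 M, about five times smaller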
Solubility effects
A practical example used very widely in areas drawing drinking water from chalk or limestone aquifers is the addition of sodium carbonate to the raw water to reduce the hardness of the water. In the water treatment process, highly soluble sodium carbonate salt is added to precipitate out sparingly soluble calcium carbonate. The very pure and finely divided precipitate of calcium carbonate that is generated is a valuable by-product used in the manufacture of toothpaste.
The salting-out process used in the manufacture of soaps benefits from the common-ion effect. Soaps are sodium salts of fatty acids. Addition of sodium chloride reduces the solubility of the soap salts. The soaps precipitate due to a combination of common-ion effect and increased ionic strength.
Sea, brackish and other waters that contain appreciable amount of sodium ions (Na+) interfere with the normal behavior of soap because of common-ion effect. In the presence of excess Na+, the solubility of soap salts is reduced, making the soap less effective.
Buffering effect
A buffer solution contains an acid and its conjugate base or a base and its conjugate acid. Addition of the conjugate ion will result in a change of pH of the buffer solution. For example, if both sodium acetate and acetic acid are dissolved in the same solution they both dissociate and ionize to produce acetate ions. Sodium acetate is a strong electrolyte, so it dissociates completely in solution. Acetic acid is a weak acid, so it only ionizes slightly. According to Le Chatelier's principle, the addition of acetate ions from sodium acetate will suppress the ionization of acetic acid and shift its equilibrium to the left. Thus the percent dissociation of the acetic acid will decrease, and the pH of the solution will increase. The ionization of an acid or a base is limited by the presence of its conjugate base or acid.
NaCH3CO2(s) → Na+(aq) + CH3CO2−(aq)
CH3CO2H(aq) ⇌ H+(aq) + CH3CO2−(aq)
This will decrease the hydronium concentration, and thus the common-ion solution will be less acidic than a solution containing only acetic acid.
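For a rough numerical feel of this suppression, the same mass-action reasoning can be packaged as the Henderson–Hasselbalch relation, pH = pKa + log10([A−]/[HA]). The Python sketch below assumes the textbook pKa of acetic acid (about 4.76); the concentrations are illustrative:

import math

PKA_ACETIC = 4.76   # textbook pKa of acetic acid

def buffer_pH(acetate_molarity, acetic_acid_molarity):
    """Approximate pH of an acetic acid / acetate buffer."""
    return PKA_ACETIC + math.log10(acetate_molarity / acetic_acid_molarity)

print(round(buffer_pH(0.10, 0.10), 2))  # 4.76: equal concentrations give pH = pKa
print(round(buffer_pH(0.20, 0.10), 2))  # 5.06: added acetate raises the pH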
Exceptions
Many transition-metal compounds violate this rule due to the formation of complex ions, a scenario not part of the equilibria that are involved in simple precipitation of salts from ionic solution. For example, copper(I) chloride is insoluble in water, but it dissolves when chloride ions are added, such as when hydrochloric acid is added. This is due to the formation of soluble CuCl2− complex ions.
Uncommon-ion effect
Sometimes adding an ion other than the ones that are part of the precipitated salt itself can increase the solubility of the salt. This "salting in" is called the "uncommon-ion effect" (also "salt effect" or the "diverse-ion effect"). It occurs because as the total ion concentration increases, inter-ion attraction within the solution can become an important factor. This alternate equilibrium makes the ions less available for the precipitation reaction. This is also called odd ion effect.
References
Equilibrium chemistry
Solutions | Common-ion effect | [
"Chemistry"
] | 1,214 | [
"Homogeneous chemical mixtures",
"Solutions",
"Equilibrium chemistry"
] |
338,199 | https://en.wikipedia.org/wiki/Hasse%20diagram | In order theory, a Hasse diagram is a type of mathematical diagram used to represent a finite partially ordered set, in the form of a drawing of its transitive reduction. Concretely, for a partially ordered set (S, ≤) one represents each element of S as a vertex in the plane and draws a line segment or curve that goes upward from one vertex x to another vertex y whenever y covers x (that is, whenever x < y and there is no z distinct from x and y with x ≤ z ≤ y). These curves may cross each other but must not touch any vertices other than their endpoints. Such a diagram, with labeled vertices, uniquely determines its partial order.
Hasse diagrams are named after Helmut Hasse (1898–1979); according to Garrett Birkhoff, they are so called because of the effective use Hasse made of them. However, Hasse was not the first to use these diagrams. One example that predates Hasse can be found in an 1895 work by Henri Gustave Vogt. Although Hasse diagrams were originally devised as a technique for making drawings of partially ordered sets by hand, they have more recently been created automatically using graph drawing techniques.
In some sources, the phrase "Hasse diagram" has a different meaning: the directed acyclic graph obtained from the covering relation of a partially ordered set, independently of any drawing of that graph.
Diagram design
Although Hasse diagrams are simple, as well as intuitive, tools for dealing with finite posets, it turns out to be rather difficult to draw "good" diagrams. The reason is that, in general, there are many different possible ways to draw a Hasse diagram for a given poset. The simple technique of just starting with the minimal elements of an order and then drawing greater elements incrementally often produces quite poor results: symmetries and internal structure of the order are easily lost.
The following example demonstrates the issue. Consider the power set of a 4-element set ordered by inclusion. Below are four different Hasse diagrams for this partial order. Each subset has a node labelled with a binary encoding that shows whether a certain element is in the subset (1) or not (0):
The first diagram makes clear that the power set is a graded poset. The second diagram has the same graded structure, but by making some edges longer than others, it emphasizes that the 4-dimensional cube is a combinatorial union of two 3-dimensional cubes, and that a tetrahedron (abstract 3-polytope) likewise merges two triangles (abstract 2-polytopes). The third diagram shows some of the internal symmetry of the structure. In the fourth diagram the vertices are arranged in a 4×4 grid.
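The covering relation that these diagrams draw can be generated mechanically. The Python sketch below lists the cover pairs for the power-set example above, reusing its binary encoding of subsets (the integer representation is an illustrative choice):

from itertools import combinations

n = 4
subsets = range(1 << n)            # the bits of each integer encode one subset

# y covers x in the inclusion order iff x is a subset of y differing in exactly one bit.
covers = [(x, y) for x, y in combinations(subsets, 2)
          if x & y == x and bin(x ^ y).count("1") == 1]

print(len(covers))                                        # 32 edges for the 4-cube
print([(f"{x:04b}", f"{y:04b}") for x, y in covers[:3]])  # ('0000', '0001'), ...

Each pair is one upward edge of the Hasse diagram; drawing the vertices in ranks by subset size reproduces the first (graded) diagram.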
Upward planarity
If a partial order can be drawn as a Hasse diagram in which no two edges cross, its covering graph is said to be upward planar. A number of results on upward planarity and on crossing-free Hasse diagram construction are known:
If the partial order to be drawn is a lattice, then it can be drawn without crossings if and only if it has order dimension at most two. In this case, a non-crossing drawing may be found by deriving Cartesian coordinates for the elements from their positions in the two linear orders realizing the order dimension, and then rotating the drawing counterclockwise by a 45-degree angle.
If the partial order has at most one minimal element, or it has at most one maximal element, then it may be tested in linear time whether it has a non-crossing Hasse diagram.
It is NP-complete to determine whether a partial order with multiple sources and sinks can be drawn as a crossing-free Hasse diagram. However, finding a crossing-free Hasse diagram is fixed-parameter tractable when parametrized by the number of articulation points and triconnected components of the transitive reduction of the partial order.
If the y-coordinates of the elements of a partial order are specified, then a crossing-free Hasse diagram respecting those coordinate assignments can be found in linear time, if such a diagram exists. In particular, if the input poset is a graded poset, it is possible to determine in linear time whether there is a crossing-free Hasse diagram in which the height of each vertex is proportional to its rank.
Use in UML notation
In software engineering / Object-oriented design, the classes of a software system and the inheritance relation between these classes is often depicted using a class diagram, a form of Hasse diagram in which the edges connecting classes are drawn as solid line segments with an open triangle at the superclass end.
Notes
References
External links
Diagrams
Directed acyclic graphs
Graph drawing
Graphical concepts in set theory
Order theory | Hasse diagram | [
"Mathematics"
] | 957 | [
"Basic concepts in set theory",
"Order theory",
"Graphical concepts in set theory"
] |
1,005,128 | https://en.wikipedia.org/wiki/Wave%20soldering | Wave soldering is a bulk soldering process used in printed circuit board manufacturing. The circuit board is passed over a pan of molten solder in which a pump produces an upwelling of solder that looks like a standing wave. As the circuit board makes contact with this wave, the components become soldered to the board. Wave soldering is used for both through-hole printed circuit assemblies, and surface mount. In the latter case, the components are glued onto the surface of a printed circuit board (PCB) by placement equipment, before being run through the molten solder wave. Wave soldering is mainly used in soldering of through hole components.
As through-hole components have been largely replaced by surface mount components, wave soldering has been supplanted by reflow soldering methods in many large-scale electronics applications. However, there is still significant wave soldering where surface-mount technology (SMT) is not suitable (e.g., large power devices and high pin count connectors), or where simple through-hole technology prevails (certain major appliances).
Wave solder process
There are many types of wave solder machines; however, the basic components and principles of these machines are the same. The basic equipment used during the process is a conveyor that moves the PCB through the different zones, a pan of solder used in the soldering process, a pump that produces the actual wave, the sprayer for the flux and the preheating pad. The solder is usually a mixture of metals. A typical leaded solder is composed of 50% tin, 49.5% lead, and 0.5% antimony. The Restriction of Hazardous Substances Directive (RoHS) has led to an ongoing transition away from 'traditional' leaded solder in modern manufacturing in favor of lead-free alternatives. Both tin-silver-copper and tin-copper-nickel alloys are commonly used, with one common alloy (SN100C) being 99.25% tin, 0.7% copper, 0.05% nickel and <0.01% germanium.
Fluxing
Flux in the wave soldering process has a primary and a secondary objective. The primary objective is to clean the components that are to be soldered, principally any oxide layers that may have formed. There are two types of flux, corrosive and noncorrosive. Noncorrosive flux requires precleaning and is used when low acidity is required. Corrosive flux is quick and requires little precleaning, but has a higher acidity.
Preheating
Preheating helps to accelerate the soldering process and to prevent thermal shock.
Cleaning
Some types of flux, called "no-clean" fluxes, do not require cleaning; their residues are benign after the soldering process. Typically no-clean fluxes are especially sensitive to process conditions, which may make them undesirable in some applications. Other kinds of flux, however, require a cleaning stage, in which the PCB is washed with solvents and/or deionized water to remove flux residue.
Finish and quality
Quality depends on proper temperatures when heating and on properly treated surfaces.
Solder types
Different combinations of tin, lead and other metals are used to create solder. The combinations used depend on the desired properties. The most popular combinations are SAC (Tin(Sn)/Silver(Ag)/Copper(Cu)) alloys for lead-free processes and Sn63Pb37 (Sn63A), which is a eutectic alloy consisting of 63% tin and 37% lead. This latter combination is strong, has a low melting range, and melts and sets quickly (i.e., there is no 'plastic' range between the solid and molten states, unlike the older 60% tin / 40% lead alloy). Higher tin compositions give the solder higher corrosion resistance, but raise the melting point. Another common composition is 11% tin, 37% lead, 42% bismuth, and 10% cadmium. This combination has a low melting point and is useful for soldering components that are sensitive to heat.
Environmental and performance requirements also factor into alloy selection. Common restrictions include restrictions on lead (Pb) when RoHS compliance is required and restrictions on pure tin (Sn) when long term reliability is a concern.
Effects of cooling rate
It is important that the PCBs be allowed to cool at a reasonable rate. If they are cooled too fast, then the PCB can become warped and the solder can be compromised. On the other hand, if the PCB is allowed to cool too slowly, then the PCB can become brittle and some components may be damaged by heat. The PCB should be cooled by either a fine water spray or air cooled to decrease the amount of damage to the board.
Thermal profiling
Thermal profiling is the act of measuring several points on a circuit board to determine the thermal excursion it takes through the soldering process.
In the electronics manufacturing industry, SPC (Statistical Process Control) helps determine if the process is in control, measured against the reflow parameters defined by the soldering technologies and component requirements.
Products like the Solderstar WaveShuttle and the Optiminer are specially developed fixtures that are passed through the process and can measure the temperature profile, along with contact times, wave parallelism and wave heights. These fixtures, combined with analysis software, allow the production engineer to establish and then control the wave solder process.
Solder wave height
The height of the solder wave is a key parameter that needs to be evaluated when setting up the wave solder process. The contact time between the solder wave and the assembly being soldered is typically set to between 2 and 4 seconds. This contact time is controlled by two parameters on the machine, conveyor speed and wave height; changes to either of these parameters will result in a change in contact time. The wave height is typically controlled by increasing or decreasing the pump speed on the machine. Changes can be evaluated and checked using a tempered glass plate; if more detailed recordings are required, fixtures are available that digitally record the contact times, height and speed. Also, some wave solder machines can give the operator a choice between a smooth laminar wave or a slightly higher-pressure 'dancer' wave.
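As a rough sketch of how the two settings interact, the contact time can be approximated as the length of the board's contact patch on the wave divided by the conveyor speed; the Python snippet below uses illustrative figures, not values from any particular machine:

def contact_time_s(contact_length_mm, conveyor_speed_mm_per_s):
    """Approximate solder contact time in seconds."""
    return contact_length_mm / conveyor_speed_mm_per_s

# A ~30 mm contact patch lands in the typical 2-4 s window at common conveyor speeds:
for speed in (10, 15):
    print(f"{contact_time_s(30, speed):.1f} s at {speed} mm/s")  # 3.0 s, 2.0 s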
See also
Dip soldering
Thermal profiling
Solder mask
References
Further reading
Seeling, Karl (1995). A study of lead-free alloys. AIM. Retrieved April 18, 2008.
Biocca, Peter (2005, April 5). Lead-free wave soldering. EMSnow. Retrieved April 18, 2008.
Electronic Production Design & Test (2015, February 13). The importance of wave height measurement in wave solder process control.
Soldering
Articles containing video clips
Printed circuit board manufacturing | Wave soldering | [
"Engineering"
] | 1,401 | [
"Electrical engineering",
"Electronic engineering",
"Printed circuit board manufacturing"
] |
1,005,746 | https://en.wikipedia.org/wiki/Supercritical%20flow | A supercritical flow is a flow whose velocity is larger than the wave velocity. The analogous condition in gas dynamics is supersonic speed.
According to the website Civil Engineering Terms, supercritical flow is defined as follows:
The flow at which depth of the channel is less than critical depth, velocity of flow is greater than critical velocity and slope of the channel is also greater than the critical slope is known as supercritical flow.
Information travels at the wave velocity. This is the velocity at which waves travel outwards from a pebble thrown into a lake. The flow velocity is the velocity at which a leaf in the flow travels. If a pebble is thrown into a supercritical flow then the ripples will all move downstream, whereas in a subcritical flow some would travel upstream and some would travel downstream. It is only in supercritical flows that hydraulic jumps (bores) can occur. In fluid dynamics, the change from one behaviour to the other is often described by a dimensionless quantity, where the transition occurs whenever this number becomes less or more than one. One of these numbers is the Froude number:
Fr = U / √(gh)
where
U = velocity of the flow
g = acceleration due to gravity (9.81 m/s² or 32.2 ft/s²)
h = depth of flow relative to the channel bottom
If Fr < 1, we call the flow subcritical; if Fr > 1, we call the flow supercritical. If Fr = 1, it is critical.
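A minimal Python sketch of this classification (SI units; the example values are illustrative):

import math

def froude_number(U, h, g=9.81):
    """Froude number for mean flow velocity U (m/s) and flow depth h (m)."""
    return U / math.sqrt(g * h)

def classify(Fr):
    if Fr > 1:
        return "supercritical"
    if Fr < 1:
        return "subcritical"
    return "critical"

Fr = froude_number(U=3.0, h=0.5)    # fast, shallow flow
print(round(Fr, 2), classify(Fr))   # 1.35 supercritical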
See also
Supercritical fluid
Supercritical vs. subcritical flow
Supersonic
Hypersonic
Sonic black hole
References
The Hydraulics of Open Channel Flow: An Introduction. Physical Modelling of Hydraulics Chanson, Hubert (1999)
Fluid dynamics | Supercritical flow | [
"Chemistry",
"Engineering"
] | 348 | [
"Piping",
"Chemical engineering",
"Fluid dynamics"
] |
1,005,852 | https://en.wikipedia.org/wiki/Artronix | Artronix Incorporated began in 1970 and has roots in a project in a computer science class at Washington University School of Medicine in St Louis. The class designed, built and tested a 12-bit minicomputer, which later evolved to become the PC12 minicomputer. The new company entered the bio-medical computing market with a set of peripherals and software for use in Radiation Treatment Planning and ultrasound scanning. Software for the PC12 was written in assembly language and FORTRAN; later software was written in MUMPS. The company was located in two buildings in the Hanley Industrial Park off South Hanley Road in Maplewood, Missouri.
The company later developed another product line of brain-scanning or computed tomography equipment based on the Lockheed SUE 16-bit minicomputer (see also Pluribus); later designs included an optional vector processor using AMD Am2900 bipolar bit-slices to speed tomographic reconstruction calculations. In contrast to earlier designs, the Artronix scanner used a fan-shaped beam with 128 detectors on a rotating gantry. The system would take 540 degrees of data (1½ rotations) to average out noise in the samples. The beam allowed 3mm slices, but several slices would routinely be mathematically combined into one image for display purposes. The first generation of scanners was a head scanner while a later generation was a torso (whole-body) scanner. The CAT-3 (computerized axial tomography) system was a success at first, but the technology surrendered ground to PET (positron emission tomography) and MRI (magnetic resonance imaging) systems. Artronix closed its doors in 1978. A video of the Artronix torso scanner operating without a shroud is available on YouTube at Commissie NVvRadiologie with narration in Dutch.
Artronix was founded by Arne Roestel. Mr. Roestel went on to found Multidata Systems International. For his leadership of Artronix, Mr. Roestel was named as the Small Businessman of the Year for Missouri in 1976 by the Small Business Administration and was hosted at a luncheon by President Gerald Ford (source: Ford Library Museum).
References
Defunct computer companies of the United States
Defunct computer hardware companies
Companies based in Missouri | Artronix | [
"Technology"
] | 483 | [
"Computing stubs",
"Computer hardware stubs"
] |
1,005,883 | https://en.wikipedia.org/wiki/Revolving%20door | A revolving door typically consists of three or four doors that hang on a central shaft and rotate around a vertical axis within a cylindrical enclosure. To use a revolving door, a person enters the enclosure between two of the doors and then moves continuously to the desired exit while keeping pace with the doors.
Revolving doors were designed to relieve the immense pressure caused by air rushing through high-rise buildings (referred to as stack effect pressure) while at the same time allowing large numbers of people to pass in and out. They are also energy efficient; they act as an airlock to prevent drafts, decreasing the loss of heating or cooling for the building as compared to a standard door.
Construction
Around the central shaft of the revolving door, there are usually three or four panels called wings or leaves. Large-diameter revolving doors can accommodate pushchairs and wheeled luggage racks; such large-capacity doors are sometimes H-shaped to split the circle into only two (hence larger) parts.
Some revolving door displays incorporate a small glass enclosure, permitting small objects such as sculpture, fashion mannequins, or plants to be displayed to pedestrians passing through. Such enclosures can either be mounted at the central pivot, or attached to the revolving door wings.
The wings of revolving doors usually incorporate glass, to allow people to see and anticipate each other while passing through the door. Manual revolving doors rotate with pushbars, causing all wings to rotate. Revolving doors typically have a "speed control" (governor) to prevent people from spinning the doors too fast.
Automatic revolving doors are powered above/below the central shaft, or along the perimeter. Automatic revolving doors have safety sensors, but there has been at least one fatality recorded.
Skyscraper design requires a means of draft block, such as revolving doors, to prevent the chimney effect of the tall structure from sucking in air at high speed at the base and ejecting it through vents in the roof while the building is being heated, or sucking in air through the vents and ejecting it through the doors while being cooled, both effects due to convection. Modern revolving doors permit the individual doors of the assembly to be unlocked from the central shaft to permit free flowing traffic in both directions. This feature, called breakout or break away, is typically used only during emergencies, or to admit oversize objects. The most effective method for this is the "bookfold" design, which allows all three or four wings to be broken away together. Normally, the revolving door is always closed so that wind and drafts cannot blow into the building, to efficiently minimize heating and air conditioning loads.
In right-hand traffic countries, revolving doors typically revolve counter-clockwise (as seen from above), allowing people to enter and exit only on the right side of the door. In left-hand traffic countries such as Australia and New Zealand, revolving doors revolve clockwise, but door rotations are mixed in Britain. Direction of rotation is often enforced by the door governor mechanism, or by the orientation of the door seal brush (weatherstrips).
Security
Revolving doors can also be used as security devices to restrict entry to a single person at a time if the spacing between the doors is small enough. This is in contrast to a normal door which allows a second person to easily "tailgate" behind an authorized person. Extreme security can require a particular type of bullet-resistant glass.
Sometimes a revolving door is designed for one-way traffic. An example is the now-common usage in airports to prevent a person from bypassing airport security checkpoints by entering the exit. Such doors are designed with a brake that is activated by a sensor should someone enter from the incorrect side. The door also revolves backwards to permit that person to exit, while also notifying security of the attempt.
Turnstile exit-only doors are also often used in subways and other rapid transit facilities to prevent people from bypassing fare payment. They are similarly used at large sports stadiums, amusement parks, and other such venues, to allow pedestrians to exit freely, but not to enter without paying admission fees. These doors usually work mechanically, with the door panels constructed of horizontal bars which pass through a "wall" of interlacing (interdigitated) bars, allowing the door leaves to pass through, but blocking people from illegally entering through the exit.
Emergency use
On November 28, 1942, the Cocoanut Grove, a popular nightclub in Boston, Massachusetts, went up in flames, killing 492 people. One of the main reasons cited for the large number of casualties was the single revolving door located at the entrance. As the mob of panicking patrons attempted to use the door as an escape it soon became jammed, trapping countless people between the door and the crowd pushing towards it. As a result, many people died from smoke inhalation, as they were not able to escape the burning nightclub.
In 1943, it became a Massachusetts state law requirement to flank a revolving door with an outward swinging hinged door or to make the revolving door collapsible (so it becomes a double partition collapsing at 180°), allowing people to pass on either side. American revolving doors are now collapsible. Some jurisdictions require them to be flanked by at least one hinged door, either by common practice or by law. For example, the Ontario Building Code 3.4.6.14 asserts that revolving doors need to "(a) be collapsible, (b) have hinged doors providing equivalent exiting capacity located adjacent to it".
History
H. Bockhacker of Berlin was granted German patent DE18349 on December 22, 1881 for a draft-free door design that used a rotating cylinder with a single door; after entering, the user turned the cylinder around to the exiting direction.
Theophilus Van Kannel of Philadelphia was granted US patent 387,571 on August 7, 1888, for a "Storm-Door Structure". The patent drawings filed show a three-partition revolving door. The patent describes it as having "three radiating and equidistant wings ... provided with weather-strips or equivalent means to insure [sic] a snug fit". The door "possesses numerous advantages over a hinged-door structure ... it is perfectly noiseless ... effectually prevents the entrance of wind, snow, rain or dust ..." "Moreover, the door cannot be blown open by the wind ... there is no possibility of collision, and yet persons can pass both in and out at the same time." The patent further lists, "the excluding of noises of the street" as another advantage of the revolving door. It goes on to describe how a partition can be hinged so as to open to allow the passage of long objects through the revolving door. The patent itself does not use the term revolving door.
An urban legend, dating back to perhaps 2008, claims that the invention was motivated by his phobia of opening doors for others, especially women; according to Snopes, there is no evidence to support this.
In 1889, the Franklin Institute of Philadelphia awarded the John Scott Legacy Medal to Van Kannel for his contribution to society. In 1899, the world's first wooden revolving door was installed at Rector's, a restaurant on Times Square in Manhattan, located on Broadway between West 43rd and 44th Streets. In 2007 Theophilus Van Kannel was inducted into the National Inventors Hall of Fame for this invention.
Research
Research into the air and energy exchanges associated with revolving door usage has been carried out on a few occasions. The earliest such study was carried out in 1936 by Simpson, who worked for the van Kannel revolving door company at the time. Simpson's study was followed by a study by Schutrum et al. in 1961, and more recently a study by van Schijndel et al. in 2003. These studies focused on providing detailed measurements of the quantities of air and heat transferred inside the compartments of a door as it revolves. With the exception of the study by van Schijndel et al., which was purely theoretical, the measurements carried out for the other studies were used to provide design charts enabling engineers to estimate the quantity of air transferred by a door as a function of the revolution rate and temperature contrast. However, none of these studies are referenced by existing design codes.
The aforementioned studies are specific to the type of door for which they were acquired, namely doors with four compartments. Although that configuration appears to have been standard for four-compartment doors at the time, this is not the case nowadays. A more recent experimental study, carried out at Imperial College London's Department of Civil and Environmental Engineering, provided more insight into the flow physics by which air is transferred across a revolving door.
Airflows and energy losses through revolving doors also occur as a result of leakages past the seals of the door. Leakages are common to any type of opening in an otherwise closed space, but have been investigated in the context of revolving doors by Zmeureanu et al. and by Schutrum et al. before that. The first study concluded that to avoid significant leakages, the seals of the doors should be maintained and periodically replaced if needed. The second study produced design charts for estimating the leakage rate through a revolving door. Unlike the curves for estimating the transfer rate also published in this study, the curves for estimating the leakage rate are more generic. As such these design curves still form the basis of the target leakage rates for revolving doors recommended by the ASHRAE standard 90.1 in the US. On May 25, 2006, an MIT Study entitled "Modifying Habits Towards Sustainability: A Study of Revolving Doors Usage on the MIT Campus" was published. In it, B. A. Cullum, Olivia Lee, Sittha Sukkasi and Dan Wesolowski concluded, "...substantial energy is saved when people use the revolving doors instead of swing doors – the smallest of habit changes contributes to energy conservation... Modification of one habit... indeed has the ability to eventually impact the environment on a global scale."
While preferred by building owners for energy conservation, revolving doors may be avoided by some people due to the perceived greater physical effort in using them.
See also
Paternoster lift
Revolving door (politics)
Turnstile
References
Further reading
Alan Beadmore, The Revolving Door since 1881: Architecture in Detail, 2000
Harvey E. Van Kannel and Joanne Fox Marshall, T. Van Kannel, the inventor: his autobiography and journal, 1988
External links
American inventions
Doors
1899 introductions
Articles containing video clips
Door | Revolving door | [
"Physics"
] | 2,158 | [
"Physical phenomena",
"Motion (physics)",
"Classical mechanics",
"Rotation"
] |
1,005,923 | https://en.wikipedia.org/wiki/List%20of%20academic%20databases%20and%20search%20engines | This page contains a representative list of major databases and search engines useful in an academic setting for finding and accessing articles in academic journals, institutional repositories, archives, or other collections of scientific and other articles. As the distinction between a database and a search engine is unclear for these complex document retrieval systems, see:
the general list of search engines for all-purpose search engines that can be used for academic purposes
the article about bibliographic databases for information about databases giving bibliographic information about finding books and journal articles.
Note that "free" or "subscription" can refer both to the availability of the database or of the journal articles included. This has been indicated as precisely as possible in the lists below.
See also
Academic publishing
Google Scholar
List of digital library projects
List of educational video websites
List of neuroscience databases
List of online databases
List of online encyclopedias
List of open access journals
List of open access projects
List of repositories
References
Academic publishing
Scholarly databases
academic databases
Bibliographic databases and indexes
Databases
Academic | List of academic databases and search engines | [
"Technology"
] | 211 | [
"Computing-related lists",
"Internet-related lists"
] |
1,005,946 | https://en.wikipedia.org/wiki/Squalene | Squalene is an organic compound. It is a triterpene with the formula C30H50. It is a colourless oil, although impure samples appear yellow. It was originally obtained from shark liver oil (hence its name, as Squalus is a genus of sharks). An estimated 12% of bodily squalene in humans is found in sebum. Squalene has a role in topical skin lubrication and protection.
Most plants, fungi, and animals produce squalene as biochemical precursor in sterol biosynthesis, including cholesterol and steroid hormones in the human body. It is also an intermediate in the biosynthesis of hopanoids in many bacteria.
Squalene is an important ingredient in some vaccine adjuvants: the Novartis and GlaxoSmithKline adjuvants are called MF59 and AS03, respectively.
Role in triterpenoid synthesis
Squalene is a biochemical precursor to both steroids and hopanoids. For sterols, the squalene conversion begins with oxidation (via squalene monooxygenase) of one of its terminal double bonds, resulting in 2,3-oxidosqualene. It then undergoes an enzyme-catalysed cyclisation to produce lanosterol, which can be elaborated into other steroids such as cholesterol and ergosterol in a multistep process by the removal of three methyl groups, the reduction of one double bond by NADPH and the migration of the other double bond. In many plants, this is then converted into stigmasterol, while in many fungi, it is the precursor to ergosterol.
The biosynthetic pathway is found in many bacteria and most eukaryotes, though it has not been found in Archaea.
Production
Biosynthesis
Squalene is biosynthesised by coupling two molecules of farnesyl pyrophosphate. The condensation requires NADPH and the enzyme squalene synthase.
Industry
Synthetic squalene is prepared commercially from geranylacetone.
Shark conservation
In 2020, conservationists raised concerns about the potential slaughter of sharks to obtain squalene for a COVID-19 vaccine.
Environmental and other concerns over shark hunting have motivated its extraction from other sources. Biosynthetic processes use genetically engineered yeast or bacteria.
Uses
As an adjuvant in vaccines
Immunologic adjuvants are substances, administered in conjunction with a vaccine, that stimulate the immune system and increase the response to the vaccine. Squalene is not itself an adjuvant, but it has been used in conjunction with surfactants in certain adjuvant formulations.
An adjuvant using squalene is Seqirus' proprietary MF59, which is added to influenza vaccines to help stimulate the human body's immune response through production of CD4 memory cells. It is the first oil-in-water influenza vaccine adjuvant to be commercialised in combination with a seasonal influenza virus vaccine. It was developed in the 1990s by researchers at Ciba-Geigy and Chiron; both companies were subsequently acquired by Novartis. The influenza vaccine business of Novartis was later acquired by CSL Behring, which created the company Seqirus. It is present in the form of an emulsion and is added to make the vaccine more immunogenic. However, the mechanism of action remains unknown. MF59 is capable of switching on a number of genes that partially overlap with those activated by other adjuvants. How these changes are triggered is unclear; to date, no receptors responding to MF59 have been identified. One possibility is that MF59 affects the cell behaviour by changing the lipid metabolism, namely by inducing accumulation of neutral lipids within the target cells. An influenza vaccine called FLUAD, which used MF59 as an adjuvant, was approved for use in the US in people 65 years of age and older, beginning with the 2016–2017 flu season.
A 2009 meta-analysis assessed data from 64 clinical trials of influenza vaccines with the squalene-containing adjuvant MF59 and compared them to the effects of vaccines with no adjuvant. The analysis reported that the adjuvanted vaccines were associated with slightly lower risks of chronic diseases, but that neither type of vaccines altered the rate of autoimmune diseases; the authors concluded that their data "supports the good safety profile associated with MF59-adjuvanted influenza vaccines and suggests there may be a clinical benefit over non-MF59-containing vaccines".
Safety
Toxicology studies indicate that in the concentrations used in cosmetics, squalene has low acute toxicity, and is not a significant contact allergen or irritant.
The World Health Organization and the US Department of Defense have both published extensive reports that emphasise that squalene is naturally occurring, even in oils of human fingerprints. The WHO goes further to explain that squalene has been present in over 22 million flu vaccines given to patients in Europe since 1997 without significant vaccine-related adverse events.
Controversies
Attempts to link squalene to Gulf War syndrome have been debunked.
References
External links
Squalene MS Spectrum
Chemical synthesis
Chemical processes
Toxicology
Polyenes
Biomolecules
Steroid hormone biosynthesis
Hydrocarbons
Triterpenes
Vaccination | Squalene | [
"Chemistry",
"Biology",
"Environmental_science"
] | 1,117 | [
"Hydrocarbons",
"Steroid hormone biosynthesis",
"Toxicology",
"Natural products",
"Biochemistry",
"Organic compounds",
"Chemical processes",
"Biosynthesis",
"Biomolecules",
"Chemical synthesis",
"Chemical process engineering",
"Structural biology",
"nan",
"Vaccination",
"Molecular biolog... |
1,005,990 | https://en.wikipedia.org/wiki/Mr.%20Zog%27s%20Sex%20Wax | Mr. Zog's Sex Wax is a brand of surfwax manufactured for use on surfboards that is produced in Carpinteria, California. This wax is rubbed on the top surface or "deck" of a surfboard to allow traction and grip for the surfer.
Mr. Zog's Sex Wax was first produced by Frederick Charles Herzog, III (also known as Mr. Zog) and chemist Nate Skinner in 1972. Hank Pitcher designed their original logo.
Due to the product name, promotional materials such as bumper stickers and t-shirts became extremely popular, even among those who had never ridden a surfboard. Their slogans, such as "The best for your stick", included innuendos of non-surfing uses. The materials confirmed their counterculture status by being banned from schools and amusement parks.
Different wax formulations are sold under the names: "Quick Humps", "Really Tacky", and "Navel Wax".
Market expansion
Mr. Zog's also sells a snowboard wax, which is applied to the bottom of a snowboard to reduce friction between the snowboard and the snow. One variety is melted and applied to the bottom of the snowboard, while another variety is rubbed on as a cold wax.
Mr. Zog's Sex Wax is also sold to ice hockey players. The wax is applied by rubbing it on over the tape on the blade of the stick. This does two things: it flattens the tape to allow a smoother shot, and it also makes the blade of the stick stickier, which helps hockey players control the puck for stick-handling, passing, and shooting purposes. The wax also acts as a surfactant (surface-active agent) to aid moisture dispersion from the blade during ice hockey sessions. In addition, some goaltenders rub the wax on the shaft of the stick to help them hold the stick when it is struck by hard shots, some of which exceed 100 mph.
Sex Wax for drummers is marketed and distributed by Big Bang Distribution company.
Product placement
The product has been used in a number of media productions:
Charlie's Angels: Full Throttle. The protagonists locate a murderer by determining that he used pineapple Sex Wax on a credit card that was in turn used to break into his victim's home.
Point Break: The protagonists determine that some bank robbers are surfers because a small amount of Sex Wax is found at the scene of the crime.
NCIS: Featured in the episode "Bikini Wax".
Mysterious Skin: The protagonist, played by Joseph Gordon-Levitt, wears a Mr. Zog's Sex Wax logo singlet during the film.
References
External links
American brands
Carpinteria, California
Companies based in Santa Barbara County, California
Surfing equipment
Waxes | Mr. Zog's Sex Wax | [
"Physics"
] | 568 | [
"Materials",
"Matter",
"Waxes"
] |
1,006,006 | https://en.wikipedia.org/wiki/Apomorphy%20and%20synapomorphy | In phylogenetics, an apomorphy (or derived trait) is a novel character or character state that has evolved from its ancestral form (or plesiomorphy). A synapomorphy is an apomorphy shared by two or more taxa and is therefore hypothesized to have evolved in their most recent common ancestor. In cladistics, synapomorphy implies homology.
Examples of apomorphies are the erect gait, fur, three middle ear bones, and mammary glands of mammals, traits not found in other vertebrate animals such as amphibians or reptiles, which have retained their ancestral traits of a sprawling gait and lack of fur. Thus, these derived traits are also synapomorphies of mammals in general as they are not shared by other vertebrate animals.
Etymology
The word synapomorphy, coined by German entomologist Willi Hennig, is derived from the Ancient Greek words sún, meaning "with, together"; apó, meaning "away from"; and morphḗ, meaning "shape, form".
Examples
Lampreys and sharks share some features, like a nervous system, that are not synapomorphic because they are also shared by invertebrates. In contrast, the presence of jaws and paired appendages in both sharks and dogs, but not in lampreys or close invertebrate relatives, identifies these traits as synapomorphies. This supports the hypothesis that dogs and sharks are more closely related to each other than to lampreys.
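As an illustrative toy computation (not from the article; trait sets simplified, names assumed), the comparison above can be expressed as finding traits shared by an ingroup but absent from an outgroup:

```python
traits = {
    "lamprey": {"nervous_system"},
    "shark": {"nervous_system", "jaws", "paired_appendages"},
    "dog": {"nervous_system", "jaws", "paired_appendages"},
}

def candidate_synapomorphies(ingroup: list[str], outgroup: str) -> set[str]:
    """Traits present in every ingroup taxon but absent from the outgroup:
    candidate shared derived characters (synapomorphies)."""
    shared = set.intersection(*(traits[t] for t in ingroup))
    return shared - traits[outgroup]

print(candidate_synapomorphies(["shark", "dog"], "lamprey"))
# {'jaws', 'paired_appendages'} -- supports grouping sharks with dogs
```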
Clade analysis
The concept of synapomorphy depends on a given clade in the tree of life. Cladograms are diagrams that depict evolutionary relationships within groups of taxa. These illustrations are accurate predictive devices in modern genetics. They are usually depicted in either tree or ladder form. Synapomorphies then create evidence for historical relationships and their associated hierarchical structure. Evolutionarily, a synapomorphy is the marker for the most recent common ancestor of the monophyletic group consisting of a set of taxa in a cladogram. What counts as a synapomorphy for one clade may well be a primitive character or plesiomorphy at a less inclusive or nested clade. For example, the presence of mammary glands is a synapomorphy for mammals in relation to tetrapods but is a symplesiomorphy for mammals in relation to one another—rodents and primates, for example. So the concept can be understood as well in terms of "a character newer than" (autapomorphy) and "a character older than" (plesiomorphy) the apomorphy: mammary glands are evolutionarily newer than vertebral column, so mammary glands are an autapomorphy if vertebral column is an apomorphy, but if mammary glands are the apomorphy being considered then vertebral column is a plesiomorphy.
Relations to other terms
These phylogenetic terms are used to describe different patterns of ancestral and derived character or trait states, in association with apomorphies and synapomorphies.
Symplesiomorphy – an ancestral trait shared by two or more taxa.
Plesiomorphy – a symplesiomorphy discussed in reference to a more derived state.
Pseudoplesiomorphy – a trait that cannot be identified as either a plesiomorphy or an apomorphy that is a reversal.
Reversal – a loss of derived trait present in ancestor and the reestablishment of a plesiomorphic trait.
Convergence – independent evolution of a similar trait in two or more taxa.
Apomorphy – a derived trait. An apomorphy shared by two or more taxa and inherited from a common ancestor is a synapomorphy. An apomorphy unique to a given taxon is an autapomorphy.
Synapomorphy/homology – a derived trait that is found in some or all terminal groups of a clade, and inherited from a common ancestor, for which it was an autapomorphy (i.e., not present in its immediate ancestor).
Underlying synapomorphy – a synapomorphy that has been lost again in many members of the clade. If lost in all but one, it can be hard to distinguish from an autapomorphy.
Autapomorphy – a distinctive derived trait that is unique to a given taxon or group.
Homoplasy in biological systematics is when a trait has been gained or lost independently in separate lineages during evolution. This convergent evolution leads to species independently sharing a trait that is different from the trait inferred to have been present in their common ancestor.
Parallel homoplasy – derived trait present in two groups or species without a common ancestor due to convergent evolution.
Reverse homoplasy – trait present in an ancestor but not in direct descendants that reappears in later descendants.
Hemiplasy is the case where a character that appears homoplastic given the species tree actually has a single origin on the associated gene tree. Hemiplasy reflects gene tree-species tree discordance due to the multispecies coalescent.
References
External links
Cladistics, Berkeley
Phylogenetics
Evolutionary biology terminology | Apomorphy and synapomorphy | [
"Biology"
] | 1,158 | [
"Bioinformatics",
"Evolutionary biology terminology",
"Taxonomy (biology)",
"Phylogenetics"
] |
1,006,035 | https://en.wikipedia.org/wiki/Unix%20time | Unix time is a date and time representation widely used in computing. It measures time by the number of non-leap seconds that have elapsed since 00:00:00 UTC on 1 January 1970, the Unix epoch. For example, at midnight on 1 January 2010, Unix time was 1262304000.
Unix time originated as the system time of Unix operating systems. It has come to be widely used in other computer operating systems, file systems, programming languages, and databases. In modern computing, values are sometimes stored with higher granularity, such as microseconds or nanoseconds.
Definition
Unix time is currently defined as the number of non-leap seconds which have passed since 00:00:00 UTC on Thursday, 1 January 1970, which is referred to as the Unix epoch. Unix time is typically encoded as a signed integer.
The Unix time number 0 is exactly midnight UTC on 1 January 1970, with Unix time incrementing by 1 for every non-leap second after this. For example, 00:00:00 UTC on 1 January 1971 is represented in Unix time as 31536000. Negative values, on systems that support them, indicate times before the Unix epoch, with the value decreasing by 1 for every non-leap second before the epoch. For example, 00:00:00 UTC on 1 January 1969 is represented in Unix time as -31536000. Every day in Unix time consists of exactly 86400 seconds.
Unix time is sometimes referred to as Epoch time. This can be misleading since Unix time is not the only time system based on an epoch and the Unix epoch is not the only epoch used by other time systems.
Leap seconds
Unix time differs from both Coordinated Universal Time (UTC) and International Atomic Time (TAI) in its handling of leap seconds. UTC includes leap seconds that adjust for the discrepancy between precise time, as measured by atomic clocks, and solar time, relating to the position of the earth in relation to the sun. International Atomic Time (TAI), in which every day is precisely 86400 seconds long, ignores solar time and gradually loses synchronization with the Earth's rotation at a rate of roughly one second per year. In Unix time, every day contains exactly 86400 seconds. Each leap second uses the timestamp of a second that immediately precedes or follows it.
On a normal UTC day, which has a duration of 86400 seconds, the Unix time number changes in a continuous manner across midnight. For example, at the end of 1 January 1971, the day used in the example above, the time representations progress as follows:

1971-01-01T23:59:59Z → 31622399
1971-01-02T00:00:00Z → 31622400
1971-01-02T00:00:01Z → 31622401
When a leap second occurs, the UTC day is not exactly 86400 seconds long and the Unix time number (which always increases by exactly 86400 each day) experiences a discontinuity. Leap seconds may be positive or negative. No negative leap second has ever been declared, but if one were to be, then at the end of a day with a negative leap second, the Unix time number would jump up by 1 to the start of the next day. During a positive leap second at the end of a day, which occurs about every year and a half on average, the Unix time number increases continuously into the next day during the leap second and then at the end of the leap second jumps back by 1 (returning to the start of the next day). For example, this is what happened on strictly conforming POSIX.1 systems at the end of 1998:

1998-12-31T23:59:59Z → 915148799
1998-12-31T23:59:60Z → 915148800 (leap second)
1999-01-01T00:00:00Z → 915148800 (repeated)
1999-01-01T00:00:01Z → 915148801
Unix time numbers are repeated in the second immediately following a positive leap second. The Unix time number 1483228800 is thus ambiguous: it can refer either to the start of the leap second (2016-12-31 23:59:60) or the end of it, one second later (2017-01-01 00:00:00). In the theoretical case when a negative leap second occurs, no ambiguity is caused, but instead there is a range of Unix time numbers that do not refer to any point in UTC time at all.
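This ambiguity can be reproduced with Python's standard library, which implements the POSIX (leap-second-ignoring) rules; a brief illustration, not part of the original article:

```python
import calendar

# Under POSIX rules, the leap second 2016-12-31T23:59:60Z and the
# following second 2017-01-01T00:00:00Z share one Unix time number.
t = calendar.timegm((2017, 1, 1, 0, 0, 0))
print(t)  # 1483228800
```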
A Unix clock is often implemented with a different type of positive leap second handling associated with the Network Time Protocol (NTP). This yields a system that does not conform to the POSIX standard. See the section below concerning NTP for details.
When dealing with periods that do not encompass a UTC leap second, the difference between two Unix time numbers is equal to the duration in seconds of the period between the corresponding points in time. This is a common computational technique. However, where leap seconds occur, such calculations give the wrong answer. In applications where this level of accuracy is required, it is necessary to consult a table of leap seconds when dealing with Unix times, and it is often preferable to use a different time encoding that does not suffer from this problem.
A Unix time number is easily converted back into a UTC time by taking the quotient and modulus of the Unix time number, modulo 86400. The quotient is the number of days since the epoch, and the modulus is the number of seconds since midnight UTC on that day. If given a Unix time number that is ambiguous due to a positive leap second, this algorithm interprets it as the time just after midnight. It never generates a time that is during a leap second. If given a Unix time number that is invalid due to a negative leap second, it generates an equally invalid UTC time. If these conditions are significant, it is necessary to consult a table of leap seconds to detect them.
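A minimal sketch of this quotient-and-modulus conversion (standard-library Python; the function name is an illustrative assumption):

```python
SECONDS_PER_DAY = 86_400

def split_unix_time(t: int) -> tuple[int, int]:
    """Split a Unix time number into (days since the epoch,
    seconds since midnight UTC on that day)."""
    # divmod floors toward negative infinity, so pre-epoch (negative)
    # times still yield a second-of-day in the range 0..86399.
    return divmod(t, SECONDS_PER_DAY)

# 00:00:01 UTC on 2 January 1971: day 366 of the epoch, second 1 of the day.
assert split_unix_time(31_622_401) == (366, 1)
```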
Non-synchronous Network Time Protocol-based variant
Commonly a Mills-style Unix clock is implemented with leap second handling not synchronous with the change of the Unix time number. Across a positive leap second, the time number initially decreases where the leap should have occurred, and then it leaps to the correct time 1 second after the leap. This makes implementation easier, and is described by Mills' paper.
This can be decoded properly by paying attention to the leap second state variable, which unambiguously indicates whether the leap has been performed yet. The state variable change is synchronous with the leap.
A similar situation arises with a negative leap second, where the second that is skipped is slightly too late. Very briefly the system shows a nominally impossible time number, but this can be detected by the TIME_DEL state and corrected.
In this type of system the Unix time number violates POSIX around both types of leap second. Collecting the leap second state variable along with the time number allows for unambiguous decoding, so the correct POSIX time number can be generated if desired, or the full UTC time can be stored in a more suitable format.
The decoding logic required to cope with this style of Unix clock would also correctly decode a hypothetical POSIX-conforming clock using the same interface. This would be achieved by indicating the TIME_INS state during the entirety of an inserted leap second, then indicating TIME_WAIT during the entirety of the following second while repeating the seconds count. This requires synchronous leap second handling. This is probably the best way to express UTC time in Unix clock form, via a Unix interface, when the underlying clock is fundamentally untroubled by leap seconds.
Variant that counts leap seconds
Another, much rarer, non-conforming variant of Unix time keeping involves incrementing the value for all seconds, including leap seconds; some Linux systems are configured this way. Time kept in this fashion is sometimes referred to as "TAI" (although timestamps can be converted to UTC if the value corresponds to a time when the difference between TAI and UTC is known), as opposed to "UTC" (although not all UTC time values have a unique reference in systems that do not count leap seconds).
Because TAI has no leap seconds, and every TAI day is exactly 86400 seconds long, this encoding is actually a pure linear count of seconds elapsed since 1970-01-01T00:00:10 TAI. This makes time interval arithmetic much easier. Time values from these systems do not suffer the ambiguity that strictly conforming POSIX systems or NTP-driven systems have.
In these systems it is necessary to consult a table of leap seconds to correctly convert between UTC and the pseudo-Unix-time representation. This resembles the manner in which time zone tables must be consulted to convert to and from civil time; the IANA time zone database includes leap second information, and the sample code available from the same source uses that information to convert between TAI-based timestamps and local time. Conversion also runs into definitional problems prior to the 1972 commencement of the current form of UTC (see section UTC basis below).
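As a sketch of such a conversion (the two-entry table below is abbreviated and illustrative; a real implementation would load the full leap-second list from the IANA tz database):

```python
# Unix time numbers (POSIX) of the first second after each positive leap
# second, in ascending order -- abbreviated for illustration.
LEAPS_POSIX = [
    78_796_800,   # after the leap second of 30 June 1972
    94_694_400,   # after the leap second of 31 December 1972
    # ... further entries, through 1 January 2017 ...
]

def posix_from_leap_counting(t_with_leaps: int) -> int:
    """Strip counted leap seconds from a 'counts leap seconds' timestamp,
    recovering the POSIX Unix time number for the same instant."""
    t = t_with_leaps
    for boundary in LEAPS_POSIX:
        if t > boundary:  # this leap second was counted; remove it
            t -= 1
    return t
```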
This system, despite its superficial resemblance, is not Unix time. It encodes times with values that differ by several seconds from the POSIX time values. A version of this system, in which the epoch was 1970-01-01T00:00:00 TAI rather than 1970-01-01T00:00:10 TAI, was proposed for inclusion in ISO C's <time.h>, but only the UTC part was accepted in 2011. A tai_clock does, however, exist in C++20.
Representing the number
A Unix time number can be represented in any form capable of representing numbers. In some applications the number is simply represented textually as a string of decimal digits, raising only trivial additional problems. However, certain binary representations of Unix times are particularly significant.
The Unix time_t data type that represents a point in time is, on many platforms, a signed integer, traditionally of 32 bits (but see below), directly encoding the Unix time number as described in the preceding section. A signed 32-bit value covers about 68 years before and after the 1970-01-01 epoch. The minimum representable date is Friday 1901-12-13, and the maximum representable date is Tuesday 2038-01-19. One second after 2038-01-19T03:14:07Z this representation will overflow in what is known as the year 2038 problem.
In some newer operating systems, time_t has been widened to 64 bits. This expands the times representable to approximately 292 billion years in both directions, which is over twenty times the present age of the universe.
There was originally some controversy over whether the Unix time_t should be signed or unsigned. If unsigned, its range in the future would be doubled, postponing the 32-bit overflow (by 68 years). However, it would then be incapable of representing times prior to the epoch. The consensus is for time_t to be signed, and this is the usual practice. The software development platform for version 6 of the QNX operating system has an unsigned 32-bit time_t, though older releases used a signed type.
The POSIX and Open Group Unix specifications include the C standard library, which includes the time types and functions defined in the <time.h> header file. The ISO C standard states that time_t must be an arithmetic type, but does not mandate any specific type or encoding for it. POSIX requires time_t to be an integer type, but does not mandate that it be signed or unsigned.
Unix has no tradition of directly representing non-integer Unix time numbers as binary fractions. Instead, times with sub-second precision are represented using composite data types that consist of two integers, the first being a time_t (the integral part of the Unix time), and the second being the fractional part of the time number in millionths (in struct timeval) or billionths (in struct timespec). These structures provide a decimal-based fixed-point data format, which is useful for some applications, and trivial to convert for others.
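A sketch of the composite format just described, mirroring the (seconds, billionths) layout of struct timespec; the Python names are assumptions for illustration:

```python
from typing import NamedTuple

class Timespec(NamedTuple):
    tv_sec: int   # whole seconds: the integral time_t part
    tv_nsec: int  # fractional part, in billionths of a second

def timespec_from_float(t: float) -> Timespec:
    """Convert a floating-point Unix time to a timespec-style pair."""
    sec = int(t // 1)                        # floor, so negative times work
    nsec = round((t - sec) * 1_000_000_000)  # 0 <= nsec < 1_000_000_000
    return Timespec(sec, nsec)

print(timespec_from_float(1262304000.25))
# Timespec(tv_sec=1262304000, tv_nsec=250000000)
```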
UTC basis
The present form of UTC, with leap seconds, is defined only starting from 1 January 1972. Prior to that, since 1 January 1961 there was an older form of UTC in which not only were there occasional time steps, which were by non-integer numbers of seconds, but also the UTC second was slightly longer than the SI second, and periodically changed to continuously approximate the Earth's rotation. Prior to 1961 there was no UTC, and prior to 1958 there was no widespread atomic timekeeping; in these eras, some approximation of GMT (based directly on the Earth's rotation) was used instead of an atomic timescale.
The precise definition of Unix time as an encoding of UTC is only uncontroversial when applied to the present form of UTC. The Unix epoch predating the start of this form of UTC does not affect its use in this era: the number of days from 1 January 1970 (the Unix epoch) to 1 January 1972 (the start of UTC) is not in question, and the number of days is all that is significant to Unix time.
The meaning of Unix time values below 63072000 (i.e., prior to 1 January 1972) is not precisely defined. The basis of such Unix times is best understood to be an unspecified approximation of UTC. Computers of that era rarely had clocks set sufficiently accurately to provide meaningful sub-second timestamps in any case. Unix time is not a suitable way to represent times prior to 1972 in applications requiring sub-second precision; such applications must, at least, define which form of UT or GMT they use.
The possibility of ending the use of leap seconds in civil time is being considered. A likely means to execute this change is to define a new time scale, called International Time, that initially matches UTC but thereafter has no leap seconds, thus remaining at a constant offset from TAI. If this happens, it is likely that Unix time will be prospectively defined in terms of this new time scale, instead of UTC. Uncertainty about whether this will occur makes prospective Unix time no less predictable than it already is: if UTC were simply to have no further leap seconds the result would be the same.
History
The earliest versions of Unix time had a 32-bit integer incrementing at a rate of 60 Hz, which was the rate of the system clock on the hardware of the early Unix systems. Timestamps stored this way could only represent a range of a little over two and a quarter years. The epoch being counted from was changed with Unix releases to prevent overflow, with midnight on 1 January 1971 and 1 January 1972 both being used as epochs during Unix's early development. Early definitions of Unix time also lacked timezones.
The current epoch of 1 January 1970 00:00:00 UTC was selected arbitrarily by Unix engineers because it was considered a convenient date to work with. The precision was changed to count in seconds in order to avoid short-term overflow.
When POSIX.1 was written, the question arose of how to precisely define time_t in the face of leap seconds. The POSIX committee considered whether Unix time should remain, as intended, a linear count of seconds since the epoch, at the expense of complexity in conversions with civil time or a representation of civil time, at the expense of inconsistency around leap seconds. Computer clocks of the era were not sufficiently precisely set to form a precedent one way or the other.
The POSIX committee was swayed by arguments against complexity in the library functions, and firmly defined the Unix time in a simple manner in terms of the elements of UTC time. This definition was so simple that it did not even encompass the entire leap year rule of the Gregorian calendar, and would make 2100 a leap year.
The 2001 edition of POSIX.1 rectified the faulty leap year rule in the definition of Unix time, but retained the essential definition of Unix time as an encoding of UTC rather than a linear time scale. Since the mid-1990s, computer clocks have been routinely set with sufficient precision for this to matter, and they have most commonly been set using the UTC-based definition of Unix time. This has resulted in considerable complexity in Unix implementations, and in the Network Time Protocol, to execute steps in the Unix time number whenever leap seconds occur.
Usage
Unix time is widely adopted in computing beyond its original application as the system time for Unix. Unix time is available in almost all system programming APIs, including those provided by both Unix-based and non-Unix operating systems. Almost all modern programming languages provide APIs for working with Unix time or converting them to another data structure. Unix time is also used as a mechanism for storing timestamps in a number of file systems, file formats, and databases.
The C standard library uses Unix time for all date and time functions, and Unix time is sometimes referred to as time_t, the name of the data type used for timestamps in C and C++. C's Unix time functions are defined as the system time API in the POSIX specification. The C standard library is used extensively in all modern desktop operating systems, including Microsoft Windows and Unix-like systems such as macOS and Linux, where it is a standard programming interface.
iOS provides a Swift API which defaults to using an epoch of 1 January 2001 but can also be used with Unix timestamps. Android uses Unix time alongside a timezone for its system time API.
Windows does not use Unix time for storing time internally but does use it in system APIs, which are provided in C++ and implement the C standard library specification. Unix time is used in the PE format for Windows executables.
Unix time is typically available in major programming languages and is widely used in desktop, mobile, and web application programming. Java provides an Instant object which holds a Unix timestamp in both seconds and nanoseconds. Python provides a time library which uses Unix time. JavaScript provides a Date library which provides and stores timestamps in milliseconds since the Unix epoch and is implemented in all modern desktop and mobile web browsers as well as in JavaScript server environments like Node.js.
Filesystems designed for use with Unix-based operating systems tend to use Unix time. APFS, the file system used by default across all Apple devices, and ext4, which is widely used on Linux and Android devices, both use Unix time in nanoseconds for file timestamps. Several archive file formats can store timestamps in Unix time, including RAR and tar. Unix time is also commonly used to store timestamps in databases, including in MySQL and PostgreSQL.
Limitations
Unix time was designed to encode calendar dates and times in a compact manner intended for use by computers internally. It is not intended to be easily read by humans or to store timezone-dependent values. It is also limited by default to representing time in seconds, making it unsuited for use when a more precise measurement of time is needed, such as when measuring the execution time of programs.
Range of representable times
Unix time by design does not require a specific size for the storage, but most common implementations of Unix time use a signed integer with the same size as the word size of the underlying hardware. As the majority of modern computers are 32-bit or 64-bit, and a large number of programs are still written in 32-bit compatibility mode, this means that many programs using Unix time are using signed 32-bit integer fields. The maximum value of a signed 32-bit integer is 2147483647, and the minimum value is -2147483648, making it impossible to represent dates before 13 December 1901 (at 20:45:52 UTC) or after 19 January 2038 (at 03:14:07 UTC). The early cutoff can have an impact on databases that are storing historical information; in some databases where 32-bit Unix time is used for timestamps, it may be necessary to store time in a different form of field, such as a string, to represent dates before 1901. The late cutoff is known as the Year 2038 problem and has the potential to cause issues as the date approaches, as dates beyond the 2038 cutoff would wrap back around to the start of the representable range in 1901.
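Both 32-bit limits can be checked with Python's standard library (a brief illustration; on platforms whose C library rejects pre-epoch values, the second call may raise OSError):

```python
from datetime import datetime, timezone

# Latest and earliest instants representable in a signed 32-bit time_t.
print(datetime.fromtimestamp(2**31 - 1, tz=timezone.utc))
# 2038-01-19 03:14:07+00:00
print(datetime.fromtimestamp(-2**31, tz=timezone.utc))
# 1901-12-13 20:45:52+00:00
```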
Date range cutoffs are not an issue with 64-bit representations of Unix time, as the effective range of dates representable with Unix time stored in a signed 64-bit integer is over 584 billion years, or 292 billion years in either direction of the 1970 epoch.
Alternatives
Unix time is not the only standard for time that counts away from an epoch. On Windows, the FILETIME type stores time as a count of 100-nanosecond intervals that have elapsed since 0:00 GMT on 1 January 1601. Windows epoch time is used to store timestamps for files and in protocols such as the Active Directory Time Service and Server Message Block.
The Network Time Protocol used to coordinate time between computers uses an epoch of 1 January 1900, counted in an unsigned 32-bit integer for seconds and another unsigned 32-bit integer for fractional seconds, which rolls over every 2^32 seconds (about once every 136 years).
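For illustration, the fixed offset between the NTP and Unix epochs is 70 years including 17 leap days; the constant and helper names below are assumptions:

```python
# Seconds from the NTP epoch (1 January 1900) to the Unix epoch
# (1 January 1970): 70 years, 17 of which contained a leap day.
NTP_TO_UNIX_OFFSET = (70 * 365 + 17) * 86_400  # 2_208_988_800

def unix_from_ntp(ntp_seconds: int) -> int:
    """Convert the integer-seconds part of an NTP timestamp to Unix time
    (valid within NTP era 0, i.e. before the 2036 rollover)."""
    return ntp_seconds - NTP_TO_UNIX_OFFSET
```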
Many applications and programming languages provide methods for storing time with an explicit timezone. There are also a number of time format standards which exist to be readable by both humans and computers, such as ISO 8601.
Notable events in Unix time
Unix enthusiasts have a history of holding "time_t parties" (pronounced "time tea parties") to celebrate significant values of the Unix time number. These are directly analogous to the new year celebrations that occur at the change of year in many calendars. As the use of Unix time has spread, so has the practice of celebrating its milestones. Usually it is time values that are round numbers in decimal that are celebrated, following the Unix convention of viewing time_t values in decimal. Among some groups round binary numbers are also celebrated, such as +2^30 (1073741824), which occurred at 13:37:04 UTC on Saturday, 10 January 2004.
The events that these celebrate are typically described as "N seconds since the Unix epoch", but this is inaccurate; as discussed above, due to the handling of leap seconds in Unix time the number of seconds elapsed since the Unix epoch is slightly greater than the Unix time number for times later than the epoch.
At 18:36:57 UTC on Wednesday, 17 October 1973, the first appearance of the date in ISO 8601 format within the digits of Unix time (119731017) took place.
At 01:46:40 UTC on Sunday, 9 September 2001, the Unix billennium (Unix time number 1000000000) was celebrated. The name billennium is a portmanteau of billion and millennium. Some programs which stored timestamps using a text representation encountered sorting errors, as in a text sort, times after the turnover starting with a 1 digit erroneously sorted before earlier times starting with a 9 digit. Affected programs included the popular Usenet reader KNode and e-mail client KMail, part of the KDE desktop environment. Such bugs were generally cosmetic in nature and quickly fixed once problems became apparent. The problem also affected many Filtrix document-format filters provided with Linux versions of WordPerfect; a patch was created by the user community to solve this problem, since Corel no longer sold or supported that version of the program.
At 23:31:30 UTC on Friday, 13 February 2009, the decimal representation of Unix time reached 1234567890 seconds. Google celebrated this with a Google Doodle. Parties and other celebrations were held around the world, among various technical subcultures, to celebrate the 1234567890th second.
In popular culture
Vernor Vinge's novel A Deepness in the Sky describes a spacefaring trading civilization thousands of years in the future that still uses the Unix epoch. The "programmer-archaeologist" responsible for finding and maintaining usable code in mature computer systems first believes that the epoch refers to the time when man first walked on the Moon, but then realizes that it is "the 0-second of one of humankind's first computer operating systems".
See also
Epoch (computing)
System time
Notes
References
External links
Unix Programmer's Manual, first edition
Personal account of the POSIX decisions by Landon Curt Noll
chrono-Compatible Low-Level Date Algorithms – algorithms to convert between Gregorian and Julian dates and the number of days since the start of Unix time
Calendaring standards
Network time-related software
Time measurement systems
Time scales
Time
1970 in computing | Unix time | [
"Physics",
"Astronomy"
] | 4,963 | [
"Physical quantities",
"Time measurement systems",
"Time",
"Astronomical coordinate systems",
"Spacetime",
"Time scales"
] |
1,006,293 | https://en.wikipedia.org/wiki/Biorobotics | Biorobotics is an interdisciplinary science that combines the fields of biomedical engineering, cybernetics, and robotics to develop new technologies that integrate biology with mechanical systems to develop more efficient communication, alter genetic information, and create machines that imitate biological systems.
Cybernetics
Cybernetics focuses on the communication and system of living organisms and machines that can be applied and combined with multiple fields of study such as biology, mathematics, computer science, engineering, and much more.
This discipline falls under the branch of biorobotics because of its combined field of study between biological bodies and mechanical systems. Studying these two systems allow for advanced analysis on the functions and processes of each system as well as the interactions between them.
History
Cybernetic theory is a concept that has existed for centuries, dating back to the era of Plato, who applied the term to the "governance of people". The term cybernétique was used in the early 1800s by physicist André-Marie Ampère. The term cybernetics was popularized in the late 1940s to refer to a discipline that touched on, but was separate from, established disciplines such as electrical engineering, mathematics, and biology.
Science
Cybernetics is often misunderstood because of the breadth of disciplines it covers. In the early 20th century, it was coined as an interdisciplinary field of study that combines biology, science, network theory, and engineering. Today, it covers all scientific fields with system related processes. The goal of cybernetics is to analyze systems and processes of any system or systems in an attempt to make them more efficient and effective.
Applications
Cybernetics is used as an umbrella term so applications extend to all systems related scientific fields such as biology, mathematics, computer science, engineering, management, psychology, sociology, art, and more. Cybernetics is used amongst several fields to discover principles of systems, adaptation of organisms, information analysis and much more.
Genetic engineering
Genetic engineering is a field that uses advances in technology to modify biological organisms. Through different methods, scientists are able to alter the genetic material of microorganisms, plants and animals to provide them with desirable traits. For example, making plants grow bigger, better, and faster. Genetic engineering is included in biorobotics because it uses new technologies to alter biology and change an organism's DNA for their and society's benefit.
History
Although humans have modified genetic material of animals and plants through artificial selection for millennia (such as the genetic mutations that developed teosinte into corn and wolves into dogs), genetic engineering refers to the deliberate alteration or insertion of specific genes to an organism's DNA. The first successful case of genetic engineering occurred in 1973 when Herbert Boyer and Stanley Cohen were able to transfer a gene with antibiotic resistance to a bacterium.
Science
There are three main techniques used in genetic engineering: The plasmid method, the vector method and the biolistic method.
Plasmid method
This technique is used mainly for microorganisms such as bacteria. Through this method, DNA molecules called plasmids are extracted from bacteria and placed in a lab where restriction enzymes break them down. As the enzymes break the molecules down, some develop a rough edge that resembles that of a staircase which is considered 'sticky' and capable of reconnecting. These 'sticky' molecules are inserted into another bacteria where they will connect to the DNA rings with the altered genetic material.
Vector method
The vector method is considered a more precise technique than the plasmid method as it involves the transfer of a specific gene instead of a whole sequence. In the vector method, a specific gene from a DNA strand is isolated through restriction enzymes in a laboratory and is inserted into a vector. Once the vector accepts the genetic code, it is inserted into the host cell where the DNA will be transferred.
Biolistic method
The biolistic method is typically used to alter the genetic material of plants. This method embeds the desired DNA with a metallic particle such as gold or tungsten in a high speed gun. The particle is then bombarded into the plant. Due to the high velocities and the vacuum generated during bombardment, the particle is able to penetrate the cell wall and inserts the new DNA into the cell.
Applications
Genetic engineering has many uses in the fields of medicine, research and agriculture. In the medical field, genetically modified bacteria are used to produce drugs such as insulin, human growth hormones and vaccines. In research, scientists genetically modify organisms to observe physical and behavioral changes to understand the function of specific genes. In agriculture, genetic engineering is extremely important as it is used by farmers to grow crops that are resistant to herbicides and to insects, such as Bt corn.
Bionics
Bionics is a medical engineering field and a branch of biorobotics consisting of electrical and mechanical systems that imitate biological systems, such as prosthetics and hearing aids. It's a portmanteau that combines biology and electronics.
History
The history of bionics goes as far back in time as ancient Egypt. A prosthetic toe made out of wood and leather was found on the foot of a mummy. The mummy was dated to around the fifteenth century B.C. Bionics can also be witnessed in ancient Greece and Rome. Prosthetic legs and arms were made for amputee soldiers. In the early 16th century, a French military surgeon by the name of Ambroise Paré became a pioneer in the field of bionics. He was known for making various types of upper and lower prosthetics. One of his most famous prosthetics, Le Petit Lorrain, was a mechanical hand operated by catches and springs. During the early 19th century, Alessandro Volta further advanced bionics. He set the foundation for the creation of hearing aids with his experiments. He found that electrical stimulation could restore hearing by inserting an electrical implant into the saccular nerve of a patient's ear. In 1945, the National Academy of Sciences created the Artificial Limb Program, which focused on improving prosthetics since there were a large number of World War II amputee soldiers. Since this creation, prosthetic materials, computer design methods, and surgical procedures have improved, creating modern-day bionics.
Science
Prosthetics
The important components that make up modern-day prosthetics are the pylon, the socket, and the suspension system. The pylon is the internal frame of the prosthetic, made up of metal rods or carbon-fiber composites. The socket is the part that connects the prosthetic to the person's residual limb; it contains a soft liner that makes the fit comfortable while remaining snug enough to stay on. The suspension system keeps the prosthetic on the limb, and is usually a harness made up of straps, belts, or sleeves.
The operation of a prosthetic can be designed in various ways: body-powered, externally powered, or myoelectrically powered. Body-powered prosthetics consist of cables attached to a strap or harness placed on the person's functional shoulder, allowing the person to manipulate and control the prosthetic as they see fit. Externally powered prosthetics use motors to power the prosthetic and buttons and switches to control it. Myoelectrically powered prosthetics are newer, more advanced prosthetics in which electrodes are placed on the muscles above the limb; the electrodes detect muscle contractions and send electrical signals to move the prosthetic. The downside to this type of prosthetic is that, if the electrodes are not placed correctly on the limb, the electrical impulses will fail to move it. TrueLimb is a specific brand of prosthetics that uses myoelectric sensors to give a person control of their bionic limb.
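The myoelectric control loop described above can be sketched as a simple threshold detector: the prosthetic acts only when the rectified, smoothed electrode signal is strong enough. This is a hypothetical illustration, not the control code of TrueLimb or any real device; the sample values and threshold are invented.

```python
# Hypothetical sketch of myoelectric control: the prosthetic acts only when
# the rectified, smoothed electrode signal crosses a calibration threshold.

def smoothed_emg(samples, window=4):
    """Rectify the raw electrode samples and apply a moving average."""
    rectified = [abs(s) for s in samples]
    levels = []
    for i in range(len(rectified)):
        chunk = rectified[max(0, i - window + 1): i + 1]
        levels.append(sum(chunk) / len(chunk))
    return levels

THRESHOLD = 0.5  # invented calibration value

raw = [0.02, -0.05, 0.1, 0.9, -1.1, 1.0, 0.8, 0.05, -0.02]  # fake EMG trace
for t, level in enumerate(smoothed_emg(raw)):
    command = "CLOSE_HAND" if level > THRESHOLD else "IDLE"
    print(f"t={t}: level={level:.2f} -> {command}")
```

If the electrodes were misplaced and the measured level never crossed the threshold, the hand would stay idle, which is the failure mode described above.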
Hearing aids
Four major components make up the hearing aid: the microphone, the amplifier, the receiver, and the battery. The microphone takes in outside sound, converts it into electrical signals, and sends those signals to the amplifier. The amplifier strengthens the electrical signal and sends it to the receiver. The receiver converts the electrical signal back into sound and sends it into the ear. Hair cells in the ear sense the vibrations from the sound, convert them into nerve signals, and send them to the brain so the sounds become coherent to the person. The battery simply powers the hearing aid.
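The four-stage chain amounts to a gain applied between two transductions. A minimal sketch, with invented sample values and gain:

```python
# Minimal sketch of the hearing-aid chain described above. The microphone
# stage is modelled as already-digitized samples; the receiver stage, which
# converts the signal back into sound, is modelled as a pass-through.

GAIN = 20.0  # invented amplifier gain

def amplifier(electrical_signal):
    """Increase the strength of the electrical signal."""
    return [GAIN * s for s in electrical_signal]

def receiver(electrical_signal):
    """Convert the electrical signal back into sound (identity stand-in)."""
    return electrical_signal

mic_signal = [0.001, -0.002, 0.003, -0.001]  # quiet input from the microphone
sound_out = receiver(amplifier(mic_signal))
print(sound_out)  # louder copy of the input, delivered into the ear
```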
Applications
Cochlear Implant
Cochlear implants are a type of hearing aid for people who are deaf. Instead of sending amplified sound into the ear canal as conventional hearing aids do, cochlear implants send electrical signals directly to the auditory nerve, the nerve responsible for carrying sound signals.
Bone-Anchored Hearing Aids
These hearing aids are also used for people with severe hearing loss. They are anchored to the bone of the skull, where they create sound vibrations that travel through the bone directly to the cochlea.
Artificial sensing skin
Artificial sensing-skin detects any pressure put on it and is meant for people who have lost any sense of feeling on parts of their bodies, such as diabetics with peripheral neuropathy.
Bionic eye
A bionic eye is a bioelectronic implant designed to restore vision for individuals with blindness.
Although the technology is still in development, it has enabled some legally blind individuals to distinguish letters again.
Replicating the retina, which contains millions of photoreceptors, and matching the human eye’s exceptional lensing and dynamic range capabilities pose significant challenges. Neural integration further complicates the process. Despite these difficulties, ongoing research and prototyping have led to several major achievements in recent years.
Orthopedic bionics
Orthopedic bionics consist of advanced bionic limbs that use a person's neuromuscular system to control the bionic limb. Advances in the understanding of brain function have led to the development and implementation of brain-machine interfaces (BMIs). BMIs process neural signals from the motor regions of the brain and relay them to the muscles of a specific limb to initiate movement. BMIs contribute greatly to restoring independent movement for people with a bionic limb or an exoskeleton.
Endoscopic robotics
These robots can remove polyps during a colonoscopy.
Animal-robot interactions
Animal-robot interaction is a field of biorobotics that focuses on the blending of robotic devices with individual animals or animal populations. The domain can be subdivided into two main branches: one that relates mechatronic devices to individual animals, and another that relates them to animal populations. Both branches have a variety of applications, ranging from animal cyborgs benefiting from animals' superior motor capabilities to ethological studies of animal collective behaviour. While this representation draws a globally accurate view of the domain, some animal-robot interactions cannot be strictly classified into one or the other of these branches, or are sometimes a mixture of both. This is the case, for example, for ethological robots that interact on a one-to-one basis, or when eusocial animals are considered as a single superorganism interacting with a single robotic device. In the latter case, the term bio-hybrid superorganism is used to describe the blending of a robotic device with a superorganism to enable interaction with, control of, and thus study of that superorganism.
Bio-Hybrid organisms
Mixed societies
Mixed societies blend together a set of animals (an animal society) with a set of robotic devices (an artificial society). Care should be taken when using the word society, as the noun could be misleading within the zoologist community involved in this domain; a more accurate word would be populations, which is also the one used in the rest of this section.
Typically, the robotic population is composed of robotic replicas of the target animal individuals, intended to integrate into the animal population. To do this, stimuli naturally perceived by the animals are emitted by the robotic individuals through different communication channels: visual cues, thermal pulses, vibration signals, etc. The degree to which the robotic individuals successfully blend with the animal population is referred to as bio-acceptance, and is often key to enabling further behavioural study of the target species.
Once interaction between the animal and robot populations is achieved by establishing relevant communication channels, mixed societies offer the potential for adaptive robotic behaviours driven by real-time feedback from the animal population. By responding directly to animal behaviour, the robots can dynamically adjust their actions to better integrate into the group. This capability is particularly valuable for understanding collective behaviours in animal populations. Adaptive robots can be used to implement models of specific roles or interactions within a group, enabling the testing of hypotheses about coordination, decision-making, or social organisation. This approach bridges experimental and modelling techniques, in an attempt to offer insights into the underlying mechanisms of collective behaviour.
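A minimal sketch of such an adaptive loop: the robot tunes an emitted stimulus until the observed response of the animal population reaches a target. Everything here is hypothetical; the stimulus variable, the response curve standing in for real sensing, and the update rule are invented for illustration.

```python
# Hypothetical sketch of an adaptive robot in a mixed society: it tunes
# the intensity of an emitted stimulus (e.g. a vibration signal) until
# the observed fraction of animals aggregating near it reaches a target.

TARGET_FRACTION = 0.7   # desired share of animals near the robot (invented)
LEARNING_RATE = 0.5

def observe_aggregation(stimulus):
    """Stand-in for real sensing; an invented saturating response curve."""
    return stimulus / (stimulus + 1.0)

stimulus = 0.1
for step in range(10):
    observed = observe_aggregation(stimulus)
    print(f"step {step}: stimulus={stimulus:.3f}, aggregation={observed:.3f}")
    # adjust behaviour using real-time feedback from the population
    stimulus += LEARNING_RATE * (TARGET_FRACTION - observed)
```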
See also
Android (robot)
Bio-inspired robotics
Molecular machine#Biological
Biological devices
Biomechatronics
Biomimetics
Cultured neural networks
Cyborg
Cylon (reimagining)
Nanobot
Nanomedicine
Plantoid
Remote control animal
Replicant
Roborat
Technorganic
References
External links
The BioRobotics Institute, Scuola Superiore Sant'Anna, Pisa, Italy
The BioRobotics Lab, Robotics Institute, Carnegie Mellon University
Bioroïdes - A timeline of the popularization of the idea (in French)
Harvard BioRobotics Laboratory, Harvard University
Locomotion in Mechanical and Biological Systems (LIMBS) Laboratory, Johns Hopkins University
BioRobotics Lab in Korea
Laboratory of Biomedical Robotics and Biomicrosystems, Italy
Tiny backpacks for cells (MIT News)
Biologically Inspired Robotics Lab, Case Western Reserve University
Bio-Robotics and Human Modeling Laboratory - Georgia Institute of Technology
Biorobotics Laboratory at École Polytechnique Fédérale de Lausanne (Switzerland)
BioRobotics Laboratory, Free University of Berlin (Germany)
Biorobotics research group, Institute of Movement Science, CNRS/Aix-Marseille University (France)
Center for Biorobotics, Tallinn University of Technology (Estonia)
Biopunk
Biotechnology
Cyberpunk
Cybernetics
Fictional technology
Postcyberpunk
Health care robotics
Science fiction themes
Robotics | Biorobotics | [
"Engineering",
"Biology"
] | 2,974 | [
"Biotechnology",
"nan",
"Robotics",
"Automation"
] |
1,006,356 | https://en.wikipedia.org/wiki/Creode | Creode or chreod is a neologistic portmanteau term coined by the English 20th century biologist C. H. Waddington to represent the developmental pathway followed by a cell as it grows to form part of a specialized organ. Combining the Greek roots for "necessary" and "path," the term was inspired by the property of regulation. When development is disturbed by external forces, the embryo attempts to regulate its growth and differentiation by returning to its normal developmental trajectory.
Developmental biology
Waddington used the term along with canalisation and homeorhesis, which describes a system that returns to a steady trajectory, in contrast to homeostasis, which describes a system which returns to a steady state. Waddington explains development with the metaphor of a ball rolling down a hillside, where the hill's contours channel the ball in a particular direction. In the case of a pathway or creode which is deeply carved in the hillside, external disturbance is unlikely to prevent normal development. He notes that creodes tend to have steeper sides earlier in development, when external disturbance rarely suffices to alter the developmental trajectory. Small differences in placement atop the hill can lead to dramatically different results by the time the ball reaches the bottom. This represents the tendency of neighboring regions of the early embryo to develop into different organs with radically different structures. Since intermediate structures rarely exist between organs, each ball that rolls down the hill is "canalised" to a region distinct from other regions, just as an eye, for instance, is distinct from an ear.
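Waddington's ball-and-hillside metaphor can be made concrete with a toy numerical model: a 'ball' descending a one-dimensional landscape with two valleys, whose minima stand for two distinct developmental outcomes (creodes). Small perturbations are regulated away, and only a large disturbance switches the trajectory into the other valley. This is an illustrative sketch in Python, not a model Waddington himself proposed.

```python
# Toy illustration of canalisation: gradient descent on a double-well
# "epigenetic landscape" V(x) = (x^2 - 1)^2, whose two minima at x = -1
# and x = +1 stand for two distinct developmental outcomes (creodes).

def grad_V(x):
    return 4 * x * (x**2 - 1)  # derivative of (x^2 - 1)^2

def develop(x, disturbance=0.0, steps=200, dt=0.05):
    """Roll the ball downhill, applying one external disturbance midway."""
    for step in range(steps):
        if step == steps // 2:
            x += disturbance
        x -= dt * grad_V(x)
    return x

print(develop(0.3))                    # settles near +1
print(develop(0.3, disturbance=-0.5))  # small push: regulates back to +1
print(develop(0.3, disturbance=-2.0))  # large push: ends in the other creode
```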
Waddington refers to the network of creodes carved into the hillside as an "epigenetic landscape," meaning that the formation of the body depends on not only its genetic makeup but the different ways genes are expressed in different regions of the embryo. He expands his metaphor by describing the underside of the epigenetic landscape. Here we see that the "landscape" is really more like a giant sheet that would blow away except that a series of tension-bearing cables holds it down. The pegs that connect the cables to the ground are the genes. The cables themselves are the epigenetic factors that influence gene expression in various regions of the embryo. The depth and direction of the channels is thus determined by a combination of genetic makeup and the epigenetic feedback loops by which genes are regulated.
While Waddington does assert that the process of development is genetically driven, he makes no attempt to explain how this works and even offers evidence to the contrary. He observes, for instance, that genes ordinarily determine peripheral traits, such as eye color, rather than "focal" traits, such as the structure of the eye itself. Moreover, when genetic mutation influences basic structures, the result tends to be the complete transformation of a structure into another rather than piecemeal change, which Waddington illustrates with the developmental ball rolling out of one creode into another. Thus his account gives the impression that genes influence development, perhaps altering the course of a region of cells, without determining the endpoints toward which the embryo develops.
This interpretation is further reinforced by Waddington's discussion of the organization of the gene pool, where he points out that "the epigenetic process occurring during the development of the organism might be so buffered or canalized that the optimum end-result is produced irrespective of the genes which the individual contains." The more deeply creodes are carved into the epigenetic landscape, the weaker the influence of genes over development. He also argues that deep creodes will resist not only genetic but environmental pressures to change course. This phenomenon, which he calls "stabilizing selection," puts genes and environment on a par in secondary importance compared to the epigenetic system.
Waddington's emphasis on epigenetics over genes prefigured the current interest in evolutionary developmental biology. As Sean B. Carroll and others have explained, genes involved in development are roughly the same in all animal species, from insect to primate. Instead of mutations in developmental genes, evolution has been driven by changes in gene expression, namely which genes are expressed at which times and locations in the developing organism.
Architecture
Architectural theorist Sanford Kwinter described the concept of the chreod as "the most important concept of the 20th century." The word "chreod" also closely describes paths of decision within what Christopher Alexander has called configuration space, his term for what he notes that Stuart Kaufmann calls "fitness landscape." By Alexander's theory, because conscious human design decisions do not need to follow these chreods, conscious human design can lead to mixed results. Therefore, he proposes that discovering ways to allow architecture to follow these paths is the best way to get good results in the built environment. Alexander sees his theories of "The Fundamental Process", "structure preserving transformations" and "15 fundamental properties" which he outlines in his work The Nature of Order as instrumentally shaping paths through configuration space.
See also
Cell biology
Systems biology
References
Sources
C.H. Waddington, The Strategy of the Genes, George Allen & Unwin, 1957
Cell biology
1950s neologisms
Developmental biology | Creode | [
"Biology"
] | 1,058 | [
"Behavior",
"Cell biology",
"Developmental biology",
"Reproduction"
] |
1,006,468 | https://en.wikipedia.org/wiki/A%20Night%20at%20the%20Opera%20%28Blind%20Guardian%20album%29 | A Night at the Opera is the seventh studio album by the German power metal band Blind Guardian, released in 2002. It is named after the 1975 Queen album of the same name, which is itself named after the Marx brothers film of the same name.
This album continues a stylistic change from power metal into a more progressive sound, with multiple overlaid vocals, choirs, orchestral keys and guitar leads and less emphasis on powerful guitar riffs and heavy rhythms. As a result, drummer Thomen Stauch would leave the group, citing dissatisfaction with the direction the group was going in.
Album content
There are seven different studio versions and two official live versions of "Harvest of Sorrow" – two in English, two in Spanish ("Mies Del Dolor", "La Cosecha Del Dolor"), one in Italian ("Frutto Del Buio"), one in French ("Moisson de Peine"), and one in a mix of all of the versions except the English acoustic and Italian (also called "Harvest of the World").
The song "Battlefield" is featured as the music in the heavy metal edition of the Adult Swim game Robot Unicorn Attack.
Track listing
Lyrical references
The album features the concepts and themes familiar to Blind Guardian fans, such as historical battles and religious references.
"Precious Jerusalem" is based on the final days of Jesus of Nazareth and his temptation in the desert.
"Battlefield" is based on Song of Hildebrandt, an old German tale of a father and son who find themselves in a duel to the death.
"Under the Ice" has connections to the Iliad, focuses on Cassandra and what happened to her after the Trojan War, particularly from The Oresteia.
"Sadly Sings Destiny" is based on the religious aspect of the Messiah in the Old Testament, and tells of the crucifixion of Jesus from the point of view of a character who reluctantly helps fulfil the prophecy, by doing such things as building the True Cross and weaving the Crown of Thorns.
"The Maiden and the Minstrel Knight" is based on an episode from the story of Tristan und Isolde.
"Wait for an Answer" allegorically concerns the Nazi propaganda machine.
"The Soulforged" is based on the Dragonlance saga's tales of the mage Raistlin Majere.
"Age of False Innocence" is about Galileo Galilei.
"Punishment Divine" is about Friedrich Nietzsche's decline into insanity where he imagines himself being judged by a court of saints.
"And Then There Was Silence" is about Cassandra's visions of the coming Trojan War. It was inspired by Homer's Iliad and Odyssey and Virgil's Aeneid.
"Harvest of Sorrow" is based on Tolkien's tragic story of Túrin Turambar, which appears in the Silmarillion.
Personnel
Blind Guardian
Hansi Kürsch – lead and backing vocals
André Olbrich – lead, rhythm and acoustic guitars
Marcus Siepen – rhythm guitar
Thomen Stauch – drums & percussion
Guest musicians
Oliver Holzwarth – bass guitar
Mathias Wiesner – keyboards & orchestral arrangements
Michael Schüren – piano on "Age of False Innocence"
Pad Bender, Boris Schmidt & Sascha Pierro – keyboards and sound effects
Rolf Köhler, Thomas Hackmann, Olaf Senkbeil & Billy King – The Choir Company
Production
Charlie Bauerfeind – production, mixing, recording
Nordin Hammadi Amrani – assistant engineer, additional recordings
Clemens von Witte – recordings (backing vocals)
Detlef – recordings (backing vocals)
Paul Raymond Gregory – cover painting
André Olbrich – front cover concept
Dennis "Sir" Kostroman – booklet design
Axel Jusseit – photos
Charts
References
2002 albums
Blind Guardian albums
Century Media Records albums
Virgin Records albums
Albums produced by Charlie Bauerfeind
Cultural depictions of the Marx Brothers
Cassandra
Depictions of Jesus in music
Cultural depictions of Galileo Galilei
Cultural depictions of Friedrich Nietzsche | A Night at the Opera (Blind Guardian album) | [
"Astronomy"
] | 809 | [
"Cultural depictions of astronomers",
"Cultural depictions of Galileo Galilei"
] |
1,006,597 | https://en.wikipedia.org/wiki/Nanorobotics | Nanoid robotics, or for short, nanorobotics or nanobotics, is an emerging technology field creating machines or robots, which are called nanorobots or simply nanobots, whose components are at or near the scale of a nanometer (10−9 meters). More specifically, nanorobotics (as opposed to microrobotics) refers to the nanotechnology engineering discipline of designing and building nanorobots with devices ranging in size from 0.1 to 10 micrometres and constructed of nanoscale or molecular components. The terms nanobot, nanoid, nanite, nanomachine and nanomite have also been used to describe such devices currently under research and development.
Nanomachines are largely in the research and development phase, but some primitive molecular machines and nanomotors have been tested. An example is a sensor having a switch approximately 1.5 nanometers across, able to count specific molecules in the chemical sample. The first useful applications of nanomachines may be in nanomedicine. For example, biological machines could be used to identify and destroy cancer cells. Another potential application is the detection of toxic chemicals, and the measurement of their concentrations, in the environment. Rice University has demonstrated a single-molecule car developed by a chemical process and including Buckminsterfullerenes (buckyballs) for wheels. It is actuated by controlling the environmental temperature and by positioning a scanning tunneling microscope tip.
Another definition is a robot that allows precise interactions with nanoscale objects, or can manipulate with nanoscale resolution. Such devices are more closely related to microscopy or scanning probe microscopy than to the description of nanorobots as molecular machines. Under the microscopy definition, even a large apparatus such as an atomic force microscope can be considered a nanorobotic instrument when configured to perform nanomanipulation. From this viewpoint, macroscale robots or microrobots that can move with nanoscale precision can also be considered nanorobots.
Nanorobotics theory
According to Richard Feynman, it was his former graduate student and collaborator Albert Hibbs who originally suggested to him (circa 1959) the idea of a medical use for Feynman's theoretical micro-machines (see biological machine). Hibbs suggested that certain repair machines might one day be reduced in size to the point that it would, in theory, be possible to (as Feynman put it) "swallow the surgeon". The idea was incorporated into Feynman's case study 1959 essay There's Plenty of Room at the Bottom.
Since nano-robots would be microscopic in size, it would probably be necessary for very large numbers of them to work together to perform microscopic and macroscopic tasks. These nano-robot swarms, both those unable to replicate (as in utility fog) and those able to replicate unconstrained in the natural environment (as in grey goo and synthetic biology), are found in many science fiction stories, such as the Borg nano-probes in Star Trek and The Outer Limits episode "The New Breed".
Some proponents of nano-robotics, in reaction to the grey goo scenarios that they earlier helped to propagate, hold the view that nano-robots able to replicate outside of a restricted factory environment do not form a necessary part of a purported productive nanotechnology, and that the process of self-replication, were it ever to be developed, could be made inherently safe. They further assert that their current plans for developing and using molecular manufacturing do not in fact include free-foraging replicators.
A detailed theoretical discussion of nanorobotics, including specific design issues such as sensing, power communication, navigation, manipulation, locomotion, and onboard computation, has been presented in the medical context of nanomedicine by Robert Freitas. Some of these discussions remain at the level of unbuildable generality and do not approach the level of detailed engineering.
Legal and ethical implications
Open technology
A document with a proposal on nanobiotech development using open design technology methods, as in open-source hardware and open-source software, has been addressed to the United Nations General Assembly. According to the document sent to the United Nations, in the same way that open source has in recent years accelerated the development of computer systems, a similar approach should benefit the society at large and accelerate nanorobotics development. The use of nanobiotechnology should be established as a human heritage for the coming generations, and developed as an open technology based on ethical practices for peaceful purposes. Open technology is stated as a fundamental key for such an aim.
Nanorobot race
In the same way that technology research and development drove the space race and the nuclear arms race, a race for nanorobots is occurring. There are ample grounds for including nanorobots among emerging technologies. Some of the reasons are that large corporations such as General Electric, Hewlett-Packard, Synopsys, Northrop Grumman, and Siemens have recently been working on the development and research of nanorobots; surgeons are getting involved and starting to propose ways to apply nanorobots to common medical procedures; universities and research institutes have been granted funds by government agencies exceeding $2 billion towards research developing nanodevices for medicine; and bankers are strategically investing with the intent to acquire beforehand the rights and royalties on future nanorobot commercialisation. Some aspects of nanorobot litigation and related issues linked to monopoly have already arisen. A large number of patents have been granted recently on nanorobots, mostly to patent agents, companies specializing solely in building patent portfolios, and lawyers. After a long series of patents and eventually litigations (see, for example, the invention of radio or the war of the currents), emerging fields of technology tend to become monopolies, which are normally dominated by large corporations.
Approaches to manufacturing
Manufacturing nanomachines assembled from molecular components is a very challenging task. Because of the level of difficulty, many engineers and scientists continue working cooperatively across multidisciplinary approaches to achieve breakthroughs in this new area of development; hence the importance of the following distinct techniques currently applied towards manufacturing nanorobots:
Biochip
The joint use of nanoelectronics, photolithography, and new biomaterials provides a possible approach to manufacturing nanorobots for common medical uses, such as surgical instrumentation, diagnosis, and drug delivery. This method of manufacturing at the nanotechnology scale has been in use in the electronics industry since 2008. Practical nanorobots could thus be integrated as nanoelectronic devices, allowing tele-operation and advanced capabilities for medical instrumentation.
Nubots
A nucleic acid robot (nubot) is an organic molecular machine at the nanoscale. DNA structure can provide means to assemble 2D and 3D nanomechanical devices. DNA based machines can be activated using small molecules, proteins and other molecules of DNA. Biological circuit gates based on DNA materials have been engineered as molecular machines to allow in-vitro drug delivery for targeted health problems. Such material based systems would work most closely to smart biomaterial drug system delivery, while not allowing precise in vivo teleoperation of such engineered prototypes.
Surface-bound systems
Several reports have demonstrated the attachment of synthetic molecular motors to surfaces. These primitive nanomachines have been shown to undergo machine-like motions when confined to the surface of a macroscopic material. The surface anchored motors could potentially be used to move and position nanoscale materials on a surface in the manner of a conveyor belt.
Positional nanoassembly
Nanofactory Collaboration, founded by Robert Freitas and Ralph Merkle in 2000 and involving 23 researchers from 10 organizations and 4 countries, focuses on developing a practical research agenda specifically aimed at developing positionally-controlled diamond mechanosynthesis and a diamondoid nanofactory that would have the capability of building diamondoid medical nanorobots.
Biohybrids
The emerging field of bio-hybrid systems combines biological and synthetic structural elements for biomedical or robotic applications. The constituent elements of bio-nanoelectromechanical systems (BioNEMS) are of nanoscale size, for example DNA, proteins, or nanostructured mechanical parts. Thiol-ene e-beam resists allow the direct writing of nanoscale features, followed by the functionalization of the natively reactive resist surface with biomolecules. Other approaches use a biodegradable material attached to magnetic particles that allow them to be guided around the body.
Bacteria-based
This approach proposes the use of biological microorganisms, like the bacterium Escherichia coli and Salmonella typhimurium. Thus the model uses a flagellum for propulsion purposes. Electromagnetic fields normally control the motion of this kind of biological integrated device. Chemists at the University of Nebraska have created a humidity gauge by fusing a bacterium to a silicon computer chip.
Virus-based
Retroviruses can be retrained to attach to cells and replace DNA. They go through a process called reverse transcription to deliver genetic packaging in a vector. Usually, these devices use the Pol and Gag genes of the virus for the capsid and delivery system. This process is called retroviral gene therapy: re-engineering cellular DNA by means of viral vectors. The approach has appeared in the form of retroviral, adenoviral, and lentiviral gene delivery systems. These gene therapy vectors have been used in cats to deliver genes into the genetically modified organism (GMO), causing it to display the trait.
Magnetic helical nanorobots
Research has led to the creation of helical silica particles coated with magnetic materials that can be maneuvered using a rotating magnetic field.
Such nanorobots do not depend on chemical reactions to fuel their propulsion. A triaxial Helmholtz coil can provide a directed rotating field in space. It has been shown that such nanomotors can measure the viscosity of non-Newtonian fluids at a resolution of a few microns. This technology promises the creation of viscosity maps inside cells and the extracellular milieu. Such nanorobots have been demonstrated to move in blood. Researchers have managed to controllably move such nanorobots inside cancer cells, allowing them to trace out patterns inside a cell. Nanorobots moving through the tumor microenvironment have demonstrated the presence of sialic acid in the cancer-secreted extracellular matrix.
Summary of helical nanorobots
A magnetic helical nanorobot consists of at least two components: a helical body and a magnetic material. The helical body provides a structure capable of translation along the helical axis, while the magnetic material allows the structure to rotate by following an externally applied rotating magnetic field. Magnetic helical nanorobots thus combine magnetic actuation with helical propulsion.
In short, magnetic helical nanorobots translate rotational motion into translational movement through a fluid in low-Reynolds-number environments. These nanorobots have been inspired by naturally occurring swimmers such as Escherichia coli (E. coli), whose flagella rotate in a helical wave, and by flagellar and ciliary propulsion more generally.
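As a first approximation of this rotation-to-translation conversion, an ideal corkscrew advancing without slip moves forward one pitch per revolution; real low-Reynolds-number swimmers reach only a fraction of that. A back-of-envelope sketch in which every number is invented for illustration:

```python
# Back-of-envelope sketch of helical propulsion: an ideal screw with no slip
# advances one pitch per revolution; real micro/nanoswimmers reach only a
# fraction of that. Every number below is invented for illustration.

pitch_m = 2e-6          # helix pitch: 2 micrometres (assumed)
rotation_hz = 30.0      # rotating-field frequency (assumed)
slip_efficiency = 0.2   # fraction of the ideal speed achieved (assumed)

ideal_speed = pitch_m * rotation_hz             # m/s in the no-slip limit
estimated_speed = slip_efficiency * ideal_speed

print(f"ideal no-slip speed: {ideal_speed * 1e6:.1f} um/s")        # 60.0 um/s
print(f"estimated actual speed: {estimated_speed * 1e6:.1f} um/s")  # 12.0 um/s
```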
Movement of magnetic helical nanorobots
One approach to the wireless manipulation of helical swimmers is an externally applied rotating magnetic field, which can be generated with a Helmholtz coil. All magnetized objects within an externally imposed magnetic field have both forces and torques exerted on them. The helical swimmer rotates because of the torque exerted on its magnetic head, and the helical shape of its body converts this rotational movement into a propulsive force. Magnetic forces (f_m) are proportional to the gradient of the magnetic field (∇B) at the magnetized object and act to move the object towards local field maxima. Magnetic torques (τ) are proportional to the magnetic field (B) and act to align the internal magnetization of the object (M) with the field. The equations that express these interactions are as follows, where V is the volume of the magnetized object:
$\mathbf{f}_m = V\,(\mathbf{M}\cdot\nabla)\mathbf{B}$ (Equation 1)
$\boldsymbol{\tau} = V\,\mathbf{M}\times\mathbf{B}$ (Equation 2)
Equation 1 indicates that increasing the volume of the magnetic material proportionally increases the force experienced by it: if the volume is doubled, the force also doubles, assuming the magnetization (M) and the gradient of the magnetic field (∇B) remain constant. The same holds for the torque, which is likewise proportional to the volume.
This increase in magnetic dipoles enhances the overall magnetic response of the material to an external magnetic field, resulting in greater force and torque. Hence, the larger the magnetic material, the faster the helical swimmer can move.
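Equations 1 and 2 can be evaluated directly. A minimal numeric sketch with NumPy; the volume, magnetization, field, and gradient values are invented, and the gradient-tensor convention (grad_B[i][j] = ∂B_j/∂x_i) is an assumption of this sketch.

```python
import numpy as np

# Evaluate Equations 1 and 2 for a magnetized object in an external field.
# f = V (M . grad) B  and  tau = V (M x B). All values are invented.

V = 1e-18                       # volume of the magnetic material, m^3
M = np.array([0.0, 0.0, 5e5])   # magnetization, A/m
B = np.array([0.0, 1e-2, 0.0])  # magnetic field, T
grad_B = np.array([             # gradient of B (grad_B[i][j] = dB_j/dx_i), T/m
    [0.0, 0.0, 0.0],
    [0.0, 0.0, 0.0],
    [0.0, 0.0, 1.0],
])

force = V * (M @ grad_B)        # Equation 1: f_m = V (M . grad) B
torque = V * np.cross(M, B)     # Equation 2: tau = V (M x B)

print("force  [N]:", force)     # doubling V doubles both, as noted above
print("torque [N*m]:", torque)
```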
Movement of a helical swimmer with square magnetic head
To exploit the rotating magnetic field, a permanent magnet can be embedded in the helical swimmer's head, with its magnetization direction perpendicular to the swimmer's body. When a rotating magnetic field is applied, the swimmer's head experiences a magnetic torque, causing it to rotate; as the head rotates, the helical tail generates a force against the surrounding fluid, propelling the swimmer forward. According to Equation 2, the magnetic torque around the x-axis is zero at the initial position. After the magnet manipulator turns 45°, the magnetic field near the head of the square magnet turns through an angle around the x-axis; if the square magnet remains in its initial position, it is then subject to a magnetic torque around the x-axis, so the helical swimmer follows the magnetic field. If the magnet manipulator rotates one full turn, the magnetic field near the swimmer's head, projected onto the y-z plane, rotates a whole turn around the x-axis. The magnetic robot therefore rotates around the x-axis under the action of the rotating magnetic field, and the helical body converts this rotation into forward propulsion.
Example biomedical applications
Owing to their small scale and the propulsion their helical shape provides, helical swimmers can be used in biomedical applications such as targeted drug delivery and targeted cell delivery. In 2018, a biocompatible and biodegradable chitosan-based helical micro/nanoswimmer was proposed, loaded with doxorubicin (DOX), a common anticancer drug, and designed to deliver its payload to a desired location. Using UV light radiation at an intensity of 3.4 × 10−1 W/cm², a dose of 60% of the total DOX was released within 5 minutes once the swimmer approached the target location. The release rate was observed to slow after the initial 5 minutes, which was theorized to be caused by a decreasing diffusion rate of DOX molecules coming from the center of the swimmer. Another group's spirulina-based helical micro/nanoswimmer, also carrying DOX, used a different method for controlled drug release: once the swimmer had reached its destination, near-infrared (NIR) laser irradiation was used to heat the location and dissolve the swimmer into individual particles, releasing the drug in the process. Through multiple tests, it was found that weakly acidic external environments increased the release rate.
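The reported profile (fast release at first, slowing afterwards) is the shape of simple first-order release. A hedged sketch that fits the rate constant to the single reported data point (60% at 5 minutes); the first-order model itself is an assumption of this illustration, not taken from the study.

```python
import math

# First-order release model: fraction released f(t) = 1 - exp(-k t).
# k is chosen only so that f(5 min) = 0.60, matching the reported figure;
# the model itself is an assumption, not taken from the original study.

k = -math.log(1 - 0.60) / 5.0   # per minute, ~0.183

for t in [1, 5, 10, 20, 30]:
    released = 1 - math.exp(-k * t)
    print(f"t = {t:2d} min: {released * 100:5.1f}% of DOX released")
```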
Using magnetic helical micro/nanorobots for cell transport can also open opportunities for treating male infertility, repairing damaged tissue, and cell assembly. In 2015, a helical micro-/nanomotor with a holding ring on the head was used to successfully capture and transport sperm cells with motion deficiencies. The helix device would approach the sperm cell's tail and confine it within the body of the micro-/nanomotor, then use the holding ring to loosely capture the head of the sperm cell to prevent escape. After reaching the target location, the sperm cell would be released into the membrane of the oocyte by reversing the rotation of the helix device. This was considered an efficient strategy that also reduced the risk of damage to the sperm cells.
3D printing
3D printing is the process by which a three-dimensional structure is built through various processes of additive manufacturing. Nanoscale 3D printing involves many of the same processes, incorporated at a much smaller scale. To print a structure at the 5–400 μm scale, the precision of the 3D printing machine must be greatly improved. A two-step process, combining 3D printing with laser-etched plates, has been used as an improvement technique. To achieve nanoscale precision, a laser etching machine etches the details needed for each segment of the nanorobot into a plate. The plate is then transferred to the 3D printer, which fills the etched regions with the desired nanoparticle. The 3D printing process is repeated until the nanorobot is built from the bottom up.
This 3D printing process has many benefits. First, it increases the overall accuracy of the printing process. Second, it has the potential to create functional segments of a nanorobot. The 3D printer uses a liquid resin, which is hardened at precisely the correct spots by a focused laser beam. The focal point of the laser beam is guided through the resin by movable mirrors and leaves behind a hardened line of solid polymer, just a few hundred nanometers wide. This fine resolution enables the creation of intricately structured sculptures as tiny as a grain of sand. This process takes place by using photoactive resins, which are hardened by the laser at an extremely small scale to create the structure. This process is quick by nanoscale 3D printing standards. Ultra-small features can be made with the 3D micro-fabrication technique used in multiphoton photopolymerisation. This approach uses a focused laser to trace the desired 3D object into a block of gel. Due to the nonlinear nature of photo excitation, the gel is cured to a solid only in the places where the laser was focused while the remaining gel is then washed away. Feature sizes of under 100 nm are easily produced, as well as complex structures with moving and interlocked parts.
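The principle that only the focal voxel is cured can be illustrated with a toy model: the laser focus traces a path through a block of virtual gel, and everything it does not visit is washed away. The geometry and resolution below are invented.

```python
import numpy as np

# Toy sketch of direct laser writing: only voxels visited by the focused
# laser spot are cured; the uncured gel is "washed away" afterwards.

gel = np.zeros((20, 20, 20), dtype=bool)  # block of photoactive resin

# Trace a simple helix with the laser focus (path coordinates are invented).
for t in np.linspace(0, 4 * np.pi, 400):
    x = int(10 + 6 * np.cos(t))
    y = int(10 + 6 * np.sin(t))
    z = int(t / (4 * np.pi) * 19)
    gel[x, y, z] = True  # nonlinear excitation cures only the focal voxel

print(f"cured voxels: {gel.sum()} of {gel.size}")  # a sparse solid structure
```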
Challenges in designing nanorobots
There are a number of challenges that must be addressed when designing and building nanoscale machines with movable parts. The most obvious is the need to develop very fine tools and manipulation techniques capable of assembling individual nanostructures into an operational device with high precision. A less evident challenge relates to the peculiarities of adhesion and friction at the nanoscale. It is impossible to take the existing design of a macroscopic device with movable parts and simply shrink it to the nanoscale. Such an approach will not work because of the high surface energy of nanostructures, which means that all contacting parts stick together, following the energy-minimization principle. The adhesion and static friction between parts can easily exceed the strength of the materials, so the parts would break before they start to move relative to each other. This leads to the need to design movable structures with minimal contact area.
In spite of the rapid development of nanorobots, most of which are designed for drug delivery, there is "still a long way to go before their commercialization and clinical applications can be achieved."
Potential uses
Nanomedicine
Potential uses for nanorobotics in medicine include early diagnosis and targeted drug-delivery for cancer, biomedical instrumentation, surgery, pharmacokinetics, monitoring of diabetes, and health care.
In such plans, future medical nanotechnology is expected to employ nanorobots injected into the patient to perform work at a cellular level. Such nanorobots intended for use in medicine should be non-replicating, as replication would needlessly increase device complexity, reduce reliability, and interfere with the medical mission.
Nanotechnology provides a wide range of new technologies for developing customized means to optimize the delivery of pharmaceutical drugs. Today, harmful side effects of treatments such as chemotherapy are commonly a result of drug delivery methods that don't pinpoint their intended target cells accurately. Researchers at Harvard and MIT, however, have been able to attach special RNA strands, measuring nearly 10 nm in diameter, to nanoparticles, filling them with a chemotherapy drug. These RNA strands are attracted to cancer cells. When the nanoparticle encounters a cancer cell, it adheres to it, and releases the drug into the cancer cell. This directed method of drug delivery has great potential for treating cancer patients while avoiding negative effects (commonly associated with improper drug delivery). The first demonstration of nanomotors operating in living organisms was carried out in 2014 at University of California, San Diego. MRI-guided nanocapsules are one potential precursor to nanorobots.
Another useful application of nanorobots is assisting in the repair of tissue cells alongside white blood cells. Recruiting inflammatory cells or white blood cells (which include neutrophil granulocytes, lymphocytes, monocytes, and mast cells) to the affected area is the first response of tissues to injury. Because of their small size, nanorobots could attach themselves to the surface of recruited white cells, to squeeze their way out through the walls of blood vessels and arrive at the injury site, where they can assist in the tissue repair process. Certain substances could possibly be used to accelerate the recovery.
The science behind this mechanism is quite complex. Passage of cells across the blood endothelium, a process known as transmigration, is a mechanism involving engagement of cell surface receptors to adhesion molecules, active force exertion and dilation of the vessel walls and physical deformation of the migrating cells. By attaching themselves to migrating inflammatory cells, the robots can in effect "hitch a ride" across the blood vessels, bypassing the need for a complex transmigration mechanism of their own.
In the United States, the Food and Drug Administration (FDA) regulates nanotechnology on the basis of size.
Nanocomposite particles that can be controlled remotely by an electromagnetic field have also been developed. This series of nanorobots, now listed in Guinness World Records, can be used to interact with biological cells. Scientists suggest that this technology can be used for the treatment of cancer.
Magnetic nanorobots have demonstrated capabilities to prevent and treat antimicrobial-resistant bacteria. The application of nanomotor implants has been proposed to achieve thorough disinfection of the dentine.
Cultural references
The Nanites are characters on the TV show Mystery Science Theater 3000. They're self-replicating, bio-engineered organisms that work on the ship and reside in the SOL's computer systems. They made their first appearance in Season 8.
Nanites are used in a number of episodes of the television series Travelers. They can be programmed and injected into injured people to perform repairs, and they first appear in season 1.
Nanites also feature in the Rise of Iron 2016 expansion for the video game Destiny in which SIVA, a self-replicating nanotechnology is used as a weapon.
Nanites (referred to more often as nanomachines) are often referenced in Konami's Metal Gear series, being used to enhance and regulate abilities and body functions.
In the Star Trek franchise, nanites serve as an important plot device. Starting with "Evolution" in the third season of The Next Generation, Borg nanoprobes perform the function of maintaining the Borg cybernetic systems, as well as repairing damage to the organic parts of a Borg. They generate new technology inside a Borg when needed, as well as protecting them from many forms of disease.
Nanites play a role in the Deus Ex video game series, being the basis of the nano-augmentation technology which gives augmented people superhuman abilities.
Nanites are also mentioned in the Arc of a Scythe book series by Neal Shusterman and are used to heal all nonfatal injuries, regulate bodily functions, and considerably lessen pain.
Nanites are also an integral part of Stargate SG1 and Stargate Atlantis, where grey goo scenarios are portrayed.
Nanomachines are central to the plot of the Silo book series, in which they are used as a weapon of mass destruction propagated via the air, and enter undetected into the human body where, when receiving a signal, they kill the recipient. They are then used to wipe out the majority of the human race.
See also
Diamondoid
Microswimmer
Molecular machine
Nanoelectromechanical systems
Nanomotors
Programmable matter
References
Further reading
External links
A Review in Nanorobotics – US Department of Energy
Nanomachines
Nanotechnology
Robotics | Nanorobotics | [
"Materials_science",
"Engineering",
"Biology"
] | 5,171 | [
"Materials science",
"Automation",
"Medical robotics",
"Robotics",
"Nanotechnology",
"Nanomachines",
"Medical technology"
] |
1,006,651 | https://en.wikipedia.org/wiki/Amorphous%20carbon | Amorphous carbon is free, reactive carbon that has no crystalline structure. Amorphous carbon materials may be stabilized by terminating dangling-π bonds with hydrogen. As with other amorphous solids, some short-range order can be observed. Amorphous carbon is often abbreviated to aC for general amorphous carbon, aC:H or HAC for hydrogenated amorphous carbon, or to ta-C for tetrahedral amorphous carbon (also called diamond-like carbon).
In mineralogy
In mineralogy, amorphous carbon is the name used for coal, carbide-derived carbon, and other impure forms of carbon that are neither graphite nor diamond. In a crystallographic sense, however, the materials are not truly amorphous but rather polycrystalline materials of graphite or diamond within an amorphous carbon matrix. Commercial carbon also usually contains significant quantities of other elements, which may also form crystalline impurities.
In modern science
With the development of modern thin film deposition and growth techniques in the latter half of the 20th century, such as chemical vapour deposition, sputter deposition, and cathodic arc deposition, it became possible to fabricate truly amorphous carbon materials.
True amorphous carbon has localized π electrons (as opposed to the aromatic π bonds in graphite), and its bonds form with lengths and distances that are inconsistent with any other allotrope of carbon. It also contains a high concentration of dangling bonds; these cause deviations in interatomic spacing (as measured using diffraction) of more than 5% as well as noticeable variation in bond angle.
The properties of amorphous carbon films vary depending on the parameters used during deposition. The primary method for characterizing amorphous carbon is through the ratio of sp2 to sp3 hybridized bonds present in the material. Graphite consists purely of sp2 hybridized bonds, whereas diamond consists purely of sp3 hybridized bonds. Materials that are high in sp3 hybridized bonds are referred to as tetrahedral amorphous carbon, owing to the tetrahedral shape formed by sp3 hybridized bonds, or as diamond-like carbon (owing to the similarity of many physical properties to those of diamond).
Experimentally, sp2 to sp3 ratios can be determined by comparing the relative intensities of various spectroscopic peaks (including EELS, XPS, and Raman spectroscopy) to those expected for graphite or diamond. In theoretical works, the sp2 to sp3 ratios are often obtained by counting the number of carbon atoms with three bonded neighbors versus those with four bonded neighbors. (This technique requires deciding on a somewhat arbitrary metric for determining whether neighboring atoms are considered bonded or not, and is therefore merely used as an indication of the relative sp2-sp3 ratio.)
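The theoretical counting procedure is straightforward to implement. A minimal sketch: atoms with three neighbors within a chosen cutoff are counted as sp2-like, and those with four as sp3-like. The cutoff and the random toy coordinates are invented, and, as noted above, the bonding metric is somewhat arbitrary.

```python
import numpy as np

# Count 3-coordinated (sp2-like) vs 4-coordinated (sp3-like) carbon atoms.
# Two atoms are "bonded" if they lie within CUTOFF of each other; the
# choice of metric is somewhat arbitrary, as the text notes.

CUTOFF = 1.7  # angstroms, a typical C-C bond-length cutoff (assumed)

def coordination_numbers(positions):
    diffs = positions[:, None, :] - positions[None, :, :]
    dists = np.linalg.norm(diffs, axis=-1)
    bonded = (dists < CUTOFF) & (dists > 1e-6)  # exclude self-distances
    return bonded.sum(axis=1)

positions = np.random.default_rng(0).uniform(0, 10, size=(200, 3))  # toy cell
coord = coordination_numbers(positions)
n_sp2 = int((coord == 3).sum())
n_sp3 = int((coord == 4).sum())
print(f"sp2-like atoms: {n_sp2}, sp3-like atoms: {n_sp3}")
if n_sp3:
    print(f"sp2/sp3 ratio: {n_sp2 / n_sp3:.2f}")
```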
Although the characterization of amorphous carbon materials by the sp2-sp3 ratio may seem to indicate a one-dimensional range of properties between graphite and diamond, this is most definitely not the case. Research is currently ongoing into ways to characterize and expand on the range of properties offered by amorphous carbon materials.
All practical forms of hydrogenated carbon (e.g. smoke, chimney soot, mined coal such as bitumen and anthracite) contain large amounts of polycyclic aromatic hydrocarbon tars, and are therefore almost certainly carcinogenic.
Q-carbon
Q-carbon, short for quenched carbon, is claimed to be a type of amorphous carbon that is ferromagnetic, electrically conductive, harder than diamond, and able to exhibit high-temperature superconductivity. A research group led by Professor Jagdish Narayan and graduate student Anagh Bhaumik at North Carolina State University announced the discovery of Q-carbon in 2015. They have published numerous papers on the synthesis and characterization of Q-carbon, but years later, there is no independent experimental confirmation of this substance and its properties.
According to the researchers, Q-carbon exhibits a random amorphous structure that is a mix of 3-way (sp2) and 4-way (sp3) bonding, rather than the uniform sp3 bonding found in diamonds. Carbon is melted using nanosecond laser pulses, then quenched rapidly to form Q-carbon, or a mixture of Q-carbon and diamond. Q-carbon can be made to take multiple forms, from nanoneedles to large-area diamond films. The researchers also reported the creation of nitrogen-vacancy nanodiamonds and Q-boron nitride (Q-BN), as well as the conversion of carbon into diamond and h-BN into c-BN at ambient temperatures and air pressures. The group obtained patents on q-materials and intended to commercialize them.
In 2018, a team at University of Texas at Austin used simulations to propose theoretical explanations of the reported properties of Q-carbon, including the record high-temperature superconductivity, ferromagnetism and hardness. However, their simulations have not been verified by other researchers.
See also
Glassy carbon
Diamond-like carbon
Carbon black
Soot
Carbon
References
Allotropes of carbon
Amorphous solids | Amorphous carbon | [
"Physics",
"Chemistry"
] | 1,071 | [
"Amorphous solids",
"Allotropes of carbon",
"Unsolved problems in physics",
"Allotropes"
] |
1,006,658 | https://en.wikipedia.org/wiki/Ephraim%20Katzir | Ephraim Katzir (; – 30 May 2009) was an Israeli biophysicist and Labor Party politician. He was the fourth President of Israel from 1973 until 1978.
Biography
Efraim Katchalski (later Katzir) was born in Kiev, in the Russian Empire (today in Ukraine), the son of Yudel-Gersh (Yehuda) and Tzilya Katchalski. In 1925 (several publications cite 1922), he immigrated to Mandatory Palestine with his family and settled in Jerusalem. In 1932, he graduated from Gymnasia Rehavia. A classmate, Shulamit Laskov, remembered him as the "shining star" of the grade level: "an especially tall young man, a little pudgy, whose goodness of heart was splashed across his smiling face." He excelled in all areas, "even in drawing and in gymnastics, where he was no slouch. He was the first in the class in arithmetic, and later on in mathematics. No one came close to him."
Like his elder brother, Aharon, Katzir was interested in science. He studied botany, zoology, chemistry and bacteriology at the Hebrew University of Jerusalem. In 1938 he received an MSc, and in 1941 he received a PhD degree. In 1939, he graduated from the first Haganah officers' course, and became commander of the student unit in the field forces (Hish).
He and his brother worked on the development of new methods of warfare. In late 1947, after the outbreak of the 1948 Palestine war and in anticipation of Israel's War of Independence, Katzir met the biochemist David Rittenberg, then working at Columbia University, and told him: "I need germs and poisons for the [impending/ongoing Israeli] war of independence." Rittenberg referred the matter to Chaim Weizmann, who initially dismissed the request, branded Katzir a "savage", and requested his dismissal from the Sieff Scientific Institute in Rehovot; weeks later he relented, and the dismissal was rescinded. Shortly afterwards, in March 1948, his brother Aharon, who decades later was one of the victims of the Lod Airport massacre, was appointed director of HEMED, a research unit in Mandatory Palestine involved in biological warfare. A decision to use such material against Palestinians was taken in early April. In May, Ben-Gurion appointed Ephraim to replace his brother as director of the HEMED research unit, given his success abroad in procuring biological warfare materials and the equipment to produce them.
Katzir was married to Nina (née Gottlieb), born in Poland, who died in 1986. As an English teacher, Nina developed a unique method for teaching language. As the president's wife, she introduced the custom of inviting the authors of children's books and their young readers to the President's Residence. She established the Nurit Katzir Jerusalem Theater Center in 1978 in memory of their deceased daughter, Nurit, who died from accidental carbon monoxide exposure. Another daughter, Irit, killed herself. They had a son, Meir, and three grandchildren. Katzir died on 30 May 2009 at his home in Rehovot.
Scientific career
After continuing his studies at the Polytechnic Institute of Brooklyn, Columbia University and Harvard University, he returned to Israel and became head of the Department of Biophysics at the Weizmann Institute of Science in Rehovot, an institution he helped to found. In 1966–1968, Katzir was Chief Scientist of the Israel Defense Forces. His initial research centered on simple synthetic protein models, but he also developed a method for binding enzymes, which helped lay the groundwork for what is now called enzyme engineering.
Presidency
In 1973, Golda Meir contacted Katzir at Harvard University, asking him to accept the presidency. He hebraicized his family name to Katzir, which means 'harvest'.
On 10 March 1973, Katzir was elected by the Knesset to serve as the fourth President of Israel, receiving 66 votes to the 41 cast in favour of his opponent Ephraim Urbach, and he assumed office on 24 May 1973. During his term, the UN General Assembly approved Resolution 3379, which condemned Zionism as racism. He was involved in the dispute between Mexico (where the resolution was initially promoted during the World Conference on Women, 1975) and the US Jewish community over a tourism boycott directed by the latter against that country.
In November 1977, he hosted President Anwar Sadat of Egypt in the first ever official visit of an Arab head of state. In 1978, he declined to stand for a second term due to his wife's illness, and was succeeded by Yitzhak Navon. After stepping down as President, he returned to his scientific work.
Awards and recognition
In 1959, Katzir was awarded the Israel Prize in life sciences.
In 1966, he was elected to the United States National Academy of Sciences
In 1972, he was awarded the Sir Hans Krebs Medal of the Federation of European Biochemical Societies
In 1976, he was elected to the American Philosophical Society
In 1977, he was elected a Foreign Member of the Royal Society (ForMemRS)
In 1985, he was awarded the Japan Prize.
In 2000, the Rashi Foundation established the Katzir Scholarship Program in honor of Katzir, one of the first members of its board of directors.
He is also a recipient of the Tchernichovsky Prize for exemplary translation.
He also received honorary degrees from various scientific societies and universities worldwide. The Department of Biotechnology Engineering at the ORT Braude Academic College of Engineering in Karmiel was named after him during his lifetime.
See also
List of Israel Prize recipients
References
External links
My Contributions to Science and Society, Ephraim Katchalski-Katzir
Ephraim Katzir Israel Ministry of Foreign Affairs
PM Netanyahu eulogizes former President Ephraim Katzir
Ephraim Katzir (Katchelsky) (1916–2009)
Ehud Gazit, A vision of a scientific superpower, Ha'aretz, 8 June 2009
1916 births
2009 deaths
Israeli Ashkenazi Jews
Columbia University alumni
Members of the French Academy of Sciences
Foreign members of the Royal Society
Foreign associates of the National Academy of Sciences
Harvard University alumni
Israel Prize in life sciences recipients who were biophysicists
Israel Prize in life sciences recipients
Israeli biophysicists
Israeli Labor Party politicians
Jewish scientists
Members of the Israel Academy of Sciences and Humanities
People from Kiev Governorate
Jews from the Russian Empire
People who emigrated to escape Bolshevism
Presidents of Israel
Soviet emigrants to Mandatory Palestine
Ukrainian Jews
People from Rehovot
Academic staff of Weizmann Institute of Science
Polytechnic Institute of New York University alumni
Hebrew University of Jerusalem alumni
20th-century Israeli biologists
Members of the American Philosophical Society
Weizmann Prize recipients
People related to biological warfare | Ephraim Katzir | [
"Biology"
] | 1,419 | [
"People related to biological warfare",
"Biological warfare"
] |
1,007,030 | https://en.wikipedia.org/wiki/Project%20Longshot | Project Longshot was a conceptual interstellar spacecraft design. It would have been an uncrewed starship (about 400 tonnes), intended to fly to and enter orbit around Alpha Centauri B powered by nuclear pulse propulsion.
History
Developed by the US Naval Academy and NASA, from 1987 to 1988, Longshot was designed to be built at Space Station Freedom, the precursor to the existing International Space Station. Similar to Project Daedalus, Longshot was designed with existing technology in mind, although some development would have been required; for example, the Project Longshot concept assumes "a three-order-of-magnitude leap over current propulsion technology".
Mission
Unlike Daedalus, which used an open-cycle fusion engine, Longshot would use a long-lived nuclear fission reactor for power. Initially generating 300 kilowatts, the reactor would power a number of lasers in the engine that would be used to ignite inertial confinement fusion similar to that in Daedalus. The main design difference is that Daedalus also relied on the fusion reaction to power the ship, whereas in the Longshot design the internal reactor would provide this power.
The reactor would also be used to power a laser for communications back to Earth, with a maximum power of 250 kW. For most of the journey, this would be used at a much lower power for sending data about the interstellar medium; but during the flyby, the main engine section would be discarded and the entire power capacity dedicated to communications at about 1 kilobit per second.
Longshot would have a mass of about 400 tonnes at the start of the mission, including 264 tonnes of helium-3/deuterium pellet fuel/propellant. The active mission payload, which includes the fission reactor but not the discarded main propulsion section, would have a mass of around 30 tonnes.
A difference in mission architecture between Longshot and the Daedalus study is that Longshot would go into orbit about the target star, while the higher-speed Daedalus would perform a one-shot fly-by lasting a comparatively short time.
A journey to Alpha Centauri aboard a Longshot spacecraft would take about a century.
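As a rough consistency check, not part of the original study, a century-long trip implies an average cruise speed of a few percent of the speed of light (taking Alpha Centauri to be about 4.37 light-years away):

```python
# Back-of-the-envelope check: average speed implied by a ~100-year trip
# to Alpha Centauri, ~4.37 light-years away.
LIGHT_YEAR_KM = 9.4607e12    # kilometres in one light-year
DISTANCE_LY = 4.37           # distance to Alpha Centauri
TRIP_YEARS = 100             # mission duration quoted above
SECONDS_PER_YEAR = 3.156e7

avg_speed_km_s = DISTANCE_LY * LIGHT_YEAR_KM / (TRIP_YEARS * SECONDS_PER_YEAR)
fraction_of_c = DISTANCE_LY / TRIP_YEARS   # light-years per year = fraction of c

print(f"average speed ~ {avg_speed_km_s:,.0f} km/s ({fraction_of_c:.3f} c)")
# -> roughly 13,000 km/s, about 4.4% of the speed of light
```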
See also
Alpha Centauri Bb
Breakthrough Starshot
Interstellar travel
Nuclear pulse propulsion
Project Daedalus
Project Orion (nuclear propulsion)
Project Icarus
Spacecraft propulsion
References
Bibliography
Beals, K. A., M. Beaulieu, F. J. Dembia, J. Kerstiens, D. L. Kramer, J. R. West and J. A. Zito. Project Longshot: An Unmanned Probe To Alpha Centauri. U S Naval Academy. NASA-CR-184718. 1988.
External links
(This article refers to an Alpha and Beta Centauri as the orbital target of the mission, but the correct nomenclature for these two components of the Alpha Centauri binary star system is Alpha Centauri A and B. Beta Centauri is an entirely different, unassociated star.)
Hypothetical spacecraft
Longshot
Interstellar travel
NASA programs
United States Naval Academy
Alpha Centauri
1987 in science
1988 in science | Project Longshot | [
"Astronomy",
"Technology"
] | 633 | [
"Hypothetical spacecraft",
"Astronomical hypotheses",
"Interstellar travel",
"Exploratory engineering"
] |
1,007,110 | https://en.wikipedia.org/wiki/Analytic%20proof | In mathematics, an analytic proof is a proof of a theorem in analysis that only makes use of methods from analysis, and that does not predominantly make use of algebraic or geometrical methods. The term was first used by Bernard Bolzano, who first provided a non-analytic proof of his intermediate value theorem and then, several years later, provided a proof of the theorem that was free from intuitions concerning lines crossing each other at a point, and so he felt happy calling it analytic (Bolzano 1817).
Bolzano's philosophical work encouraged a more abstract reading of when a demonstration could be regarded as analytic, where a proof is analytic if it does not go beyond its subject matter (Sebastik 2007). In proof theory, an analytic proof has come to mean a proof whose structure is simple in a special way, due to conditions on the kind of inferences that ensure none of them go beyond what is contained in the assumptions and what is demonstrated.
Structural proof theory
In proof theory, the notion of analytic proof provides the fundamental concept that brings out the similarities between a number of essentially distinct proof calculi, so defining the subfield of structural proof theory. There is no uncontroversial general definition of analytic proof, but for several proof calculi there is an accepted notion. For example:
In Gerhard Gentzen's natural deduction calculus the analytic proofs are those in normal form; that is, no formula occurrence is both the principal premise of an elimination rule and the conclusion of an introduction rule;
In Gentzen's sequent calculus the analytic proofs are those that do not use the cut rule (displayed below).
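For reference, the cut rule can be stated in standard sequent notation; this display is the usual textbook formulation rather than a quotation from Gentzen:

```latex
\[
\frac{\Gamma \vdash \Delta, A \qquad A, \Sigma \vdash \Pi}
     {\Gamma, \Sigma \vdash \Delta, \Pi}\ (\mathrm{cut})
\]
% The cut formula A occurs in the premises but not in the conclusion, so a
% proof using cut may pass through formulas that are not subformulas of the
% end sequent; cut-free proofs have the subformula property and are analytic.
```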
However, it is possible to extend the inference rules of both calculi so that there are proofs that satisfy the condition but are not analytic. A particularly tricky case is the analytic cut rule, used widely in the tableau method, which is a special case of the cut rule where the cut formula is a subformula of side formulae of the cut rule: a proof that contains an analytic cut is, by virtue of that rule, not analytic.
Furthermore, proof calculi that are not analogous to Gentzen's calculi have other notions of analytic proof. For example, the calculus of structures organises its inference rules into pairs, called the up fragment and the down fragment, and an analytic proof is one that only contains the down fragment.
See also
Proof-theoretic semantics
References
Bernard Bolzano (1817). Purely analytic proof of the theorem that between any two values which give results of opposite sign, there lies at least one real root of the equation. In Abhandlungen der königlichen böhmischen Gesellschaft der Wissenschaften Vol. V, pp. 225–248.
Frank Pfenning (1984). Analytic and Non-analytic Proofs. In Proc. 7th International Conference on Automated Deduction.
Jan Šebestik (2007). Bolzano's Logic. Entry in the Stanford Encyclopedia of Philosophy.
Proof theory
Methods of proof | Analytic proof | [
"Mathematics"
] | 628 | [
"Mathematical logic",
"Methods of proof",
"Proof theory"
] |
1,007,168 | https://en.wikipedia.org/wiki/Stored%20Waste%20Examination%20Pilot%20Plant | The Stored Waste Examination Pilot Plant (SWEPP) is a facility at the Idaho National Laboratory for nondestructively examining containers of radioactive waste to determine if they meet criteria to be stored at the Waste Isolation Pilot Plant. SWEPP is part of the Radioactive Waste Management Complex, located southwest of EBR-I.
External links
Radioactive Waste Management Complex at the Idaho National Laboratory
Industrial buildings and structures in Idaho
Radioactive waste
Nuclear technology in the United States | Stored Waste Examination Pilot Plant | [
"Physics",
"Chemistry",
"Technology"
] | 92 | [
"Nuclear chemistry stubs",
"Nuclear and atomic physics stubs",
"Hazardous waste",
"Radioactivity",
"Nuclear physics",
"Environmental impact of nuclear power",
"Radioactive waste"
] |
1,007,171 | https://en.wikipedia.org/wiki/Nightlight | A nightlight is a small light fixture, usually electrical, placed for comfort or convenience in dark areas or areas that may become dark at certain times, such as at night or during an emergency. Small long-burning candles serving a similar function are referred to as "tealights".
Uses and cultures
People usually use nightlights for the sense of security which having a light on provides, or to relieve fear of the dark, especially in young children. Nightlights are also useful to the general public by revealing the general layout of a room without requiring a major light to be switched on, for avoiding tripping over stairs, obstacles, or pets, or to mark an emergency exit. Exit signs often use tritium radioluminescence. Homeowners usually place nightlights in bathrooms, kitchens and hallways to avoid turning on the main light fixture, especially late at night, and causing their eyes to adjust to the brighter light.
Some frequent travelers carry small nightlights for temporary installation in their guestroom and bathroom, to avoid tripping or falls in an unfamiliar nighttime environment. Gerontologists have recommended use of nightlights to help prevent falls, which can often be life-threatening to the elderly.
The low cost of nightlights has enabled a proliferation of different decorative designs, some featuring superheroes and fantastical designs, while others feature the basic simplicity of a small luminous disc.
The 1990 song "Birdhouse in Your Soul" by They Might Be Giants is a song sung from the perspective of a nightlight.
Light source and variants
Early electrical nightlights used small incandescent lamps or small neon lamps to provide light, and were much safer than small candles using an open flame. The neon versions consumed very little energy and had a long life, but had a tendency to flicker on and off (reminiscent of a candle), which some users liked and others found annoying. In the 1960s, small nightlights appeared that featured a low-power electroluminescent panel emitting soft green or blue light; similar lights are still available today.
Some nightlights include a photocell, which enables them to switch off when the ambient light is sufficiently bright. Other designs also feature a built-in passive infrared sensor to detect motion, and only switch on when somebody is passing by in the dark. With the availability of low-cost LEDs, many different variants have become available, featuring different colours, sometimes changing automatically or in a user-controllable way.
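As a minimal sketch of the control logic such a combined photocell/PIR design might implement (the threshold value, units, and function names here are hypothetical, purely for illustration):

```python
AMBIENT_DARK_THRESHOLD = 50  # hypothetical light level (lux) below which it is "dark"

def lamp_should_be_on(ambient_lux: float, motion_detected: bool) -> bool:
    """Return True when the nightlight LED should be lit.

    The photocell keeps the lamp off in daylight; the optional PIR
    sensor additionally gates the lamp on nearby movement.
    """
    is_dark = ambient_lux < AMBIENT_DARK_THRESHOLD
    return is_dark and motion_detected

# Example: dusk with someone walking past -> lamp turns on
print(lamp_should_be_on(ambient_lux=12.0, motion_detected=True))   # True
print(lamp_should_be_on(ambient_lux=300.0, motion_detected=True))  # False (daylight)
```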
Safety hazard
The US Consumer Product Safety Commission (CPSC) reports that it receives about 10 reports per year in which nightlights placed close to flammable materials were cited as responsible for fires; it recommends nightlights with LED bulbs, which run cooler than the four- or seven-watt incandescent bulbs still used in some older products.
Potential health issues and benefits
A University of Pennsylvania study indicated that sleeping with the light on or with a nightlight was associated with a greater incidence of nearsightedness in children. However, a later study at Ohio State University contradicted the earlier conclusion. Both studies were published in the journal Nature.
Another study has indicated that sleeping with the light on may protect the eyes of diabetics from retinopathy, a condition that can lead to blindness. However, the initial study is still inconclusive.
The optimal sleeping light condition is said by some to be total darkness. If a nightlight is used within a sleeping area, it is recommended to choose a dim reddish or amber light to minimize disruptive effects on sleep cycles. In addition, nightlights may be useful in locations other than sleeping areas, such as hallways, bathrooms, or kitchens, to allow late night trips to be made without turning on the full light, while preserving a dark sleeping environment.
References
Light fixtures
Light | Nightlight | [
"Astronomy"
] | 780 | [
"Time in astronomy",
"Night"
] |
1,007,331 | https://en.wikipedia.org/wiki/Astaxanthin | Astaxanthin is a keto-carotenoid within a group of chemical compounds known as carotenones or terpenes. Astaxanthin is a metabolite of zeaxanthin and canthaxanthin, containing both hydroxyl and ketone functional groups.
It is a lipid-soluble pigment with red coloring properties, which result from the extended chain of conjugated (alternating double and single) double bonds at the center of the compound. The presence of the hydroxyl functional groups and the hydrophobic hydrocarbons render the molecule amphiphilic.
Astaxanthin is produced naturally in the freshwater microalgae Haematococcus pluvialis, the yeast fungus Xanthophyllomyces dendrorhous (also known as Phaffia rhodozyma) and the bacteria Paracoccus carotinifaciens. When the algae are stressed by lack of nutrients, increased salinity, or excessive sunshine, they create astaxanthin. Animals who feed on the algae, such as salmon, red trout, red sea bream, flamingos, and crustaceans (shrimp, krill, crab, lobster, and crayfish), subsequently reflect the red-orange astaxanthin pigmentation.
Astaxanthin is used as a dietary supplement for human, animal, and aquaculture consumption. Astaxanthin from algae, synthetic and bacterial sources is generally recognized as safe in the United States. The US Food and Drug Administration has approved astaxanthin as a food coloring (or color additive) for specific uses in animal and fish foods. The European Commission considers it a food dye with E number E161j. The European Food Safety Authority has set an Acceptable Daily Intake of 0.2 mg per kg body weight, as of 2019. As a food color additive, astaxanthin and astaxanthin dimethyldisuccinate are restricted for use in salmonid fish feed only.
Natural sources
Astaxanthin is present in most red-coloured aquatic organisms. The content varies from species to species, but also from individual to individual as it is highly dependent on diet and living conditions. Astaxanthin and other chemically related asta-carotenoids have also been found in a number of lichen species of the Arctic zone.
The primary natural sources for industrial production of astaxanthin comprise the following:
Euphausia pacifica (Pacific krill)
Euphausia superba (Antarctic krill)
Haematococcus pluvialis (algae)
Pandalus borealis (Arctic shrimp)
Astaxanthin concentrations vary considerably from one natural source to another.
Algae are the primary natural source of astaxanthin in the aquatic food chain. The microalgae Haematococcus pluvialis contains high levels of astaxanthin (about 3.8% of dry weight), and is the primary industrial source of natural astaxanthin.
In shellfish, astaxanthin is almost exclusively concentrated in the shells, with only low amounts in the flesh itself, and most of it only becomes visible during cooking as the pigment separates from the denatured proteins that otherwise bind it. Astaxanthin is extracted from Euphausia superba (Antarctic krill) and from shrimp processing waste.
Biosynthesis
Astaxanthin biosynthesis starts with three molecules of isopentenyl pyrophosphate (IPP) and one molecule of dimethylallyl pyrophosphate (DMAPP) that are combined by IPP isomerase and converted to geranylgeranyl pyrophosphate (GGPP) by GGPP synthase. Two molecules of GGPP are then coupled by phytoene synthase to form phytoene. Next, phytoene desaturase creates four double bonds in the phytoene molecule to form lycopene. After desaturation, lycopene cyclase first forms γ-carotene by converting one of the ψ acyclic ends of the lycopene into a β-ring, then subsequently converts the other to form β-carotene. From β-carotene, hydroxylases are responsible for the inclusion of two 3-hydroxy groups, and ketolases for the addition of two 4-keto groups, forming multiple intermediate molecules until the final molecule, astaxanthin, is obtained.
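Restating the pathway from the paragraph above as an ordered pipeline can make the sequence easier to follow; the sketch below encodes exactly the steps just described (enzyme groupings simplified):

```python
# Astaxanthin biosynthesis, restated from the text as (input, enzyme, product) steps.
PATHWAY = [
    ("3 IPP + 1 DMAPP", "IPP isomerase / GGPP synthase", "geranylgeranyl pyrophosphate (GGPP)"),
    ("2 GGPP",          "phytoene synthase",             "phytoene"),
    ("phytoene",        "phytoene desaturase",           "lycopene"),
    ("lycopene",        "lycopene cyclase",              "gamma-carotene, then beta-carotene"),
    ("beta-carotene",   "hydroxylases + ketolases",      "astaxanthin"),
]

for substrate, enzyme, product in PATHWAY:
    print(f"{substrate} --[{enzyme}]--> {product}")
```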
Synthetic sources
The structure of astaxanthin by synthesis was described in 1975. Nearly all commercially available astaxanthin for aquaculture is produced synthetically, with an annual market of about $1 billion in 2019.
An efficient synthesis from isophorone, cis-3-methyl-2-penten-4-yn-1-ol and a symmetrical C10-dialdehyde has been discovered and is used in industrial production. It combines these building blocks via an ethynylation and then a Wittig reaction. Two equivalents of the appropriate ylide combined with the dialdehyde in a solvent of methanol, ethanol, or a mixture of the two give astaxanthin in yields of up to 88%.
Metabolic engineering
The cost of astaxanthin extraction, high market price, and lack of efficient fermentation production systems, combined with the intricacies of chemical synthesis, discourage its commercial development. The metabolic engineering of bacteria (Escherichia coli) enables efficient astaxanthin production from beta-carotene via either zeaxanthin or canthaxanthin.
Structure
Stereoisomers
In addition to structural isomeric configurations, astaxanthin also contains two chiral centers, at the 3- and 3′-positions, resulting in three unique stereoisomers (3R,3′R; the meso form 3R,3′S; and 3S,3′S). While all three stereoisomers are present in nature, their relative distribution varies considerably from one organism to another. Synthetic astaxanthin contains a mixture of all three stereoisomers, in approximately 1:2:1 proportions.
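The approximately 1:2:1 ratio follows from the synthesis exerting no stereocontrol at either center: each center independently ends up R or S, and the RS and SR combinations are the same meso compound. A quick enumeration confirms this:

```python
from itertools import product
from collections import Counter

# Each of the two chiral centers (3 and 3') independently ends up R or S.
configs = Counter()
for c3, c3p in product("RS", repeat=2):
    if {c3, c3p} == {"R", "S"}:
        configs["3R,3'S (meso)"] += 1   # RS and SR are the same meso compound
    else:
        configs[f"3{c3},3'{c3p}"] += 1

print(configs)
# Counter({"3R,3'S (meso)": 2, "3R,3'R": 1, "3S,3'S": 1})  -> a 1:2:1 mixture
```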
Esterification
Astaxanthin exists in two predominant forms, non-esterified (yeast, synthetic) or esterified (algal) with fatty acid moieties of various lengths, whose composition is influenced by the source organism as well as growth conditions. The astaxanthin fed to salmon to enhance flesh coloration is in the non-esterified form.
The predominance of evidence supports a de-esterification of fatty acids from the astaxanthin molecule in the intestine prior to or concomitant with absorption resulting in the circulation and tissue deposition of non-esterified astaxanthin. European Food Safety Authority (EFSA) published a scientific opinion on a similar xanthophyll carotenoid, lutein, stating that "following passage through the gastrointestinal tract and/or uptake lutein esters are hydrolyzed to form free lutein again". While it can be assumed that non-esterified astaxanthin would be more bioavailable than esterified astaxanthin due to the extra enzymatic steps in the intestine needed to hydrolyse the fatty acid components, several studies suggest that bioavailability is more dependent on formulation than configuration.
Uses
Astaxanthin is used as a dietary supplement, and as a feed supplement and food colorant for salmon, crabs, shrimp, chickens and egg production.
For seafood and animals
The primary use of synthetic astaxanthin today is as an animal feed additive to impart coloration, including farm-raised salmon and chicken egg yolks. Synthetic carotenoid pigments colored yellow, red or orange represent about 15–25% of the cost of production of commercial salmon feed. In the 21st century, most commercial astaxanthin for aquaculture is produced synthetically.
Class action lawsuits were filed against some major grocery store chains for not clearly labeling the astaxanthin-treated salmon as "color added". The chains followed up quickly by labeling all such salmon as "color added". Litigation persisted with the suit for damages, but a Seattle judge dismissed the case, ruling that enforcement of the applicable food laws was up to government and not individuals.
Dietary supplement
The primary human application for astaxanthin is as a dietary supplement, and it remains under preliminary research. In 2020, the European Food Safety Authority reported that an intake of 8 mg astaxanthin per day from food supplements is safe for adults.
Role in the food chain
Lobsters, shrimp, and some crabs turn red when cooked because the astaxanthin, which was bound to the protein in the shell, becomes free as the protein denatures and unwinds. The freed pigment is thus available to absorb light and produce the red color.
Regulations
In April 2009, the United States Food and Drug Administration approved astaxanthin as an additive for fish feed only as a component of a stabilized color additive mixture. Color additive mixtures for fish feed made with astaxanthin may contain only those diluents that are suitable. The color additives astaxanthin, ultramarine blue, canthaxanthin, synthetic iron oxide, dried algae meal, Tagetes meal and extract, and corn endosperm oil are approved for specific uses in animal foods. Haematococcus algae meal (21 CFR 73.185) and Phaffia yeast (21 CFR 73.355) for use in fish feed to color salmonids were added in 2000.
In the European Union, astaxanthin-containing food supplements derived from sources that have no history of use as a source of food in Europe, fall under the remit of the Novel Food legislation, EC (No.) 258/97. Since 1997, there have been five novel food applications concerning products that contain astaxanthin extracted from these novel sources. In each case, these applications have been simplified or substantial equivalence applications, because astaxanthin is recognised as a food component in the EU diet.
References
Articles containing video clips
Carotenoids
Cyclohexenes
Food colorings
Secondary alcohols
Tetraterpenes | Astaxanthin | [
"Biology"
] | 2,170 | [
"Biomarkers",
"Carotenoids"
] |
1,007,484 | https://en.wikipedia.org/wiki/Heat%20recovery%20ventilation | Heat recovery ventilation (HRV), also known as mechanical ventilation heat recovery (MVHR) is a ventilation system that recovers energy by operating between two air sources at different temperatures. It is used to reduce the heating and cooling demands of buildings.
By recovering the residual heat in the exhaust gas, the fresh air introduced into the air conditioning system is preheated (or pre-cooled) before it enters the room, or the air cooler of the air conditioning unit performs heat and moisture treatment. A typical heat recovery system in buildings comprises a core unit, channels for fresh and exhaust air, and blower fans. Building exhaust air is used as either a heat source or heat sink, depending on the climate conditions, time of year, and requirements of the building. Heat recovery systems typically recover about 60–95% of the heat in the exhaust air and have significantly improved the energy efficiency of buildings.
Energy recovery ventilation (ERV) is the energy recovery process in residential and commercial HVAC systems that exchanges the energy contained in normally exhausted air of a building or conditioned space, using it to treat (precondition) the incoming outdoor ventilation air. The specific equipment involved may be called an Energy Recovery Ventilator, also commonly referred to simply as an ERV.
An ERV is a type of air-to-air heat exchanger that transfers latent heat as well as sensible heat. Because both temperature and moisture are transferred, ERVs are described as total enthalpic devices. In contrast, a heat recovery ventilator (HRV) transfers only sensible heat, and is therefore described as a sensible-only device. In other words, all ERVs are HRVs, but not all HRVs are ERVs. It is incorrect to use the terms HRV, AAHX (air-to-air heat exchanger), and ERV interchangeably.
During the warmer seasons, an ERV system pre-cools and dehumidifies; during cooler seasons the system humidifies and pre-heats. An ERV system helps HVAC design meet ventilation and energy standards (e.g., ASHRAE), improves indoor air quality and reduces total HVAC equipment capacity, thereby reducing energy consumption. ERV systems enable an HVAC system to maintain a 40–50% indoor relative humidity in essentially all conditions. ERVs must use power for a blower to overcome the pressure drop in the system, and hence incur a slight energy demand.
Working principle
A heat recovery system is designed to supply conditioned air to the occupied space to maintain a certain temperature. A heat recovery system helps keep a house ventilated while recovering heat being emitted from the inside environment. The purpose of heat recovery systems is to transfer thermal energy from one fluid to another fluid, from one fluid to a solid, or from a solid surface to a fluid, at different temperatures and in thermal contact. There is no direct interaction between fluid and fluid or fluid and solid in most heat recovery systems. In some heat recovery systems, fluid leakage is observed due to pressure differences between the fluids, resulting in a mixture of the two. The purpose of an energy recovery system is to reduce the energy required for heating, cooling, or ventilating the space by repurposing the energy in the exhaust air.
Types
Thermal wheel
Fixed plate heat exchanger
Fixed plate heat exchangers have no moving parts, and consist of alternating layers of plates that are separated and sealed. Typical flow is cross-current, and since the majority of plates are solid and non-permeable, only sensible heat is transferred.
The tempering of incoming fresh air is done by a heat or energy recovery core. In this case, the core is made of aluminum or plastic plates. Humidity levels are adjusted through the transferring of water vapor. This is done with a rotating wheel either containing a desiccant material or permeable plates.
Enthalpy plates were introduced in 2006 by Paul, a specialist manufacturer of ventilation systems for passive houses: a countercurrent air-to-air heat exchanger built from a humidity-permeable material. Polymer fixed-plate countercurrent energy recovery ventilators were introduced in 1998 by Building Performance Equipment (BPE), a residential, commercial, and industrial air-to-air energy recovery manufacturer. These heat exchangers can be introduced both as a retrofit for increased energy savings and fresh air, and as an alternative in new construction. In new construction situations, energy recovery effectively reduces the required heating/cooling capacity of the system. The percentage of the total energy saved depends on the efficiency of the device (up to 90% sensible) and the latitude of the building.
Due to the need to use multiple sections, fixed plate energy exchangers are often associated with high pressure drop and larger footprints. Due to their inability to offer a high amount of latent energy transfer these systems also have a high chance of frosting in colder climates.
The technology patented by the Finnish company RecyclingEnergy Int. Corp. is based on a regenerative plate heat exchanger taking advantage of the humidity of the air by cyclical condensation and evaporation, i.e. latent heat, enabling not only high annual thermal efficiency but also microbe-free plates thanks to a self-cleaning/washing method. The unit is therefore called an enthalpy recovery ventilator rather than a heat or energy recovery ventilator. The company's patented LatentHeatPump is based on its enthalpy recovery ventilator, with a COP of 33 in the summer and 15 in the winter.
Fixed plate heat exchangers are the most commonly used type of heat exchanger and have been developed for 40 years. Thin metal plates are stacked with a small spacing between plates. Two different air streams pass through these spaces, adjacent to each other. Heat transfer occurs as the temperature transfers through the plate from one air stream to the other. The efficiency of these devices has reached 90% sensible heat efficiency in transferring sensible heat from one air stream to another. The high levels of efficiency are attributed to the high heat transfer coefficients of the materials used, operational pressure and temperature range.
Heat pipes
Heat pipes are a heat recovery device that uses a multi-phase process to transfer heat from one air stream to another. Heat is transferred using an evaporator and condenser within a wicked, sealed pipe containing a fluid which undergoes a constant phase change to transfer heat. The fluid within the pipes changes from a fluid to a gas in the evaporator section, absorbing the thermal energy from the warm air stream. The gas condenses back to a fluid in the condenser section where the thermal energy is dissipated into the cooler air stream raising the temperature. The fluid/gas is transported from one side of the heat pipe to the other through pressure, wick forces or gravity, depending on the arrangement of the heat pipe.
Run-around
Run-around systems are hybrid heat recovery system that incorporates characteristics from other heat recovery technology to form a single device, capable of recovering heat from one air stream and delivering to another a significant distance away. The general case of run-around heat recovery, two fixed plate heat exchangers are located in two separate air streams and are linked by a closed loop containing a fluid that is continually pumped between the two heat exchangers. The fluid is heated and cooled constantly as it flows around the loop, providing heat recovery. The constant flow of the fluid through the loop requires pumps to move between the two heat exchangers. Though this is an additional energy demand, using pumps to circulate fluid is less energy intensive than fans to circulate air.
Phase change materials
Phase change materials, or PCMs, are a technology that is used to store sensible and latent heat within a building structure at a higher storage capacity than standard building materials. PCMs have been studied extensively due to their ability to store heat and transfer heating and cooling demands from conventional peak times to off-peak times.
The concept of the thermal mass of a building for heat storage, that the physical structure of the building absorbs heat to help cool the air, has long been understood and investigated. A study of PCMs in comparison to traditional building materials has shown that the thermal storage capacity of PCMs is twelve times higher than standard building materials over the same temperature range. The pressure drop across PCMs has not been investigated to be able to comment on the effect that the material may have on air streams. However, as the PCM can be incorporated directly into the building structure, this would not affect the flow in the same way other heat exchanger technologies do, it can be suggested that there is no pressure loss created by the inclusion of PCMs in the building fabric.
Applications
Fixed plate heat exchangers
Mardiana et al. integrated a fixed plate heat exchanger into a commercial wind tower, highlighting the advantages of this type of system as a means of zero energy ventilation which can be simply modified. Full scale laboratory testing was undertaken in order to determine the effects and efficiency of the combined system. A wind tower was integrated with a fixed plate heat exchanger and was mounted centrally in a sealed test room.
The results from this study indicate that the combination of a wind tower passive ventilation system and a fixed plate heat recovery device could provide an effective combined technology to recover waste heat from exhaust air and cool incoming warm air with zero energy demand. Though no quantitative data for the ventilation rates within the test room was provided, it can be assumed that due to the high-pressure loss across the heat exchanger that these were significantly reduced from the standard operation of a wind tower. Further investigation of this combined technology is essential in understanding the air flow characteristics of the system.
Heat pipes
Due to the low-pressure loss of heat pipe systems, more research has been conducted into the integration of this technology into passive ventilation than other heat recovery systems. Commercial wind towers were again used as the passive ventilation system for integrating this heat recovery technology. This further enhances the suggestion that commercial wind towers provide a worthwhile alternative to mechanical ventilation, capable of supplying and exhausting air at the same time.
Run-around systems
Flaga-Maryanczyk et al. conducted a study in Sweden which examined a passive ventilation system which integrated a run-around system using a ground source heat pump as the heat source to warm incoming air. Experimental measurements and weather data were taken from the passive house used in the study. A CFD model of the passive house was created with the measurements taken from the sensors and weather station used as input data. The model was run to calculate the effectiveness of the run-around system and the capabilities of the ground source heat pump.
Ground source heat pumps provide a reliable source of consistent thermal energy when buried 10–20 m below the ground surface. The ground temperature is warmer than the ambient air in winter and cooler than the ambient air in summer, providing both a heat source and a heat sink. It was found that in February, the coldest month in the climate, the ground source heat pump was capable of delivering almost 25% of the heating needs of the house and occupants.
Phase change materials
The majority of research interest in PCMs is the application of phase change material integration into traditional porous building materials such as concrete and wall boards. Kosny et al. analyzed the thermal performance of buildings that have PCM-enhanced construction materials within the structure. Analysis showed that the addition of PCMs is beneficial in terms of improving thermal performance.
A significant drawback of PCM used in a passive ventilation system for heat recovery is the lack of instantaneous heat transfer across different airstreams. Phase change materials are a heat storage technology, whereby the heat is stored within the PCM until the air temperature has fallen to a significant level where it can be released back into the air stream. No research has been conducted into the use of PCMs between two airstreams of different temperatures where continuous, instantaneous heat transfer can occur. An investigation into this area would be beneficial for passive ventilation heat recovery research.
Advantages and disadvantages
Types of energy recovery devices
Environmental impacts
Energy saving is one of the key issues for both fossil fuel consumption and the protection of the global environment. The rising cost of energy and global warming have underlined that developing improved energy systems is necessary to increase energy efficiency while reducing greenhouse gas emissions. One of the most effective ways to reduce energy demand is to use energy more efficiently, so waste heat recovery has become popular in recent years. About 26% of industrial energy is still wasted as hot gas or fluid in many countries. Over the last two decades, however, remarkable attention has been paid to recovering waste heat from various industries and to optimizing the units that absorb heat from waste gases. These efforts help reduce energy demand and, with it, global warming.
Energy consumption
Energy recovery ventilation
Importance
Nearly half of global energy is used in buildings, and half of the heating/cooling cost is caused by ventilation when it is done by the "open window" method according to the regulations. Moreover, power generation and grids are sized to meet peak demand. Ventilation with heat recovery is a cost-efficient, sustainable and quick way to reduce global energy consumption, provide better indoor air quality (IAQ), and protect buildings and the environment.
Methods of transfer
During the cooling season, the system works to cool and dehumidify the incoming, outside air. To do this, the system takes the rejected heat and sends it into the exhaust airstream. Subsequently, this air cools the condenser coil at a lower temperature than if the rejected heat had not entered the exhaust airstream. During the heating seasons, the system works in reverse. Instead of discharging the heat into the exhaust airstream, the system draws heat from the exhaust airstream in order to pre-heat the incoming air. At this stage, the air passes through a primary unit and then into the space being conditioned. With this type of system, it is normal during the cooling seasons for the exhaust air to be cooler than the ventilation air and, during the heating seasons, warmer than the ventilation air. It is for this reason the system works efficiently and effectively. The coefficient of performance (COP) will increase as the conditions become more extreme (i.e., more hot and humid for cooling and colder for heating).
Efficiency
The efficiency of an ERV system is the ratio of energy transferred between the two air streams compared with the total energy transported through the heat exchanger.
With the variety of products on the market, efficiency will vary as well. Some systems have heat exchange efficiencies as high as 70–80%, while others reach only 50%. Even though this lower figure is preferable to a basic HVAC system without recovery, it is not up to par with the rest of its class. Studies are being done to increase heat transfer efficiency to 90%.
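One standard way to express this ratio for the sensible part is the temperature-based heat-exchange effectiveness; the sketch below uses that textbook definition (the temperatures in the example are illustrative, not from any product datasheet):

```python
def sensible_effectiveness(t_outdoor: float, t_supply: float, t_exhaust_in: float) -> float:
    """Fraction of the available temperature difference recovered by the core.

    t_outdoor    - fresh air temperature entering the unit (deg C)
    t_supply     - fresh air temperature leaving the unit for the room (deg C)
    t_exhaust_in - room air temperature entering the unit (deg C)
    """
    return (t_supply - t_outdoor) / (t_exhaust_in - t_outdoor)

# Example: -5 C outdoors, 20 C indoors, supply air pre-heated to 15 C
print(f"{sensible_effectiveness(-5.0, 15.0, 20.0):.0%}")  # 80%
```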
The use of modern low-cost gas-phase heat exchanger technology will allow for significant improvements in efficiency. The use of high conductivity porous material is believed to produce an exchange effectiveness in excess of 90%, producing a five times improvement in energy recovery.
The Home Ventilating Institute (HVI) has developed a standard test for any and all units manufactured within the United States. Regardless, not all have been tested. It is imperative to investigate efficiency claims, comparing data produced by HVI as well as that produced by the manufacturer. (Note: all units sold in Canada are placed through the R-2000 program, a standard test equivalent to the HVI test).
Exhaust air heat pump
An exhaust air heat pump (EAHP) extracts heat from the exhaust air of a building and transfers the heat to the supply air, hot tap water and/or hydronic heating system (underfloor heating, radiators). This requires at least mechanical exhaust, but mechanical supply is optional; see mechanical ventilation. This type of heat pump requires a certain air exchange rate to maintain its output power. Since the inside air is approximately 20–22 degrees Celsius all year round, the maximum output power of the heat pump does not vary with the seasons or the outdoor temperature.
Air leaving the building when the heat pump's compressor is running is usually at around −1 °C in most versions. Thus, the unit extracts heat from air that needs to be changed anyway (at a rate of around half an air change per hour). Air entering the house is generally warmer than the air processed through the unit, so there is a net "gain". Care must be taken that these are only used in the correct type of house. Exhaust air heat pumps have minimum flow rates, so that when installed in a small flat the airflow chronically over-ventilates it and increases the heat loss by drawing in large amounts of unwanted outside air. Some models, though, can take in additional outdoor air to negate this, and this air is also fed to the compressor to avoid over-ventilation. Most earlier exhaust air heat pumps deliver a low heat output to the hot water and heating, just around 1.8 kW from the compressor/heat pump process; if that falls short of the building's requirements, additional heat is automatically triggered in the form of immersion heaters or an external gas boiler. The immersion heater top-up can be substantial if the wrong unit is selected: a unit with a 6 kW immersion heater operating at full output costs about £1 per hour to run.
Issues
Between 2009 and 2013, some 15,000 brand new social homes were built in the UK with NIBE EAHPs used as primary heating. Owners and housing association tenants reported crippling electricity bills. High running costs are usual with exhaust air heat pumps and should be expected, due to the very small heat recovery of these units. Typically the ventilation air stream is around 31 litres per second, and the heat recovery is no more than about 750 W.
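That recovery figure is consistent with the heat physically available in such an airstream; a rough check with standard air properties (the room and exhaust temperatures assumed below are illustrative):

```python
# Heat available from the exhaust airstream: Q = V_dot * rho * cp * dT
AIR_DENSITY = 1.2        # kg/m^3 at room temperature
AIR_CP = 1005.0          # J/(kg*K), specific heat of air

flow_m3_s = 0.031        # 31 litres per second, as quoted above
dt_kelvin = 21.0 - 1.0   # assumed: ~21 C room air cooled to ~1 C at the evaporator

q_watts = flow_m3_s * AIR_DENSITY * AIR_CP * dt_kelvin
print(f"recoverable heat ~ {q_watts:.0f} W")  # ~ 750 W
```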
All additional heat necessary to provide heating and hot water is from electricity, either compressor electrical input or immersion heater.
At outside temperatures below 0 degrees Celsius, this type of heat pump removes more heat from a home than it supplies. Over a year around 60% of the energy input to a property with an exhaust air heat pump will be from electricity.
Many families are still battling with developers to have their EAHP systems replaced with more reliable and efficient heating, noting the success of residents in Coventry.
See also
Air Infiltration and Ventilation Centre
Energy recycling
Green building
Heat exchanger
HVAC
List of low-energy building techniques
Low energy building
Low-energy house
Passive cooling
Passive house
Renewable heat
Seasonal thermal energy storage
Solar air conditioning
Solar air heat
Sustainable architecture
Sustainable design
Water heat recycling
Zero energy building
References
External links
Animation explaining simply how HRV works
Heat recovery in Industry
Energy and Heat Recovery Ventilators (ERV/HRV)
Write-up of Single Room MHRV (SRMHRV) in UK home
Builder Insight Bulletin - Heat Recovery Ventilation
http://www.engineeringtoolbox.com/heat-recovery-efficiency-d_201.html
Ventilation
Heating, ventilation, and air conditioning
Low-energy building
Energy recovery
Heating
Residential heating
Sustainable building
Energy conservation
Heat pumps
pl:Rekuperator | Heat recovery ventilation | [
"Engineering"
] | 3,972 | [
"Construction",
"Sustainable building",
"Building engineering"
] |
1,007,613 | https://en.wikipedia.org/wiki/Bell%20state | In quantum information science, the Bell states or EPR pairs are specific quantum states of two qubits that represent the simplest examples of quantum entanglement. The Bell states are a form of entangled and normalized basis vectors. This normalization implies that the overall probability of the particles being in one of the mentioned states is 1: $\langle \Phi | \Phi \rangle = 1$. Entanglement is a basis-independent result of superposition. Due to this superposition, measurement of the qubit will "collapse" it into one of its basis states with a given probability. Because of the entanglement, measurement of one qubit will "collapse" the other qubit to a state whose measurement will yield one of two possible values, where the value depends on which Bell state the two qubits are in initially. Bell states can be generalized to certain quantum states of multi-qubit systems, such as the GHZ state for three or more subsystems.
Understanding of Bell states is useful in analysis of quantum communication, such as superdense coding and quantum teleportation. These mechanisms cannot transmit information faster than the speed of light, a result known as the no-communication theorem.
Bell states
The Bell states are four specific maximally entangled quantum states of two qubits. They are in a superposition of 0 and 1, that is, a linear combination of the two states. Their entanglement means the following:
The qubit held by Alice (subscript "A") can be in a superposition of 0 and 1. If Alice measured her qubit in the standard basis, the outcome would be either 0 or 1, each with probability 1/2; if Bob (subscript "B") also measured his qubit, the outcome would be the same as for Alice. Thus, Alice and Bob would each seemingly have a random outcome. Through communication they would discover that, although their outcomes separately seemed random, these outcomes were perfectly correlated.
This perfect correlation at a distance is special: maybe the two particles "agreed" in advance, when the pair was created (before the qubits were separated), which outcome they would show in case of a measurement.
Hence, following Albert Einstein, Boris Podolsky, and Nathan Rosen in their famous 1935 "EPR paper", there is something missing in the description of the qubit pair given above, namely this "agreement", called more formally a hidden variable. In his famous paper of 1964, John S. Bell showed by simple probability theory arguments that these correlations (the one for the 0, 1 basis and the one for the +, − basis) cannot both be made perfect by the use of any "pre-agreement" stored in some hidden variables, but quantum mechanics predicts perfect correlations. In a more refined formulation known as the Bell–CHSH inequality, it is shown that a certain correlation measure cannot exceed the value 2 if one assumes that physics respects the constraints of local "hidden-variable" theory (a sort of common-sense formulation of how information is conveyed), but certain systems permitted in quantum mechanics can attain values as high as $2\sqrt{2}$. Thus, quantum theory violates the Bell inequality and the idea of local "hidden variables".
Bell basis
Four specific two-qubit states with the maximal value of $2\sqrt{2}$ are designated as "Bell states". They are known as the four maximally entangled two-qubit Bell states, and they form a maximally entangled basis, known as the Bell basis, of the four-dimensional Hilbert space for two qubits:

$|\Phi^+\rangle = \frac{1}{\sqrt{2}}\,(|00\rangle + |11\rangle)$

$|\Phi^-\rangle = \frac{1}{\sqrt{2}}\,(|00\rangle - |11\rangle)$

$|\Psi^+\rangle = \frac{1}{\sqrt{2}}\,(|01\rangle + |10\rangle)$

$|\Psi^-\rangle = \frac{1}{\sqrt{2}}\,(|01\rangle - |10\rangle)$
Creating Bell states via quantum circuits
Although there are many possible ways to create entangled Bell states through quantum circuits, the simplest takes a computational basis state as the input and contains a Hadamard gate and a CNOT gate (see picture). As an example, the pictured quantum circuit takes the two-qubit input $|00\rangle$ and transforms it to the first Bell state $|\Phi^+\rangle$. Explicitly, the Hadamard gate transforms $|0\rangle$ into the superposition $(|0\rangle + |1\rangle)/\sqrt{2}$. This then acts as a control input to the CNOT gate, which only inverts the target (the second qubit) when the control (the first qubit) is 1. Thus, the CNOT gate transforms the second qubit as follows: $(|00\rangle + |10\rangle)/\sqrt{2} \mapsto (|00\rangle + |11\rangle)/\sqrt{2}$.
For the four basic two-qubit inputs, $|00\rangle, |01\rangle, |10\rangle, |11\rangle$, the circuit outputs the four Bell states (listed above). More generally, the circuit transforms the input $|x, y\rangle$ in accordance with the equation

$|\beta(x, y)\rangle = \frac{1}{\sqrt{2}}\left(|0, y\rangle + (-1)^x\,|1, \bar{y}\rangle\right),$

where $\bar{y}$ is the negation of $y$.
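As a minimal sketch of this circuit using plain state vectors (the matrix conventions, with the first qubit as the CNOT control and basis ordering $|00\rangle, |01\rangle, |10\rangle, |11\rangle$, are implementation assumptions, not from the article):

```python
import numpy as np

# Single-qubit Hadamard and the two-qubit CNOT (control = first qubit).
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
I2 = np.eye(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

def bell_circuit(x: int, y: int) -> np.ndarray:
    """Run |xy> through (H on the first qubit) then CNOT, as described above."""
    ket = np.zeros(4)
    ket[2 * x + y] = 1.0                 # computational basis state |xy>
    return CNOT @ np.kron(H, I2) @ ket

for x in range(2):
    for y in range(2):
        print(f"|{x}{y}> ->", np.round(bell_circuit(x, y), 3))
# |00> -> [0.707 0 0 0.707], i.e. (|00> + |11>)/sqrt(2), the state |Phi+>
```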
Properties of Bell states
The result of a measurement of a single qubit in a Bell state is indeterminate, but upon measuring the first qubit in the z-basis, the result of measuring the second qubit is guaranteed to yield the same value (for the $\Phi^\pm$ Bell states) or the opposite value (for the $\Psi^\pm$ Bell states). This implies that the measurement outcomes are correlated. John Bell was the first to prove that the measurement correlations in a Bell state are stronger than could ever exist between classical systems. This hints that quantum mechanics allows information processing beyond what is possible with classical mechanics. In addition, the Bell states form an orthonormal basis and can therefore be distinguished by an appropriate measurement. Because Bell states are entangled states, information on the entire system may be known, while information on the individual subsystems remains unavailable. For example, the Bell state $|\Phi^+\rangle$ is a pure state, but the reduced density operator of the first qubit is a mixed state, meaning that not all the information on this first qubit is known. Bell states are either symmetric or antisymmetric with respect to the subsystems. Bell states are maximally entangled in the sense that their reduced density operators are maximally mixed; the multipartite generalization of Bell states in this spirit is called an absolutely maximally entangled (AME) state.
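This last point is easy to verify numerically with a density-matrix representation; in the sketch below the partial trace is computed by reshaping, and the first-qubit-major index ordering is an implementation convention:

```python
import numpy as np

# The Bell state (|00> + |11>)/sqrt(2) as a 4-dimensional state vector.
phi_plus = np.array([1, 0, 0, 1]) / np.sqrt(2)
rho = np.outer(phi_plus, phi_plus.conj())          # pure-state density matrix

# Partial trace over the second qubit: reshape to (2,2,2,2), trace axes 1 and 3.
rho_a = np.trace(rho.reshape(2, 2, 2, 2), axis1=1, axis2=3)
print(np.round(rho_a, 3))   # [[0.5 0. ] [0.  0.5]] = I/2, maximally mixed

# Purity check: Tr(rho^2) = 1 for the pure joint state, 0.5 for the reduced state.
print(np.trace(rho @ rho).real, np.trace(rho_a @ rho_a).real)  # 1.0 0.5
```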
Bell state measurement
The Bell measurement is an important concept in quantum information science: It is a joint quantum-mechanical measurement of two qubits that determines which of the four Bell states the two qubits are in.
A helpful example of quantum measurement in the Bell basis can be seen in quantum computing. If a CNOT gate is applied to qubits A and B, followed by a Hadamard gate on qubit A, a measurement can be made in the computational basis. The CNOT gate performs the act of un-entangling the two previously entangled qubits. This allows the information to be converted from quantum information to a measurement of classical information.
Quantum measurement obeys two key principles. The first, the principle of deferred measurement, states that any measurement can be moved to the end of the circuit. The second principle, the principle of implicit measurement, states that at the end of a quantum circuit, measurement can be assumed for any unterminated wires.
The following are applications of Bell state measurements:
Bell state measurement is the crucial step in quantum teleportation. The result of a Bell state measurement is used by one's co-conspirator to reconstruct the original state of a teleported particle from half of an entangled pair (the "quantum channel") that was previously shared between the two ends.
Experiments that utilize so-called "linear evolution, local measurement" techniques cannot realize a complete Bell state measurement. Linear evolution means that the detection apparatus acts on each particle independent of the state or evolution of the other, and local measurement means that each particle is localized at a particular detector registering a "click" to indicate that a particle has been detected. Such devices can be constructed from, for example, mirrors, beam splitters, and wave plates, and are attractive from an experimental perspective because they are easy to use and have a high measurement cross-section.
For entanglement in a single qubit variable, only three distinct classes out of four Bell states are distinguishable using such linear optical techniques. This means two Bell states cannot be distinguished from each other, limiting the efficiency of quantum communication protocols such as teleportation. If a Bell state is measured from this ambiguous class, the teleportation event fails.
Entangling particles in multiple qubit variables, such as (for photonic systems) polarization and a two-element subset of orbital angular momentum states, allows the experimenter to trace over one variable and achieve a complete Bell state measurement in the other. Leveraging so-called hyper-entangled systems thus has an advantage for teleportation. It also has advantages for other protocols such as superdense coding, in which hyper-entanglement increases the channel capacity.
In general, for hyper-entanglement in $n$ variables, one can distinguish between at most $2^{n+1} - 1$ classes out of the $4^n$ Bell states using linear optical techniques.
Bell state correlations
Independent measurements made on two qubits that are entangled in a Bell state correlate perfectly, provided each qubit is measured in the relevant basis. For the $|\Phi^+\rangle$ state, this means selecting the same basis for both qubits. If an experimenter chose to measure both qubits of a $|\Phi^+\rangle$ Bell state in the same basis, the qubits would appear positively correlated when measuring in the $z$-basis $\{|0\rangle, |1\rangle\}$, anti-correlated in the $y$-basis, and partially (probabilistically) correlated in other bases.
The correlations of the singlet state $|\Psi^-\rangle$ can be understood by measuring both qubits in the same basis, whatever basis is chosen, and observing perfectly anti-correlated results. More generally, the correlations can be understood by measuring the first qubit in one basis, the second qubit in the basis with inverted outcome labels, and observing perfectly positively correlated results.
Applications
Superdense coding
Superdense coding allows two individuals to communicate two bits of classical information by only sending a single qubit. The basis of this phenomenon is the entangled states or Bell states of a two qubit system. In this example, Alice and Bob are very far from each other, and have each been given one qubit of the entangled state.
$|\Phi^+\rangle = \frac{1}{\sqrt{2}}\left(|0\rangle_A \otimes |0\rangle_B + |1\rangle_A \otimes |1\rangle_B\right).$
In this example, Alice is trying to communicate two bits of classical information, one of the four two-bit strings: 00, 01, 10 or 11. If Alice chooses to send the two-bit message 11, she would perform the combined gate $ZX$ on her qubit. Similarly, if Alice wants to send 10, she would apply the phase flip $Z$; if she wanted to send 01, she would apply the bit flip $X$ to her qubit; and finally, if Alice wanted to send the two-bit message 00, she would do nothing to her qubit. Alice performs these quantum gate transformations locally, transforming the initial entangled state into one of the four Bell states.
The steps below show the necessary quantum gate transformations, and resulting Bell states, that Alice needs to apply to her qubit for each possible two bit message she desires to send to Bob.
$00:\; |\Phi^+\rangle = \frac{1}{\sqrt{2}}(|00\rangle + |11\rangle)$ (no gate)

$01:\; (X \otimes I)\,|\Phi^+\rangle = |\Psi^+\rangle = \frac{1}{\sqrt{2}}(|01\rangle + |10\rangle)$

$10:\; (Z \otimes I)\,|\Phi^+\rangle = |\Phi^-\rangle = \frac{1}{\sqrt{2}}(|00\rangle - |11\rangle)$

$11:\; (ZX \otimes I)\,|\Phi^+\rangle = |\Psi^-\rangle = \frac{1}{\sqrt{2}}(|01\rangle - |10\rangle)$
After Alice applies the desired transformations to her qubit, she sends it to Bob. Bob then performs a measurement on the Bell state, which projects the entangled state onto one of the four two-qubit basis vectors, one of which will coincide with the original two bit message Alice was trying to send.
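The whole protocol is easy to verify numerically with 4-dimensional state vectors; the sketch below follows the gate-to-message assignment used above, which is one common convention rather than the only possible one:

```python
import numpy as np

s2 = np.sqrt(2)
phi_plus = np.array([1, 0, 0, 1]) / s2          # shared (|00> + |11>)/sqrt(2)
I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])

# Alice's local encoding on her qubit (the first tensor factor).
ENCODE = {"00": I2, "01": X, "10": Z, "11": Z @ X}

BELL = {  # Bob's measurement basis
    "phi+": np.array([1, 0, 0, 1]) / s2,
    "phi-": np.array([1, 0, 0, -1]) / s2,
    "psi+": np.array([0, 1, 1, 0]) / s2,
    "psi-": np.array([0, 1, -1, 0]) / s2,
}

for msg, gate in ENCODE.items():
    state = np.kron(gate, I2) @ phi_plus
    outcome = max(BELL, key=lambda k: abs(BELL[k] @ state))
    print(msg, "->", outcome)
# Each two-bit message maps to a distinct Bell state, so Bob's Bell
# measurement recovers both bits from the single qubit Alice sends.
```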
Quantum teleportation
Quantum teleportation is the transfer of a quantum state over a distance. It is facilitated by entanglement between A, the giver, and B, the receiver of this quantum state. This process has become a fundamental research topic for quantum communication and computing. More recently, scientists have been testing its applications in information transfer through optical fibers. The process of quantum teleportation is defined as the following:
Alice and Bob share an EPR pair and each took one qubit before they became separated. Alice must deliver a qubit of information to Bob, but she does not know the state of this qubit and can only send classical information to Bob.
It is performed step by step as the following:
Alice sends her qubits through a CNOT gate.
Alice then sends the first qubit through a Hadamard gate.
Alice measures her qubits, obtaining one of four results, and sends this information to Bob.
Given Alice's measurements, Bob performs one of four operations on his half of the EPR pair and recovers the original quantum state.
The teleportation protocol can be expressed as a quantum circuit implementing these four steps in sequence.
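As a minimal state-vector simulation of those four steps (the qubit ordering, with the message qubit first, and the random number generator are implementation choices):

```python
import numpy as np

s2 = np.sqrt(2)
I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])
H = np.array([[1, 1], [1, -1]]) / s2
CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]])

def teleport(alpha, beta, rng=np.random.default_rng()):
    """Teleport alpha|0> + beta|1> from Alice to Bob (steps 1-4 above)."""
    psi = np.array([alpha, beta], dtype=complex)
    epr = np.array([1, 0, 0, 1]) / s2                 # shared EPR pair
    state = np.kron(psi, epr)                         # qubit order: [msg, A, B]

    state = np.kron(CNOT, I2) @ state                 # step 1: CNOT(msg -> A)
    state = np.kron(np.kron(H, I2), I2) @ state       # step 2: H on msg qubit

    # Step 3: Alice measures qubits (msg, A); outcome m encodes two bits.
    blocks = state.reshape(4, 2)                      # rows: Alice outcome, cols: Bob
    probs = (abs(blocks) ** 2).sum(axis=1)
    m = rng.choice(4, p=probs)
    bob = blocks[m] / np.sqrt(probs[m])               # Bob's post-measurement qubit

    # Step 4: Bob applies X^(m2) then Z^(m1) given Alice's two classical bits.
    m1, m2 = m >> 1, m & 1
    correction = np.linalg.matrix_power(Z, m1) @ np.linalg.matrix_power(X, m2)
    return correction @ bob

print(np.round(teleport(0.6, 0.8), 6))  # recovers (0.6, 0.8) for every outcome
```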
Quantum cryptography
Quantum cryptography is the use of quantum mechanical properties in order to encode and send information safely. The theory behind this process is the fact that it is impossible to measure a quantum state of a system without disturbing the system. This can be used to detect eavesdropping within a system.
The most common form of quantum cryptography is quantum key distribution. It enables two parties to produce a shared random secret key that can be used to encrypt messages. Its private key is created between the two parties through a public channel.
Quantum cryptography can be considered a state of entanglement between two multi-dimensional systems, also known as two-qudit (quantum digit) entanglement.
See also
Bell test experiments
Bell's inequality
EPR paradox
GHZ state
Dicke state
Superdense coding
Quantum teleportation
Quantum cryptography
Quantum circuits
Bell diagonal state
Notes
References
Quantum information science
Quantum states | Bell state | [
"Physics"
] | 2,648 | [
"Quantum states",
"Quantum mechanics"
] |
1,007,660 | https://en.wikipedia.org/wiki/Self-verifying%20theories | Self-verifying theories are consistent first-order systems of arithmetic, much weaker than Peano arithmetic, that are capable of proving their own consistency. Dan Willard was the first to investigate their properties, and he has described a family of such systems. According to Gödel's incompleteness theorem, these systems cannot contain the theory of Peano arithmetic nor its weak fragment Robinson arithmetic; nonetheless, they can contain strong theorems.
In outline, the key to Willard's construction of his system is to formalise enough of the Gödel machinery to talk about provability internally without being able to formalise diagonalisation. Diagonalisation depends upon being able to prove that multiplication is a total function (and in the earlier versions of the result, addition also). Addition and multiplication are not function symbols of Willard's language; instead, subtraction and division are, with the addition and multiplication predicates being defined in terms of these. Here, one cannot prove the sentence expressing totality of multiplication:
$\forall x\, \forall y\, \exists z\; M(x, y, z),$

where $M(x, y, z)$ is the three-place predicate which stands for $z = x \cdot y$.
When the operations are expressed in this way, provability of a given sentence can be encoded as an arithmetic sentence describing termination of an analytic tableau. Provability of consistency can then simply be added as an axiom. The resulting system can be proven consistent by means of a relative consistency argument with respect to ordinary arithmetic.
One can further add any true sentence of arithmetic to the theory while still retaining consistency of the theory.
References
External links
Dan Willard's home page.
Proof theory
Theories of deduction | Self-verifying theories | [
"Mathematics"
] | 318 | [
"Mathematical logic",
"Theories of deduction",
"Proof theory"
] |
1,007,853 | https://en.wikipedia.org/wiki/JCB%20%28heavy%20equipment%20manufacturer%29 | J.C. Bamford Excavators Limited (JCB) is a British multinational manufacturer of equipment for construction, agriculture, waste handling, and demolition. It was founded in 1945 and is based in Rocester, Staffordshire, England.
The word "JCB" is also often used colloquially as a generic description for mechanical diggers and excavators, and the word even appears in the Oxford English Dictionary, although it is still held as a trademark.
History
Joseph Cyril Bamford Excavators Ltd. was founded by Joseph Cyril Bamford in October 1945 in Uttoxeter, Staffordshire, England, where he rented a lock-up garage. In it, using a welding set which he bought second-hand for £1 from English Electric, he made his first vehicle, a tipping trailer, from war-surplus materials. The trailer's sides and floor were made from steel sheet that had been part of air raid shelters. On the same day as his son Anthony was born, he sold the trailer at a nearby market for £45 (plus a part-exchanged farm cart) and at once made another trailer. At one time he made vehicles in Eckersley's coal yard in Uttoxeter. The first trailer and the welding set have been preserved.
In 1948, six people were working for the company, and it made the first hydraulic tipping trailer in Europe. In 1950, it moved to an old cheese factory in Rocester, still employing six. A year later, Bamford began painting his products yellow. In 1953, he developed JCB's first backhoe loader, and the JCB logo appeared for the first time. It was designed by Derby Media and advertising designer Leslie Smith. In 1957, the firm launched the "hydra-digga", incorporating the excavator and the major loader as a single all-purpose tool useful for the agricultural and construction industries.
By 1964, JCB had sold over 3,000 3C backhoe loaders. The next year, the company introduced its first 360-degree excavator, the JCB 7.
In 1975, Anthony Bamford, Bamford's son, was made Chairman of the company.
In 1978, the Loadall machine was introduced. The next year, the firm started its operation in India. In 1991, the firm entered a joint venture with Sumitomo of Japan to produce excavators, which ended in 1998. Two years later, a JCB factory was completed in Pooler near Savannah, Georgia, in the US, and in 2012 a factory was opened in Brazil.
In 2005, JCB acquired the German equipment firm Vibromax. In the same year, it opened a new factory in Pudong, China. Planning of a new £40M JCB Heavy Products site began following the launch of an architectural design competition in 2007 managed by RIBA Competitions, and by the next year, the firm began to move from its old site on Pinfold Street in Uttoxeter to the new site beside the A50; the Pinfold Street site was demolished in 2009. During that year, JCB announced plans to make India its largest manufacturing hub. Its factory at Ballabgarh in Haryana was to become the world's largest backhoe loader manufacturing facility. Although JCB shed 2,000 jobs during the Great Recession, in 2010 it hired up to 200 new workers.
In 2013, JCB set up its fourth manufacturing facility in India. In 2014, it was reported that three out of every four pieces of construction equipment sold in India were JCBs, and that its Indian operations accounted for 17.5% of its total revenue. JCB-based memes have also become prevalent in India.
JCB began manufacturing 20-30 tonne excavators in Solnechnogorsky District in Russia in 2017. Due to trade sanctions imposed following the 2022 Russian invasion of Ukraine, JCB suspended its operations in Russia in March 2022.
In 2020, JCB launched www.jcbexplore.com - a website dedicated to promoting constructive play and outdoor activities for kids.
Products
Many of the vehicles produced by JCB are variants of the backhoe loader, including tracked or wheeled variants, mini and large version and other variations, such as forklift vehicles and telescopic handlers for moving materials to the upper floors of a building site. The company also produces wheeled loading shovels and articulated dump trucks.
Its JCB Fastrac range of tractors, which entered production in 1990, can drive at speeds of up to 75 km/h (47 mph) on roads and was shown on the BBC television programme Tomorrow's World, and years later as Jeremy Clarkson's tractor of choice in Top Gear. The firm makes a range of military vehicles, including the JCB HMEE. It licenses a range of rugged feature phones and smartphones designed for construction sites. The design and marketing contract was awarded to Data Select in 2010, which then lost the exclusive rights in 2013.
JCB power systems make a hydrogen combustion engine which aims to be cost effective by reusing parts from the company's Dieselmax engines.
JCB Insurance Services is a fully owned subsidiary of JCB that provides insurance for customers with funding from another fully owned subsidiary, JCB Finance.
JCB Dieselmax
In April 2006, JCB announced that they were developing a diesel-powered land speed record vehicle known as the 'JCB Dieselmax'. The car is powered by two modified JCB 444 diesel power plants, each using a two-stage turbocharger, with one engine driving the front wheels and the other the rear wheels.
On 22 August 2006, the Dieselmax, driven by Andy Green, broke the diesel engine land speed record. The following day, the record was broken again at a higher speed.
Controversies and criticism
Violation of EU antitrust law
In December 2000, JCB was fined €39.6M by the European Commission for violating European Union antitrust law. The fine related to restrictions on sales outside allotted territories, purchases between authorised distributors, bonuses and fees which restricted out of territory sales, and occasional joint fixing of resale prices and discounts across different territories. JCB appealed the decision, with the European Court of First Instance upholding portions of the appeal and reducing the original fine by 25%. JCB appealed to the European Court of Justice but this final appeal was rejected in 2006, with the court slightly increasing the reduced fine by €864,000.
Tax avoidance
In 2017, a Reuters study of JCB group accounts found that between 2001 and 2013, the JCB group paid £577M to JCB Research, an unlimited company that does not have to file public accounts and which has only two shares, both owned by Anthony Bamford. JCB Research has been described as an obscure company, allegedly worth £27,000, but which donated £2M to the Conservative Party in the run-up to the 2010 election, making it the largest donor. Ownership of the company, which has never filed accounts, is disputed by the Bamford brothers. According to a Guardian report, much of the Bamford money was held in shares in offshore trusts.
JCB Service, the main JCB holding company, is owned by a Dutch parent company, ‘Transmissions and engineering Netherlands BV’, which is ultimately controlled by “Bamford family interests”. According to Ethical Consumer, JCB has six subsidiaries in jurisdictions considered to be tax havens, in Singapore, the Netherlands, Hong Kong, Delaware and Switzerland.
Involvement in Israeli settlements
On 12 February 2020, the United Nations published a database of all business enterprises involved in certain specified activities related to the Israeli settlements in the Occupied Palestinian Territories, including East Jerusalem, and in the occupied Golan Heights. JCB has been listed on the database in light of its involvement in activities related to "the supply of equipment and materials facilitating the construction and the expansion of settlements and the wall, and associated infrastructures". The international community considers Israeli settlements built on land occupied by Israel to be in violation of international law.
In October 2020, the British government decided to investigate a complaint that JCB's sale of equipment to Israel did not comply with the human rights guidelines set by the Organisation for Economic Co-operation and Development. The UK National Contact Point (NCP), part of the UK's Department for International Trade, agreed to review a complaint against JCB submitted by a charity, Lawyers for Palestinian Human Rights. JCB said it had no "legal ownership" of its machinery once sold to Comasco, its sole distributor of JCB equipment in Israel.
Bailout loan
In 2020, JCB received a £600M loan in emergency financial aid from the UK government, during the coronavirus pandemic, despite its ultimate ownership being in the Netherlands and having reported a record £447M profit the previous year. Its chief executive Graeme Macdonald said: "Although not a public company, we are eligible for CCF because of our contribution to the UK economy. We don't expect to utilise it in the short-term but it gives us an insurance policy if there is further disruption from a second spike or other impact around the world."
Politics
JCB is a significant donor to the UK Conservative Party. Between 2007 and 2017, JCB and related Bamford entities donated £8.1m in cash or kind to the party. Between 2019 and 2021 JCB donated a further £2.5m.
In 2016, Anthony Bamford donated £100,000 to Vote Leave, the official pro-Brexit group, and wrote to JCB's 6,500 staff explaining why he supported the UK leaving the EU.
In October 2016, it was reported that JCB had left the CBI business lobby group in the summer of the same year due to the organisation's anti-Brexit stance. In May 2021, Anthony Bamford rejected an invitation to rejoin CBI, after previously having called it a "waste of time" that "didn’t represent my business or private companies".
References
External links
Construction equipment manufacturers of the United Kingdom
Mining equipment companies
Engine manufacturers of the United Kingdom
Manufacturing companies of England
Forklift truck manufacturers
Agricultural machinery manufacturers of the United Kingdom
Tractor manufacturers of the United Kingdom
Mobile phone manufacturers
English brands
Defence companies of the United Kingdom
Privately held companies of England
Family-owned companies of England
British companies established in 1945
Manufacturing companies established in 1945
Multinational companies headquartered in England
1945 establishments in England
Borough of East Staffordshire
Companies based in Staffordshire
Conservative Party (UK) donors
Electrical generation engine manufacturers
Automotive transmission makers | JCB (heavy equipment manufacturer) | [
"Engineering"
] | 2,164 | [
"Mining equipment",
"Mining equipment companies"
] |
1,007,903 | https://en.wikipedia.org/wiki/Generalized%20singular%20value%20decomposition | In linear algebra, the generalized singular value decomposition (GSVD) is the name of two different techniques based on the singular value decomposition (SVD). The two versions differ because one version decomposes two matrices (somewhat like the higher-order or tensor SVD) and the other version uses a set of constraints imposed on the left and right singular vectors of a single-matrix SVD.
First version: two-matrix decomposition
The generalized singular value decomposition (GSVD) is a matrix decomposition on a pair of matrices which generalizes the singular value decomposition. It was introduced by Van Loan in 1976 and later developed by Paige and Saunders, which is the version described here. In contrast to the SVD, the GSVD decomposes simultaneously a pair of matrices with the same number of columns. The SVD and the GSVD, as well as some other possible generalizations of the SVD, are extensively used in the study of the conditioning and regularization of linear systems with respect to quadratic semi-norms. In the following, let $\mathbb{F}$ denote either $\mathbb{R}$ or $\mathbb{C}$.
Definition
The generalized singular value decomposition of matrices and iswhere
is unitary,
is unitary,
is unitary,
is unitary,
is real diagonal with positive diagonal, and contains the non-zero singular values of in decreasing order,
,
is real non-negative block-diagonal, where with , , and ,
is real non-negative block-diagonal, where with , , and ,
,
,
,
.
We denote , , , and . While is diagonal, is not always diagonal, because of the leading rectangular zero matrix; instead is "bottom-right-diagonal".
Variations
There are many variations of the GSVD. These variations are related to the fact that it is always possible to multiply from the left by where is an arbitrary unitary matrix. We denote
, where is upper-triangular and invertible, and is unitary. Such matrices exist by RQ-decomposition.
. Then is invertible.
Here are some variations of the GSVD:
MATLAB (gsvd):
LAPACK (LA_GGSVD):
Simplified:
Generalized singular values
A generalized singular value of and is a pair such that
We have
By these properties we can show that the generalized singular values are exactly the pairs . We haveTherefore
This expression is zero exactly when and for some .
In, the generalized singular values are claimed to be those which solve . However, this claim only holds when , since otherwise the determinant is zero for every pair ; this can be seen by substituting above.
Generalized inverse
Define for any invertible matrix , for any zero matrix , and for any block-diagonal matrix. Then defineIt can be shown that as defined here is a generalized inverse of ; in particular a -inverse of . Since it does not in general satisfy , this is not the Moore–Penrose inverse; otherwise we could derive for any choice of matrices, which only holds for certain class of matrices.
Suppose , where and . This generalized inverse has the following properties:
Quotient SVD
A generalized singular ratio of and is . By the above properties, . Note that is diagonal, and that, ignoring the leading zeros, contains the singular ratios in decreasing order. If is invertible, then has no leading zeros, and the generalized singular ratios are the singular values, and and are the matrices of singular vectors, of the matrix . In fact, computing the SVD of is one of the motivations for the GSVD, as "forming and finding its SVD can lead to unnecessary and large numerical errors when is ill-conditioned for solution of equations". Hence the sometimes used name "quotient SVD", although this is not the only reason for using GSVD. If is not invertible, then is still the SVD of if we relax the requirement of having the singular values in decreasing order. Alternatively, a decreasing order SVD can be found by moving the leading zeros to the back: , where and are appropriate permutation matrices. Since rank equals the number of non-zero singular values, .
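As a small numerical illustration of this quotient property, here is a sketch in Python/NumPy (not a library GSVD routine; the matrix names A1 and A2 are hypothetical, A2 is taken square and therefore almost surely invertible, and explicitly forming the product is precisely the numerically fragile step cautioned against above, so this is only suitable for small well-conditioned examples):

    import numpy as np

    rng = np.random.default_rng(0)
    A1 = rng.standard_normal((5, 3))  # tall matrix of the pair
    A2 = rng.standard_normal((3, 3))  # square, hence (almost surely) invertible

    # With A2 invertible, the generalized singular ratios of the pair (A1, A2)
    # coincide with the ordinary singular values of A1 @ inv(A2).
    ratios = np.linalg.svd(A1 @ np.linalg.inv(A2), compute_uv=False)
    print(ratios)  # decreasing order, as in the decomposition above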
Construction
Let
be the SVD of , where is unitary, and and are as described,
, where and ,
, where and ,
by the SVD of , where , and are as described,
by a decomposition similar to a QR-decomposition, where and are as described.
ThenWe also haveThereforeSince has orthonormal columns, . ThereforeWe also have for each such that thatTherefore , and
Applications
The GSVD, formulated as a comparative spectral decomposition, has been successfully applied to signal processing and data science, e.g., in genomic signal processing.
These applications inspired several additional comparative spectral decompositions, i.e., the higher-order GSVD (HO GSVD) and the tensor GSVD.
It has equally found applications to estimate the spectral decompositions of linear operators when the eigenfunctions are parameterized with a linear model, i.e. a reproducing kernel Hilbert space.
Second version: weighted single-matrix decomposition
The weighted version of the generalized singular value decomposition (GSVD) is a constrained matrix decomposition with constraints imposed on the left and right singular vectors of the singular value decomposition. This form of the GSVD is an extension of the SVD as such. Given the SVD of an m×n real or complex matrix M,

$M = U \Sigma V^*,$

where

$U^* W_u U = V^* W_v V = I.$

Here I is the identity matrix, and U and V are orthonormal given their constraints ($U^* W_u U = I$ and $V^* W_v V = I$). Additionally, $W_u$ and $W_v$ are positive definite matrices (often diagonal matrices of weights). This form of the GSVD is the core of certain techniques, such as generalized principal component analysis and correspondence analysis.
The weighted form of the GSVD is called as such because, with the correct selection of weights, it generalizes many techniques (such as multidimensional scaling and linear discriminant analysis).
References
Further reading
LAPACK manual
Linear algebra
Singular value decomposition | Generalized singular value decomposition | [
"Mathematics"
] | 1,213 | [
"Linear algebra",
"Algebra"
] |
1,007,921 | https://en.wikipedia.org/wiki/Identification%20%28information%29 | For data storage, identification is the capability to find, retrieve, report, change, or delete specific data without ambiguity. This applies especially to information stored in databases. In database normalisation, the process of organizing the fields and tables of a relational database to minimize redundancy and dependency, unambiguous identification of records is the central, defining function of the discipline.
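As a concrete sketch of this capability (Python with the standard-library sqlite3 module; the table and values are hypothetical), a primary key is one common mechanism for identification without ambiguity:

    import sqlite3

    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
    con.executemany("INSERT INTO users VALUES (?, ?)", [(1, "Alice"), (2, "Alice")])

    # Two rows share the same name, but the key identifies exactly one of them,
    # so that row can be found, changed, or deleted without ambiguity.
    con.execute("UPDATE users SET name = 'Alice B.' WHERE id = 2")
    print(con.execute("SELECT id, name FROM users WHERE id = 2").fetchone())
    # -> (2, 'Alice B.')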
See also
Authentication
Domain Name System
Identification (disambiguation)
Forensic profiling
Profiling (information science)
Unique identifier
References
Data modeling | Identification (information) | [
"Engineering"
] | 394 | [
"Data modeling",
"Data engineering"
] |
1,008,028 | https://en.wikipedia.org/wiki/Sudo | sudo is a program for Unix-like computer operating systems that enables users to run programs with the security privileges of another user, by default the superuser. It originally stood for "superuser do", as that was all it did, and this remains its most common usage; however, the official Sudo project page lists it as "su 'do'". The current Linux manual pages for su define it as "substitute user", making the correct meaning of sudo "substitute user, do", because sudo can run a command as other users as well.
Unlike the similar command su, users must, by default, supply their own password for authentication, rather than the password of the target user. After authentication, and if the configuration file (typically /etc/sudoers) permits the user access, the system invokes the requested command. The configuration file offers detailed access permissions, including enabling commands only from the invoking terminal; requiring a password per user or group; requiring re-entry of a password every time or never requiring a password at all for a particular command line. It can also be configured to permit passing arguments or multiple commands.
History
Robert Coggeshall and Cliff Spencer wrote the original subsystem around 1980 at the Department of Computer Science at SUNY/Buffalo. Robert Coggeshall brought sudo with him to the University of Colorado Boulder. Between 1986 and 1993, the code and features were substantially modified by the IT staff of the University of Colorado Boulder Computer Science Department and the College of Engineering and Applied Science, including Todd C. Miller. The current version has been publicly maintained by OpenBSD developer Todd C. Miller since 1994, and has been distributed under an ISC-style license since 1999.
In November 2009 Thomas Claburn, in response to concerns that Microsoft had patented sudo, characterized such suspicions as overblown. The claims were narrowly framed to a particular GUI, rather than to the sudo concept.
The logo is a reference to an xkcd strip, where an order for a sandwich is accepted when preceded with 'sudo'.
Design
Unlike the command su, users supply their personal password to sudo (if necessary) rather than that of the superuser or other account. This allows authorized users to exercise altered privileges without compromising the secrecy of the other account's password. Users must be in a certain group to use the sudo command, typically either the wheel group or the sudo group. After authentication, and if the configuration file permits the user access, the system invokes the requested command. sudo retains the user's invocation rights through a grace period (typically 5 minutes) per pseudo terminal, allowing the user to execute several successive commands as the requested user without having to provide a password again.
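For example, a typical session might look like the following (an illustrative sketch; the www-data account is an assumption about the local system):

    sudo -l                    # list the commands the invoking user may run
    sudo -u www-data whoami    # run a command as another user; prints "www-data"
    sudo -k                    # invalidate cached credentials, ending the grace period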
As a security and auditing feature, sudo may be configured to log each command run. When a user attempts to invoke sudo without being listed in the configuration file, an exception indication is presented to the user indicating that the attempt has been recorded. If configured, the root user will be alerted via mail. By default, an entry is recorded in the system log.
Configuration
The /etc/sudoers file contains a list of users or user groups with permission to execute a subset of commands while having the privileges of the root user or another specified user. The file is recommended to be edited by using the command sudo visudo. Sudo contains several configuration options such as allowing commands to be run as sudo without a password, changing which users can use sudo, and changing the message displayed upon entering an incorrect password. Sudo features an easter egg that can be enabled from the configuration file that will display an insult every time an incorrect password is entered.
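A minimal illustrative excerpt of /etc/sudoers (edited with sudo visudo; the user name and command path are hypothetical, and exact defaults vary between distributions):

    # Members of group sudo may run any command as any user
    %sudo   ALL=(ALL:ALL) ALL

    # alice may restart one service as root without entering a password
    alice   ALL=(root) NOPASSWD: /usr/bin/systemctl restart nginx

    # Enable the easter-egg insults for wrong passwords (if compiled in)
    Defaults insults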
Impact
In some system distributions, sudo has largely supplanted the default use of a distinct superuser login for administrative tasks, most notably in some Linux distributions as well as Apple's macOS. This allows for more secure logging of admin commands and prevents some exploits.
RBAC
In association with SELinux, sudo can be used to transition between roles in role-based access control (RBAC).
Tools and similar programs
visudo is a command-line utility that allows editing the sudo configuration file in a fail-safe manner. It prevents multiple simultaneous edits with locks and performs sanity and syntax checks.
Sudoedit is a program that symlinks to the sudo binary. When sudo is run via its sudoedit alias, sudo behaves as if the -e flag has been passed and allows users to edit files that require additional privileges to write to.
Microsoft released its own version of sudo for Windows in February 2024. It functions similarly to its Unix counterpart, giving the ability to run elevated commands from an unelevated console session. The program runas provides comparable functionality in Windows, but it cannot pass current directories, environment variables, or long command lines to the child. While it supports running the child as another user, it does not support simple elevation. Hamilton C shell also includes true su and sudo for Windows that can pass all of that state information and start the child either elevated or as another user (or both).
Graphical user interfaces exist for sudo – notably gksudo – but are deprecated in Debian and no longer included in Ubuntu. Other user interfaces are not directly built on sudo, but provide similar temporary privilege elevation for administrative purposes, such as pkexec in Unix-like operating systems, User Account Control in Microsoft Windows and Mac OS X Authorization Services.
doas, available since OpenBSD 5.8 (October 2015), has been written in order to replace sudo in the OpenBSD base system, with the latter still being made available as a port.
gosu is a tool similar to sudo that is popular in containers where the terminal may not be fully functional or where there are undesirable effects from running sudo in a containerized environment.
See also
chroot
doas
runas
Comparison of privilege authorization features
References
External links
Computer security software
System administration
Unix user management and support-related utilities
Software using the ISC license | Sudo | [
"Technology",
"Engineering"
] | 1,275 | [
"Cybersecurity engineering",
"Information systems",
"Computer security software",
"System administration"
] |
1,008,129 | https://en.wikipedia.org/wiki/Derived%20object | In computer programming, derived objects are files (intermediate or not) that are not directly maintained, but are instead generated from other files.
The most typical context is that of compilation, linking, and packaging of source files.
Depending on the revision control (SCM) system, they may be
completely ignored,
managed as second-class citizens, or
potentially considered the archetype of configuration items.
The second case assumes a reproducible process to produce them. The third case implies that this process is itself being managed, or in practice audited. Currently, only builds are typically audited, but nothing in principle prevents the extension of this to more general patterns of production. Derived objects may then have a real identity. Different instances of the same derived object may be discriminated generically from each other on the basis of their dependency tree.
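As an illustration of the first treatment above, where derived objects are completely ignored, a typical Git ignore file might read (the patterns are hypothetical):

    # .gitignore: keep derived objects out of version control
    *.o       # compiled object files
    *.a       # static libraries
    build/    # build and packaging output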
References
Version control | Derived object | [
"Engineering"
] | 168 | [
"Software engineering",
"Version control"
] |
1,008,247 | https://en.wikipedia.org/wiki/Klenow%20fragment | The Klenow fragment is a large protein fragment produced when DNA polymerase I from E. coli is enzymatically cleaved by the protease subtilisin. First reported in 1970, it retains the 5' → 3' polymerase activity and the 3' → 5' exonuclease activity for removal of precoding nucleotides and proofreading, but loses its 5' → 3' exonuclease activity.
The other smaller fragment formed when DNA polymerase I from E. coli is cleaved by subtilisin retains the 5' → 3' exonuclease activity but does not have the other two activities exhibited by the Klenow fragment (i.e. 5' → 3' polymerase activity, and 3' → 5' exonuclease activity).
Research
Because the 5' → 3' exonuclease activity of DNA polymerase I from E. coli makes it unsuitable for many applications, the Klenow fragment, which lacks this activity, can be very useful in research. The Klenow fragment is extremely useful for research-based tasks such as:
Synthesis of double-stranded DNA from single-stranded templates
Filling in receded 3' ends of DNA fragments to make 5' overhang blunt
Digesting away protruding 3' overhangs
Preparation of radioactive DNA probes
The Klenow fragment was also the original enzyme used for greatly amplifying segments of DNA in the polymerase chain reaction (PCR) process, before being replaced by thermostable DNA polymerases such as Taq polymerase.
The exo-Klenow fragment
Just as the 5' → 3' exonuclease activity of DNA polymerase I from E.coli can be undesirable, the 3' → 5' exonuclease activity of Klenow fragment can also be undesirable for certain applications. This problem can be overcome by introducing mutations in the gene that encodes Klenow. This results in forms of the enzyme being expressed that retain 5' → 3' polymerase activity, but lack any exonuclease activity (5' → 3' or 3' → 5'). This form of the enzyme is called the exo-Klenow fragment.
The exo-Klenow fragment is used in some fluorescent labeling reactions for microarrays, and also in dA and dT tailing, an important step in the process of ligating DNA adapters to DNA fragments, frequently used in preparing DNA libraries for next-generation sequencing.
References
External links
Diagram at vivo.colostate.edu
DNA replication | Klenow fragment | [
"Biology"
] | 556 | [
"DNA replication",
"Molecular genetics",
"Genetics techniques"
] |
1,008,278 | https://en.wikipedia.org/wiki/Solar%20minimum | Solar minimum is the regular period of least solar activity in the Sun's 11-year solar cycle. During solar minimum, sunspot and solar flare activity diminishes, and often does not occur for days at a time. On average, the solar cycle takes about 11 years to go from one solar minimum to the next, with observed durations varying from 9 to 14 years. The date of the minimum is described by a smoothed average over 12 months of sunspot activity, so the date of a solar minimum can usually be identified only about 6 months after it takes place.
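For illustration, here is a minimal Python sketch of one common smoothing convention (a 13-month running mean with half weight on the two end months, as used for the international sunspot number; the monthly series below is hypothetical):

    def smoothed_sunspot_number(monthly, i):
        # 13-month running mean centred on month i: full weight on the
        # 11 inner months, half weight on the two end months, divided by 12.
        window = monthly[i - 6 : i + 7]
        return (window[0] / 2 + sum(window[1:-1]) + window[-1] / 2) / 12

    counts = [12, 9, 7, 5, 4, 3, 2, 2, 3, 4, 6, 8, 11]  # hypothetical monthly counts
    print(smoothed_sunspot_number(counts, 6))  # needs 6 months of later data

This dependence on later months is why the date of a minimum can only be fixed about half a year after the fact.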
Solar minimum is contrasted with the solar maximum, when hundreds of sunspots may occur.
Solar minimum and solar maximum
Solar minima and maxima are the two extremes of the Sun's 11-year and 400-year activity cycle. At a maximum, the Sun is peppered with sunspots, solar flares erupt, and the Sun hurls billion-ton clouds of electrified gas into space. Sky watchers may see more auroras, and space agencies must monitor radiation storms for astronaut protection. Power outages, satellite malfunctions, communication disruptions, and GPS receiver malfunctions are just a few of the things that can happen during a solar maximum.
At a solar minimum, there are fewer sunspots and solar flares subside. Sometimes, days or weeks go by without a spot.
Predicting solar minimum cycles
The non-linear character of solar cycles makes predictions of solar activity very difficult. The solar minimum is characterized by a period of decreased solar activity with few, if any, sunspots. Scientists from the National Center for Atmospheric Research (NCAR) developed a computer model of solar dynamics (a solar dynamo model) intended to give more accurate predictions, and expressed confidence in the forecast after a series of test runs in which the newly developed model simulated the strength of the past eight solar cycles with more than 98% accuracy.
In hindsight, however, the prediction proved to be wildly inaccurate and not representative of the observed sunspot numbers.
During 2008–09 NASA scientists noted that the Sun is undergoing a "deep solar minimum," stating: "There were no sunspots observed on 266 of [2008's] 366 days (73%). Prompted by these numbers, some observers suggested that the solar cycle had hit bottom in 2008. Sunspot counts for 2009 dropped even lower. As of September 14, 2009 there were no sunspots on 206 of the year's 257 days (80%). Solar physicist Dean Pesnell of the Goddard Space Flight Center came to the following conclusion: "We're experiencing a very deep solar minimum." His statement was confirmed by other specialists in the field. "This is the quietest sun we've seen in almost a century," agreed sunspot expert David Hathaway of the National Space Science and Technology Center NASA/Marshall Space Flight Center. However, the activity is still at a higher level than at a grand solar minimum.
Grand solar minima and maxima
Grand solar minima occur when several solar cycles exhibit lesser than average activity for decades or centuries. Solar cycles still occur during these grand solar minimum periods but are at a lower intensity than usual. The grand minima form a special mode of the solar dynamo operation.
A list of historical grand minima of solar activity also includes grand minima at approximately 690 AD, 360 BC, 770 BC, 1390 BC, 2860 BC, 3340 BC, 3500 BC, 3630 BC, 3940 BC, 4230 BC, 4330 BC, 5260 BC, 5460 BC, 5620 BC, 5710 BC, 5990 BC, 6220 BC, 6400 BC, 7040 BC, 7310 BC, 7520 BC, 8220 BC, and 9170 BC.
See also
Active region
List of solar cycles
Maunder Minimum
Solar cycle
Solar cycle 24
Solar maximum
References
External links
Solar Cycle 25 peaking around 2022 could be one of the weakest in centuries
New Insights on How Solar Minimums Affect Earth (NASA June 14, 2011)
Solar phenomena | Solar minimum | [
"Physics"
] | 825 | [
"Physical phenomena",
"Stellar phenomena",
"Solar phenomena"
] |
1,008,471 | https://en.wikipedia.org/wiki/Wigner%E2%80%93Eckart%20theorem | The Wigner–Eckart theorem is a theorem of representation theory and quantum mechanics. It states that matrix elements of spherical tensor operators in the basis of angular momentum eigenstates can be expressed as the product of two factors, one of which is independent of angular momentum orientation, and the other a Clebsch–Gordan coefficient. The name derives from physicists Eugene Wigner and Carl Eckart, who developed the formalism as a link between the symmetry transformation groups of space (applied to the Schrödinger equations) and the laws of conservation of energy, momentum, and angular momentum.
Mathematically, the Wigner–Eckart theorem is generally stated in the following way. Given a tensor operator $T^{(k)}$ and two states of angular momenta $j$ and $j'$, there exists a constant $\langle j \| T^{(k)} \| j' \rangle$ such that for all $m$, $m'$, and $q$, the following equation is satisfied:

$\langle j \, m | T^{(k)}_q | j' \, m' \rangle = \langle j' \, m' \; k \, q | j \, m \rangle \, \langle j \| T^{(k)} \| j' \rangle$

where
$T^{(k)}_q$ is the $q$-th component of the spherical tensor operator $T^{(k)}$ of rank $k$,
$|j \, m\rangle$ denotes an eigenstate of total angular momentum $J^2$ and its z component $J_z$,
$\langle j' \, m' \; k \, q | j \, m \rangle$ is the Clebsch–Gordan coefficient for coupling $j'$ with $k$ to get $j$,
$\langle j \| T^{(k)} \| j' \rangle$ denotes some value that does not depend on $m$, $m'$, nor $q$ and is referred to as the reduced matrix element.
Indeed, the Wigner–Eckart theorem states that operating with a spherical tensor operator of rank k on an angular momentum eigenstate is like adding a state with angular momentum k to the state. The matrix element one finds for the spherical tensor operator is proportional to a Clebsch–Gordan coefficient, which arises when considering adding two angular momenta. When stated another way, one can say that the Wigner–Eckart theorem is a theorem that tells how vector operators behave in a subspace. Within a given subspace, a component of a vector operator will behave in a way proportional to the same component of the angular momentum operator. This definition is given in the book Quantum Mechanics by Cohen–Tannoudji, Diu and Laloë.
Background and overview
Motivating example: position operator matrix elements for 4d → 2p transition
Let's say we want to calculate transition dipole moments for an electron transition from a 4d to a 2p orbital of a hydrogen atom, i.e. the matrix elements of the form , where ri is either the x, y, or z component of the position operator, and m1, m2 are the magnetic quantum numbers that distinguish different orbitals within the 2p or 4d subshell. If we do this directly, it involves calculating 45 different integrals: there are 3 possibilities for m1 (−1, 0, 1), 5 possibilities for m2 (−2, −1, 0, 1, 2), and 3 possibilities for i, so the total is 3 × 5 × 3 = 45.
The Wigner–Eckart theorem allows one to obtain the same information after evaluating just one of those 45 integrals (any of them can be used, as long as it is nonzero). Then the other 44 integrals can be inferred from that first one—without the need to write down any wavefunctions or evaluate any integrals—with the help of Clebsch–Gordan coefficients, which can be easily looked up in a table or computed by hand or computer.
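As a sketch of this bookkeeping (Python with SymPy; the constant K is a stand-in for the single reduced matrix element, fixed by whichever one integral is actually evaluated), every matrix element is K times a Clebsch–Gordan coefficient:

    from sympy.physics.quantum.cg import CG

    K = 1  # placeholder for the reduced matrix element obtained from one integral

    # <2p, m1| r_q |4d, m2> is proportional to the coefficient <2 m2; 1 q | 1 m1>
    for m1 in (-1, 0, 1):                 # 2p magnetic quantum numbers (j = 1)
        for q in (-1, 0, 1):              # spherical components of r (k = 1)
            for m2 in (-2, -1, 0, 1, 2):  # 4d magnetic quantum numbers (j' = 2)
                element = K * CG(2, m2, 1, q, 1, m1).doit()
                if element != 0:          # nonzero only when m1 = m2 + q
                    print(f"m1={m1:+d} q={q:+d} m2={m2:+d} -> {element}")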
Qualitative summary of proof
The Wigner–Eckart theorem works because all 45 of these different calculations are related to each other by rotations. If an electron is in one of the 2p orbitals, rotating the system will generally move it into a different 2p orbital (usually it will wind up in a quantum superposition of all three basis states, m = +1, 0, −1). Similarly, if an electron is in one of the 4d orbitals, rotating the system will move it into a different 4d orbital. Finally, an analogous statement is true for the position operator: when the system is rotated, the three different components of the position operator are effectively interchanged or mixed.
If we start by knowing just one of the 45 values (say, we know that ⟨2p, m1|ri|4d, m2⟩ = K for one particular choice of m1, i, and m2) and then we rotate the system, we can infer that K is also the matrix element between the rotated version of ⟨2p, m1|, the rotated version of ri, and the rotated version of |4d, m2⟩. This gives an algebraic relation involving K and some or all of the 44 unknown matrix elements. Different rotations of the system lead to different algebraic relations, and it turns out that there is enough information to figure out all of the matrix elements in this way.
(In practice, when working through this math, we usually apply angular momentum operators to the states, rather than rotating the states. But this is fundamentally the same thing, because of the close mathematical relation between rotations and angular momentum operators.)
In terms of representation theory
To state these observations more precisely and to prove them, it helps to invoke the mathematics of representation theory. For example, the set of all possible 4d orbitals (i.e., the 5 states m = −2, −1, 0, 1, 2 and their quantum superpositions) form a 5-dimensional abstract vector space. Rotating the system transforms these states into each other, so this is an example of a "group representation", in this case, the 5-dimensional irreducible representation ("irrep") of the rotation group SU(2) or SO(3), also called the "spin-2 representation". Similarly, the 2p quantum states form a 3-dimensional irrep (called "spin-1"), and the components of the position operator also form the 3-dimensional "spin-1" irrep.
Now consider the matrix elements ⟨2p, m1|ri|4d, m2⟩. It turns out that these are transformed by rotations according to the tensor product of those three representations, i.e. the spin-1 representation of the 2p orbitals, the spin-1 representation of the components of r, and the spin-2 representation of the 4d orbitals. This direct product, a 45-dimensional representation of SU(2), is not an irreducible representation; instead, it is the direct sum of a spin-4 representation, two spin-3 representations, three spin-2 representations, two spin-1 representations, and a spin-0 (i.e. trivial) representation. The nonzero matrix elements can only come from the spin-0 subspace. The Wigner–Eckart theorem works because the direct product decomposition contains one and only one spin-0 subspace, which implies that all the matrix elements are determined by a single scale factor.
Apart from the overall scale factor, calculating the matrix element is equivalent to calculating the projection of the corresponding abstract vector (in 45-dimensional space) onto the spin-0 subspace. The results of this calculation are the Clebsch–Gordan coefficients. The key qualitative aspect of the Clebsch–Gordan decomposition that makes the argument work is that in the decomposition of the tensor product of two irreducible representations, each irreducible representation occurs only once. This allows Schur's lemma to be used.
Proof
Starting with the definition of a spherical tensor operator, we have

$[J_{\pm}, T^{(k)}_q] = \hbar \sqrt{(k \mp q)(k \pm q + 1)} \, T^{(k)}_{q \pm 1},$

which we use to then calculate

$\langle j \, m | [J_{\pm}, T^{(k)}_q] | j' \, m' \rangle .$
If we expand the commutator on the LHS by calculating the action of the $J_{\pm}$ on the bra and ket, then we get
We may combine these two results to get
This recursion relation for the matrix elements closely resembles that of the Clebsch–Gordan coefficient. In fact, both are of the form . We therefore have two sets of linear homogeneous equations:
one for the Clebsch–Gordan coefficients () and one for the matrix elements (). It is not possible to exactly solve for . We can only say that the ratios are equal, that is
or that , where the coefficient of proportionality is independent of the indices. Hence, by comparing recursion relations, we can identify the Clebsch–Gordan coefficient with the matrix element , then we may write
Alternative conventions
There are different conventions for the reduced matrix elements. One convention, used by Racah and Wigner, includes an additional phase and normalization factor,
where the array denotes the 3-j symbol. (Since in practice is often an integer, the factor is sometimes omitted in literature.) With this choice of normalization, the reduced matrix element satisfies the relation:
where the Hermitian adjoint is defined with the convention. Although this relation is not affected by the presence or absence of the phase factor in the definition of the reduced matrix element, it is affected by the phase convention for the Hermitian adjoint.
Another convention for reduced matrix elements is that of Sakurai's Modern Quantum Mechanics:
Example
Consider the position expectation value $\langle n \, j \, m | x | n \, j \, m \rangle$. This matrix element is the expectation value of a Cartesian operator in a spherically symmetric hydrogen-atom-eigenstate basis, which is a nontrivial problem. However, the Wigner–Eckart theorem simplifies the problem. (In fact, we could obtain the solution quickly using parity, although a slightly longer route will be taken.)
We know that $x$ is one component of $\vec r$, which is a vector. Since vectors are rank-1 spherical tensor operators, it follows that $x$ must be some linear combination of the rank-1 spherical tensor components $T^{(1)}_q$ with $q = -1, 0, 1$. In fact, it can be shown that
where we define the spherical tensors as
and are spherical harmonics, which themselves are also spherical tensors of rank . Additionally, , and
Therefore,
The above expression gives us the matrix element for $x$ in the $|n \, j \, m\rangle$ basis. To find the expectation value, we set $n' = n$, $j' = j$, and $m' = m$. The selection rule for $m'$ and $m$ is $m' = q + m$ for the $T^{(1)}_q$ spherical tensors. As we have $m' = m$, while only the components with $q = \pm 1$ appear in $x$, this makes the Clebsch–Gordan coefficients zero, leading to the expectation value being equal to zero.
See also
Tensor operator
Landé g-factor
References
General
External links
J. J. Sakurai, (1994). "Modern Quantum Mechanics", Addison Wesley, .
Wigner–Eckart theorem
Tensor Operators
Representation theory of Lie groups
Theorems in quantum mechanics
Theorems in representation theory | Wigner–Eckart theorem | [
"Physics",
"Mathematics"
] | 2,063 | [
"Theorems in quantum mechanics",
"Equations of physics",
"Quantum mechanics",
"Theorems in mathematical physics",
"Physics theorems"
] |
1,008,629 | https://en.wikipedia.org/wiki/Headshell | A headshell is a head piece designed to be attached to the end of a turntable's or record player's tonearm, which holds the cartridge. Standard cartridges are secured to the headshell by a pair of 2.5 mm bolts spaced 1/2" apart. Older, non-metric cartridges used #2 (3/32") bolts.
Some headshells are designed to allow variable weights to be attached. For example, the H4-S Stanton headshell comes with 2g and 4g screw-in weights. Extra weight can be useful to prevent skipping if the DJ is scratching the record.
H-4 Bayonet Mount
Most headshells use a standard H-4 Bayonet Mount, which will fit all S-shaped tonearms. The bayonet has a standard barrel, 8 mm in diameter and 12 mm in length, with its four pins connected to the four colour-coded headshell lead wires.
Headshell lead wires colours
The colour standards for the contact connections are as follows:
White: Left channel cartridge positive.
Blue: Left channel cartridge negative.
Red: Right channel cartridge positive.
Green: Right channel cartridge negative.
References
External links
Audio engineering | Headshell | [
"Engineering"
] | 248 | [
"Electrical engineering",
"Audio engineering"
] |
1,008,644 | https://en.wikipedia.org/wiki/Psychomotor%20retardation | Psychomotor retardation involves a slowing down of thought and a reduction of physical movements in an individual. It can cause a visible slowing of physical and emotional reactions, including speech and affect.
Psychomotor retardation is most commonly seen in people with major depression and in the depressed phase of bipolar disorder; it is also associated with the adverse effects of certain drugs, such as benzodiazepines. Particularly in an inpatient setting, psychomotor retardation may require increased nursing care to ensure adequate food and fluid intake and sufficient personal care. Informed consent for treatment is more difficult to achieve in the presence of this condition.
Causes
Psychiatric disorders: anxiety disorders, bipolar disorder, eating disorders, schizophrenia, severe depression, etc.
Psychiatric medicines (if taken as prescribed or improperly, overdosed, or mixed with alcohol)
Parkinson's disease
Genetic disorders: Qazi–Markouizos syndrome, Say–Meyer syndrome, Tranebjaerg-Svejgaard syndrome, Wiedemann–Steiner syndrome, Wilson's disease, etc.
Examples
Examples of psychomotor retardation include the following:
Unaccountable difficulty in carrying out what are usually considered "automatic" or "mundane" self care tasks for healthy people (i.e., without depressive illness) such as taking a shower, dressing, grooming, cooking, brushing teeth, and exercising.
Physical difficulty performing activities that normally require little thought or effort, such as walking up stairs, getting out of bed, preparing meals, and clearing dishes from the table, household chores, and returning phone calls.
Tasks requiring mobility suddenly (or gradually) may inexplicably seem "impossible". Activities such as shopping, getting groceries, taking care of daily needs, and meeting the demands of employment or school are commonly affected.
Activities usually requiring little mental effort can become challenging. Balancing a checkbook, making a shopping list, and making decisions about mundane tasks (such as deciding what errands need to be done) are often difficult.
In schizophrenia, activity level may vary from psychomotor retardation to agitation; the patient experiences periods of listlessness and may be unresponsive, and at the next moment be active and energetic.
See also
Psychomotor learning
Psychomotor agitation
Disorders of diminished motivation
References
External links
Symptoms and signs of mental disorders
Motor control
Mood disorders
Disorders of diminished motivation | Psychomotor retardation | [
"Biology"
] | 486 | [
"Behavior",
"Motor control"
] |
1,008,869 | https://en.wikipedia.org/wiki/Bricolage | In the arts, bricolage (French for "DIY" or "do-it-yourself projects") is the construction or creation of a work from a diverse range of things that happen to be available, or a work constructed using mixed media.
The term bricolage has also been used in many other fields, including anthropology, philosophy, critical theory, education, computer software, public health, and business.
Origin
Bricolage is a French loanword that means the process of improvisation in a human endeavor. The word is derived from the French verb bricoler ("to tinker"), with the English term DIY ("Do-it-yourself") being the closest equivalent of the contemporary French usage. In both languages, bricolage also denotes any works or products of DIY endeavors.
The arts
Visual art
In art, bricolage is a technique or creative mode, where works are constructed from various materials available or on hand, and is often seen as a characteristic of postmodern art practice. It has been likened to the concept of curating and has also been described as the remixture, reconstruction, and reuse of separate materials or artifacts to produce new meanings and insights.
Architecture
Bricolage is considered the jumbled effect produced by the close proximity of buildings from different periods and in different architectural styles.
It is also a term applied admiringly to the architectural work of Le Corbusier by Colin Rowe and Fred Koetter in their book Collage City; they called him "a fox in hedgehog disguise," commenting on his wily approach to assembling ideas from found objects of the history of architecture, in contrast to Frank Lloyd Wright, whom they called a "hedgehog" for being overly focused on a narrow concept.
Academics
Anthropology
In anthropology, the term has been used in several ways. Most notably, Claude Lévi-Strauss invoked the concept of bricolage to refer to the process that leads to the creation of mythical thought, which "expresses itself by means of a heterogeneous repertoire which, even if extensive, is nevertheless limited. It has to use this repertoire, however, whatever the task in hand because it has nothing else at its disposal". Later, Hervé Varenne and Jill Koyama used the term when explaining the processual aspect of culture, i.e., education.
Literature
In literature, bricolage is affected by intertextuality, the shaping of a text's meanings by reference to other texts.
Cultural studies
In cultural studies, bricolage is used to mean the processes by which people acquire objects from across social divisions to create new cultural identities. In particular, it is a feature of subcultures such as the punk movement. Here, objects that possess one meaning (or no meaning) in the dominant culture are acquired and given a new, often subversive meaning. For example, the safety pin became a form of decoration in punk culture.
Social psychology
The term "psychological bricolage" is used to explain the mental processes through which an individual develops novel solutions to problems by making use of previously unrelated knowledge or ideas they already possess.
The term, introduced by Jeffrey Sanchez-Burks, Matthew J. Karlesky and Fiona Lee of the University of Michigan in The Oxford Handbook of Creativity, Innovation, and Entrepreneurship, draws from two separate disciplines. The first, "social bricolage," was introduced by cultural anthropologist Claude Lévi-Strauss in 1962. Lévi-Strauss was interested in how societies create novel solutions by using resources that already exist in the collective social consciousness. The second, "creative cognition," is an intra-psychic approach to studying how individuals retrieve and recombine knowledge in new ways. Psychological bricolage, therefore, refers to the cognitive processes that enable individuals to retrieve and recombine previously unrelated knowledge they already possess. Psychological bricolage is an intra-individual process akin to Karl E. Weick's notion of bricolage in organizations, which is akin to Lévi-Strauss' notion of bricolage in societies.
Philosophy
In his book The Savage Mind (1962, English translation 1966), French anthropologist Claude Lévi-Strauss used "bricolage" to describe the characteristic patterns of mythological thought. In his description it is opposed to the engineers' creative thinking, which proceeds from goals to means. Mythical thought, according to Lévi-Strauss, attempts to re-use available materials in order to solve new problems.
Jacques Derrida extends this notion to any discourse. "If one calls bricolage the necessity of borrowing one's concept from the text of a heritage which is more or less coherent or ruined, it must be said that every discourse is bricoleur."
Gilles Deleuze and Félix Guattari, in their 1972 book Anti-Oedipus, identify bricolage as the characteristic mode of production of the schizophrenic producer.
Education
In the discussion of constructionism, Seymour Papert discusses two styles of solving problems. Contrary to the analytical style of solving problems, he describes bricolage as a way to learn and solve problems by trying, testing, playing around.
Joe L. Kincheloe and Shirley R. Steinberg have used the term bricolage in educational research to denote the use of multiperspectival research methods. In Kincheloe's conception of the research bricolage, diverse theoretical traditions are employed in a broader critical theoretical/critical pedagogical context to lay the foundation for a transformative mode of multimethodological inquiry. Using these multiple frameworks and methodologies, researchers are empowered to produce more rigorous and praxiological insights into socio-political and educational phenomena.
Kincheloe and Steinberg theorize a critical multilogical epistemology and critical connected ontology to ground the research bricolage. These philosophical notions provide the research bricolage with a sophisticated understanding of the complexity of knowledge production and the interrelated complexity of both researcher positionality and phenomena in the world. Such complexity demands a more rigorous mode of research that is capable of dealing with the complications of socio-educational experience. Such a critical form of rigor avoids the reductionism of many monological, mimetic research orientations (see Kincheloe, 2001, 2005; Kincheloe & Berry, 2004; Steinberg, 2015; Kincheloe, McLaren, & Steinberg, 2012).
Information technology
Information systems
In information systems, bricolage is used by Claudio Ciborra to describe the way in which strategic information systems (SIS) can be built in order to maintain successful competitive advantage over a longer period of time than standard SIS. By valuing tinkering and allowing SIS to evolve from the bottom-up, rather than implementing it from the top-down, the firm will end up with something that is deeply rooted in the organisational culture that is specific to that firm and is much less easily imitated.
Internet
In her book Life on the Screen (1995), Sherry Turkle discusses the concept of bricolage as it applies to problem solving in code projects and workspace productivity. She advocates the "bricoleur style" of programming as a valid and underexamined alternative to what she describes as the conventional structured "planner" approach. In this style of coding, the programmer works without an exhaustive preliminary specification, opting instead for a step-by-step growth and re-evaluation process. In her essay "Epistemological Pluralism", Turkle writes: "The bricoleur resembles the painter who stands back between brushstrokes, looks at the canvas, and only after this contemplation, decides what to do next."
Visual arts
The visual arts is a field in which individuals often integrate a variety of knowledge sets in order to produce inventive work. To reach this stage, artists read print materials across a wide array of disciplines, as well as information from their own social identities. For instance, the artist Shirin Neshat has integrated her identities as an Iranian exile and a woman in order to make complex, creative and critical bodies of work. This willingness to integrate diverse knowledge sets enables artists with multiple identities to fully leverage their knowledge sets. This is demonstrated by Jeffrey Sanchez-Burks, Chi-Ying Chen and Fiona Lee, who found that individuals were shown to exhibit greater levels of innovation in tasks related to their cultural identities when they successfully integrated those identities.
Business
Karl Weick identifies the following requirements for successful bricolage in organizations.
Intimate knowledge of resources
Careful observation and listening
Trusting one's ideas
Self-correcting structures, with feedback
Glenn Gosnell, of V & E Limited, defines the formal term "Bricoleurologist" as indicating expertise and experience in Bricoleurology, i.e. devising and implementing elegant solutions to immediate problems and issues. Those skilled in the art and practice of AMA (Alternate Means of Accomplishment) in the efficient and effective reconstitution of resources can be assigned the title "Bricoleurologist" by a company or institution.
In popular culture
Fashion
In his essay "Subculture: The Meaning of Style", Dick Hebdige discusses how an individual can be identified as a bricoleur when they "appropriated another range of commodities by placing them in a symbolic ensemble which served to erase or subvert their original straight meanings". The fashion industry uses bricolage-like styles by incorporating items typically utilized for other purposes.
Television
MacGyver is a television series in which the protagonist is the paragon of a bricoleur, creating solutions for the problem to be solved out of immediately available found objects.
See also
Collage
Détournement
Do it yourself
Intrapreneurial Bricolage
Jugaad
Jury rig
Kludge
Maker culture
Syncretism
Pastiche
References
External links
Digital humanities
Philosophy of technology
Postmodernism
Psychogeography
Artistic techniques
Do it yourself
Improvisation | Bricolage | [
"Technology"
] | 2,033 | [
"Digital humanities",
"Philosophy of technology",
"Computing and society",
"Science and technology studies"
] |
1,009,030 | https://en.wikipedia.org/wiki/Pathos | Pathos appeals to the emotions and ideals of the audience and elicits feelings that already reside in them. Pathos is a term most often used in rhetoric (in which it is considered one of the three modes of persuasion, alongside ethos and logos), as well as in literature, film and other narrative art.
Methods
Emotional appeal can be accomplished in many ways, such as the following:
by a metaphor or storytelling, commonly known as a hook;
by passion in the delivery of the speech or writing, as determined by the audience;
by personal anecdote.
Appealing to an ideal can also be handled in various ways, such as the following:
by understanding the reason for their position
avoiding attacks against a person or audience's personality
use the attributes of the ideal to reinforce the message.
Pathos tends to use "loaded" words that will get some sort of reaction. Examples could include "victim", in a number of different contexts. In certain situations, pathos may be described as a "guilt trip" based on the speaker trying to make someone in the audience or the entire audience feel guilty about something. An example would be "Well, you don't have to visit me, but I just really miss you and haven't seen you in so long."
Philosophy
In Stoicism, pathos refers to "complaints of the soul". Succumbing to pathos is an internal event (i.e., in one's soul) that consists in an erroneous response to impressions external to it. This view of pathos, and the accompanying view that all pathos is to be extirpated (in order to achieve the state of apatheia), are related by Stoics to a specific picture of the nature of the soul, of psychological functioning, and of human action. A key feature of that picture is that succumbing to pathos is an error of reason – an intellectual mistake.
Epicureanism interpreted and placed pathos in much more colloquial means and situations, placing it in pleasure, and studying it in almost every facet in regard to pleasure, analyzing emotional specificity that an individual may feel or may need to undergo to appreciate said pathos.
Rhetoric
Aristotle’s text on pathos
In Rhetoric, Aristotle identifies three artistic modes of persuasion, one of which is "awakening emotion (pathos) in the audience so as to induce them to make the judgment desired." In the first chapter, he includes the way in which "men change their opinion in regard to their judgment. As such, emotions have specific causes and effects" (Book 2.1.2–3). Aristotle identifies pathos as one of the three essential modes of proof by his statement that "to understand the emotions—that is, to name them and describe them, to know their causes and the way in which they are excited" (1356a24–1356a25). Aristotle posits that, alongside pathos, the speaker must also deploy good ethos in order to establish credibility (Book 2.1.5–9).
Aristotle details what individual emotions are useful to a speaker (Book 2.2.27). In doing so, Aristotle focused on whom, toward whom, and why, stating that "[i]t is not enough to know one or even two of these points; unless we know all three, we shall be unable to arouse anger in anyone. The same is true of the other emotions." He also arranges the emotions with one another so that they may counteract one another. For example, one would pair sadness with happiness (Book 2.1.9).
With this understanding, Aristotle argues for the rhetor to understand the entire situation of goals and audiences to decide which specific emotion the speaker would exhibit or call upon in order to persuade the audience. Aristotle's theory of pathos has three main foci: the frame of mind the audience is in, the variation of emotion between people, and the influence the rhetor has on the emotions of the audience. Aristotle classifies the third of this trio as the ultimate goal of pathos. Similarly, Aristotle outlines the individual importance of persuasive emotions, as well as the combined effectiveness of these emotions on the audience. Antoine Braet re-examined Aristotle's text, focusing on the speaker's goal: the intended effect on the audience. Braet explains there are three perspectives of every emotion that a speaker is trying to arouse from the audience: the audience's condition, who the audience is feeling these emotions for, and the motive. Moreover, Aristotle pointedly discusses pleasure and pain in relation to the reactions these two emotions cause in an audience member. According to Aristotle, emotions vary from person to person. Therefore, he stresses the importance of understanding specific social situations in order to successfully utilize pathos as a mode of persuasion.
Aristotle identifies the introduction and the conclusion as the two most important places for an emotional appeal in any persuasive argument.
Alternative views on pathos
Scholars have discussed the different interpretations of Aristotle's views of rhetoric and his philosophy. Some believe that Aristotle may not have even been the inventor of his famous persuasion methods. In the second chapter of Rhetoric, Aristotle's view on pathos changes from the use in discourse to the understanding of emotions and their effects. William Fortenbaugh pointed out that for the Sophist Gorgias, "Being overcome with emotion is analogous to rape." Aristotle opposed this view and created a systematic approach to pathos. Fortenbaugh argues that Aristotle's systematic approach to emotional appeals "depends upon correctly understanding the nature of individual emotions, upon knowing the conditions favorable to, the objects of, and the grounds for individual emotions".
Modern philosophers were typically more skeptical of the use of emotions in communication, with political theorists such as John Locke hoping to extract emotion from reasoned communication entirely. George Campbell presents another view unlike the common systematic approach of Aristotle. Campbell explored whether appeals to emotion or passions would be "an unfair method of persuasion," identifying seven circumstances to judge emotions: probability, plausibility, importance, proximity in time, connection of place, relations to the persons concerned, and interest in the consequences.
Rhetorica ad Herennium (c. 84 BC), a work of unknown authorship, theorizes that the conclusion is the most important place in a persuasive argument to consider emotions such as mercy or hatred, depending on the nature of the persuasion. The "appeal to pity", as it is classified in Rhetorica ad Herennium, is a means to conclude by reiterating the major premise of the work while incorporating an emotional sentiment. The author suggests ways in which to appeal to the pity of the audience: "We shall stir pity in our hearers by recalling the vicissitudes of fortune; by comparing the prosperity we once enjoyed with our present adversity; by entreating those whose pity we seek to win, and by submitting ourselves to their mercy." Additionally, the text impresses the importance of invoking kindness, humanity and sympathy upon the hearer. Finally, the author suggests that the appeal to pity be brief for "nothing dries more quickly than a tear."
Pathos before Aristotle
The concept of emotional appeal existed in rhetoric long before Aristotle's Rhetoric. George A. Kennedy, a well-respected, modern-day scholar, identifies the appeal to emotions in the newly formed democratic court system before 400 BC in his book, The Art of Persuasion in Greece. Gorgias, a Sophist who preceded Aristotle, was interested in the orator's emotional appeal as well. Gorgias believed the orator was able to capture and lead the audience in any direction they pleased through the use of emotional appeal. In the Encomium of Helen, Gorgias states that a soul can feel a particular sentiment on account of words such as sorrow and pity. Certain words act as "bringers-on of pleasure and takers-off of pain". Furthermore, Gorgias equates emotional persuasion to the sensation of being overtaken by a drug: "[f]or just as different drugs draw off different humors from the body, and some put an end to disease and others to life, so too of discourses: some give pain, others delight, others terrify, others rouse the hearers to courage, and yet others by a certain vile persuasion drug and trick the soul."
Plato also discussed emotional appeal in rhetoric. Plato preceded Aristotle and, like the Sophists, laid the groundwork for Aristotle to theorize the concept of pathos. In his dialogue Gorgias, Plato discusses pleasure versus pain in the realm of pathos through a (probably fictional) conversation between Gorgias and Socrates. The dialogue between several ancient rhetors that Plato created centers around the value of rhetoric, and the men incorporate aspects of pathos in their responses. Gorgias discredits pathos and instead promotes the use of ethos in persuasion. In another of Plato's texts, Phaedrus, his discussion of emotions is more pointed; however, he still does not outline exactly how emotions manipulate an audience. Plato discusses the danger of emotions in oratory. He argues that emotional appeal in rhetoric should be used as the means to an end and not the point of the discussion.
Contemporary pathos
George Campbell, a contributor to the Scottish Enlightenment, was one of the first rhetoricians to incorporate scientific evidence into his theory of emotional appeal. Campbell relied heavily on a book written by physician David Hartley, entitled Observations on Man. The book synthesized emotions and neurology and introduced the concept that action is a result of impression. Hartley determined that emotions drive people to react to appeals based on circumstance but also passions made up of cognitive impulses. Campbell argues that belief and persuasion depend heavily on the force of an emotional appeal. Furthermore, Campbell introduced the importance of the audience's imagination and will on emotional persuasion, which is just as important as basic understanding of an argument. Campbell, building on the theories of rhetoricians before him, drew up a contemporary view of pathos that incorporates the psychological aspect of emotional appeal.
Pathos in politics
Pathos has its hand in politics as well, primarily in speech and how to persuade the audience. Mshvenieradze states that "Pathos is directly linked with an audience. Audience is a collective subject of speakers on which an orator tries to impact by own argumentation." Similarly to how Aristotle discusses the effective use of pathos in rhetoric, appealing to a reader resembles appealing to an audience of voters. In the case of politics and politicians, it is primarily more argumentative writing and speaking. In Book II of the Rhetoric, Aristotle holds that, in essence, knowing people's emotions enables one to act on them with words rather than writing alone, and to earn their credibility and faith.
As Aristotle's teachings expanded, many other groups of thinkers would go on to adopt different variations of political usage with the elements of pathos involved, which includes groups such as the Epicureans and Stoics.
Pathos in advertising
The contemporary landscape for advertising is highly competitive due to the sheer amount of marketing done by companies. Pathos has become a popular tool to draw consumers in as it targets their emotional side. Studies show that emotion influences people's information processing and decision-making, making pathos a perfect tool for persuading consumers to buy goods and services. In this digital age, "designers must go beyond aesthetics and industrial feasibility to integrate the aspect of 'emotional awareness'". Companies today contain current culture references in their advertisement and oftentimes strive to make the audience feel involved. In other words, it is not enough to have a pleasant looking advertisement; corporations may have to use additional design methods to persuade consumers to buy their products. For example, this type of advertising is exemplified in large food brands such as President's Choice's "Eat Together" campaign (2017), and Coca-Cola's "Open Happiness" campaign (2009). One of the most well-known examples of pathos in advertising is the SPCA commercials with pictures of stray dogs with sad music.
Pathos in research
Pathos can also be used in accredited medical journals, research and other academic pieces of writing. The goal is to appeal to the readers' emotion while maintaining the necessary requirements of the medical discourse community. Authors may do so by using certain vocabulary to elicit an emotional response from the audience. "God-terms" are often used as a rhetorical technique. It is imperative that authors still preserve the standard of writing within the medical community by focusing on factual and scientific information without use of personal opinion.
Pathos in art
It can be argued that most artwork falls under the realm of pathos. Throughout history artists have used pathos within their work by utilizing colors, shape, and texture of the artwork to draw out feelings within their audience. Political cartoons are but one example of artists using pathos to persuade or bring to light issues within the world centering around the government. Most times, the designs are blown out of proportion and are greatly exaggerated, but this adds to the raw feeling the artist tries to evoke within the viewer.
See also
Bread and circuses
Catharsis
Dukkha
Pathetic fallacy
Pathology
Sensibility
Sentimental novel
References
External links
Literary Devices and Literary Terms – The Complete List
Examples of Pathos in Literature, Rhetoric and Music
Philosophy of Aristotle
Emotion
Epicureanism
Rhetoric
Stoicism
Writing
Concepts in ancient Greek aesthetics
Concepts in ancient Greek ethics | Pathos | [
"Biology"
] | 2,797 | [
"Emotion",
"Behavior",
"Human behavior"
] |
1,009,199 | https://en.wikipedia.org/wiki/Callus%20%28cell%20biology%29 | Plant callus (plural calluses or calli) is a growing mass of unorganized plant parenchyma cells. In living plants, callus cells are those cells that cover a plant wound. In biological research and biotechnology callus formation is induced from plant tissue samples (explants) after surface sterilization and plating onto tissue culture medium in vitro (in a closed culture vessel such as a Petri dish). The culture medium is supplemented with plant growth regulators, such as auxin, cytokinin, and gibberellin, to initiate callus formation or somatic embryogenesis. Callus initiation has been described for all major groups of land plants.
Callus induction and tissue culture
Plant species representing all major land plant groups have been shown to be capable of producing callus in tissue culture. A callus cell culture is usually sustained on gel medium. Callus induction medium consists of agar and a mixture of macronutrients and micronutrients for the given cell type. There are several types of basal salt mixtures used in plant tissue culture, but most notably modified Murashige and Skoog medium, White's medium, and woody plant medium. Vitamins, such as Gamborg B5 vitamins, are also provided to enhance growth. For plant cells, enrichment with nitrogen, phosphorus, and potassium is especially important. Plant callus is usually derived from somatic tissues. The tissues used to initiate callus formation depend on the plant species and which tissues are available for explant culture. The cells that give rise to callus and somatic embryos usually undergo rapid division or are partially undifferentiated, such as meristematic tissue. In Medicago truncatula, a close relative of alfalfa, however, callus and somatic embryos are derived from mesophyll cells that undergo dedifferentiation. Plant hormones are used to initiate callus growth. After the callus has formed, the concentration of hormones in the medium may be altered to shift the development from callus to root formation, shoot growth, or somatic embryogenesis. The callus tissue then undergoes further cell growth and differentiation, forming the respective organ primordia. The fully developed organs can then be used for the regeneration of new mature plants.
Morphology
Specific auxin-to-cytokinin ratios in plant tissue culture medium give rise to an unorganized growing and dividing mass of callus cells. Callus cultures are often broadly classified as being either compact or friable. Compact calli are typically green and sturdy, while friable calli are white to creamy yellow in color, fall apart easily, and can be used to generate cell suspension cultures and somatic embryos. In maize, these two callus types are designated as type I (compact) and type II (friable). Callus can undergo direct organogenesis and/or embryogenesis, in which the cells form an entirely new plant.
Callus cell death
Callus can brown and die during culture, mainly due to the oxidation of phenolic compounds. In Jatropha curcas callus cells, small organized callus cells became disorganized and varied in size after browning occurred. Browning has also been associated with oxidation and phenolic compounds in both explant tissues and explant secretions. In rice, conditions favorable for scutellar callus induction are presumed also to induce necrosis.
Uses
Callus cells are not necessarily genetically homogeneous because a callus is often made from structural tissue, not individual cells. Nevertheless, callus cells are often considered similar enough for standard scientific analysis to be performed as if on a single subject. For example, an experiment may have half a callus undergo a treatment as the experimental group, while the other half undergoes a similar but non-active treatment as the control group.
Plant calluses derived from many different cell types can differentiate into a whole plant, a process called regeneration, through addition of plant hormones to the culture medium. This ability is known as totipotency. A classical experiment by Folke Skoog and Carlos O. Miller, using tobacco pith as the starting explant, showed that supplementing culture media with different ratios of auxin to cytokinin steers development: a higher auxin-to-cytokinin ratio induces rooting (rhizogenesis), equal amounts of both hormones stimulate further callus growth, and shifting the ratio in favor of cytokinin leads to the development of shoots. Regeneration of a whole plant from a single cell allows transgenics researchers to obtain whole plants which have a copy of the transgene in every cell. Regeneration of a whole plant that has some genetically transformed cells and some untransformed cells yields a chimera. In general, chimeras are not useful for genetic research or agricultural applications.
Genes can be inserted into callus cells using biolistic bombardment, also known as a gene gun, or Agrobacterium tumefaciens. Cells that receive the gene of interest can then be recovered into whole plants using a combination of plant hormones. The whole plants that are recovered can be used to experimentally determine gene function(s), or to enhance crop plant traits for modern agriculture.
Callus is of particular use in micropropagation, where it can be used to grow genetically identical copies of plants with desirable characteristics. To increase the yield, efficiency and explant survivability of micropropagation, thorough care is taken to optimize the micropropagation protocol. For example, using explants composed of low totipotency cells may prolong the time necessary to obtain callus of sufficient size, increasing the total length of the experiment. Similarly, various plant species and explant types require specific plant hormones for callus induction and subsequent organogenesis or embryogenesis – for the formation and growth of maize calluses, auxin 2,4-Dichlorophenoxyacetic acid (2,4-D) was superior to 1-Naphthaleneacetic acid (NAA) or Indole-3-acetic acid (IAA), while the development of callus was hindered in prune explants after applying auxin Indole-3-butyric acid (IBA) but not IAA.
History
Henri-Louis Duhamel du Monceau investigated wound-healing responses in elm trees, and was the first to report formation of callus on live plants.
In 1908, E. F. Simon was able to induce callus from poplar stems that also produced roots and buds. The first reports of callus induction in vitro came from three independent researchers in 1939. P. White induced callus derived from tumor-developing procambial tissues of hybrid Nicotiana glauca that did not require hormone supplementation. Gautheret and Nobecourt were able to maintain callus cultures of carrot using auxin hormone additions.
See also
Embryo rescue
Somatic embryogenesis
Chimera (genetics)
Hyperhydricity
References
Cell culture
Biotechnology
Cell biology | Callus (cell biology) | [
"Biology"
] | 1,486 | [
"Cell biology",
"Biotechnology",
"Model organisms",
"nan",
"Cell culture"
] |
1,009,286 | https://en.wikipedia.org/wiki/Actinometer | An actinometer is an instrument that can measure the heating power of radiation. Actinometers are used in meteorology to measure solar radiation as pyranometers, pyrheliometers and net radiometers.
An actinometer is a chemical system or physical device which determines the number of photons in a beam integrally or per unit time. This name is commonly applied to devices used in the ultraviolet and visible wavelength ranges. For example, solutions of iron(III) oxalate can be used as a chemical actinometer, while bolometers, thermopiles, and photodiodes are physical devices giving a reading that can be correlated to the number of photons detected.
History
Swiss physicist Horace-Bénédict de Saussure invented an early version in the late 18th century. His design used a blackened thermometer enclosed in a glass sphere to measure solar radiation, which he referred to as a "heliothermometer." This instrument is considered one of the first tools to systematically measure solar intensity.
John Herschel further developed actinometers in the 19th century, including a design involving photochemical reactions to measure sunlight intensity, which was a significant step forward. Herschel's actinometer involved observing the rate of a chemical reaction under sunlight, which allowed for more precise quantification of solar energy. Herschel's version was influential and helped standardize measurements of solar energy. Herschel introduced the term actinometer, the first of many uses of the prefix actin for scientific instruments, effects, and processes.
The actinograph is a related device for estimating the actinic power of lighting for photography.
Chemical actinometry
Chemical actinometry involves measuring radiant flux via the yield from a chemical reaction. This process requires a chemical with a known quantum yield and easily analyzed reaction products.
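The arithmetic behind this is a short chain of conversions. The sketch below, a minimal illustration rather than a calibration procedure for any particular actinometer, turns a measured product yield into a photon flux and, for monochromatic light, a radiant power; the quantum yield, exposure time, and product amount are illustrative values.

```python
# Minimal sketch: photon flux from a chemical actinometer's product yield.
# photons absorbed per second = n(product) * N_A / (quantum_yield * t)

AVOGADRO = 6.02214076e23      # mol^-1
PLANCK = 6.62607015e-34       # J s
LIGHT_SPEED = 2.99792458e8    # m s^-1

def photon_flux(moles_product: float, quantum_yield: float, exposure_s: float) -> float:
    """Photons absorbed per second, from the moles of photoproduct formed."""
    moles_photons = moles_product / quantum_yield
    return moles_photons * AVOGADRO / exposure_s

def radiant_power(photons_per_s: float, wavelength_m: float) -> float:
    """Radiant power in watts for monochromatic light: q * h * c / wavelength."""
    return photons_per_s * PLANCK * LIGHT_SPEED / wavelength_m

# Illustrative numbers: 2.0e-6 mol of product formed during 60 s of
# irradiation at 365 nm, with an assumed quantum yield of 1.25.
q = photon_flux(2.0e-6, 1.25, 60.0)
print(f"photon flux:   {q:.3e} photons/s")
print(f"radiant power: {radiant_power(q, 365e-9):.3e} W")
```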
Choosing an actinometer
Potassium ferrioxalate is commonly used, as it is simple to use and sensitive over a wide range of relevant wavelengths (254 nm to 500 nm). Other actinometers include malachite green leucocyanide, vanadium(V)–iron(III) oxalate and monochloroacetic acid; however, all of these actinometers undergo dark reactions, that is, they react in the absence of light. This is undesirable, since it must be corrected for. Organic actinometers like butyrophenone or piperylene are analysed by gas chromatography. Other actinometers are more specific in terms of the range of wavelengths at which quantum yields have been determined. Reinecke's salt K[Cr(NH3)2(NCS)4] reacts in the near-UV region although it is thermally unstable. Uranyl oxalate has been used historically but is very toxic and cumbersome to analyze.
Recent investigations into nitrate photolysis have used 2-nitrobenzaldehyde and benzoic acid as a radical scavenger for hydroxyl radicals produced in the photolysis of hydrogen peroxide and sodium nitrate. However, they originally used ferrioxalate actinometry to calibrate the quantum yields for the hydrogen peroxide photolysis. Radical scavengers proved a viable method of measuring production of hydroxyl radical.
Chemical actinometry in the visible range
Meso-diphenylhelianthrene can be used for chemical actinometry in the visible range (400–700 nm). This chemical measures in the 475–610 nm range, but measurements in wider spectral ranges can be done with this chemical if the emission spectrum of the light source is known.
See also
Actinograph
References
Radiometry
Measuring instruments
English inventions | Actinometer | [
"Technology",
"Engineering"
] | 775 | [
"Telecommunications engineering",
"Measuring instruments",
"Radiometry"
] |
1,009,291 | https://en.wikipedia.org/wiki/Pyranometer | A pyranometer () is a type of actinometer used for measuring solar irradiance on a planar surface and it is designed to measure the solar radiation flux density (W/m2) from the hemisphere above within a wavelength range 0.3 μm to 3 μm.
A typical pyranometer does not require any power to operate. However, recent technical development includes use of electronics in pyranometers, which do require (low) external power (see heat flux sensor).
Explanation
The solar radiation spectrum that reaches Earth's surface extends its wavelength approximately from 300 nm to 2800 nm.
Depending on the type of pyranometer used, irradiance measurements with different degrees of spectral sensitivity will be obtained.
To make a measurement of irradiance, it is required by definition that the response to "beam" radiation varies with the cosine of the angle of incidence. This ensures a full response when the solar radiation hits the sensor perpendicularly (normal to the surface, sun at zenith, 0° angle of incidence), zero response when the sun is at the horizon (90° angle of incidence, 90° zenith angle), and 0.5 at a 60° angle of incidence. It follows that a pyranometer should have a so-called "directional response" or "cosine response" that is as close as possible to the ideal cosine characteristic.
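The ideal directional response described above is simple to state numerically. The sketch below is a minimal illustration of the cosine characteristic, not any instrument's correction algorithm; it reproduces the checkpoints mentioned in the text.

```python
import math

def ideal_cosine_response(angle_of_incidence_deg: float) -> float:
    """Ideal directional response: cos(theta), zero at or below the horizon."""
    theta = math.radians(angle_of_incidence_deg)
    return max(math.cos(theta), 0.0)

def beam_on_horizontal(beam_irradiance_w_m2: float, zenith_angle_deg: float) -> float:
    """Beam contribution to a horizontal sensor: E_beam * cos(zenith angle)."""
    return beam_irradiance_w_m2 * ideal_cosine_response(zenith_angle_deg)

# Checkpoints from the text: full response at 0 deg, 0.5 at 60 deg, zero at 90 deg.
for angle in (0.0, 60.0, 90.0):
    print(f"{angle:5.1f} deg -> {ideal_cosine_response(angle):.2f}")

print(beam_on_horizontal(1000.0, 60.0))  # 1000 W/m2 beam at 60 deg zenith -> 500 W/m2
```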
Types
Following the definitions noted in the ISO 9060, three types of pyranometer can be recognized and grouped in two different technologies: thermopile technology and silicon semiconductor technology.
The light sensitivity, known as 'spectral response', depends on the type of pyranometer. The figure above shows the spectral responses of the three types of pyranometer in relation to the solar radiation spectrum. The solar radiation spectrum represents the spectrum of sunlight that reaches the Earth's surface at sea level, at midday with A.M. (air mass) = 1.5.
The latitude and altitude influence this spectrum. The spectrum is influenced also by aerosol and pollution.
Thermopile pyranometers
A thermopile pyranometer (also called a thermo-electric pyranometer) is a sensor based on thermopiles designed to measure the broad band of the solar radiation flux density from a 180° field of view angle. A thermopile pyranometer thus usually measures from 300 to 2800 nm with a largely flat spectral sensitivity (see the spectral response graph). The first generation of thermopile pyranometers had the active part of the sensor equally divided into black and white sectors. Irradiation was calculated from the differential measurement between the temperature of the black sectors, exposed to the sun, and the temperature of the white sectors, which were kept in shade.
In all thermopile technology, irradiation is proportional to the difference between the temperature of the sun-exposed area and the temperature of the shaded area.
Design
In order to attain the proper directional and spectral characteristics, a thermopile pyranometer is constructed with the following main components:
A thermopile sensor with a black coating. It absorbs all solar radiation, has a flat spectrum covering the 300 to 50,000 nanometer range, and has a near-perfect cosine response.
A glass dome. It limits the spectral response from 300 to 2,800 nanometers (cutting off the part above 2,800 nm), while preserving the 180° field of view. It also shields the thermopile sensor from convection. Many, but not all, first-class and secondary standard pyranometers (see ISO 9060 classification of thermopile pyranometers) include a second glass dome as an additional "radiation shield", resulting in a better thermal equilibrium between the sensor and inner dome, compared to some single dome models by the same manufacturer. The effect of having a second dome, in these cases, is a strong reduction of instrument offsets. Class A, single dome models, with low zero-offset (+/- 1 W/m2) are available.
In modern thermopile pyranometers, the active (hot) junctions of the thermopile are located beneath the black coating surface and are heated by the radiation absorbed by the black coating. The passive (cold) junctions of the thermopile are fully protected from solar radiation and are in thermal contact with the pyranometer housing, which serves as a heat sink. This shields the cold junctions from yellowing or decay that would otherwise impair the measurement of solar irradiance.
The thermopile generates a small voltage in proportion to the temperature difference between the black coating surface and the instrument housing. This is of the order of 10 μV (microvolts) per W/m2, so on a sunny day the output will be around 10 mV (millivolts). Each pyranometer has a unique sensitivity, unless otherwise equipped with electronics for signal calibration.
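Converting the raw thermopile voltage to irradiance is a single division by the instrument's sensitivity. A minimal sketch, assuming the order of magnitude quoted above (about 10 μV per W/m2); a real instrument's calibrated sensitivity must be taken from its calibration certificate.

```python
def irradiance_from_voltage(voltage_v: float, sensitivity_v_per_w_m2: float) -> float:
    """Convert a thermopile pyranometer's raw output voltage to irradiance (W/m2)."""
    return voltage_v / sensitivity_v_per_w_m2

# Illustrative sensitivity of 10 uV per W/m2, as quoted in the text; each
# instrument ships with its own calibrated value.
sensitivity = 10e-6  # V per (W/m2)
print(irradiance_from_voltage(8.5e-3, sensitivity))  # 8.5 mV -> 850.0 W/m2
```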
Usage
Thermopile pyranometers are frequently used in meteorology, climatology, climate change research, building engineering physics, photovoltaic systems, and monitoring of photovoltaic power stations.
The solar energy industry, in a 2017 standard, IEC 61724-1:2017, has defined the type and number of pyranometers that should be used depending on the size and category of solar power plant. That standard advises installing thermopile pyranometers horizontally (GHI, Global Horizontal Irradiation) and installing photovoltaic pyranometers in the plane of the PV modules (POA, Plane Of Array) to enhance accuracy in performance ratio calculation.
To use the data measured by a pyranometer (horizontal or in-plane), quality assessment (QA) of the raw measured data is necessary. This is because the pyranometer measurements typically suffer from environment-induced errors but also handling and neglect errors, such as:
Pollution of the glass dome (e.g. deposition of atmospheric dust, bird droppings, snowfall), which reduces the measured irradiance
Issues with positioning, resulting in measurements in a different plane (i.e. not horizontal or in-plane with PV modules) than expected
Data logger errors resulting in e.g. static values, oscillations, or data capped to a certain value
Reflections and shading from the surrounding objects resulting in inaccurate measurements (i.e. not corresponding to solar irradiance)
Calibration issues of the instrument, leading to measurement errors, offset, or drift over time
Dew, snow, or frost on the glass dome on lower-end pyranometers not equipped with heating units
Each of the above issues appears as a specific pattern in the measured time series. Thanks to this, the issues can be identified and the erroneous records flagged and excluded from the dataset. The methods employed for data QA can be either manual, relying on an expert to identify the patterns, or automated, where an algorithm does the job. As many of the patterns are complex, not easily described, and require a particular context, manual QA is very common. Specialist software with suitable tools is required to perform the QA.
After the QA procedure, the remaining ‘clean’ dataset reflects the solar irradiance at the measurement site to within the uncertainty of measurement of the instrument. The ‘clean’ measured dataset can be optionally enhanced with data from a satellite-based solar irradiance model. This data is available globally for a much longer time period (typically decades into the past) than the data measured by the pyranometer. The satellite model data can be correlated (or site adapted) to the pyranometer-measured data to produce a dataset with a long time period of data accurate for the specific site, with a defined uncertainty. Such data can be used to perform bankable solar resource studies or produce Solar potential maps.
For monitoring of operational PV power plants, pyranometers play an essential role in verifying the solar irradiance available at any given time or over a certain time period. Due to weather variability, redundancy requirements, and the spatial scale of contemporary solar plants (above 100 MWp), multiple pyranometers are installed to provide accurate solar irradiation for each section of the PV power plant. The IEC 61724-1:2017 international standard, for example, calls for at least four Class A thermopile pyranometers to be installed at a 100 MWp PV power plant at all times.
Solar measurements that have passed QA can be used to derive key performance indicators (KPIs) such as the performance ratio, a metric used in asset health monitoring or various contractual scenarios relating to energy produced (billing) or asset management (i.e. O&M). In these calculations, the measured sum of in-plane irradiation over a certain period is used as the determinant against which normalized produced PV electricity is compared. Due to the difficulty of obtaining reliable in-plane measurements, especially in operational power plants, the Energy Performance Index is increasingly being used instead of the older performance ratio metric.
Photovoltaic pyranometer – silicon photodiode
Also known as a photoelectric pyranometer in ISO 9060, a photodiode-based pyranometer can detect the portion of the solar spectrum between 400 nm and 1100 nm. The photodiode converts the aforementioned solar spectrum frequencies into current at high speed, thanks to the photoelectric effect. The conversion is influenced by temperature, with the current rising as the temperature rises (about 0.1% per °C).
Design
A photodiode-based pyranometer is composed by a housing dome, a photodiode, and a diffuser or optical filters. The photodiode has a small surface area and acts as a sensor. The current generated by the photodiode is proportional to irradiance; an output circuit, such as a transimpedance amplifier, generates a voltage directly proportional to the photocurrent. The output is usually on the order of millivolts, the same order of magnitude as thermopile-type pyranometers.
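The transimpedance stage mentioned above amounts to multiplying the photocurrent by a feedback resistance. A minimal sketch, with an illustrative responsivity and feedback resistor rather than any particular instrument's values:

```python
def photodiode_output_voltage(irradiance_w_m2: float,
                              responsivity_a_per_w_m2: float,
                              feedback_ohms: float) -> float:
    """Transimpedance amplifier output: V_out = I_photo * R_feedback."""
    photocurrent_a = irradiance_w_m2 * responsivity_a_per_w_m2  # current ~ irradiance
    return photocurrent_a * feedback_ohms

# Illustrative values: 10 nA of photocurrent per W/m2 and a 100 kOhm
# feedback resistor give 0.8 V at 800 W/m2, a millivolt-to-volt scale output.
print(photodiode_output_voltage(800.0, 1e-8, 1e5))  # -> 0.8
```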
Usage
Photodiode-based pyranometers are implemented where the quantity of irradiation of the visible solar spectrum, or of certain portions such as UV, IR or PAR (photosynthetically active radiation), needs to be calculated. This is done by using diodes with specific spectral responses.
Photodiode-based pyranometers are the core of luxmeters used in photography, cinema and lighting technique. Sometimes they are also installed close to modules of photovoltaic systems.
Photovoltaic pyranometer – photovoltaic cell
Developed in the 2000s, concurrently with the spread of photovoltaic systems, the photovoltaic pyranometer is an evolution of the photodiode pyranometer. It answered the need for a single reference photovoltaic cell when measuring the power of cells and photovoltaic modules. Specifically, each cell and module is flash-tested by its manufacturer, and thermopile pyranometers possess neither the adequate speed of response nor the same spectral response as a cell. This would create an obvious mismatch when measuring power, which would need to be quantified. In technical documents, this pyranometer is also known as a "reference cell".
The active part of the sensor is composed of a photovoltaic cell working in near short-circuit condition. As such, the generated current is directly proportional to the solar radiation hitting the cell in a range between 350 nm and 1150 nm. When struck by luminous radiation in the mentioned range, it produces current as a consequence of the photovoltaic effect. Its sensitivity is not flat, but is the same as that of a silicon photovoltaic cell. See the spectral response graph.
Design
A photovoltaic pyranometer is essentially assembled with the following parts:
A metallic container with a fixing staff
A small photovoltaic cell
Signal conditioning electronics
Silicon sensors such as the photodiode and the photovoltaic cell vary their output as a function of temperature. In more recent models, the electronics compensate the signal for temperature, thereby removing the influence of temperature from the reported solar irradiance values. Inside several models, the case houses a board for the amplification and conditioning of the signal.
Usage
Photovoltaic pyranometers are used in solar simulators and alongside photovoltaic systems for the calculation of photovoltaic module effective power and system performance. Because the spectral response of a photovoltaic pyranometer is similar to that of a photovoltaic module, it may also be used for preliminary diagnosis of malfunction in photovoltaic systems.
A reference PV cell or solar irradiance sensor may have external inputs for connecting a module temperature sensor, an ambient temperature sensor and a wind speed sensor, with only one Modbus RTU output connected directly to the datalogger. These data are suitable for monitoring solar PV plants.
Standardization and calibration
Both thermopile-type and photovoltaic pyranometers are manufactured according to standards.
Thermopile pyranometers
Thermopile pyranometers follow the ISO 9060 standard, which is also adopted by the World Meteorological Organization (WMO). This standard discriminates three classes.
The latest version of ISO 9060, from 2018, uses the following classification: Class A for best performing, followed by Class B and Class C, while the older ISO 9060 standard from 1990 used ambiguous terms such as "secondary standard", "first class" and "second class".
Differences in classes are due to a certain number of properties in the sensors: response time, thermal offsets, temperature dependence, directional error, non-stability, non-linearity, spectral selectivity and tilt response. These are all defined in ISO 9060. For a sensor to be classified in a certain category, it needs to fulfill all the minimum requirements for these properties.
‘Fast response’ and ‘spectrally flat’ are two sub-classifications, included in ISO 9060:2018. They help to further distinguish and categorise sensors. To gain the ‘fast response’ classification, the response time for 95% of readings must be less than 0.5 seconds; while ‘spectrally flat’ can apply to sensors with a spectral selectivity of less than 3% in the 0.35 to 1.5 μm spectral range. While most Class A pyranometers are ‘spectrally flat’, sensors in the ‘fast response’ sub-classification are much rarer. Most Class A pyranometers have a response time of 5 seconds or more.
Calibration is typically done with the World Radiometric Reference (WRR) as an absolute reference; it is maintained by PMOD in Davos, Switzerland. In addition to the World Radiometric Reference, there are private laboratories, such as ISO-Cal North America, that have acquired accreditation for these unique calibrations. For Class A pyranometers, calibration follows ASTM G167, ISO 9847 or ISO 9846. Class B and Class C pyranometers are usually calibrated according to ASTM E824 and ISO 9847.
Photovoltaic pyranometer
Photovoltaic pyranometers are standardized and calibrated under IEC 60904-4 for primary reference samples and under IEC 60904-2 for secondary reference samples and the instruments intended for sale.
In both standards, their respective traceability chain starts with the primary standard known as the group of cavity radiometer by the World Radiometric Reference (WRR).
Signal conditioning
The natural output of these pyranometers does not usually exceed tens of millivolts (mV). This is considered a "weak" signal and, as such, is rather vulnerable to electromagnetic interference, especially where the cable runs over distances of tens of metres or lies within photovoltaic systems. Thus, these sensors are frequently equipped with signal-conditioning electronics, giving an output of 4–20 mA or 0–1 V.
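Reading such a conditioned output back into engineering units is a linear mapping. The sketch below assumes an illustrative 4–20 mA loop spanning 0–1600 W/m2; the actual span is set by the sensor's conditioning electronics and documented by its manufacturer.

```python
def irradiance_from_current_loop(current_ma: float,
                                 full_scale_w_m2: float = 1600.0) -> float:
    """Map a 4-20 mA loop signal linearly onto 0..full_scale W/m2."""
    if not 4.0 <= current_ma <= 20.0:
        raise ValueError("loop current outside the 4-20 mA range")
    return (current_ma - 4.0) / 16.0 * full_scale_w_m2

print(irradiance_from_current_loop(12.0))  # mid-scale -> 800.0 W/m2
```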
Other solutions offer greater immunity to noise, such as Modbus over RS-485, suitable for environments with the electromagnetic interference typical of medium-to-large-scale photovoltaic power stations, or SDI-12 output, where the sensors are part of a low-power weather station. The onboard electronics often allow easy integration into the system's SCADA.
Additional information can also be stored in the electronics of the sensor, such as calibration history and serial number.
See also
Actinometer
Photodiode
Heat flux sensor
Net radiometer
Pyrgeometer
Pyrheliometer
Radiometer
Sunlight
Solar constant
Sun path
References
External links
Meteo-Technology instrumentation website
Website showing measured data using a thermopile pyranometer north to the arctic circle
Measuring instruments
Meteorological instrumentation and equipment | Pyranometer | [
"Technology",
"Engineering"
] | 3,586 | [
"Meteorological instrumentation and equipment",
"Measuring instruments"
] |
1,009,445 | https://en.wikipedia.org/wiki/European%20Launcher%20Development%20Organisation | The European Launcher Development Organisation (ELDO) is a former European space research organisation. It was first developed in order to establish a satellite launch vehicle for Europe. The three-stage rocket developed was named Europa, after the mythical Greek goddess. Overall, there were 10 launches that occurred under ELDO's funding. The organisation consisted of Belgium, Britain, France, Germany, Italy, and the Netherlands. Australia was an associate member of the organisation.
Initially, the launch site was in Woomera, Australia, but was later moved to the French site Kourou, in French Guiana. The programme was created to replace the Blue Streak Missile Programme after its cancellation in 1960. In 1974, after an unsuccessful satellite launch, the programme was merged with the European Space Research Organisation to form the European Space Agency.
Origins
After the failure to launch Britain's Blue Streak Missile, Britain wished to use its finished space launch parts in order to cut losses. In 1961, Britain and France announced that they would be working together on a launcher that would be capable of sending a one-ton satellite into space. This cooperation was later drafted into the Convention of the European Launcher Development Organisation, which Italy, Belgium, West Germany, the Netherlands and Australia would join. Australia provided a sparsely populated site for missile launcher testing and development at Woomera, South Australia. The original intent of this organisation was to develop a space programme exclusively for Europe, excluding the UN or any outside country.
History
The initial plans for the rocket were proposed in 1962. The rocket created was called the ELDO-A, later renamed Europa-1. It measured in length and weighed more than 110 tons. Europa-1 was planned to put a payload of – into a circular orbit above Earth. The three stages consisted of the Blue Streak stage, the French Coralie stage, and the German stage. The first stage, the Blue Streak stage, was to fire for 160 seconds after launch. The second stage, the French Coralie stage, fired for the following 103 seconds. The third and final stage, the German stage, fired for an extra 361 seconds to launch the rocket into Earth's lower orbit. The first stage was a development of Blue Streak and was built in Stevenage, Hertfordshire, U.K.
In June 1964, the first flight, F1, testing the first stage, was launched at Woomera, South Australia. By the middle of 1966, ELDO decided to change Europa-1 from a three-stage launcher into a four-stage launcher that was capable of placing a satellite into geostationary transfer orbit. Following this decision, by 1969 a series of unsuccessful Europa-1 launches and the withdrawal of Britain and Italy prompted a reconsideration of ideas. In 1970, ELDO was forced to cancel the Europa-1 programme.
By late 1970, the plans for Europa-2 were created. Europa-2 was a similarly designed rocket with an extra stage added in. The funding for Europa-2 was supplied 90% by France and Germany. On November 5, 1971, Europa-2 was launched for the first time, but unsuccessfully. The failure of the rocket led to the consideration of a Europa-3 rocket design. However, Europa-3 was never created and the lack of funding prompted the merging of the European Launcher Development Organisation and the European Space Research Organisation to form the European Space Agency.
Australian downrange tracker
The Gove Down Range Guidance and Telemetry Station was built at Gulkula on the Gove Peninsula in the Northern Territory of Australia in the 1960s, to track the downrange path of rockets launched from the RAAF Woomera Range Complex in South Australia, with its state-of-the-art technology operated mainly by Belgian scientists. The satellite tracker was moved back up to the Gove Peninsula in September 2020 by the local historical society, after spending years in storage at Woomera.
Launches
Overall, the European Launcher Development Organisation planned eleven launches, only ten of which actually occurred. Of the launches that took place, four were successful, one was terminated in flight, and the rest were unsuccessful. The first launch, F-1, occurred on 5 June 1964; it tested only the first stage of the launcher and was successful. F-2 and F-3, which occurred on 20 October 1964 and 22 March 1965 respectively, again tested only the first stage and were both successful. The fourth launch, F-4, occurred on 24 May 1966. This launch tested the first stage with dummy second and third stages; the flight was terminated 136 seconds after lift-off. The fifth launch, F-5, took place on 13 November 1966; it aimed to complete the same task as F-4 and was successful. The sixth launch, F-6/1, took place on 4 August 1967. This launch had active first and second stages with a dummy third stage and satellite; the second stage did not ignite and the flight was unsuccessful. The seventh launch, F-6/2, took place on 5 December 1967 with the same objective as F-6/1, but the first and second stages did not separate. The eighth launch, F-7, took place on 30 November 1968. On this launch, all three stages were active and a satellite was fitted; after the second stage ignited, the third stage exploded. The ninth launch, F-8, occurred on 3 July 1969, aimed to do the same thing as F-7, and ended the same way. The tenth launch, F-9, occurred on 12 June 1970 with all stages active and a satellite fitted. In this launch, all stages performed successfully, yet the satellite failed to reach orbit. After this launch, ELDO began losing funds and members and was eventually merged with ESRO to create the ESA.
After F-10 was cancelled, it was decided that the Woomera launch site was not suitable for putting satellites into geosynchronous orbit; in 1966, it had been decided to move to the French site of Kourou in South America. F-11, the first flight of Europa-2, was launched from there. However, static discharge from the fairings travelled down to the third-stage sequencer and inertial navigation computer, causing them to hang and malfunction, and the range safety officer was obliged to destroy the vehicle. The launch of F-12 was postponed whilst a project review was carried out, which led to the decision to abandon the Europa design.
References
Space organizations
European Space Agency | European Launcher Development Organisation | [
"Astronomy"
] | 1,333 | [
"Astronomy organizations",
"Space organizations"
] |
1,009,486 | https://en.wikipedia.org/wiki/T.50%20%28standard%29 | ITU-T recommendation T.50 specifies the International Reference Alphabet (IRA), formerly International Alphabet No. 5 (IA5), a character encoding. ASCII is the U.S. variant of that character set.
The original version from November 1988 corresponds to ISO 646. The current version is from September 1992.
History
Its predecessor was the International Telegraph Alphabet No. 2 (ITA2), a five-bit code. IA5 is an improvement, based on seven-bit bytes.
Recommendation V.3 IA5 (1968): Initial version, superseded
Recommendation V.3 IA5 (1972): Superseded
Recommendation V.3 IA5 (1976-10): Superseded
Recommendation V.3 IA5 (1980-11): Superseded
Recommendation T.50 IA5 (1984-10): Superseded
Recommendation T.50 IA5 (1988-11-25): Superseded
Recommendation T.50 IRA (1992-09-18): In force
Use
This standard is referenced by other standards such as RFC 3939 ("Calling Line Identification for Voice Mail Messages"). It is also used by some analog modems, such as those from Cisco.
Character set
The following table shows the IA5 character set. Each character is shown with the code point of its Unicode equivalent.
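Since the table itself is not reproduced here, the sketch below generates an equivalent chart, assuming the International Reference Version of the IRA coincides with ASCII (as the identity with ISO/IEC 646:1991 noted under Standardisation implies); printable characters then map one-to-one onto Unicode code points U+0020 through U+007E.

```python
# Sketch of the IA5/IRA code chart, assuming the International Reference
# Version coincides with ASCII (per the ISO/IEC 646:1991 equivalence).
def describe(code_point: int) -> str:
    if code_point < 0x20 or code_point == 0x7F:
        return "CTL"  # control characters (0x00-0x1F and DEL)
    return chr(code_point)  # printable IRV characters map 1:1 to Unicode

for row_start in range(0, 128, 16):
    row = [f"{cp:02X}:{describe(cp)}" for cp in range(row_start, row_start + 16)]
    print("  ".join(row))
```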
Standardisation
Identical standard: ISO/IEC 646:1991 (Twinned)
See also
ITU T.51
References
External links
Official ITU-T T.50 page
Tech Info - Character Codes (IA5 and ISO 646)
Character encoding
Character sets
ITU-T recommendations
ITU-T T Series Recommendations | T.50 (standard) | [
"Technology"
] | 327 | [
"Natural language and computing",
"Character encoding"
] |
1,009,525 | https://en.wikipedia.org/wiki/Advanced%20Composition%20Explorer | Advanced Composition Explorer (ACE or Explorer 71) is a NASA Explorer program satellite and space exploration mission to study matter comprising energetic particles from the solar wind, the interplanetary medium, and other sources.
Real-time data from ACE are used by the National Oceanic and Atmospheric Administration (NOAA) Space Weather Prediction Center (SWPC) to improve forecasts and warnings of solar storms. The ACE robotic spacecraft was launched on 25 August 1997, and entered a Lissajous orbit close to the Sun–Earth L1 Lagrange point (which lies between the Sun and the Earth at a distance of some 1.5 million km from the latter) on 12 December 1997. The spacecraft is currently operating in that orbit. Because ACE is in a non-Keplerian orbit, and has regular station-keeping maneuvers, the orbital parameters in the adjacent information box are only approximate.
The spacecraft is still in generally good condition. NASA Goddard Space Flight Center managed the development and integration of the ACE spacecraft.
History
The Advanced Composition Explorer (ACE) was proposed in 1986 as part of the Explorer Concept Study Program. ACE is designed to make coordinated measurements of the elemental and isotopic composition of accelerated nuclei from H (hydrogen) to Zn (zinc) spanning six decades in energy per nucleon, from solar wind to galactic cosmic ray energies, with sensitivity and with charge and mass resolution much better than heretofore possible. Following a Phase-A definition study, ACE was selected for development in 1989, and began construction in 1994. On 25 August 1997, ACE was successfully launched from Cape Canaveral Air Force Station by a Delta II launch vehicle. The launch had originally been scheduled for 1993.
Science objectives
ACE observations allow the investigation of a wide range of fundamental problems in the following four major areas:
Elemental and isotopic composition of matter
A major objective is the accurate and comprehensive determination of the elemental and isotopic composition of the various samples of "source material" from which nuclei are accelerated. These observations have been used to:
Generate a set of solar isotopic abundances based on a direct sampling of solar material;
Determine the coronal elemental and isotopic composition with greatly improved accuracy;
Establish the pattern of isotopic differences between galactic cosmic ray and Solar System matter;
Measure the elemental and isotopic abundances of interstellar and interplanetary "pick–up ions";
Determine the isotopic composition of the "anomalous cosmic ray component", which represents a sample of the local interstellar medium.
Origin of the elements and subsequent evolutionary processing
Isotopic "anomalies" in meteorites indicate that the Solar System was not homogeneous when formed. Similarly, the Galaxy is neither uniform in space nor constant in time due to continuous stellar nucleosynthesis.
ACE measurements have been used to:
Search for differences between the isotopic composition of solar and meteoritic material;
Determine the contributions of solar wind and solar energetic particles to lunar and meteoritic material, and to planetary atmospheres and magnetospheres;
Determine the dominant nucleosynthetic processes that contribute to cosmic ray source material;
Determine whether cosmic rays are a sample of freshly synthesized material (e.g., from Supernova) or of the contemporary interstellar medium;
Search for isotopic patterns in solar and galactic material as a test of galactic evolution models.
Formation of the solar corona and acceleration of the solar wind
Solar energetic particles, solar wind, and spectroscopic observations show that the elemental composition of the solar corona is differentiated from that of the photosphere, although the processes by which this occurs, and by which the solar wind is subsequently accelerated, are poorly understood. The detailed composition and charge–state data provided by ACE are used to:
Isolate the dominant coronal formation processes by comparing a broad range of coronal and photospheric abundances;
Study plasma conditions at the source of solar wind and solar energetic particles by measuring and comparing the charge states of these two populations;
Study solar wind acceleration processes and any charge or mass-dependent fractionation in various types of solar wind flows.
Particle acceleration and transport in nature
Particle acceleration is ubiquitous in nature and understanding its nature is one of the fundamental problems of space plasma astrophysics. The unique data set obtained by ACE measurements has been used to:
Make direct measurements of charge and/or mass-dependent fractionation during solar energetic particle and interplanetary acceleration events;
Constrain solar flare, coronal shock, and interplanetary shock acceleration models with charge, mass, and spectral data spanning up to five decades in energy;
Test theoretical models for 3He–rich solar flares and solar γ–ray events.
Instruments
Cosmic-Ray Isotope Spectrometer (CRIS)
The Cosmic-Ray Isotope Spectrometer covers the highest range of the Advanced Composition Explorer's energy coverage, from 50 to 500 MeV/nucleon, with an isotopic resolution for elements from Z ≈ 2 to 30. The nuclei detected in this energy interval are predominantly cosmic rays originating in our Galaxy. This sample of galactic matter investigates the nucleosynthesis of the parent material, as well as fractionation, acceleration, and transport processes that these particles undergo in the Galaxy and in the interplanetary medium. Charge and mass identification with CRIS is based on multiple measurements of dE/dx and total energy in stacks of silicon detectors, and trajectory measurements in a scintillating optical fiber trajectory (SOFT) hodoscope. The instrument has a geometrical factor of -sr for isotope measurements.
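The dE/dx-versus-total-energy technique behind CRIS can be illustrated with a textbook power-law range-energy approximation. The sketch below shows the principle only, not the instrument's flight algorithm; the exponent and the example energies are assumptions chosen for illustration.

```python
# dE/dx vs. residual-energy particle identification for a two-detector
# silicon telescope, using the power-law range-energy approximation
# R(E) ~ k * (M/Z^2) * (E/M)^a, with a ~ 1.75 for silicon (an assumption).

A_EXP = 1.75

def pid_parameter(delta_e_mev: float, residual_e_mev: float) -> float:
    """Species-dependent PID value from energy loss dE and residual energy E'.

    For a dE detector of thickness L: L = k*(M/Z^2)*[((dE+E')/M)^a - (E'/M)^a],
    so (dE+E')^a - (E')^a is proportional to M^(a-1) * Z^2 * L / k: events from
    one nuclide cluster into a band, largely independent of total energy.
    """
    total_e_mev = delta_e_mev + residual_e_mev
    return total_e_mev ** A_EXP - residual_e_mev ** A_EXP

# Two events with the same 200 MeV total energy: the heavier/higher-Z nucleus
# loses more energy in the dE detector and yields a much larger PID value.
print(pid_parameter(30.0, 170.0))   # lighter species
print(pid_parameter(120.0, 80.0))   # heavier species
```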
Electron, Proton, and Alpha-particle Monitor (EPAM)
The Electron, Proton, and Alpha Monitor (EPAM) instrument on the ACE spacecraft is designed to measure a broad range of energetic particles over nearly the full unit-sphere at high time resolution. Such measurements of ions and electrons in the range of a few tens of keV to several MeV are essential to understand the dynamics of solar flares, co-rotating interaction regions (CIRs), interplanetary shock acceleration, and upstream terrestrial events. The large dynamic range of EPAM extends from about 50 keV to 5 MeV for ions, and 40 keV to about 350 keV for electrons. To complement its electron and ion measurements, EPAM is also equipped with a Composition Aperture (CA) which unambiguously identifies ion species reported as species group rates and/or individual pulse-height events. The instrument achieves its large spatial coverage through five telescopes oriented at various angles to the spacecraft spin axis. The low-energy particle measurements, obtained as time resolutions between 1.5 and 24 seconds, and the ability of the instrument to observe particle anisotropies in three dimensions make EPAM an excellent resource to provide the interplanetary context for studies using other instruments on the ACE spacecraft.
Magnetometer (MAG)
The magnetic field experiment on ACE provides continuous measurements of the local magnetic field in the interplanetary medium. These measurements are essential in the interpretation of simultaneous ACE observations of energetic and thermal particle distributions. The experiment consists of a pair of twin, boom-mounted, triaxial flux gate sensors which are located 165 inches (419 cm) from the center of the spacecraft on opposing solar panels. The two triaxial sensors provide a balanced, fully redundant vector instrument and permit some enhanced assessment of the spacecraft's magnetic field.
Real-Time Solar Wind (RTSW)
The Real-Time Solar Wind (RTSW) system is continuously monitoring the solar wind and producing warnings of impending major geomagnetic activity, up to one hour in advance. Warnings and alerts issued by NOAA allow those with systems sensitive to such activity to take preventative action. The RTSW system gathers solar wind and energetic particle data at high time resolution from four ACE instruments (MAG, SWEPAM, EPAM, and SIS), packs the data into a low-rate bit stream, and broadcasts the data continuously. NASA sends real-time data to NOAA each day when downloading science data. With a combination of dedicated ground stations (CRL in Japan and RAL in Great Britain) and time on existing ground tracking networks (NASA DSN and the USAF's AFSCN), the RTSW system can receive data 24 hours per day throughout the year. The raw data are immediately sent from the ground station to the Space Weather Prediction Center in Boulder, Colorado, processed, and then delivered to its Space Weather Operations Center where they are used in daily operations; the data are also delivered to the CRL Regional Warning Center at Hiraiso Station, Japan, to the USAF 55th Space Weather Squadron, and placed on the World Wide Web. The data are downloaded, processed and dispersed within 5 minutes from the time they leave ACE. The RTSW system also uses the low-energy energetic particles to warn of approaching interplanetary shocks and to help monitor the flux of high-energy particles that can produce radiation damage in satellite systems.
Solar Energetic Particle Ionic Charge Analyzer (SEPICA)
The Solar Energetic Particle Ionic Charge Analyzer (SEPICA) was the instrument on the Advanced Composition Explorer (ACE) that determined the ionic charge states of solar and interplanetary energetic particles in the energy range from ≈0.2 MeV nucl-1 to ≈5 MeV charge-1. The charge state of energetic ions contains key information to unravel source temperatures, acceleration, fractionation, and transport processes for these particle populations. SEPICA had the ability to resolve individual charge states with a substantially larger geometric factor than its predecessor ULEZEQ on ISEE-1 and ISEE-3, on which SEPICA was based. To achieve these two requirements at the same time, SEPICA was composed of one high-charge resolution sensor section and two low-charge resolution, but large geometric factor sections.
As of 2008, this instrument is no longer functioning due to failed gas valves.
Solar Isotope Spectrometer (SIS)
The Solar Isotope Spectrometer (SIS) provides high-resolution measurements of the isotopic composition of energetic nuclei from He to Zn (Z=2 to 30) over the energy range from ~10 to ~100 MeV/nucleon. During large solar events, SIS measures the isotopic abundances of solar energetic particles to determine directly the composition of the solar corona and to study particle acceleration processes. During solar quiet times, SIS measures the isotopes of low-energy cosmic rays from the Galaxy and isotopes of the anomalous cosmic ray component, which originates in the nearby interstellar medium. SIS has two telescopes composed of silicon solid-state detectors that provide measurements of the nuclear charge, mass, and kinetic energy of incident nuclei. Within each telescope, particle trajectories are measured with a pair of two-dimensional silicon strip detectors instrumented with custom very-large-scale integrated (VLSI) electronics to provide both position and energy-loss measurements. SIS was specially designed to achieve excellent mass resolution under the extreme, high flux conditions encountered in large solar particle events. It provides a geometry factor of 40 cm² sr, significantly greater than earlier solar particle isotope spectrometers.
Solar Wind Electron, Proton and Alpha Monitor (SWEPAM)
The Solar Wind Electron Proton Alpha Monitor (SWEPAM) experiment provides the bulk solar wind observations for the Advanced Composition Explorer (ACE). These observations provide the context for elemental and isotopic composition measurements made on ACE as well as allowing the direct examination of numerous solar wind phenomena such as coronal mass ejections, interplanetary shocks, and solar wind fine structure, with advanced, 3-D plasma instrumentation. They also provide an ideal data set for both heliospheric and magnetospheric multi-spacecraft studies where they can be used in conjunction with other, simultaneous observations from spacecraft such as Ulysses. The SWEPAM observations are made simultaneously with independent electron (SWEPAM-e) and ion (SWEPAM-i) instruments. In order to save costs for the ACE project, SWEPAM-e and SWEPAM-i are the recycled flight spares from the joint NASA/ESA Ulysses mission. Both instruments had selective refurbishment, modification, and modernization required to meet the ACE mission and spacecraft requirements. Both incorporate electrostatic analyzers whose fan-shaped fields of view sweep out all pertinent look directions as the spacecraft spins.
Solar Wind Ion Composition Spectrometer (SWICS) and Solar Wind Ion Mass Spectrometer (SWIMS)
The Solar Wind Ion Composition Spectrometer (SWICS) and the Solar Wind Ion Mass Spectrometer (SWIMS) on ACE are instruments optimized for measurements of the chemical and isotopic composition of solar and interstellar matter. SWICS determined uniquely the chemical and ionic-charge composition of the solar wind, the thermal and mean speeds of all major solar wind ions from H through Fe at all solar wind speeds above 300 km/s (protons) and 170 km/s (Fe16+), and resolved H and He isotopes of both solar and interstellar sources. SWICS also measured the distribution functions of both the interstellar cloud and dust cloud pickup ions up to energies of 100 keV/e. SWIMS measures the chemical, isotopic and charge state composition of the solar wind for every element between He and Ni. Each of the two instruments is a time-of-flight mass spectrometer and uses electrostatic analysis followed by a time-of-flight and, as required, an energy measurement.
On 23 August 2011, the SWICS time-of-flight electronics experienced an age- and radiation-induced hardware anomaly that increased the level of background in the composition data. To mitigate the effects of this background, the model for identifying ions in the data was adjusted to take advantage of only the ion energy-per-charge as measured by the electrostatic analyzer, and the ion energy as measured by solid-state detectors. This has allowed SWICS to continue to deliver a subset of the data products that were provided to the public prior to the hardware anomaly, including ion charge state ratios of oxygen and carbon, and measurements of solar wind iron. The measurements of proton density, speed, and thermal speed by SWICS were not affected by this anomaly and continue to the present day.
Ultra-Low-Energy Isotope Spectrometer (ULEIS)
The Ultra-Low-Energy Isotope Spectrometer (ULEIS) on the ACE spacecraft is an ultra-high-resolution mass spectrometer that measures particle composition and energy spectra of elements He–Ni with energies from ~45 keV/nucleon to a few MeV/nucleon. ULEIS investigates particles accelerated in solar energetic particle events, interplanetary shocks, and at the solar wind termination shock. By determining energy spectra, mass composition, and temporal variations in conjunction with other ACE instruments, ULEIS greatly improves our knowledge of solar abundances, as well as other reservoirs such as the local interstellar medium. ULEIS combines the high sensitivity required to measure low particle fluxes, along with the capability to operate in the largest solar particle or interplanetary shock events. In addition to detailed information for individual ions, ULEIS features a wide range of count rates for different ions and energies that allows accurate determination of particle fluxes and anisotropies over short (few minutes) time scales.
Science results
The spectra of particles observed by ACE
Figure 1 shows the particle fluence (total flux over a given period of time) of oxygen at ACE for a time period just after solar minimum, the part of the 11-year solar cycle when solar activity is lowest. The lowest-energy particles come from the slow and fast solar wind, with speeds from about 300 to about 800 km/s. Like the solar wind distribution of all ions, that of oxygen has a suprathermal tail of higher-energy particles; that is, in the frame of the bulk solar wind, the plasma has an energy distribution that is approximately a thermal distribution but has a notable excess above about 5 keV, as shown in Figure 1. The ACE team has made contributions to understanding the origins of these tails and their role in injecting particles into additional acceleration processes.
At energies higher than those of the solar wind particles, ACE observes particles from regions known as corotating interaction regions (CIRs). CIRs form because the solar wind is not uniform. Due to solar rotation, high-speed streams collide with preceding slow solar wind, creating shock waves at roughly 2–5 astronomical units (AU, the distance between Earth and the Sun) and forming CIRs. Particles accelerated by these shocks are commonly observed at 1 AU below energies of about 10 MeV per nucleon. ACE measurements confirm that CIRs include a significant fraction of singly charged helium formed when interstellar neutral helium is ionized.
At yet higher energies, the major contribution to the measured flux of particles is due to solar energetic particles (SEPs) associated with interplanetary (IP) shocks driven by fast coronal mass ejections (CMEs) and solar flares. Enriched abundances of helium-3 and helium ions show that the suprathermal tails are the main seed population for these SEPs. IP shocks traveling at speeds up to about accelerate particles from the suprathermal tail to 100 MeV per nucleon and more. IP shocks are particularly important because they can continue to accelerate particles as they pass over ACE and thus allow shock acceleration processes to be studied in situ.
Other high-energy particles observed by ACE are anomalous cosmic rays (ACRs) that originate with neutral interstellar atoms that are ionized in the inner heliosphere to make "pickup" ions and are later accelerated to energies greater than 10 MeV per nucleon in the outer heliosphere. ACE also observes pickup ions directly; they are easily identified because they are singly charged. Finally, the highest-energy particles observed by ACE are the galactic cosmic rays (GCRs), thought to be accelerated by shock waves from supernova explosions in our galaxy.
Other findings from ACE
Shortly after launch, the SEP sensors on ACE detected solar events that had unexpected characteristics. Unlike most large, shock-accelerated SEP events, these were highly enriched in iron and helium-3, as are the much smaller, flare-associated impulsive SEP events. Within the first year of operations, ACE found many of these "hybrid" events, which led to substantial discussion within the community as to what conditions could generate them.
One remarkable recent discovery in heliospheric physics has been the ubiquitous presence of suprathermal particles with common spectral shape. This shape unexpectedly occurs in the quiet solar wind; in disturbed conditions downstream from shocks, including CIRs; and elsewhere in the heliosphere. These observations have led Fisk and Gloeckler to suggest a novel mechanism for the particles' acceleration.
Another discovery has been that the current solar cycle, as measured by sunspots, CMEs, and SEPs, has been much less magnetically active than the previous cycle. McComas et al. have shown that the dynamic pressures of the solar wind measured by the Ulysses satellite over all latitudes and by ACE in the ecliptic plane are correlated and were declining in time for about 2 decades. They concluded that the Sun had been undergoing a global change that affected the overall heliosphere. Simultaneously, GCR intensities were increasing and in 2009 were the highest recorded during the past 50 years. GCRs have more difficulty reaching Earth when the Sun is more magnetically active, so the high GCR intensity in 2009 is consistent with a globally reduced dynamic pressure of the solar wind.
ACE also measures abundances of cosmic ray nickel-59 and cobalt-59 isotopes; these measurements indicate that a time longer than the half-life of nickel-59 with bound electrons (7.6 × 10⁴ years) elapsed between the time nickel-59 was created in a supernova explosion and the time cosmic rays were accelerated. Such long delays indicate that cosmic rays come from the acceleration of old stellar or interstellar material rather than from fresh supernova ejecta. ACE also measures an iron-58/iron-56 ratio that is enriched over the same ratio in solar system material. These and other findings have led to a theory of the origin of cosmic rays in galactic superbubbles, formed in regions where many supernovae explode within a few million years. Recent observations of a cocoon of freshly accelerated cosmic rays in the Cygnus superbubble by the Fermi gamma-ray observatory support this theory.
Follow-on space weather observatory
On 11 February 2015, the Deep Space Climate Observatory (DSCOVR)—with several similar instruments, including a newer and more sensitive instrument to detect Earth-bound coronal mass ejections—was successfully launched by NASA and the National Oceanic and Atmospheric Administration aboard a SpaceX Falcon 9 launch vehicle from Cape Canaveral, Florida. The spacecraft arrived at L1 by 8 June 2015, just over 100 days after launch. Together with ACE, it will provide space weather data for as long as ACE continues to function.
See also
References
External links
Advanced Composition Explorer (ACE) - from the California Institute of Technology
ACE Real-Time Solar Wind - from the National Oceanic and Atmospheric Administration
Spacecraft launched in 1997
Artificial satellites at Earth-Sun Lagrange points
Explorers Program
Missions to the Sun
Space telescopes
Spacecraft launched by Delta II rockets
NASA space probes
Cosmic-ray experiments
Spacecraft using Lissajous orbits
| Advanced Composition Explorer | ["Astronomy"] | 4,374 | ["Space telescopes"] |
1,009,552 | https://en.wikipedia.org/wiki/GNU%20Linear%20Programming%20Kit | The GNU Linear Programming Kit (GLPK) is a software package intended for solving large-scale linear programming (LP), mixed integer programming (MIP), and other related problems. It is a set of routines written in ANSI C and organized in the form of a callable library. The package is part of the GNU Project and is released under the GNU General Public License.
GLPK uses the revised simplex method and the primal-dual interior point method for non-integer problems and the branch-and-bound algorithm together with Gomory's mixed integer cuts for (mixed) integer problems.
History
GLPK was developed by Andrew O. Makhorin (Андрей Олегович Махорин) of the Moscow Aviation Institute. The first public release was in October 2000.
Version 1.1.1 contained a library for a revised primal and dual simplex algorithm.
Version 2.0 introduced an implementation of the primal-dual interior point method.
Version 2.2 added branch and bound solving of mixed integer problems.
Version 2.4 added a first implementation of the GLPK/L modeling language.
Version 4.0 replaced GLPK/L by the GNU MathProg modeling language, which is a subset of the AMPL modeling language.
Interfaces and wrappers
Since version 4.0, GLPK problems can be modeled using GNU MathProg (GMPL), a subset of the AMPL modeling language used only by GLPK. However, GLPK is most commonly called from other programming languages. Wrappers exist for:
Julia and the JuMP modeling package
Java (using OptimJ)
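Although most users reach GLPK through GMPL or one of these wrappers, the library can also be called directly from C. Below is a minimal sketch using the documented C API; the two-variable problem (maximize 10x1 + 6x2 subject to x1 + x2 ≤ 100 and 10x1 + 4x2 ≤ 600) is invented purely for illustration:

    /* Minimal GLPK C API sketch: maximize 10*x1 + 6*x2
       subject to x1 + x2 <= 100 and 10*x1 + 4*x2 <= 600, x1, x2 >= 0. */
    #include <stdio.h>
    #include <glpk.h>

    int main(void)
    {
        glp_prob *lp = glp_create_prob();
        glp_set_obj_dir(lp, GLP_MAX);

        /* Two constraint rows, each with an upper bound. */
        glp_add_rows(lp, 2);
        glp_set_row_bnds(lp, 1, GLP_UP, 0.0, 100.0);
        glp_set_row_bnds(lp, 2, GLP_UP, 0.0, 600.0);

        /* Two non-negative structural variables with objective coefficients. */
        glp_add_cols(lp, 2);
        glp_set_col_bnds(lp, 1, GLP_LO, 0.0, 0.0);
        glp_set_obj_coef(lp, 1, 10.0);
        glp_set_col_bnds(lp, 2, GLP_LO, 0.0, 0.0);
        glp_set_obj_coef(lp, 2, 6.0);

        /* Constraint matrix in coordinate form; index 0 of each array is unused. */
        int    ia[5] = {0, 1, 1, 2, 2};
        int    ja[5] = {0, 1, 2, 1, 2};
        double ar[5] = {0, 1.0, 1.0, 10.0, 4.0};
        glp_load_matrix(lp, 4, ia, ja, ar);

        glp_simplex(lp, NULL); /* revised simplex solver */
        printf("z = %g, x1 = %g, x2 = %g\n", glp_get_obj_val(lp),
               glp_get_col_prim(lp, 1), glp_get_col_prim(lp, 2));
        glp_delete_prob(lp);
        return 0;
    }

Built with a command such as gcc example.c -lglpk, this prints the optimum z ≈ 733.33 at x1 ≈ 33.33, x2 ≈ 66.67.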
Further reading
The book uses GLPK exclusively and contains numerous examples.
References
External links
GLPK official site
GLPK Wikibook
Linear Programming Kit
Mathematical optimization software
Free mathematics software
Free software programmed in C
Mathematics software for Linux
| GNU Linear Programming Kit | ["Mathematics"] | 393 | ["Mathematics software for Linux", "Free mathematics software", "Mathematical software"] |
1,009,626 | https://en.wikipedia.org/wiki/Teletex | Teletex was ITU-T specification F.200 for a text and document communications service that could be provided over telephone lines. It was rapidly superseded by e-mail; however, the name Teletex lives on in several of the X.500 standard attributes used in Lightweight Directory Access Protocol.
Overview
Teletex was designed as an upgrade to the conventional telex service. The terminal-to-terminal communication service of telex would be turned into an office-to-office document transmission system by teletex. Teletex envisaged direct communication between electronic typewriters, word processors and personal computers. These units had storage for transmitting and receiving messages. The use of such equipment considerably enhanced the character set available for document preparation.
Features
Character sets
In addition to the standard character set, a rich set of graphic symbols and a comprehensive set of control characters were supported in teletex. The set of control characters helped in preparation and reproduction of documents. In particular, they permitted the positioning of the printing element, specification of page orientation, left and right margins, vertical spacing and the use of underlining. The page control feature allowed standard A4 size papers to be used for receiving messages instead of the continuous stationery used in conventional telex systems.
Transmission and reception
A background/foreground operation was envisaged in teletex. Transmission/reception of messages should proceed in the background without affecting the work which the user might be carrying out in the foreground with the equipment. In other words, a user might be preparing a new document while another document was being transmitted or received. The teletex service would also maintain compatibility with the existing telex system and inter-operate with it. Teletex procedures called for the exchange of header information before the actual document transfer took place. The header information consisted of four parts:
Part 1: Destination ID,
Part 2: Originator ID,
Part 3: Date and time stamp,
Part 4: Document reference.
Twenty-four characters were used for source/destination ids, 14 characters for date and time stamp, and 7 characters for document reference (which also specified the number of pages in the document). Destination/source ID consisted of four fields:
Field 1: Country/Network code,
Field 2: National subscriber number,
Field 3: Reserved for future use,
Field 4: Terminal/Owner code.
The number of characters allotted to each of the above fields was variable, subject to a maximum for each field, the total being 24 characters.
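As a rough illustration, the header described above can be pictured as a record with the stated maximum sizes. This is a sketch only: the field names are invented here, and the standard allows the 24 identification characters to be divided among the four fields in variable ways, so this should not be read as the F.200 wire layout:

    /* Hypothetical C view of the teletex header described above. */
    struct teletex_header {
        char destination_id[24]; /* Part 1: country/network code, national
                                    subscriber number, a reserved field and a
                                    terminal/owner code; 24 characters maximum */
        char originator_id[24];  /* Part 2: same four-field structure */
        char timestamp[14];      /* Part 3: date and time stamp */
        char doc_ref[7];         /* Part 4: document reference, including
                                    the number of pages */
    };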
References
Legacy systems
Telegraphy
| Teletex | ["Technology"] | 516 | ["Legacy systems", "Computer systems", "History of computing"] |
1,009,651 | https://en.wikipedia.org/wiki/Male%20bonding | In ethology and social science, male bonding or male friendship is the formation of close personal relationships, and patterns of friendship or cooperation between males. Male bonding is a form of homosociality, or social connection between individuals of the same gender. Male bonding can occur through various contexts and activities that build emotional closeness, trust, and camaraderie. Male bonding is an important feature of men’s social functioning and can provide benefits including emotional support and intimacy, shared identity, and personal fulfillment contributing to men’s mental health and wellbeing.
Though male bonding and male friendships have been researched in contexts of anthropology, psychology, and sociology, overall male bonding remains understudied.
Characteristics
Male bonding can take various forms and may be expressed differently across cultures and individual relationships. Common characteristics of male bonding include:
Shared Activities: Men often bond through participation in common activities such as sports, or hobbies. These activities provide an environment for cooperation, competition, and shared experiences, all of which can help strengthen social ties.
Emotional Support: Though men’s friendships are often stereotyped as surface-level and lacking in intimacy, more recent studies have found that men today both value and engage in intimacy in their friendships more than men in previous generations.
Rituals and Traditions: Many male groups engage in social rituals that help cement their relationships. These can range from informal traditions like watching sports together to more formal rites of passage such as fraternity initiations. Such traditions have also been criticized as promoting hegemonic masculinity.
Male bonding across the lifespan
Early childhood (Ages 2-6)
In early childhood, male bonding begins primarily within the family structure. For young boys, bonding often centers around interactions with fathers, or other male relatives. These early connections help children develop trust and emotional security.
There is research evidence from studies of children in school settings that preschool aged children are more likely to select same-sex playmates, rather than playmates of the other sex, when able to self-select playmates. There is also evidence that very young boys and girls differ in emotional expressiveness in dyadic friendships. Young boys often begin to seek out and enjoy time with other boys around this age, especially as they begin to recognize gender as a part of their identity.
School-age children (Ages 6-12)
As children enter school, their social world expands beyond the family, and peer relationships become increasingly important. Male bonding in this phase is shaped by shared interests, group activities, and a developing sense of identity. Boys begin to form more structured friendships, typically based on shared activities. These early friendships lay the groundwork for later emotional and social development. Research suggests that during this phase, children often begin to segregate into same-gender groups, with boys forming strong bonds with other boys.
Adolescence (Ages 12-22)
Adolescence is a period of significant social, emotional, and physical development, and peer relations become more complex and take on new dimensions as young boys navigate the challenges of puberty and identity formation.
Friendship is important for adolescent mental health and in early adolescence male friendships tend to become more intimate with higher value placed on self-disclosure, reciprocity, loyalty, and commitment. Friendship networks at this age tend to include both same and other gender peers as interests in romantic relationships begins to emerge, though gender segregation remains prominent across dyadic friendships.
In adolescence, cultural pressure for boys to conform to masculine ideals tends to increase which has led many to theorize that boys have fewer intimate friendships in adolescence and adulthood. Some researchers have found that in early adolescence boys often have very loving and intimate relationships with same-gender best friends, but that this intimacy wanes over time with men becoming more disconnected from their friendships in later adolescence, despite stated desires for intimacy. In some studies, men in college report less instances of sharing personal information with male friends including thoughts, feelings, attitudes, and self-disclosures compared to what they shared with female friends. However, more recent literature suggests that college age men tend to be less limited by traditional views of masculinity and homosocial bonding than previous generations and are more intimate and emotionally expressive in their same gender friendships.
During adolescence, boys often bond through risk-taking behaviors such as experimenting with substances, engaging in rebellious acts, or pushing physical limits. This creates a shared sense of adventure and camaraderie but can also have negative consequences if the behaviors are dangerous.
Young adulthood (Ages 22-30) and middle adulthood (Ages 30-60)
Relatively little research has explored male bonding and male friendships in adulthood. Young adulthood is marked by increased independence and the establishment of long-term goals, such as career ambitions, romantic relationships, and starting families. Male bonding during this stage often revolves around shared life transitions, professional development, and personal challenges.
In adulthood, more emphasis begins to be placed on social roles and responsibilities such as increased focus on career, family, and personal development which can impact the amount of time men spend bonding with friends. However, strong male friendships remain vital, as they offer support in navigating the complexities of adulthood and help men maintain a sense of identity. Men’s friendship networks during this time often include work and professional contacts.
Older adulthood (Ages 60+)
Though much research shows friendship in old age is psychologically beneficial, as with other adult age groups, there is very little research on gender differences in friendships later in life. Factors that may impact male bonding in older adults include loss of friends, health issues, and social isolation. In addition to experiencing friendship loss due to death, as men retire, their friendships may deplete due to loss of contact with friends they previously engaged with regularly in work or networking settings.
Male bonding and gender norms
For the past several decades, the social sciences have defined hegemonic masculinity by attributes such as strength, independence, and emotional restraint. These norms have often discouraged men from forming emotionally intimate relationships or expressing vulnerability and have resulted in homohysteria. Men within the LGBTQ+ community often face stigma and exclusion due to non-normative gender identities and sexual orientations. For these individuals, experiences of male bonding are influenced by experiences of marginalization, discrimination, and the complexities of navigating social identities. Gay, bisexual, and queer men may struggle to find acceptance within male spaces that emphasize homophobia or rigid masculinity, such as in sport.
More recent research has shown that younger men are more likely to include gay peers in friendship groups. A new theory of masculinity, called "Inclusive masculinity theory", has emerged to capture a societal decline in homophobia in western cultures and a theorized more inclusive version of masculinity.
For transgender men, the experience of male bonding is shaped by their intersectional identities as both transgender individuals and men. As they navigate gender transition and male socialization, they may face challenges in forming male bonds. Transgender men may encounter exclusion from men in male-dominated spaces such as locker rooms or sports teams.
Male bonding in contemporary Western media
Male bonding has been a common theme in Western media for many decades, often depicted in ways that reinforce traditional notions of masculinity and friendship. In recent years, popular shows like Dave (2020-) and Ted Lasso (2020-) have presented male friendships that are characterized by emotional support, vulnerability, and deeper connections beyond just shared activities.
"Bromance"
The term "bromance" has emerged in recent years to describe a close, non-romantic friendship between men that involves a heightened level of emotional intimacy and affection. The bromance has gained prominence in Western media, particularly in films and television. Unlike traditional representations of male friendships, which often emphasize masculine stereotypes, the bromance focuses on the positive emotional aspects of male bonding.
Though initially thought of as a media trope, the bromance has become a more positive and inclusive representation of male relationships that allows men to express care for each other, both verbally and physically, in ways that defy traditional masculine norms, such as hugging, openly expressing affection, or discussing emotions.
"Locker Room Talk"
Locker room talk refers to the informal, often crude conversations that men engage in, typically in private settings such as locker rooms, bars, or among male peers. This style of conversation is often characterized by humor, bravado, and an emphasis on male sexuality, dominance, and sometimes, objectification of women. Locker room talk has become a cultural trope associated with toxic masculinity that reflects broader societal critiques of masculinity and male bonding.
These conversations have gained more scrutiny and attention in both popular culture and the media in recent years following the 2016 U.S. presidential election, when a recording of Donald Trump casually bragging to a television personality about sexually assaulting women was leaked to the public. After the tape was leaked, Trump attempted to dismiss public concern by stating the remarks were “locker room talk.” The incident sparked widespread discussions about the impact of such talk on societal attitudes toward women and the role it plays in reinforcing a culture of toxic masculinity.
Recession in male friendship
The "male friendship crisis" refers to a growing concern that men, particularly in Western societies, are increasingly isolated from close, emotionally intimate friendships. The American male friendship recession has been reported on by news outlets including the New York Times, PBS News, Psychology Today, and Vox.
Surveys have shown that men are experiencing a decline in the number of meaningful friendships they maintain: the share of men reporting at least 6 close friends halved in 2021 compared to 1990. Many men have reported feeling lonely or disconnected from others. This phenomenon is often attributed to cultural norms that encourage men to hide vulnerability and is thought to have been accelerated by societal shifts in the wake of the COVID-19 pandemic, such as social isolation during the pandemic and the resulting increase in remote work arrangements.
References
Friendship
Interpersonal relationships
Men's health
| Male bonding | ["Biology"] | 2,032 | ["Behavior", "Interpersonal relationships", "Human behavior"] |
1,009,869 | https://en.wikipedia.org/wiki/MTT%20assay | The MTT assay is a colorimetric assay for assessing cell metabolic activity. NAD(P)H-dependent cellular oxidoreductase enzymes may, under defined conditions, reflect the number of viable cells present. These enzymes are capable of reducing the tetrazolium dye MTT, which is chemically 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide, to its insoluble formazan, which has a purple color. Other closely related tetrazolium dyes including XTT, MTS and the WSTs, are used in conjunction with the intermediate electron acceptor, 1-methoxy phenazine methosulfate (PMS). With WST-1, which is cell-impermeable, reduction occurs outside the cell via plasma membrane electron transport. However, this traditionally assumed explanation is currently contended as proof has also been found of MTT reduction to formazan in lipidic cellular structures without apparent involvement of oxidoreductases.
Tetrazolium dye assays can also be used to measure cytotoxicity (loss of viable cells) or cytostatic activity (shift from proliferation to quiescence) of potential medicinal agents and toxic materials. MTT assays are usually done in the dark since the MTT reagent is sensitive to light.
MTT and related tetrazolium salts
MTT, a yellow tetrazole, is reduced to purple formazan in living cells. A solubilization solution (usually either dimethyl sulfoxide, an acidified ethanol solution, or a solution of the detergent sodium dodecyl sulfate in diluted hydrochloric acid) is added to dissolve the insoluble purple formazan product into a colored solution. The absorbance of this colored solution can be quantified by measuring at a certain wavelength (usually between 500 and 600 nm) by a spectrophotometer. The degree of light absorption is dependent on the degree of formazan concentration accumulated inside the cell and on the cell surface. The greater the formazan concentration, the deeper the purple colour and thus the higher the absorbance.
XTT (2,3-bis-(2-methoxy-4-nitro-5-sulfophenyl)-2H-tetrazolium-5-carboxanilide) has been proposed to replace MTT, yielding higher sensitivity and a higher dynamic range. The formed formazan dye is water-soluble, avoiding a final solubilization step.
Water-soluble tetrazolium salts are more recent alternatives to MTT: they were developed by introducing positive or negative charges and hydroxy groups to the phenyl ring of the tetrazolium salt, or better with sulfonate groups added directly or indirectly to the phenyl ring.
MTS (3-(4,5-dimethylthiazol-2-yl)-5-(3-carboxymethoxyphenyl)-2-(4-sulfophenyl)-2H-tetrazolium), in the presence of phenazine methosulfate (PMS), produces a formazan product that has an absorbance maximum at 490 nm in phosphate-buffered saline. The MTS assay is often described as a 'one-step' MTT assay, which offers the convenience of adding the reagent straight to the cell culture without the intermediate steps required in the MTT assay. However, this convenience makes the MTS assay susceptible to colorimetric interference, as the intermediate steps in the MTT assay remove traces of coloured compounds, whilst these remain in the microtitre plate in the one-step MTS assay. Precautions are needed to ensure accuracy when using this assay, and there are strong arguments for confirming MTS results using qualitative observations under a microscope. (This, however, is prudent for all colorimetric assays.)
WSTs (water-soluble tetrazolium salts) are a series of other water-soluble dyes for MTT assays, developed to give different absorption spectra of the formed formazans. WST-1 and in particular WST-8 (2-(2-methoxy-4-nitrophenyl)-3-(4-nitrophenyl)-5-(2,4-disulfophenyl)-2H-tetrazolium), are advantageous over MTT in that they are reduced outside cells, combined with PMS electron mediator, and yield a water-soluble formazan. Finally, WST assays (1) can be read directly (unlike MTT that needs a solubilization step), (2) give a more effective signal than MTT, and (3) decrease toxicity to cells (unlike cell-permeable MTT and its insoluble formazan, which accumulate inside cells).
Significance
Tetrazolium dye reduction is generally assumed to be dependent on NAD(P)H-dependent oxidoreductase enzymes largely in the cytosolic compartment of the cell. Therefore, reduction of MTT and other tetrazolium dyes depends on the cellular metabolic activity due to NAD(P)H flux. Cells with a low metabolism such as thymocytes and splenocytes reduce very little MTT. In contrast, rapidly dividing cells exhibit high rates of MTT reduction. It is important to keep in mind that assay conditions can alter metabolic activity and thus tetrazolium dye reduction without affecting cell viability. In addition, the mechanism of reduction of tetrazolium dyes, i.e. intracellular (MTT, MTS) vs. extracellular (WST-1), will also determine the amount of product. Additionally, proof has been provided as to the spontaneous MTT reduction in lipidic cellular compartments/structures, without enzymatic catalysis involved. Nevertheless, even under this alternative paradigm, MTT assay still assesses the reduction potential of a cell (i.e. availability of reducing compounds to drive cellular energetics). As such, the final cell viability interpretation remains unchanged.
In studying the viability of cells seeded on 3D fibrous scaffolds, the thickness of the scaffolds may influence the MTT assay results.
Observation
The optical density (OD) at 550 nm is used to calculate the percentage of viability results using the following equation:
Viability % = 100 × OD550e / OD550b
where:
OD550e = Mean value of the measured optical density of the test item
OD550b = Mean value of the measured optical density of the negative control
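As a small worked sketch of this calculation (the replicate readings below are invented for illustration), the mean optical densities of the test and negative-control wells combine into a viability percentage as follows:

    /* Sketch of the viability calculation above: mean OD550 of test wells
       divided by mean OD550 of negative-control wells, as a percentage. */
    #include <stdio.h>

    static double mean(const double *od, int n)
    {
        double sum = 0.0;
        for (int i = 0; i < n; i++)
            sum += od[i];
        return sum / n;
    }

    int main(void)
    {
        double test_wells[3]    = {0.42, 0.45, 0.40}; /* OD550e replicates */
        double control_wells[3] = {0.81, 0.79, 0.83}; /* OD550b replicates */
        double viability = 100.0 * mean(test_wells, 3) / mean(control_wells, 3);
        printf("Viability = %.1f%%\n", viability); /* about 52.3 percent here */
        return 0;
    }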
See also
Tetrazolium chloride
Formazan
References
Further reading
External links
MTT assay Protocol
Biochemistry detection reactions
| MTT assay | ["Chemistry", "Biology"] | 1,460 | ["Microbiology techniques", "Biochemistry detection reactions", "Biochemical reactions"] |
1,009,912 | https://en.wikipedia.org/wiki/Medical%20glove | Medical gloves are disposable gloves used during medical examinations and procedures to help prevent cross-contamination between caregivers and patients. Medical gloves are made of different polymers including latex, nitrile rubber, polyvinyl chloride and neoprene; they come unpowdered, or powdered with corn starch to lubricate the gloves, making them easier to put on the hands.
Corn starch replaced tissue-irritating lycopodium powder and talc, but even corn starch can impede healing if it gets into tissues (as during surgery). As such, unpowdered gloves are used more often during surgery and other sensitive procedures. Special manufacturing processes are used to compensate for the lack of powder.
There are two main types of medical gloves: examination and surgical. Surgical gloves have more precise sizing with a better precision and sensitivity and are made to a higher standard. Examination gloves are available either sterile or non-sterile, while surgical gloves are generally sterile.
Besides medicine, medical gloves are widely used in chemical and biochemical laboratories. Medical gloves offer some basic protection against corrosives and surface contamination. However, they are easily penetrated by solvents and various hazardous chemicals, and should not be used for dishwashing or otherwise when the task involves immersion of the gloved hand in the solvent. Medical gloves are recommended to be worn for two main reasons:
To reduce the risk of contamination of health-care workers hands with blood and other body fluids.
To reduce the risk of germ dissemination to the environment and of transmission from the health-care worker to the patient and vice versa, as well as from one patient to another.
History
Caroline Hampton became the chief nurse of the operating room when Johns Hopkins Hospital opened in 1889. When "in the winter of 1889 or 1890" she developed a skin reaction to mercuric chloride that was used for asepsis, William Halsted, soon-to-be her husband, asked the Goodyear Rubber Company to produce thin rubber gloves for her protection. In 1894 Halsted implemented the use of sterilized medical gloves at Johns Hopkins. However, the first modern disposable glove was invented by Ansell Rubber Co. Pty. Ltd. in 1965.
They based the production on the technique for making condoms. These gloves have a range of clinical uses ranging from dealing with human excrement to dental applications.
Criminals have also been known to wear medical gloves during commission of crimes. These gloves are often chosen because their thinness and tight fit allow for dexterity. However, because of the thinness of these gloves, fingerprints may actually pass through the material as glove prints, thus transferring the wearer's prints onto the surface touched or handled.
The participants of the Watergate burglaries infamously wore rubber surgical gloves in an effort to hide their fingerprints.
Industry
In 2020, the market for medical gloves had a value of more than USD 10.17 billion and, with growing demand (especially in developing countries), is expected to grow by 9.2 per cent per year until 2028. The majority of medical gloves is manufactured in South East Asia with Malaysia alone accounting for about three quarters of global production in 2020.
Labour rights violations
There have been several investigations in factories in Malaysia, Thailand and Sri Lanka that documented severe violations of human and labour rights. Both in Malaysia and Thailand migrants represent the majority of workers in hard physical labour. They are frequently recruited by specialized agencies in their less affluent home countries such as Nepal and are often charged with high recruitment fees forcing them into debt bondage. There are documented cases in which employees' passports were withheld by their employers leaving them especially vulnerable to exploitation.
In 2010, for instance, Swedwatch, a Swedish labour right NGO examining a Malaysian factory, reported that most employees were working 12 hours per day seven days a week without overtime pay or payslip, harassment of workers by the management, safety deficits and poor hygienic conditions in employee housing.
Reacting to these findings, from October 2019 to March 2020, the US Department of Labor listed medical gloves produced in Malaysia on the List of Goods Produced by Child Labor or Forced Labor and temporarily banned the import of gloves produced by the Malaysian company Top Glove, the world's largest manufacturer at the time.
Sizing
Generally speaking, examination gloves are sized in XS, S, M and L. Some brands may offer size XL. Surgical gloves are usually sized more precisely, since they are worn for a much longer period of time and require exceptional dexterity. The sizing of surgical gloves is based on the measured circumference around the palm (excluding the thumb) in inches, at a level slightly above the thumb. Typical sizing ranges from 5.5 to 9.0 at an increment of 0.5. Some brands may also offer size 5.0. First-time users of surgical gloves may take some time to find the right size and brand that suit their hand geometry the most. People with a thicker palm may need a size larger than the measurement and vice versa. Sizing should be one of the first things to check. Dexterity is essential for every worker, and wearing the wrong size of glove can have a huge impact on someone's work. Wearing the right size of glove can also increase comfort, which can encourage workers to wear their assigned PPE.
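Since surgical sizes are simply the palm circumference in inches quantized to half sizes, a first-guess size can be sketched as a rounding rule. This is an illustration only; as noted above, hand geometry means the result is a starting point rather than a guarantee:

    /* First-guess surgical glove size from palm circumference in inches:
       round to the nearest half size and clamp to the typical 5.0-9.0 range. */
    #include <math.h>
    #include <stdio.h>

    static double surgical_glove_size(double palm_circumference_inches)
    {
        double size = round(palm_circumference_inches * 2.0) / 2.0;
        if (size < 5.0) size = 5.0;
        if (size > 9.0) size = 9.0;
        return size;
    }

    int main(void)
    {
        printf("%.1f\n", surgical_glove_size(6.8)); /* prints 7.0 */
        return 0;
    }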
Research on a group of American surgeons found that the most common surgical glove size for men is 7.0, followed by 6.5; and for women 6.0 followed by 5.5.
Powdered gloves
To facilitate donning of gloves, powders have been used as lubricants. Early powders derived from pines or club moss were found to be toxic. Talcum powder was used for decades but linked to postoperative granuloma and scar formation. Corn starch, another agent used as lubricant, was also found to have potential side effects such as inflammatory reactions and granuloma and scar formation.
Elimination of powdered medical gloves
With the availability of non-powdered medical gloves that were easy to don, calls for the elimination of powdered gloves became louder. By 2016, healthcare systems in Germany and the United Kingdom had eliminated their use. In March 2016, the United States Food and Drug Administration (FDA) issued a proposal to ban their medical use and on December 19, 2016 passed a rule banning all powdered gloves intended for medical use. The rule became effective on January 18, 2017.
Powder-free medical gloves are used in medical cleanroom environments, where the need for cleanliness is often similar to that in a sensitive medical environment.
Chlorination
To make them easier to don without the use of powder, gloves can be treated with chlorine. Chlorination affects some of the beneficial properties of latex, but also reduces the quantity of allergenic latex proteins.
Polymer coating
A wide range of polymer coatings is available on the market. Most of the current disposable gloves are powdered. These coatings include several polymers: silicone, acrylic resins, and gels that make gloves easier to don. This process is currently used on nitrile gloves and latex gloves.
Alternatives to latex
Due to the increasing rate of latex allergy among health professionals, and in the general population, gloves made of non-latex materials such as polyvinyl chloride, nitrile rubber, or neoprene have become widely used. Chemical processes may be employed to reduce the amount of antigenic protein in Hevea latex, resulting in alternative natural-rubber-based materials such as Vytex Natural Rubber Latex. However, non-latex gloves have not yet replaced latex gloves in surgical procedures, as gloves made of alternative materials generally do not fully match the fine control or greater sensitivity to touch available with latex surgical gloves. (High-grade isoprene gloves are the only exception to this rule, as they have the same chemical structure as natural latex rubber. However, fully artificial polyisoprene—rather than "hypoallergenic" cleaned natural latex rubber—is also the most expensive natural latex substitute available.) Other high-grade non-latex gloves, such as nitrile gloves, can cost over twice the price of their latex counterparts, a fact that has often prevented switching to these alternative materials in cost-sensitive environments, such as many hospitals. Nitrile is more resistant to tearing than natural latex, and is more resistant to many chemicals. Sulfur compounds used as accelerants to cure nitrile can speed the tarnishing process in silver, so accelerant-free nitrile or other gloves must be used when handling objects made of these metals when this is not acceptable.
Double gloving
Double gloving is the practice of wearing two layers of medical gloves to reduce the danger of infection from glove failure or penetration of the gloves by sharp objects during medical procedures. Surgeons double glove when operating on individuals bearing infectious agents such as HIV and hepatitis, and to better protect patients against infections possibly transmitted by the surgeon. A systematic review of the literature has shown double gloving to offer significantly more protection against inner glove perforation in surgical procedures compared to the use of a single glove layer. But it was unclear if there was better protection against infections transmitted by the surgeon. Another systematic review studied if double gloving protected the surgeon better against infections transmitted by the patient. Pooled results of 12 studies (RCTs) with 3,437 participants showed that double gloving reduced the number of perforations in inner gloves by 71% compared to single gloving. On average, ten surgeons/nurses involved in 100 operations sustain 172 perforations of single gloves, but with double gloves only 50 inner gloves would be perforated. This is a considerable reduction in risk.
In addition, cotton gloves can be worn under the single-use gloves to reduce the amount of sweat produced when wearing these gloves for a long period of time. These under gloves can be disinfected and used again.
See also
Latex allergy
Needlestick injury
References
First aid
Gloves
Medical hygiene
Safety clothing
glove
Occupational safety and health
Protective gear
| Medical glove | ["Biology"] | 2,093 | ["Medical devices", "Medical technology"] |
1,009,951 | https://en.wikipedia.org/wiki/Li%E2%80%93Fraumeni%20syndrome | Li–Fraumeni syndrome (LFS) is a rare, autosomal dominant, hereditary disorder that predisposes carriers to cancer development. It was named after two American physicians, Frederick Pei Li and Joseph F. Fraumeni Jr., who first recognized the syndrome after reviewing the medical records and death certificates of childhood rhabdomyosarcoma patients. The disease is also known as SBLA, for the Sarcoma, Breast, Leukemia, and Adrenal Gland cancers that it is known to cause.
Etiology
LFS is caused by germline mutations (also called genetic variants) in the TP53 tumor suppressor gene, which encodes a transcription factor (p53) that normally regulates the cell cycle and prevents genomic mutations. The variants can be inherited, or can arise from mutations early in embryogenesis, or in one of the parent's germ cells.
LFS is thought to occur in about 1 in 5,000 individuals in the general population. In Brazil there is a common founder variant, p.Arg337, that occurs in about 1 in every 375 people. LFS is inherited in an autosomal dominant fashion which means that a person with LFS has a 50% chance to pass the syndrome on in every pregnancy (and a 50% chance to not pass on the syndrome).
Clinical presentation
Li–Fraumeni syndrome is characterized by early onset of cancer, a wide variety of types of cancers, and development of multiple cancers throughout one's life.
LFS: Mutations in TP53
TP53 is a tumor suppressor gene on chromosome 17 that normally assists in the control of cell division and growth through action on the normal cell cycle. TP53 typically becomes expressed due to cellular stressors, such as DNA damage, and can halt the cell cycle to assist with either the repair of repairable DNA damage, or can induce apoptosis of a cell with irreparable damage. The repair of "bad" DNA, or the apoptosis of a cell, prevents the proliferation of damaged cells and the development of cancer.
Pathogenic and likely pathogenic variants in the TP53 gene can inhibit its normal function and allow cells with damaged DNA to continue to divide. If these DNA mutations are left unchecked, some cells can divide uncontrollably, forming tumors (cancers). Many individuals with Li–Fraumeni syndrome have been shown to be heterozygous for a TP53 variant. Recent studies have shown that 60% to 80% of classic LFS families harbor detectable germ-line TP53 mutations, the majority of which are missense mutations in the DNA-binding domain. These missense mutations cause a decrease in the ability of p53 to bind to DNA, thus inhibiting the normal TP53 mechanism.
LFS-Like (LFS-L):
Families who do not conform to the criteria of classical Li–Fraumeni syndrome have been termed "LFS-Like". LFS-L individuals generally do not have any detectable TP53 variants, and tend to meet either the Birch or Eeles criteria.
Clinical
The classical LFS malignancies—sarcoma, cancers of the breast, brain, and adrenal glands—comprise about 80% of all cancers that occur in this syndrome.
The risk of developing any invasive cancer (excluding skin cancer) is about 50% by age 30 (1% in the general population) and is 90% by age 70. Early-onset breast cancer accounts for 25% of all the cancers in this syndrome. This is followed by soft-tissue sarcomas (20%), bone sarcoma (15%), and brain tumors—especially glioblastomas—(13%). Other tumours seen in this syndrome include leukemia, lymphoma, and adrenocortical carcinoma.
The table below depicts tumor site distribution of variants for families followed in the LFS Study at the National Cancer Institute's (NCI) Division of Cancer Epidemiology and Genetics (DCEG):
About 95% of females with LFS develop breast cancer by age 60 years; the majority of these occur before age 45 years, giving females with this syndrome an almost 100% lifetime risk of developing cancer.
Cancer Risks by Sex and Age
Age      20 years old   40 years old   60 years old
Men          25%            40%            88%
Women        18%            75%           >95%
Diagnosis
Germline variants in the TP53 tumor suppressor gene was discovered to be the primary cause of Li–Fraumeni syndrome in 1990.
Li–Fraumeni syndrome is diagnosed if a person has a pathogenic or likely pathogenic TP53 variant and/or if these three Classic Criteria are met:
The patient has been diagnosed with a sarcoma at a young age (below 45).
A first-degree relative has been diagnosed with any cancer at a young age (below 45).
Another first- or a second-degree relative has been diagnosed with any cancer at a young age (below 45) or with a sarcoma at any age.
LFS should also be suspected in individuals who meet other published criteria.
2015 Revised Chompret Criteria:
A proband with a tumor belonging to the LFS tumor spectrum (premenopausal breast cancer, soft tissue sarcoma, osteosarcoma, central nervous system (CNS) tumor, adrenocortical carcinoma) before age 46 years AND at least one first- or second-degree relative with an LFS tumor (except breast cancer if the proband has breast cancer) before age 56 years or with multiple tumors; OR
A proband with multiple tumors (except multiple breast tumors), two of which belong to the LFS tumor spectrum and the first of which occurred before age 46 years; OR
A proband with adrenocortical carcinoma, choroid plexus tumor, or rhabdomyosarcoma of embryonal anaplastic subtype, irrespective of family history; OR
Breast cancer before age 31 years
Birch Criteria for Li Fraumeni-like syndrome:
A proband with any childhood cancer, sarcoma, brain tumor, or adrenal cortical carcinoma diagnosed before age 45, AND
A first- or second-degree relative with a typical LFS malignancy (sarcoma, leukemia, or cancers of the breast, brain or adrenal cortex) regardless of age at diagnosis, AND
A first- or second-degree relative with any cancer diagnosed before age 60
Eeles Criteria for Li Fraumeni-like Syndrome:
Two different tumors that are part of extended LFS in first- or second-degree relatives at any age (sarcoma, breast cancer, brain tumor, leukemia, adrenocortical carcinoma, melanoma, prostate cancer, and pancreatic cancer).
If an individual has a personal or family history concerning for LFS, they should discuss the risks, benefits, and limitations of genetic testing with their healthcare provider or a genetic counselor.
Management
Genetic counseling and genetic testing for the TP53 gene can confirm a diagnosis of LFS. People with LFS require early and regular cancer screening following the "Toronto Protocol":
Children and adults undergo comprehensive annual physical examinations every 6 to 12 months
All patients consult a physician promptly for evaluation of lingering symptoms and illnesses
Ultrasound of abdomen and pelvis every 3 to 4 months from birth to age 18 years
Colonoscopy and upper endoscopy every 2 to 5 years beginning at age 25 or 5 years before the earliest known GI cancer in the family
Annual dermatological exam starting at age 18 years
Annual whole-body MRI
Annual brain MRI and neurological exam
Annual prostate-specific antigen (PSA) testing starting at age 40
Breast awareness beginning at age 18
Clinical breast exam every 6 to 12 months starting at age 20
Annual breast MRI age 20 to 29; annual breast MRI alternating with mammogram age 30 to 75
Consideration of risk-reducing mastectomy (surgery to remove the breast tissue)
Families and individuals should consider participating in a peer-support group
References
Further reading
Cancer
DNA replication and repair-deficiency disorders
Hereditary cancers
Rare diseases
Single-nucleotide polymorphism associated disease
Syndromes with tumors
Transcription factor deficiencies
Diseases named after discoverers
| Li–Fraumeni syndrome | ["Biology"] | 1,713 | ["Senescence", "Single-nucleotide polymorphism associated disease", "DNA replication and repair-deficiency disorders", "Single-nucleotide polymorphisms"] |
1,009,999 | https://en.wikipedia.org/wiki/Service%20Access%20Point | A Service Access Point (SAP) is an identifying label for network endpoints used in Open Systems Interconnection (OSI) networking.
The SAP is a conceptual location at which one OSI layer can request the services of another OSI layer. As an example, PD-SAP or PLME-SAP in IEEE 802.15.4 can be mentioned, where the medium access control (MAC) layer requests certain services from the physical layer. Service access points are also used in IEEE 802.2 Logical Link Control in Ethernet and similar data link layer protocols.
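In IEEE 802.2, for example, the service access points appear concretely as the first two octets of the LLC header, sketched below in C. The example values in the comments (0xAA for SNAP, 0x42 for spanning tree BPDUs) are well-known assignments:

    #include <stdint.h>

    /* IEEE 802.2 LLC header: the DSAP and SSAP octets name the layer-3
       service at the destination and source ends of the link. */
    struct llc_header {
        uint8_t dsap;    /* destination service access point, e.g. 0xAA = SNAP */
        uint8_t ssap;    /* source service access point, e.g. 0x42 = STP BPDUs */
        uint8_t control; /* one octet in the common unnumbered case;
                            0x03 = unnumbered information (UI) frame */
    };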
When using the OSI Network system (CONS or CLNS), the base for constructing an address for a network element is an NSAP address, similar in concept to an IP address. OSI protocols as well as Asynchronous Transfer Mode (ATM) can use Transport (TSAP), Session (SSAP) or Presentation (PSAP) Service Access Points to specify a destination address for a connection. These SAPs consist of NSAP addresses combined with optional transport, session and presentation selectors, which can differentiate at any of the three layers between multiple services at that layer provided by a network element.
IEEE 802's reference model (RM) guarantees the following SAPs:
LSAP - Link
MSAP - MAC
PSAP - PHY
802.3 (the Ethernet standard) optionally includes:
OSAP - operations, administration and maintenance (OAM)
MCSAP - MAC control
Energy efficient Ethernet PSAP
Time sync PSAP
References
OSI protocols
| Service Access Point | ["Technology"] | 314 | ["Computing stubs", "Computer network stubs"] |
3,043,886 | https://en.wikipedia.org/wiki/Enzyme%20kinetics | Enzyme kinetics is the study of the rates of enzyme-catalysed chemical reactions. In enzyme kinetics, the reaction rate is measured and the effects of varying the conditions of the reaction are investigated. Studying an enzyme's kinetics in this way can reveal the catalytic mechanism of this enzyme, its role in metabolism, how its activity is controlled, and how a drug or a modifier (inhibitor or activator) might affect the rate.
An enzyme (E) is a protein molecule that serves as a biological catalyst to facilitate and accelerate a chemical reaction in the body. It does this through binding of another molecule, its substrate (S), which the enzyme acts upon to form the desired product. The substrate binds to the active site of the enzyme to produce an enzyme-substrate complex ES, and is transformed into an enzyme-product complex EP and from there to product P, via a transition state ES*. The series of steps is known as the mechanism:
E + S ⇄ ES ⇄ ES* ⇄ EP ⇄ E + P
This example assumes the simplest case of a reaction with one substrate and one product. Such cases exist: for example, a mutase such as phosphoglucomutase catalyses the transfer of a phosphate group from one position to another, and isomerase is a more general term for an enzyme that catalyses any one-substrate one-product reaction, such as triosephosphate isomerase. However, such enzymes are not very common, and are heavily outnumbered by enzymes that catalyse two-substrate two-product reactions: these include, for example, the NAD-dependent dehydrogenases such as alcohol dehydrogenase, which catalyses the oxidation of ethanol by NAD+. Reactions with three or four substrates or products are less common, but they exist. There is no necessity for the number of products to be equal to the number of substrates; for example, glyceraldehyde 3-phosphate dehydrogenase has three substrates and two products.
When enzymes bind multiple substrates, such as dihydrofolate reductase (shown right), enzyme kinetics can also show the sequence in which these substrates bind and the sequence in which products are released. An example of enzymes that bind a single substrate and release multiple products are proteases, which cleave one protein substrate into two polypeptide products. Others join two substrates together, such as DNA polymerase linking a nucleotide to DNA. Although these mechanisms are often a complex series of steps, there is typically one rate-determining step that determines the overall kinetics. This rate-determining step may be a chemical reaction or a conformational change of the enzyme or substrates, such as those involved in the release of product(s) from the enzyme.
Knowledge of the enzyme's structure is helpful in interpreting kinetic data. For example, the structure can suggest how substrates and products bind during catalysis; what changes occur during the reaction; and even the role of particular amino acid residues in the mechanism. Some enzymes change shape significantly during the mechanism; in such cases, it is helpful to determine the enzyme structure with and without bound substrate analogues that do not undergo the enzymatic reaction.
Not all biological catalysts are protein enzymes: RNA-based catalysts such as ribozymes and ribosomes are essential to many cellular functions, such as RNA splicing and translation. The main difference between ribozymes and enzymes is that RNA catalysts are composed of nucleotides, whereas enzymes are composed of amino acids. Ribozymes also perform a more limited set of reactions, although their reaction mechanisms and kinetics can be analysed and classified by the same methods.
General principles
The reaction catalysed by an enzyme uses exactly the same reactants and produces exactly the same products as the uncatalysed reaction. Like other catalysts, enzymes do not alter the position of equilibrium between substrates and products. However, unlike uncatalysed chemical reactions, enzyme-catalysed reactions display saturation kinetics. For a given enzyme concentration and for relatively low substrate concentrations, the reaction rate increases linearly with substrate concentration; the enzyme molecules are largely free to catalyse the reaction, and increasing substrate concentration means an increasing rate at which the enzyme and substrate molecules encounter one another. However, at relatively high substrate concentrations, the reaction rate asymptotically approaches the theoretical maximum; the enzyme active sites are almost all occupied by substrates resulting in saturation, and the reaction rate is determined by the intrinsic turnover rate of the enzyme. The substrate concentration midway between these two limiting cases is denoted by KM. Thus, KM is the substrate concentration at which the reaction velocity is half of the maximum velocity.
The two important properties of enzyme kinetics are how easily the enzyme can be saturated with a substrate, and the maximum rate it can achieve. Knowing these properties suggests what an enzyme might do in the cell and can show how the enzyme will respond to changes in these conditions.
Enzyme assays
Enzyme assays are laboratory procedures that measure the rate of enzyme reactions. Since enzymes are not consumed by the reactions they catalyse, enzyme assays usually follow changes in the concentration of either substrates or products to measure the rate of reaction. There are many methods of measurement. Spectrophotometric assays observe the change in the absorbance of light between products and reactants; radiometric assays involve the incorporation or release of radioactivity to measure the amount of product made over time. Spectrophotometric assays are most convenient since they allow the rate of the reaction to be measured continuously. Although radiometric assays require the removal and counting of samples (i.e., they are discontinuous assays) they are usually extremely sensitive and can measure very low levels of enzyme activity. An analogous approach is to use mass spectrometry to monitor the incorporation or release of stable isotopes as the substrate is converted into product. Occasionally an assay fails, and systematic troubleshooting approaches are then needed to rescue it.
The most sensitive enzyme assays use lasers focused through a microscope to observe changes in single enzyme molecules as they catalyse their reactions. These measurements either use changes in the fluorescence of cofactors during an enzyme's reaction mechanism, or of fluorescent dyes added onto specific sites of the protein to report movements that occur during catalysis. These studies provide a new view of the kinetics and dynamics of single enzymes, as opposed to traditional enzyme kinetics, which observes the average behaviour of populations of millions of enzyme molecules.
An example progress curve for an enzyme assay is shown above. The enzyme produces product at an initial rate that is approximately linear for a short period after the start of the reaction. As the reaction proceeds and substrate is consumed, the rate continuously slows (so long as the substrate is not still at saturating levels). To measure the initial (and maximal) rate, enzyme assays are typically carried out while the reaction has progressed only a few percent towards total completion. The length of the initial rate period depends on the assay conditions and can range from milliseconds to hours. However, equipment for rapidly mixing liquids allows fast kinetic measurements at initial rates of less than one second. These very rapid assays are essential for measuring pre-steady-state kinetics, which are discussed below.
Most enzyme kinetics studies concentrate on this initial, approximately linear part of enzyme reactions. However, it is also possible to measure the complete reaction curve and fit this data to a non-linear rate equation. This way of measuring enzyme reactions is called progress-curve analysis. This approach is useful as an alternative to rapid kinetics when the initial rate is too fast to measure accurately.
The Standards for Reporting Enzymology Data (STRENDA) Guidelines provide the minimum information required to comprehensively report kinetic and equilibrium data from investigations of enzyme activities, including the corresponding experimental conditions. The guidelines have been developed to report functional enzyme data with rigor and robustness.
Single-substrate reactions
Enzymes with single-substrate mechanisms include isomerases such as triosephosphate isomerase or bisphosphoglycerate mutase, intramolecular lyases such as adenylate cyclase and the hammerhead ribozyme, an RNA lyase. However, some enzymes that only have a single substrate do not fall into this category of mechanisms. Catalase is an example of this, as the enzyme reacts with a first molecule of hydrogen peroxide substrate, becomes oxidised and is then reduced by a second molecule of substrate. Although a single substrate is involved, the existence of a modified enzyme intermediate means that the mechanism of catalase is actually a ping–pong mechanism, a type of mechanism that is discussed in the Multi-substrate reactions section below.
Michaelis–Menten kinetics
As enzyme-catalysed reactions are saturable, their rate of catalysis does not show a linear response to increasing substrate. If the initial rate of the reaction is measured over a range of substrate concentrations (denoted as [S]), the initial reaction rate (v0) increases as [S] increases, as shown on the right. However, as [S] gets higher, the enzyme becomes saturated with substrate and the initial rate reaches Vmax, the enzyme's maximum rate.
The Michaelis–Menten kinetic model of a single-substrate reaction is shown on the right. There is an initial bimolecular reaction between the enzyme E and substrate S to form the enzyme–substrate complex ES. The rate of enzymatic reaction increases with the increase of the substrate concentration up to a certain level called Vmax; at Vmax, increase in substrate concentration does not cause any increase in reaction rate as there is no more enzyme (E) available for reacting with substrate (S). Here, the rate of reaction becomes dependent on the ES complex and the reaction becomes a unimolecular reaction with an order of zero. Though the enzymatic mechanism for the unimolecular reaction ES → E + P can be quite complex, there is typically one rate-determining enzymatic step that allows this reaction to be modelled as a single catalytic step with an apparent unimolecular rate constant kcat.
If the reaction path proceeds over one or several intermediates, kcat will be a function of several elementary rate constants, whereas in the simplest case of a single elementary reaction (e.g. no intermediates) it will be identical to the elementary unimolecular rate constant k2. The apparent unimolecular rate constant kcat is also called turnover number, and denotes the maximum number of enzymatic reactions catalysed per second.
The Michaelis–Menten equation describes how the (initial) reaction rate v0 depends on the position of the substrate-binding equilibrium and the rate constant k2:

v_0 = \frac{V_\max [S]}{K_M + [S]}  (Michaelis–Menten equation)

with the constants

K_M = \frac{k_{-1} + k_2}{k_1} \quad \text{and} \quad V_\max = k_2 [E]_{\rm tot}.
This Michaelis–Menten equation is the basis for most single-substrate enzyme kinetics. Two crucial assumptions underlie this equation (apart from the general assumptions that the mechanism involves no intermediate or product inhibition, and that there is no allostericity or cooperativity). The first assumption is the so-called quasi-steady-state assumption (or pseudo-steady-state hypothesis), namely that the concentration of the substrate-bound enzyme (and hence also the unbound enzyme) changes much more slowly than those of the product and substrate, and thus the change of the complex over time can be set to zero: d[ES]/dt \approx 0. The second assumption is that the total enzyme concentration does not change over time, thus [E]_{\rm tot} = [E] + [ES] = \text{const}.
The Michaelis constant KM is experimentally defined as the concentration at which the rate of the enzyme reaction is half Vmax, which can be verified by substituting [S] = KM into the Michaelis–Menten equation and can also be seen graphically. If the rate-determining enzymatic step is slow compared to substrate dissociation (k_2 \ll k_{-1}), the Michaelis constant KM is roughly the dissociation constant KD of the ES complex.
If [S] is small compared to K_M then the term K_M + [S] \approx K_M, and also very little ES complex is formed, thus [E]_{\rm tot} \approx [E]. Therefore, the rate of product formation is

v_0 \approx \frac{k_2}{K_M} [E]_{\rm tot} [S]
Thus the product formation rate depends on the enzyme concentration as well as on the substrate concentration; the equation resembles a bimolecular reaction with a corresponding pseudo-second-order rate constant k_2/K_M. This constant is a measure of catalytic efficiency. The most efficient enzymes reach a k_{\rm cat}/K_M in the range of 10^8–10^10 M−1 s−1. These enzymes are so efficient they effectively catalyse a reaction each time they encounter a substrate molecule and have thus reached an upper theoretical limit for efficiency (diffusion limit); they are sometimes referred to as kinetically perfect enzymes. But most enzymes are far from perfect: the average values of k_{\rm cat}/K_M and k_{\rm cat} are about 10^5 M−1 s−1 and 10 s−1, respectively.
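As a concrete sanity check of the rate law and of KM's half-saturation meaning, here is a minimal numerical sketch; the parameter values are arbitrary illustrations, not measured constants:

```python
# Minimal sketch of the Michaelis–Menten rate law; parameter values below
# are arbitrary illustrations, not measured constants.
def mm_rate(s, vmax, km):
    """Initial rate v0 at substrate concentration s (same units as km)."""
    return vmax * s / (km + s)

vmax, km = 100.0, 2.5            # e.g. µM/s and µM
assert abs(mm_rate(km, vmax, km) - vmax / 2) < 1e-12  # v = Vmax/2 at [S] = KM
print(mm_rate(0.1, vmax, km))    # low [S]: nearly linear, v ≈ (Vmax/KM)·[S]
print(mm_rate(250.0, vmax, km))  # saturating [S]: v → Vmax
```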
Direct use of the Michaelis–Menten equation for time course kinetic analysis
The observed velocities predicted by the Michaelis–Menten equation can be used to directly model the time-course disappearance of substrate and the production of product through incorporation of the Michaelis–Menten equation into the equation for first-order chemical kinetics. This can only be achieved, however, if one recognises the problem associated with the use of Euler's number in the description of first-order chemical kinetics: e−k is a split constant that introduces a systematic error into calculations and can be rewritten as a single constant which represents the remaining substrate after each time period.
In 1983 Stuart Beal (and also independently Santiago Schnell and Claudio Mendoza in 1997) derived a closed-form solution for the time-course kinetics analysis of the Michaelis–Menten mechanism. The solution, known as the Schnell–Mendoza equation, has the form:

\frac{[S]}{K_M} = W\left[F(t)\right]

where W[ ] is the Lambert-W function and where F(t) is

F(t) = \frac{[S]_0}{K_M} \exp\left(\frac{[S]_0}{K_M} - \frac{V_\max}{K_M} t\right)
This equation is encompassed by the equation below, obtained by Berberan-Santos, which is also valid when the initial substrate concentration is close to that of enzyme,
where W[ ] is again the Lambert-W function.
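A minimal numerical sketch of this closed-form time course, assuming SciPy's implementation of the Lambert-W function is available; the parameter values are arbitrary illustrations:

```python
# Sketch of the closed-form Michaelis–Menten time course via the Lambert-W
# function (scipy.special.lambertw); parameter values are illustrative only.
import numpy as np
from scipy.special import lambertw

def substrate_timecourse(t, s0, vmax, km):
    """[S](t) from the Schnell–Mendoza closed-form solution."""
    f = (s0 / km) * np.exp(s0 / km - vmax * t / km)
    return km * np.real(lambertw(f))

t = np.linspace(0.0, 60.0, 7)  # s
print(substrate_timecourse(t, s0=10.0, vmax=1.0, km=2.5))  # decays toward 0
# at t = 0 the identity W(x e^x) = x recovers [S](0) = s0 exactly
```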
Linear plots of the Michaelis–Menten equation
The plot of v versus [S] above is not linear; although initially linear at low [S], it bends over to saturate at high [S]. Before the modern era of nonlinear curve-fitting on computers, this nonlinearity could make it difficult to estimate KM and Vmax accurately. Therefore, several researchers developed linearisations of the Michaelis–Menten equation, such as the Lineweaver–Burk plot, the Eadie–Hofstee diagram and the Hanes–Woolf plot. All of these linear representations can be useful for visualising data, but none should be used to determine kinetic parameters, as computer software is readily available that allows for more accurate determination by nonlinear regression methods.
The Lineweaver–Burk plot or double reciprocal plot is a common way of illustrating kinetic data. This is produced by taking the reciprocal of both sides of the Michaelis–Menten equation. As shown on the right, this is a linear form of the Michaelis–Menten equation and produces a straight line with the equation y = mx + c with a y-intercept equivalent to 1/Vmax and an x-intercept of the graph representing −1/KM.
Naturally, no experimental values can be taken at negative 1/[S]; the lower limiting value 1/[S] = 0 (the y-intercept) corresponds to an infinite substrate concentration, where 1/v=1/Vmax as shown at the right; thus, the x-intercept is an extrapolation of the experimental data taken at positive concentrations. More generally, the Lineweaver–Burk plot skews the importance of measurements taken at low substrate concentrations and, thus, can yield inaccurate estimates of Vmax and KM. A more accurate linear plotting method is the Eadie–Hofstee plot. In this case, v is plotted against v/[S]. In the third common linear representation, the Hanes–Woolf plot, [S]/v is plotted against [S].
In general, data normalisation can help diminish the amount of experimental work and can increase the reliability of the output, and is suitable for both graphical and numerical analysis.
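As a sketch of the recommended nonlinear-regression route (synthetic noisy data generated for illustration, assuming SciPy is available):

```python
# Sketch: estimating KM and Vmax by nonlinear regression rather than a
# linearised plot. The "data" here are synthetic, generated for illustration.
import numpy as np
from scipy.optimize import curve_fit

def mm(s, vmax, km):
    return vmax * s / (km + s)

rng = np.random.default_rng(0)
s = np.array([0.5, 1, 2, 4, 8, 16, 32.0])                         # µM
v = mm(s, 100.0, 2.5) * (1 + 0.05 * rng.standard_normal(s.size))  # noisy rates

(vmax_fit, km_fit), _ = curve_fit(mm, s, v, p0=[max(v), np.median(s)])
print(f"Vmax ≈ {vmax_fit:.1f}, KM ≈ {km_fit:.2f}")  # close to 100 and 2.5
```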
Practical significance of kinetic constants
The study of enzyme kinetics is important for two basic reasons. Firstly, it helps explain how enzymes work, and secondly, it helps predict how enzymes behave in living organisms. The kinetic constants defined above, KM and Vmax, are critical to attempts to understand how enzymes work together to control metabolism.
Making these predictions is not trivial, even for simple systems. For example, oxaloacetate is formed by malate dehydrogenase within the mitochondrion. Oxaloacetate can then be consumed by citrate synthase, phosphoenolpyruvate carboxykinase or aspartate aminotransferase, feeding into the citric acid cycle, gluconeogenesis or aspartic acid biosynthesis, respectively. Being able to predict how much oxaloacetate goes into which pathway requires knowledge of the concentration of oxaloacetate as well as the concentration and kinetics of each of these enzymes. This aim of predicting the behaviour of metabolic pathways reaches its most complex expression in the synthesis of huge amounts of kinetic and gene expression data into mathematical models of entire organisms. Alternatively, one useful simplification of the metabolic modelling problem is to ignore the underlying enzyme kinetics and only rely on information about the reaction network's stoichiometry, a technique called flux balance analysis.
Michaelis–Menten kinetics with intermediate
One could also consider the less simple case
E + S ⇄ ES → EI → E + P

with rate constants k1 and k−1 for the reversible binding step, k2 for the conversion ES → EI, and k3 for the conversion EI → E + P,
where a complex of the enzyme with an intermediate exists and the intermediate is converted into product in a second step. In this case we have a very similar equation

v_0 = \frac{k_{\rm cat} [E]_{\rm tot} [S]}{K_M + [S]}

but the constants are different:

k_{\rm cat} = \frac{k_2 k_3}{k_2 + k_3} \quad \text{and} \quad K_M = \frac{k_3}{k_2 + k_3} \cdot \frac{k_{-1} + k_2}{k_1}.

We see that in the limiting case k_3 \gg k_2, thus when the last step EI → E + P is much faster than the previous step, we get again the original equation. Mathematically we then have k_{\rm cat} \approx k_2 and K_M \approx \frac{k_{-1} + k_2}{k_1}.
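A quick numeric check of this limiting case; the rate constants are chosen arbitrarily for illustration:

```python
# Numeric sanity check of the limiting case k3 >> k2 for the mechanism with
# an intermediate; rate constants are arbitrary illustrations.
def constants(k1, km1, k2, k3):      # km1 denotes k_-1
    kcat = k2 * k3 / (k2 + k3)
    km = (k3 / (k2 + k3)) * (km1 + k2) / k1
    return kcat, km

kcat, km = constants(k1=1.0, km1=4.0, k2=10.0, k3=1e6)  # k3 >> k2
print(kcat, km)  # ≈ k2 = 10 and ≈ (k-1 + k2)/k1 = 14: the original constants
```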
Multi-substrate reactions
Multi-substrate reactions follow complex rate equations that describe how the substrates bind and in what sequence. The analysis of these reactions is much simpler if the concentration of substrate A is kept constant and substrate B varied. Under these conditions, the enzyme behaves just like a single-substrate enzyme and a plot of v by [S] gives apparent KM and Vmax constants for substrate B. If a set of these measurements is performed at different fixed concentrations of A, these data can be used to work out what the mechanism of the reaction is. For an enzyme that takes two substrates A and B and turns them into two products P and Q, there are two types of mechanism: ternary complex and substituted-enzyme mechanisms.
Ternary-complex mechanisms
In these enzymes, both substrates bind to the enzyme at the same time to produce an EAB ternary complex. The order of binding can either be random (in a random mechanism) or substrates have to bind in a particular sequence (in an ordered mechanism). When a set of v by [S] curves (fixed A, varying B) from an enzyme with a ternary-complex mechanism are plotted in a Lineweaver–Burk plot, the set of lines produced will intersect.
Enzymes with ternary-complex mechanisms include glutathione S-transferase, dihydrofolate reductase and DNA polymerase. The following links show short animations of the ternary-complex mechanisms of the enzymes dihydrofolate reductase and DNA polymerase.
Substituted-enzyme ("ping–pong") mechanisms
As shown on the right, enzymes with a substituted-enzyme mechanism can exist in two states, E and a chemically modified form of the enzyme E*; this modified enzyme is known as an intermediate. In such mechanisms, substrate A binds, changes the enzyme to E* by, for example, transferring a chemical group to the active site, and is then released. Only after the first substrate is released can substrate B bind and react with the modified enzyme, regenerating the unmodified E form. When a set of v by [S] curves (fixed A, varying B) from an enzyme with a substituted-enzyme mechanism are plotted in a Lineweaver–Burk plot, a set of parallel lines will be produced. This is called a secondary plot.
Enzymes with substituted-enzyme mechanisms include some oxidoreductases such as thioredoxin peroxidase, transferases such as acylneuraminate cytidylyltransferase and serine proteases such as trypsin and chymotrypsin. Serine proteases are a very common and diverse family of enzymes, including digestive enzymes (trypsin, chymotrypsin, and elastase), several enzymes of the blood clotting cascade and many others. In these serine proteases, the E* intermediate is an acyl-enzyme species formed by the attack of an active site serine residue on a peptide bond in a protein substrate. A short animation showing the mechanism of chymotrypsin is linked here.
Memory effects
Both of these two types of mechanism can display enzyme memory, with very different causes and consequences in the two cases. In ternary complex mechanisms these are possible if the mechanism includes slow processes and the binding steps are not at quasi-equilibrium, because the intermediates may be swept away very fast. This can generate cooperativity, even in monomeric enzymes. In a substituted-enzyme mechanism slow steps are not needed to generate memory effects. Instead, for an enzyme with several alternative substrates the kinetic properties of the second half reaction may vary with different substrates in the first half reaction, even though the same substituted enzyme seems to be transformed.
Reversible catalysis and the Haldane equation
External factors may limit the ability of an enzyme to catalyse a reaction in both directions (whereas the nature of a catalyst in itself means that it cannot catalyse just one direction, according to the principle of microscopic reversibility). We consider the case of an enzyme that catalyses the reaction in both directions:
E + S ⇄ ES ⇄ E + P

with rate constants k1 and k−1 for the binding step and k2 and k−2 for the release step.
The steady-state, initial rate of the reaction is

v = \frac{\left(k_1 k_2 [S] - k_{-1} k_{-2} [P]\right) [E]_{\rm tot}}{k_{-1} + k_2 + k_1 [S] + k_{-2} [P]}

v is positive if the reaction proceeds in the forward direction (S → P) and negative otherwise.

Equilibrium requires that v = 0, which occurs when \frac{[P]_{\rm eq}}{[S]_{\rm eq}} = \frac{k_1 k_2}{k_{-1} k_{-2}} = K_{\rm eq}. This shows that thermodynamics forces a relation between the values of the 4 rate constants.

The values of the forward and backward maximal rates, obtained for [S] \to \infty, [P] = 0, and [S] = 0, [P] \to \infty, respectively, are V_f = k_2 [E]_{\rm tot} and V_b = k_{-1} [E]_{\rm tot}, respectively. Their ratio V_f / V_b = k_2 / k_{-1} is not equal to the equilibrium constant, which implies that thermodynamics does not constrain the ratio of the maximal rates. This explains why enzymes can be much "better catalysts" (in terms of maximal rates) in one particular direction of the reaction.

One can also derive the two Michaelis constants K_{M,S} = \frac{k_{-1} + k_2}{k_1} and K_{M,P} = \frac{k_{-1} + k_2}{k_{-2}}. The Haldane equation is the relation K_{\rm eq} = \frac{[P]_{\rm eq}}{[S]_{\rm eq}} = \frac{V_f / K_{M,S}}{V_b / K_{M,P}}.

Therefore, thermodynamics constrains the ratio between the forward and backward V_\max/K_M values, not the ratio of the V_\max values.
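A quick numeric check of the Haldane relation; the rate constants are arbitrary illustrations:

```python
# Numeric check of the Haldane relation for arbitrary illustrative constants.
k1, km1, k2, km2 = 3.0, 0.7, 5.0, 0.2   # km1, km2 denote k_-1, k_-2
etot = 1.0

keq = k1 * k2 / (km1 * km2)
vf, vb = k2 * etot, km1 * etot
kms = (km1 + k2) / k1
kmp = (km1 + k2) / km2

assert abs(keq - (vf / kms) / (vb / kmp)) < 1e-12
print(keq)       # thermodynamics fixes this ratio of (Vmax/KM) values...
print(vf / vb)   # ...but not the ratio of the maximal rates themselves
```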
Non-Michaelis–Menten kinetics
Many different enzyme systems follow non-Michaelis–Menten behaviour. A select few examples include kinetics of self-catalytic enzymes, cooperative and allosteric enzymes, interfacial and intracellular enzymes, and processive enzymes. Some enzymes produce a sigmoid v by [S] plot, which often indicates cooperative binding of substrate to the active site. This means that the binding of one substrate molecule affects the binding of subsequent substrate molecules. This behaviour is most common in multimeric enzymes with several interacting active sites. Here, the mechanism of cooperation is similar to that of hemoglobin, with binding of substrate to one active site altering the affinity of the other active sites for substrate molecules. Positive cooperativity occurs when binding of the first substrate molecule increases the affinity of the other active sites for substrate. Negative cooperativity occurs when binding of the first substrate decreases the affinity of the enzyme for other substrate molecules.
Allosteric enzymes include mammalian tyrosyl tRNA-synthetase, which shows negative cooperativity, and bacterial aspartate transcarbamoylase and phosphofructokinase, which show positive cooperativity.
Cooperativity is surprisingly common and can help regulate the responses of enzymes to changes in the concentrations of their substrates. Positive cooperativity makes enzymes much more sensitive to [S] and their activities can show large changes over a narrow range of substrate concentration. Conversely, negative cooperativity makes enzymes insensitive to small changes in [S].
The Hill equation is often used to describe the degree of cooperativity quantitatively in non-Michaelis–Menten kinetics. The derived Hill coefficient n measures how much the binding of substrate to one active site affects the binding of substrate to the other active sites. A Hill coefficient of <1 indicates negative cooperativity and a coefficient of >1 indicates positive cooperativity.
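As a sketch, the Hill rate law can be evaluated directly; the coefficient and half-saturation values below are illustrative only:

```python
# Sketch of the Hill equation; n and the half-saturation constant are
# illustrative values, not measurements.
def hill_rate(s, vmax, k_half, n):
    """Rate for cooperative binding; n > 1 gives a sigmoid v vs [S] curve."""
    return vmax * s**n / (k_half**n + s**n)

for n in (0.5, 1.0, 4.0):  # negative, no, and positive cooperativity
    v_low = hill_rate(0.5, 1.0, 1.0, n)
    v_high = hill_rate(2.0, 1.0, 1.0, n)
    print(n, round(v_high - v_low, 3))  # larger n → sharper response to [S]
```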
Pre-steady-state kinetics
In the first moment after an enzyme is mixed with substrate, no product has been formed and no intermediates exist. The study of the next few milliseconds of the reaction is called pre-steady-state kinetics. Pre-steady-state kinetics is therefore concerned with the formation and consumption of enzyme–substrate intermediates (such as ES or E*) until their steady-state concentrations are reached.
This approach was first applied to the hydrolysis reaction catalysed by chymotrypsin. Often, the detection of an intermediate is a vital piece of evidence in investigations of what mechanism an enzyme follows. For example, in the ping–pong mechanisms that are shown above, rapid kinetic measurements can follow the release of product P and measure the formation of the modified enzyme intermediate E*. In the case of chymotrypsin, this intermediate is formed by an attack on the substrate by the nucleophilic serine in the active site and the formation of the acyl-enzyme intermediate.
In the figure to the right, the enzyme produces E* rapidly in the first few seconds of the reaction. The rate then slows as steady state is reached. This rapid burst phase of the reaction measures a single turnover of the enzyme. Consequently, the amount of product released in this burst, shown as the intercept on the y-axis of the graph, also gives the amount of functional enzyme which is present in the assay.
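A common way to model such a burst is the sum of an exponential phase and a linear steady-state phase; the following sketch assumes that standard two-phase form, with purely illustrative parameters:

```python
# Sketch of a burst-phase progress curve, P(t) = A(1 - exp(-k t)) + v_ss * t;
# the burst amplitude A estimates the active-enzyme concentration.
# All parameter values are illustrative.
import numpy as np

def burst_progress(t, amplitude, k_burst, v_ss):
    return amplitude * (1.0 - np.exp(-k_burst * t)) + v_ss * t

t = np.linspace(0.0, 5.0, 6)
print(burst_progress(t, amplitude=1.0, k_burst=10.0, v_ss=0.2))
# extrapolating the late linear phase back to t = 0 gives a y-intercept of A
```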
Chemical mechanism
An important goal of measuring enzyme kinetics is to determine the chemical mechanism of an enzyme reaction, i.e., the sequence of chemical steps that transform substrate into product. The kinetic approaches discussed above will show at what rates intermediates are formed and inter-converted, but they cannot identify exactly what these intermediates are.
Kinetic measurements taken under various solution conditions or on slightly modified enzymes or substrates often shed light on this chemical mechanism, as they reveal the rate-determining step or intermediates in the reaction. For example, the breaking of a covalent bond to a hydrogen atom is a common rate-determining step. Which of the possible hydrogen transfers is rate determining can be shown by measuring the kinetic effects of substituting each hydrogen by deuterium, its stable isotope. The rate will change when the critical hydrogen is replaced, due to a primary kinetic isotope effect, which occurs because bonds to deuterium are harder to break than bonds to hydrogen. It is also possible to measure similar effects with other isotope substitutions, such as 13C/12C and 18O/16O, but these effects are more subtle.
Isotopes can also be used to reveal the fate of various parts of the substrate molecules in the final products. For example, it is sometimes difficult to discern the origin of an oxygen atom in the final product; since it may have come from water or from part of the substrate. This may be determined by systematically substituting oxygen's stable isotope 18O into the various molecules that participate in the reaction and checking for the isotope in the product. The chemical mechanism can also be elucidated by examining the kinetics and isotope effects under different pH conditions, by altering the metal ions or other bound cofactors, by site-directed mutagenesis of conserved amino acid residues, or by studying the behaviour of the enzyme in the presence of analogues of the substrate(s).
Enzyme inhibition and activation
Enzyme inhibitors are molecules that reduce or abolish enzyme activity, while enzyme activators are molecules that increase the catalytic rate of enzymes. These interactions can be either reversible (i.e., removal of the inhibitor restores enzyme activity) or irreversible (i.e., the inhibitor permanently inactivates the enzyme).
Reversible inhibitors
Traditionally reversible enzyme inhibitors have been classified as competitive, uncompetitive, or non-competitive, according to their effects on KM and Vmax. These different effects result from the inhibitor binding to the enzyme E, to the enzyme–substrate complex ES, or to both, respectively. The division into these classes arises from a problem in their derivation and results in the need to use two different binding constants for one binding event. The binding of an inhibitor and its effect on the enzymatic activity are two distinctly different things, another problem the traditional equations fail to acknowledge. For noncompetitive inhibition the traditional treatment assumes that binding of the inhibitor results in 100% inhibition of the enzyme, and fails to consider the possibility of anything in between.

In noncompetitive inhibition, the inhibitor binds to an enzyme at its allosteric site; therefore, the binding affinity, or inverse of KM, of the substrate with the enzyme will remain the same. On the other hand, the Vmax will decrease relative to an uninhibited enzyme. On a Lineweaver–Burk plot, the presence of a noncompetitive inhibitor is illustrated by a change in the y-intercept, defined as 1/Vmax. The x-intercept, defined as −1/KM, will remain the same. In competitive inhibition, the inhibitor binds to an enzyme at the active site, competing with the substrate. As a result, the KM will increase and the Vmax will remain the same.

The common form of the inhibitory term also obscures the relationship between the inhibitor binding to the enzyme and its relationship to any other binding term, be it the Michaelis–Menten equation or a dose–response curve associated with ligand–receptor binding. To demonstrate the relationship the following rearrangement can be made:
\frac{V_\max}{1 + \frac{[I]}{K_i}} = \frac{V_\max K_i}{K_i + [I]}

Adding zero to the numerator ([I] − [I]):

\frac{V_\max \left(K_i + [I] - [I]\right)}{K_i + [I]}

Dividing through by [I] + K_i:

V_\max \left(1 - \frac{[I]}{[I] + K_i}\right)
This notation demonstrates that, similar to the Michaelis–Menten equation, where the rate of reaction depends on the percent of the enzyme population interacting with substrate, the effect of the inhibitor is a result of the percent of the enzyme population interacting with inhibitor. The only problem with this equation in its present form is that it assumes absolute inhibition of the enzyme with inhibitor binding, when in fact there can be a wide range of effects anywhere from 100% inhibition of substrate turnover to just >0%. To account for this, the equation can be easily modified to allow for different degrees of inhibition by including a delta Vmax term:

V = V_{\max 1} - \Delta V_\max \frac{[I]}{K_i + [I]}

or

V = V_{\max 1} - \left(V_{\max 1} - V_{\max 2}\right) \frac{[I]}{K_i + [I]}

This term can then define the residual enzymatic activity present when the inhibitor is interacting with individual enzymes in the population. However, the inclusion of this term has the added value of allowing for the possibility of activation if the secondary Vmax term turns out to be higher than the initial term. To account for the possibility of activation as well, the notation can then be rewritten replacing the inhibitor "I" with a modifier term denoted here as "X":

V = V_{\max 1} - \left(V_{\max 1} - V_{\max 2}\right) \frac{[X]}{K_x + [X]}
While this terminology results in a simplified way of dealing with kinetic effects relating to the maximum velocity of the Michaelis–Menten equation, it highlights potential problems with the term used to describe effects relating to the KM. The KM, relating to the affinity of the enzyme for the substrate, should in most cases relate to potential changes in the binding site of the enzyme which would directly result from enzyme–inhibitor interactions. As such, a term similar to the one proposed above to modulate Vmax should be appropriate in most situations:

K_M = K_{M1} - \left(K_{M1} - K_{M2}\right) \frac{[X]}{K_x + [X]}
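A sketch of the modified-Vmax form above; the function name and all parameter values are ours, for illustration only:

```python
# Sketch of the modified-velocity form allowing partial inhibition or
# activation; vmax2 is the residual (or enhanced) limiting rate when the
# modifier X is fully bound. All values are illustrative.
def modified_vmax(x, vmax1, vmax2, kx):
    return vmax1 - (vmax1 - vmax2) * x / (kx + x)

print(modified_vmax(1e9, 100.0, 0.0, 5.0))    # full inhibition at saturating X
print(modified_vmax(1e9, 100.0, 40.0, 5.0))   # partial inhibition (60%)
print(modified_vmax(1e9, 100.0, 250.0, 5.0))  # activation: vmax2 > vmax1
```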
Irreversible inhibitors
Enzyme inhibitors can also irreversibly inactivate enzymes, usually by covalently modifying active site residues. These reactions, whose substrates may be called suicide substrates, follow exponential decay functions and are usually saturable. Below saturation, they follow first-order kinetics with respect to inhibitor. Irreversible inhibition can be classified into two distinct types. Affinity labelling is a type of irreversible inhibition where a highly reactive functional group modifies a catalytically critical residue on the protein of interest to bring about inhibition. Mechanism-based inhibition, on the other hand, involves binding of the inhibitor followed by enzyme-mediated alterations that transform the inhibitor into a reactive group that irreversibly modifies the enzyme.
Philosophical discourse on reversibility and irreversibility of inhibition
Having discussed reversible inhibition and irreversible inhibition in the above two headings, it should be pointed out that the concept of reversibility (or irreversibility) is a purely practical construct dependent on the time-frame of the assay: a reversible assay involving association and dissociation of the inhibitor molecule on minute timescales would seem irreversible if the assay assessed the outcome in seconds, and vice versa. There is a continuum of inhibitor behaviours spanning reversibility and irreversibility for a given assay time frame. There are inhibitors that show slow-onset behaviour, and most of these inhibitors also show tight binding to the protein target of interest.
Mechanisms of catalysis
The favoured model for the enzyme–substrate interaction is the induced fit model. This model proposes that the initial interaction between enzyme and substrate is relatively weak, but that these weak interactions rapidly induce conformational changes in the enzyme that strengthen binding. These conformational changes also bring catalytic residues in the active site close to the chemical bonds in the substrate that will be altered in the reaction. Conformational changes can be measured using circular dichroism or dual polarisation interferometry. After binding takes place, one or more mechanisms of catalysis lower the energy of the reaction's transition state by providing an alternative chemical pathway for the reaction. Mechanisms of catalysis include catalysis by bond strain; by proximity and orientation; by active-site proton donors or acceptors; covalent catalysis and quantum tunnelling.
Enzyme kinetics cannot prove which modes of catalysis are used by an enzyme. However, some kinetic data can suggest possibilities to be examined by other techniques. For example, a ping–pong mechanism with burst-phase pre-steady-state kinetics would suggest covalent catalysis might be important in this enzyme's mechanism. Alternatively, the observation of a strong pH effect on Vmax but not KM might indicate that a residue in the active site needs to be in a particular ionisation state for catalysis to occur.
History
In 1902 Victor Henri proposed a quantitative theory of enzyme kinetics, but at the time the experimental significance of the hydrogen ion concentration was not yet recognized. After Peter Lauritz Sørensen had defined the logarithmic pH-scale and introduced the concept of buffering in 1909 the German chemist Leonor Michaelis and Dr. Maud Leonora Menten (a postdoctoral researcher in Michaelis's lab at the time) repeated Henri's experiments and confirmed his equation, which is now generally referred to as Michaelis-Menten kinetics (sometimes also Henri-Michaelis-Menten kinetics). Their work was further developed by G. E. Briggs and J. B. S. Haldane, who derived kinetic equations that are still widely considered today a starting point in modeling enzymatic activity.
The major contribution of the Henri-Michaelis-Menten approach was to think of enzyme reactions in two stages. In the first, the substrate binds reversibly to the enzyme, forming the enzyme-substrate complex. This is sometimes called the Michaelis complex. The enzyme then catalyzes the chemical step in the reaction and releases the product. The kinetics of many enzymes is adequately described by the simple Michaelis-Menten model, but all enzymes have internal motions that are not accounted for in the model and can have significant contributions to the overall reaction kinetics. This can be modeled by introducing several Michaelis-Menten pathways that are connected with fluctuating rates, which is a mathematical extension of the basic Michaelis Menten mechanism.
Software
ENZO (Enzyme Kinetics) is a graphical interface tool for building kinetic models of enzyme catalyzed reactions. ENZO automatically generates the corresponding differential equations from a stipulated enzyme reaction scheme. These differential equations are processed by a numerical solver and a regression algorithm which fits the coefficients of differential equations to experimentally observed time course curves. ENZO allows rapid evaluation of rival reaction schemes and can be used for routine tests in enzyme kinetics.
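In the same spirit as ENZO (though not its actual code), a minimal sketch numerically integrates the elementary scheme E + S ⇄ ES → E + P with SciPy; the rate constants and initial concentrations are illustrative:

```python
# Minimal sketch, in the spirit of tools like ENZO: numerically integrating
# the elementary scheme E + S <=> ES -> E + P. Constants are illustrative.
import numpy as np
from scipy.integrate import solve_ivp

k1, km1, k2 = 1.0, 4.0, 10.0   # binding, unbinding, catalysis

def rhs(t, y):
    e, s, es, p = y
    v_bind, v_unbind, v_cat = k1 * e * s, km1 * es, k2 * es
    return [-v_bind + v_unbind + v_cat,   # dE/dt
            -v_bind + v_unbind,           # dS/dt
             v_bind - v_unbind - v_cat,   # dES/dt
             v_cat]                       # dP/dt

sol = solve_ivp(rhs, (0.0, 10.0), [1.0, 50.0, 0.0, 0.0], rtol=1e-8)
print(sol.y[3, -1])  # product formed; compare with the Michaelis–Menten model
```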
See also
Protein dynamics
Diffusion limited enzyme
Langmuir adsorption model
Footnotes
α. Link: Interactive Michaelis–Menten kinetics tutorial (Java required)
β. Link: dihydrofolate reductase mechanism (Gif)
γ. Link: DNA polymerase mechanism (Gif)
δ. Link: Chymotrypsin mechanism (Flash required)
References
Further reading
Introductory
Advanced
External links
Animation of an enzyme assay — Shows effects of manipulating assay conditions
MACiE — A database of enzyme reaction mechanisms
ENZYME — Expasy enzyme nomenclature database
ENZO — Web application for easy construction and quick testing of kinetic models of enzyme catalyzed reactions.
ExCatDB — A database of enzyme catalytic mechanisms
BRENDA — Comprehensive enzyme database, giving substrates, inhibitors and reaction diagrams
SABIO-RK — A database of reaction kinetics
Joseph Kraut's Research Group, University of California San Diego — Animations of several enzyme reaction mechanisms
Symbolism and Terminology in Enzyme Kinetics — A comprehensive explanation of concepts and terminology in enzyme kinetics
An introduction to enzyme kinetics — An accessible set of on-line tutorials on enzyme kinetics
Enzyme kinetics animated tutorial — An animated tutorial with audio
Catalysis | Enzyme kinetics | [
"Chemistry"
] | 8,002 | [
"Catalysis",
"Chemical kinetics",
"Enzyme kinetics"
] |
3,043,958 | https://en.wikipedia.org/wiki/Fixed%20points%20of%20isometry%20groups%20in%20Euclidean%20space | A fixed point of an isometry group is a point that is a fixed point for every isometry in the group. For any isometry group in Euclidean space the set of fixed points is either empty or an affine space.
For an object, any unique centre and, more generally, any point with unique properties with respect to the object is a fixed point of its symmetry group.
In particular this applies for the centroid of a figure, if it exists. In the case of a physical body, if for the symmetry not only the shape but also the density is taken into account, it applies to the centre of mass.
If the set of fixed points of the symmetry group of an object is a singleton then the object has a specific centre of symmetry. The centroid and centre of mass, if defined, are this point. Another meaning of "centre of symmetry" is a point with respect to which inversion symmetry applies. Such a point need not be unique; if it is not unique, there is translational symmetry, hence there are infinitely many such points. On the other hand, in the cases of e.g. C3h and D2 symmetry there is a centre of symmetry in the first sense, but no inversion.
If the symmetry group of an object has no fixed points then the object is infinite and its centroid and centre of mass are undefined.
If the set of fixed points of the symmetry group of an object is a line or plane then the centroid and centre of mass of the object, if defined, and any other point that has unique properties with respect to the object, are on this line or plane.
1D
Line
Only the trivial isometry group leaves the whole line fixed.
Point
The groups generated by a reflection leave a point fixed.
2D
Plane
Only the trivial isometry group C1 leaves the whole plane fixed.
Line
Cs with respect to any line leaves that line fixed.
Point
The point groups in two dimensions with respect to any point leave that point fixed.
3D
Space
Only the trivial isometry group C1 leaves the whole space fixed.
Plane
Cs with respect to a plane leaves that plane fixed.
Line
Isometry groups leaving a line fixed are isometries which in every plane perpendicular to that line have common 2D point groups in two dimensions with respect to the point of intersection of the line and the planes.
Cn ( n > 1 ) and Cnv ( n > 1 )
cylindrical symmetry without reflection symmetry in a plane perpendicular to the axis
cases in which the symmetry group is an infinite subset of that of cylindrical symmetry
Point
All other point groups in three dimensions
No fixed points
The isometry group contains translations or a screw operation.
Arbitrary dimension
Point
One example of an isometry group, applying in every dimension, is that generated by inversion in a point. An n-dimensional parallelepiped is an example of an object invariant under such an inversion.
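A small numerical sketch of this, with illustrative values: inversion in a point c maps x to 2c − x, preserves distances, and fixes only c itself.

```python
# Sketch: inversion in a point c, x -> 2c - x, is an isometry whose only
# fixed point is c itself. Dimension and values are illustrative.
import numpy as np

c = np.array([1.0, -2.0, 0.5])   # centre of inversion in R^3
invert = lambda x: 2 * c - x

x = np.array([3.0, 0.0, 4.0])
assert np.allclose(np.linalg.norm(invert(x) - invert(c)),
                   np.linalg.norm(x - c))  # distances are preserved
assert np.allclose(invert(c), c)           # c is fixed
assert not np.allclose(invert(x), x)       # a generic point is not
```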
References
Slavik V. Jablan, Symmetry, Ornament and Modularity, Volume 30 of K & E Series on Knots and Everything, World Scientific, 2002.
Euclidean symmetries
Group theory
Fixed points (mathematics)
Geometric centers | Fixed points of isometry groups in Euclidean space | [
"Physics",
"Mathematics"
] | 630 | [
"Functions and mappings",
"Point (geometry)",
"Euclidean symmetries",
"Mathematical analysis",
"Fixed points (mathematics)",
"Geometric centers",
"Mathematical objects",
"Group theory",
"Fields of abstract algebra",
"Topology",
"Mathematical relations",
"Symmetry",
"Dynamical systems"
] |
3,043,978 | https://en.wikipedia.org/wiki/Generic%20point | In algebraic geometry, a generic point P of an algebraic variety X is a point in a general position, at which all generic properties are true, a generic property being a property which is true for almost every point.
In classical algebraic geometry, a generic point of an affine or projective algebraic variety of dimension d is a point such that the field generated by its coordinates has transcendence degree d over the field generated by the coefficients of the equations of the variety.
In scheme theory, the spectrum of an integral domain has a unique generic point, which is the zero ideal. As the closure of this point for the Zariski topology is the whole spectrum, the definition has been extended to general topology, where a generic point of a topological space X is a point whose closure is X.
Definition and motivation
A generic point of the topological space X is a point P whose closure is all of X, that is, a point that is dense in X.
The terminology arises from the case of the Zariski topology on the set of subvarieties of an algebraic set: the algebraic set is irreducible (that is, it is not the union of two proper algebraic subsets) if and only if the topological space of the subvarieties has a generic point.
Examples
The only Hausdorff space that has a generic point is the singleton set.
Any integral scheme has a (unique) generic point; in the case of an affine integral scheme (i.e., the prime spectrum of an integral domain) the generic point is the point associated to the prime ideal (0).
History
In the foundational approach of André Weil, developed in his Foundations of Algebraic Geometry, generic points played an important role, but were handled in a different manner. For an algebraic variety V over a field K, generic points of V were a whole class of points of V taking values in a universal domain Ω, an algebraically closed field containing K but also an infinite supply of fresh indeterminates. This approach worked, without any need to deal directly with the topology of V (K-Zariski topology, that is), because the specializations could all be discussed at the field level (as in the valuation theory approach to algebraic geometry, popular in the 1930s).
This was at a cost of there being a huge collection of equally generic points. Oscar Zariski, a colleague of Weil's at São Paulo just after World War II, always insisted that generic points should be unique. (This can be put back into topologists' terms: Weil's idea fails to give a Kolmogorov space and Zariski thinks in terms of the Kolmogorov quotient.)
In the rapid foundational changes of the 1950s Weil's approach became obsolete. In scheme theory, though, from 1957, generic points returned: this time à la Zariski. For example for R a discrete valuation ring, Spec(R) consists of two points, a generic point (coming from the prime ideal {0}) and a closed point or special point coming from the unique maximal ideal. For morphisms to Spec(R), the fiber above the special point is the special fiber, an important concept for example in reduction modulo p, monodromy theory and other theories about degeneration. The generic fiber, equally, is the fiber above the generic point. Geometry of degeneration is largely then about the passage from generic to special fibers, or in other words how specialization of parameters affects matters. (For a discrete valuation ring the topological space in question is the Sierpinski space of topologists. Other local rings have unique generic and special points, but a more complicated spectrum, since they represent general dimensions. The discrete valuation case is much like the complex unit disk, for these purposes.)
References
Algebraic geometry
General topology | Generic point | [
"Mathematics"
] | 789 | [
"General topology",
"Fields of abstract algebra",
"Topology",
"Algebraic geometry"
] |
3,044,088 | https://en.wikipedia.org/wiki/Compensation%20point | The light compensation point (Ic) is the light intensity on the light curve where the rate of photosynthesis exactly matches the rate of cellular respiration. At this point, the uptake of CO2 through photosynthetic pathways is equal to the respiratory release of carbon dioxide, and the uptake of O2 by respiration is equal to the photosynthetic release of oxygen. The concept of compensation points in general may be applied to other photosynthetic variables, the most important being that of CO2 concentration – the CO2 compensation point (Γ). The interval of time during the day when light intensity is so low that the net gaseous exchange is zero is also referred to as the compensation point.
In assimilation terms, at the compensation point, the net carbon dioxide assimilation is zero. Leaves release CO2 by photorespiration and cellular respiration, but CO2 is also converted into carbohydrate by photosynthesis. Assimilation is therefore the difference in the rate of these processes. At a given partial pressure of CO2 (0.343 hPa in 1980 atmosphere), there is an irradiation at which the net assimilation of CO2 is zero. For instance, in the early morning and late evenings, the light compensation point Ic may be reached as photosynthetic activity decreases and respiration increases. The concentration of CO2 also affects the rates of photosynthesis and photorespiration. Higher CO2 concentrations favour photosynthesis whereas low CO2 concentrations favor photorespiration, producing a CO2 compensation point Γ for a given irradiation.
Light compensation point
As defined above, the light compensation point Ic is when no net carbon assimilation occurs. At this point, the organism is neither consuming nor building biomass. The net gaseous exchange is also zero at this point.
Ic is a practical value that can be reached during early mornings and early evenings. Respiration is relatively constant with regard to light, whereas photosynthesis depends on the intensity of sunlight.
Depth
For aquatic plants where the level of light at any given depth is roughly constant for most of the day, the compensation point is the depth at which light penetrating the water creates the same balanced effect.
CO2 compensation point
The CO2 compensation point (Γ) is the CO2 concentration at which the rate of photosynthesis exactly matches the rate of respiration. There is a significant difference in Γ between C3 and C4 plants: on land, the typical value for Γ in a C3 plant ranges from 40–100 μmol/mol, while in C4 plants the values are lower at 3–10 μmol/mol. Plants with a weaker CO2-concentrating mechanism (CCM), such as C2 photosynthesis, may display an intermediate value at 25 μmol/mol.
The μmol/mol unit may alternatively be expressed as the partial pressure of CO2 in pascals; for atmospheric conditions, 1 μmol/mol = 1 ppm ≈ 0.1 Pa. For modeling of photosynthesis, the more important variable is the CO2 compensation point in the absence of mitochondrial respiration, also known as the CO2 photocompensation point (Γ*), the biochemical CO2 compensation point of Rubisco. It may be measured by whole-leaf isotopic gas exchange, or be estimated in the Laisk method using an intermediate "apparent" value of C* with correction. C* approximates Γ* in the absence of carbon refixation, i.e. carbon fixation from photorespiration products. In C4 plants, both values are lower than their C3 counterparts. In C2 plants that operate by refixation, only C* is significantly lower.
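The stated unit conversion is simple enough to sanity-check numerically; the values below are illustrative:

```python
# Unit-conversion sketch: a CO2 mole fraction in µmol/mol corresponds to a
# partial pressure of (mole fraction) x (total pressure).
P_ATM = 101_325.0  # Pa, standard atmospheric pressure

def mole_fraction_to_pa(umol_per_mol, p_total=P_ATM):
    return umol_per_mol * 1e-6 * p_total

print(mole_fraction_to_pa(1.0))   # ≈ 0.10 Pa, matching the 1 ppm ≈ 0.1 Pa rule
print(mole_fraction_to_pa(50.0))  # a typical C3 compensation point, ≈ 5 Pa
```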
As it is not yet common to routinely change the O2 concentration of the air, the compensation points are largely theoretical, derived from modeling and extrapolation, though they do hold up well in these applications. Both Γ and Γ* are linearly related to the partial pressure of oxygen (p(O2)) due to the side reaction of Rubisco. Γ is also related to temperature due to the temperature-dependence of respiration rates. It is also related to irradiation, as light is required to produce RuBP (ribulose-1,5-bisphosphate), the electron acceptor for Rubisco. At normal irradiation, there would almost always be enough RuBP; but at low irradiation, lack of RuBP decreases the photosynthetic activity and therefore affects Γ.
The marine environment
Respiration occurs by both plants and animals throughout the water column, resulting in the destruction, or usage, of organic matter, but photosynthesis can only take place via photosynthetic algae in the presence of light, nutrients and CO2. In well-mixed water columns plankton are evenly distributed, but a net production only occurs above the compensation depth. Below the compensation depth there is a net loss of organic matter. The total population of photosynthetic organisms cannot increase if the loss exceeds the net production.
The compensation depth between photosynthesis and respiration of phytoplankton in the ocean must depend on several factors: the illumination at the surface, the transparency of the water, the biological character of the plankton present, and the temperature. The compensation point is found nearer to the surface as one moves closer to the coast. It is also lower in the winter seasons in the Baltic Sea, according to a study that examined the compensation point of multiple photosynthetic species. The blue portion of the visible spectrum, between 455 and 495 nanometers, dominates light at the compensation depth.
A concern regarding the concept of the compensation point is it assumes that phytoplankton remain at a fixed depth throughout a 24-hour period (time frame in which compensation depth is measured), but phytoplankton experience displacement due to isopycnals moving them tens of meters.
See also
Photophosphorylation
Critical depth
CO2 fertilization effect
References
Photosynthesis | Compensation point | [
"Chemistry",
"Biology"
] | 1,222 | [
"Biochemistry",
"Photosynthesis"
] |
3,044,121 | https://en.wikipedia.org/wiki/Half%20range%20Fourier%20series | In mathematics, a half range Fourier series is a Fourier series defined on an interval instead of the more common , with the implication that the analyzed function should be extended to as either an even (f(-x)=f(x)) or odd function (f(-x)=-f(x)). This allows the expansion of the function in a series solely of sines (odd) or cosines (even). The choice between odd and even is typically motivated by boundary conditions associated with a differential equation satisfied by .
Example
Calculate the half range Fourier sine series for the function f(x) = \cos x, where 0 < x < \pi.

Since we are calculating a sine series, the coefficients are

b_n = \frac{2}{\pi} \int_0^\pi \cos x \, \sin nx \, dx.

Now, for n \ge 2,

b_n = \frac{2}{\pi} \cdot \frac{n \left[1 + (-1)^n\right]}{n^2 - 1}.

When n is odd,

b_n = 0.

When n is even,

b_n = \frac{4n}{\pi (n^2 - 1)},

thus, writing n = 2k,

b_{2k} = \frac{8k}{\pi (4k^2 - 1)}, \qquad k = 1, 2, 3, \ldots

With the special case b_1 = \frac{2}{\pi} \int_0^\pi \cos x \, \sin x \, dx = 0, hence the required Fourier sine series is

\cos x = \frac{8}{\pi} \sum_{k=1}^{\infty} \frac{k}{4k^2 - 1} \sin 2kx.
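A quick numerical check of the series at an interior point; the partial sums converge slowly but visibly toward cos x:

```python
# Numeric check of the sine series above at a sample interior point.
import math

def sine_series(x, terms=2000):
    return (8 / math.pi) * sum(
        k * math.sin(2 * k * x) / (4 * k * k - 1) for k in range(1, terms + 1)
    )

x = 1.0                             # any 0 < x < pi
print(sine_series(x), math.cos(x))  # the partial sum approaches cos(x)
```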
Fourier series | Half range Fourier series | [
"Mathematics"
] | 164 | [
"Mathematical analysis",
"Mathematical analysis stubs"
] |
3,044,148 | https://en.wikipedia.org/wiki/Thermal%20inertia | Thermal inertia is a term commonly used to describe the observed delays in a body's temperature response during heat transfers. The phenomenon exists because of a body's ability to both store and transport heat relative to its environment. Since the configuration of system components and mix of transport mechanisms (e.g. conduction, convection, radiation, phase change) vary substantially between instances, there is no generally applicable mathematical definition of closed form for thermal inertia.
Bodies with relatively large mass and heat capacity typically exhibit slower temperature responses. However heat capacity alone cannot accurately quantify thermal inertia. Measurements of it further depend on how heat flows are distributed inside and outside a body.
Whether thermal inertia is an intensive or extensive quantity depends upon context. Some authors have identified it as an intensive material property, for example in association with thermal effusivity. It has also been evaluated as an extensive quantity based upon the measured or simulated spatial-temporal behavior of a system during transient heat transfers. A time constant is then sometimes appropriately used as a simple parametrization for thermal inertia of a selected component or subsystem.
Description
A thermodynamic system containing one or more components with large heat capacity indicates that dynamic, or transient, effects must be considered when measuring or modelling system behavior. Steady-state calculations, many of which produce valid estimates of equilibrium heat flows and temperatures without an accounting for thermal inertia, nevertheless yield no information on the pace of changes between equilibrium states. Nowadays the spatial-temporal behavior of complex systems can be precisely evaluated with detailed numerical simulation. In some cases a lumped system analysis can estimate a thermal time constant.
A larger heat capacity C for a component generally means a longer time to reach equilibrium. The transition rate also depends on the component's internal and environmental heat transfer coefficients, as referenced over an interface area A. The time constant for an estimated exponential transition of the component's temperature will adjust as τ = C/(hA) under conditions which obey Newton's law of cooling, that is, when the body is characterized by a ratio of internal to external thermal resistance, the Biot number, much less than one.
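A minimal sketch of such a lumped-capacitance estimate; all property values are illustrative:

```python
# Sketch of a lumped-capacitance cooling estimate; all values illustrative.
import math

C = 500.0  # J/K, heat capacity of the component
h = 25.0   # W/(m^2 K), surface heat transfer coefficient
A = 0.04   # m^2, interface area

tau = C / (h * A)  # s, thermal time constant

def temp(t, t_env=20.0, t0=80.0):
    """Exponential relaxation toward the environment (Newton's law of cooling)."""
    return t_env + (t0 - t_env) * math.exp(-t / tau)

print(tau, temp(tau))  # after one time constant, ~63% of the gap is closed
```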
Analogies of thermal inertia to the temporal behaviors observed in other disciplines of engineering and physics can sometimes be used with caution. In building performance simulation, thermal inertia is also known as the thermal flywheel effect, and the heat capacity of a structure's mass (sometimes called the thermal mass) can produce a delay between diurnal heat flow and temperature which is similar to the delay between current and voltage in an AC-driven RC circuit. Thermal inertia is less directly comparable to the mass-and-velocity term used in mechanics, where inertia restricts the acceleration of an object. In a similar way, thermal inertia can be a measure of heat capacity of a mass, and of the velocity of the thermal wave which controls the surface temperature of a body.
Thermal effusivity
For a semi-infinite rigid body where heat transfer is dominated by the diffusive process of conduction only, the thermal inertia response at a surface can be approximated from the material's thermal effusivity, also called thermal responsivity e. It is defined as the square root of the product of the material's bulk thermal conductivity and volumetric heat capacity, where the latter is the product of density and specific heat capacity:

e = \sqrt{\lambda \rho c}

where
λ is thermal conductivity, with unit W⋅m−1⋅K−1
ρ is density, with unit kg⋅m−3
c is specific heat capacity, with unit J⋅kg−1⋅K−1
Thermal effusivity has units of a heat transfer coefficient multiplied by square root of time:
SI units of W⋅m−2⋅K−1⋅s1/2 or J⋅m−2⋅K−1⋅s−1/2.
Non-SI units of kieffers (Cal⋅cm−2⋅K−1⋅s−1/2) are also used informally in older references.
When a constant flow of heat q is abruptly imposed upon a surface, e performs nearly the same role in limiting the surface's initial dynamic "thermal inertia" response,

\Delta T(t) = \frac{2 q}{e} \sqrt{\frac{t}{\pi}},

as the rigid body's usual heat transfer coefficient h plays in determining the surface's final static surface temperature, \Delta T = q / h.
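A short numerical sketch combining the effusivity definition with the early-time surface response given above; the material property values are rough illustrative figures, not authoritative data:

```python
# Sketch: effusivity of a material and the early surface-temperature rise
# under a suddenly applied constant heat flux. Property values are rough,
# concrete-like figures used only for illustration.
import math

lam, rho, cp = 1.0, 2200.0, 800.0  # W/(m K), kg/m^3, J/(kg K)
e = math.sqrt(lam * rho * cp)      # thermal effusivity, W s^0.5/(m^2 K)

q = 1000.0                         # W/m^2, imposed heat flux
dT = lambda t: (2 * q / e) * math.sqrt(t / math.pi)
print(e, dT(60.0))                 # surface temperature rise after one minute
```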
See also
List of thermodynamic properties
Thermal analysis
References
Thermodynamic properties
Physical quantities
Heat transfer | Thermal inertia | [
"Physics",
"Chemistry",
"Mathematics"
] | 884 | [
"Transport phenomena",
"Physical phenomena",
"Heat transfer",
"Thermodynamic properties",
"Physical quantities",
"Quantity",
"Thermodynamics",
"Physical properties"
] |
3,044,477 | https://en.wikipedia.org/wiki/Acidifier | Acidifiers are inorganic chemicals that, put into a human (or other mammalian) body, either produce or become acid.
These chemicals increase the level of gastric acid in the stomach when ingested, thus decreasing the stomach pH.
Out of many types of acidifiers, the main four are:
Gastric acidifiers, which are drugs used to temporarily restore the acidity of the stomach in patients suffering from hypochlorhydria
Urinary acidifiers, used to control pH in urine
Systemic acidifiers, used to control pH in the overall body
Acids, mostly used in laboratory experiments
Acidifier performance in the distal stomach is debatable.
Patients who suffer from achlorhydria have deficient secretion of hydrochloric acid in their stomach. In such cases, acidifiers may provide sufficient acidity for proper digestion of food. Systemic acidifiers, usually given by injection, act by reducing the alkali reserve in the body, and are also useful in reducing metabolic alkalosis.
References
Acids
Inorganic compounds | Acidifier | [
"Chemistry"
] | 212 | [
"Acids",
"Inorganic compounds"
] |
3,044,493 | https://en.wikipedia.org/wiki/Carbonate%20compensation%20depth | The carbonate compensation depth (CCD) is the depth, in the oceans, at which the rate of supply of calcium carbonates matches the rate of solvation. That is, solvation 'compensates' supply. Below the CCD solvation is faster, so that carbonate particles dissolve and the carbonate shells (tests) of animals are not preserved. Carbonate particles cannot accumulate in the sediments where the sea floor is below this depth.
Calcite is the least soluble of these carbonates, so the CCD is normally the compensation depth for calcite. The aragonite compensation depth (ACD) is the compensation depth for aragonitic carbonates. Aragonite is more soluble than calcite, and the aragonite compensation depth is generally shallower than both the calcite compensation depth and the CCD.
Overview
As shown in the diagram, biogenic calcium carbonate (CaCO3) tests are produced in the photic zone of the oceans (green circles). Upon death, those tests escaping dissolution near the surface settle, along with clay materials. In seawater, a dissolution boundary is formed as a result of temperature, pressure, and depth, and is known as the saturation horizon. Above this horizon, waters are supersaturated and CaCO3 tests are largely preserved. Below it, waters are undersaturated, because of both the increasing solubility with depth and the release of CO2 from organic matter decay, and CaCO3 will dissolve. The sinking velocity of debris is rapid (broad pale arrows), so dissolution occurs primarily at the sediment surface.
At the carbonate compensation depth, the rate of dissolution exactly matches the rate of supply of CaCO3 from above. At steady state this depth, the CCD, is similar to the snowline (the first depth where carbonate-poor sediments occur). The lysocline is the depth interval between the saturation and carbonate compensation depths.
Solubility of carbonate
Calcium carbonate is essentially insoluble in sea surface waters today. Shells of dead calcareous plankton sinking to deeper waters are practically unaltered until reaching the lysocline, the point about 3.5 km deep past which the solubility increases dramatically with depth and pressure. By the time the CCD is reached all calcium carbonate has dissolved according to this equation:
CaCO3 + CO2 + H2O ⇌ Ca2+(aq) + 2 HCO3−(aq)
Calcareous plankton and sediment particles can be found in the water column above the CCD. If the sea bed is above the CCD, bottom sediments can consist of calcareous sediments called calcareous ooze, which is essentially a type of limestone or chalk. If the exposed sea bed is below the CCD tiny shells of CaCO3 will dissolve before reaching this level, preventing deposition of carbonate sediment. As the sea floor spreads, thermal subsidence of the plate, which has the effect of increasing depth, may bring the carbonate layer below the CCD; the carbonate layer may be prevented from chemically interacting with the sea water by overlying sediments such as a layer of siliceous ooze or abyssal clay deposited on top of the carbonate layer.
Variations in value of the CCD
The exact value of the CCD depends on the solubility of calcium carbonate, which is determined by temperature, pressure and the chemical composition of the water – in particular the amount of dissolved CO2 in the water. Calcium carbonate is more soluble at lower temperatures and at higher pressures. It is also more soluble if the concentration of dissolved CO2 is higher. Adding the reactant CO2 to the above chemical equation pushes the equilibrium towards the right, producing more products (Ca2+ and HCO3−) and consuming more of the reactants CO2 and calcium carbonate, according to Le Chatelier's principle.
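A minimal Python sketch of this saturation logic follows. The carbonate-ion concentrations and solubility products are invented for illustration (real values require a full carbonate-system calculation), but they show how the saturation state Ω falls below 1 with depth:

def saturation_state(ca, co3, ksp):
    # Omega = ion concentration product / solubility product.
    # Omega > 1: supersaturated, CaCO3 tests are preserved;
    # Omega < 1: undersaturated, CaCO3 dissolves.
    return (ca * co3) / ksp

ca = 0.0103  # mol/kg; seawater calcium is roughly constant with depth
# Invented (depth, carbonate ion, Ksp) triples: Ksp grows with pressure
# and falling temperature, so Omega declines toward the CCD.
profile = [(1000, 2.0e-4, 4.5e-7),
           (4000, 0.8e-4, 7.5e-7),
           (5500, 0.7e-4, 9.5e-7)]
for depth, co3, ksp in profile:
    omega = saturation_state(ca, co3, ksp)
    state = "preserved" if omega > 1 else "dissolving"
    print(f"{depth} m: Omega = {omega:.2f} -> {state}")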
At the present time the CCD in the Pacific Ocean is about 4200–4500 metres except beneath the equatorial upwelling zone, where the CCD is about 5000 m. In the temperate and tropical Atlantic Ocean the CCD is at approximately 5000 m. In the Indian Ocean it is intermediate between the Atlantic and the Pacific at approximately 4300 metres. The variation in the depth of the CCD largely results from the length of time since the bottom water has been exposed to the surface; this is called the "age" of the water mass. Thermohaline circulation determines the relative ages of the water in these basins. Because organic material, such as fecal pellets from copepods, sinks from the surface waters into deeper water, deep water masses tend to accumulate dissolved carbon dioxide as they age. The oldest water masses have the highest concentrations of CO2 and therefore the shallowest CCD. The CCD is relatively shallow in high latitudes with the exception of the North Atlantic and regions of the Southern Ocean where downwelling occurs. This downwelling brings young, surface water with relatively low concentrations of carbon dioxide into the deep ocean, depressing the CCD.
In the geological past the depth of the CCD has shown significant variation. In the Cretaceous through to the Eocene the CCD was much shallower globally than it is today; due to intense volcanic activity during this period, atmospheric CO2 concentrations were much higher. Higher concentrations of CO2 resulted in a higher partial pressure of CO2 over the ocean. This greater pressure of atmospheric CO2 led to increased dissolved CO2 in the ocean mixed surface layer. This effect was somewhat moderated by the deep oceans' elevated temperatures during this period. In the late Eocene the transition from a greenhouse to an icehouse Earth coincided with a deepened CCD.
John Murray investigated and experimented on the dissolution of calcium carbonate and was first to identify the carbonate compensation depth in oceans.
Climate change impacts
Increasing atmospheric concentrations of CO2 from the combustion of fossil fuels are causing the CCD to rise, with zones of downwelling first being affected. Ocean acidification, which is likewise caused by increasing carbon dioxide concentrations in the atmosphere, will increase such dissolution and shallow the carbonate compensation depth on timescales of tens to hundreds of years.
Sedimentary ooze
On the sea floors above the carbonate compensation depth, the most commonly found ooze is calcareous ooze; on the sea floors below the carbonate compensation depth, the most commonly found ooze is siliceous ooze. While calcareous ooze mostly consists of Rhizaria, siliceous ooze mostly consists of Radiolaria and diatoms.
See also
Carbonate pump
Great Calcite Belt
Lysocline
Ocean acidification
References
Oceanography | Carbonate compensation depth | [
"Physics",
"Environmental_science"
] | 1,335 | [
"Oceanography",
"Hydrology",
"Applied and interdisciplinary physics"
] |
3,044,582 | https://en.wikipedia.org/wiki/Why%20We%20Nap | Why We Nap: Evolution, Chronobiology, and Functions of Polyphasic and Ultrashort Sleep is a 1992 book edited by Claudio Stampi, sole proprietor of the Chronobiology Research Institute. It is frequently mentioned by "polyphasic sleepers", as it is one of the few published books about the subject of systematic short napping in extreme situations where consolidated sleep is not possible.
According to the book, in a sleep-deprived condition, measurements of a polyphasic sleeper's memory retention and analytical ability show increases as compared with monophasic and biphasic sleep (but still a decrease of 12% as compared with free-running sleep). According to Stampi, the improvement is due to an extraordinary evolutionary predisposition to adopt such a sleep schedule; he hypothesizes that this may be because polyphasic sleep was the preferred schedule of human ancestors for thousands of years prior to the adoption of the monophasic schedule.
According to EEG measurements collected by Dr. Stampi during a 50-day trial of polyphasic ultrashort sleep with a test subject and published in his book Why We Nap, the proportion of sleep stages remains roughly the same during both polyphasic and monophasic sleep schedules. The major differences are that the ratio of lighter sleep stages to deeper sleep stages is slightly reduced and that sleep stages are often taken out of order or not at all, that is, some naps may be composed primarily of slow wave sleep while rapid eye movement sleep dominates other naps.
References
External links
1992 non-fiction books
Sleep
Birkhäuser books | Why We Nap | [
"Biology"
] | 331 | [
"Behavior",
"Sleep"
] |
3,044,870 | https://en.wikipedia.org/wiki/Semantic%20URL%20attack | In a semantic URL attack, a client manually adjusts the parameters of its request by maintaining the URL's syntax but altering its semantic meaning. This attack is primarily used against CGI driven websites.
A similar attack involving web browser cookies is commonly referred to as cookie poisoning.
Example
Consider a web-based e-mail application where users can reset their password by answering a security question correctly, and which then sends a new password
to an e-mail address of the user's choosing. After the user answers the security question correctly, the page presents the following web form, where the user can enter an alternative e-mail address:
<form action="resetpassword.php" method="GET">
<input type="hidden" name="username" value="user001" />
<p>Please enter your alternative e-mail address:</p>
<input type="text" name="altemail" /><br />
<input type="submit" value="Submit" />
</form>
The receiving page, resetpassword.php, has all the information it needs to send the password to the new e-mail. The hidden variable username contains the value user001, which is the username of the e-mail account.
Because this web form uses the GET method, when the user submits alternative@emailexample.com as the address to which the new password should be sent,
the browser requests the following URL:
http://semanticurlattackexample.com/resetpassword.php?username=user001&altemail=alternative%40emailexample.com
This URL appears in the location bar of the browser, so the user can identify the username and the e-mail address in the URL parameters. The user may then try to take over another user's account (user002) by visiting the following URL as an experiment:
http://semanticurlattackexample.com/resetpassword.php?username=user002&altemail=alternative%40emailexample.com
If resetpassword.php accepts these values, it is vulnerable to a semantic URL attack. A new password for the user002 account will be generated and sent to alternative@emailexample.com, which causes user002's e-mail account to be stolen.
One method of avoiding semantic URL attacks is by using session variables. However, session variables can be vulnerable to other types of attacks such as session hijacking and cross-site scripting.
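A minimal sketch of this session-based mitigation, written here in Python with Flask rather than the PHP implied by the example (the route name and the send_new_password helper are hypothetical): the username is read from the server-side session instead of the request, so tampering with URL parameters cannot target another user's account.

from flask import Flask, abort, request, session

app = Flask(__name__)
app.secret_key = "change-me"  # needed to sign the session cookie

def send_new_password(username, altemail):
    # Hypothetical helper: generate a new password for `username`
    # and mail it to `altemail`.
    pass

@app.route("/resetpassword")
def reset_password():
    # The username comes from the server-side session (set when the
    # security question was answered), never from the URL.
    username = session.get("username")
    if username is None:
        abort(403)
    altemail = request.args.get("altemail", "")
    if not altemail:
        abort(400)
    send_new_password(username, altemail)
    return "Password sent."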
References
See also
Query string
Internet security
URL | Semantic URL attack | [
"Technology"
] | 576 | [
"Computing stubs",
"Computer science",
"Computer science stubs"
] |
3,045,014 | https://en.wikipedia.org/wiki/Pleasure%20principle%20%28psychology%29 | In Freudian psychoanalysis, the pleasure principle () is the instinctive seeking of pleasure and avoiding of pain to satisfy biological and psychological needs. Specifically, the pleasure principle is the animating force behind the id.
Precursors
Epicurus in the ancient world, and later Jeremy Bentham, laid stress upon the role of pleasure in directing human life, the latter stating: "Nature has placed mankind under the governance of two sovereign masters, pain and pleasure".
Freud's most immediate predecessor and guide however was Gustav Theodor Fechner and his psychophysics.
Freudian developments
Freud used the idea that the mind seeks pleasure and avoids pain in his Project for a Scientific Psychology of 1895, as well as in the theoretical portion of The Interpretation of Dreams of 1900, where he termed it the 'unpleasure principle'.
In the Two Principles of Mental Functioning of 1911, contrasting it with the reality principle, Freud spoke for the first time of "the pleasure-unpleasure principle, or more shortly the pleasure principle". In 1923, linking the pleasure principle to the libido he described it as the watchman over life; and in Civilization and Its Discontents of 1930 he still considered that "what decides the purpose of life is simply the programme of the pleasure principle".
While on occasion Freud wrote of the near omnipotence of the pleasure principle in mental life, elsewhere he referred more cautiously to the mind's strong (but not always fulfilled) tendency towards the pleasure principle.
Two principles
Freud contrasted the pleasure principle with the counterpart concept of the reality principle, which describes the capacity to defer gratification of a desire when circumstantial reality disallows its immediate gratification. In infancy and early childhood, the id rules behavior by obeying only the pleasure principle. People at that age seek only immediate gratification, aiming to satisfy cravings such as hunger and thirst, and at later ages the id seeks out sex.
Maturity is learning to endure the pain of deferred gratification. Freud argued that "an ego thus educated has become 'reasonable'; it no longer lets itself be governed by the pleasure principle, but obeys the reality principle, which also, at bottom, seeks to obtain pleasure, but pleasure which is assured through taking account of reality, even though it is pleasure postponed and diminished".
The beyond
In his book Beyond the Pleasure Principle, published in 1920, Freud considered the possibility of "the operation of tendencies beyond the pleasure principle, that is, of tendencies more primitive than it and independent of it". By examining the role of repetition compulsion in potentially over-riding the pleasure principle, Freud ultimately developed his opposition between Libido, the life instinct, and the death drive.
See also
References
External links
Pleasure/unpleasure principle
Psychoanalytic terminology
Motivation
Positive psychology
Pleasure
Energy and instincts
Freudian psychology | Pleasure principle (psychology) | [
"Biology"
] | 584 | [
"Ethology",
"Behavior",
"Motivation",
"Human behavior"
] |
3,045,205 | https://en.wikipedia.org/wiki/Gibbs%20state | In probability theory and statistical mechanics, a Gibbs state is an equilibrium probability distribution which remains invariant under future evolution of the system. For example, a stationary or steady-state distribution of a Markov chain, such as that achieved by running a Markov chain Monte Carlo iteration for a sufficiently long time, is a Gibbs state.
Precisely, suppose L is a generator of evolutions for an initial state ρ0, so that the state at any later time is given by ρ(t) = e^(tL) ρ0. Then the condition for ρ to be a Gibbs state is
Lρ = 0,
so that the state is unchanged by further evolution.
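For the Markov-chain case mentioned above, the Gibbs state is the stationary distribution π satisfying πP = π. A minimal numerical sketch, using an invented transition matrix:

import numpy as np

# Row-stochastic transition matrix of a small three-state Markov chain
# (an invented example).
P = np.array([[0.9, 0.1, 0.0],
              [0.2, 0.7, 0.1],
              [0.0, 0.3, 0.7]])

# The Gibbs (stationary) state pi satisfies pi P = pi: it is the left
# eigenvector of P for eigenvalue 1, normalized to sum to one.
eigvals, eigvecs = np.linalg.eig(P.T)
pi = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
pi /= pi.sum()

print(pi)                        # the invariant distribution
print(np.allclose(pi @ P, pi))   # True: invariant under evolution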
In physics there may be several physically distinct Gibbs states in which a system may be trapped, particularly at lower temperatures.
They are named after Josiah Willard Gibbs, for his work in determining equilibrium properties of statistical ensembles. Gibbs himself referred to this type of statistical ensemble as being in "statistical equilibrium".
See also
Gibbs algorithm
Gibbs measure
KMS state
References
Statistical mechanics
Stochastic processes | Gibbs state | [
"Physics",
"Mathematics"
] | 179 | [
"Statistical mechanics stubs",
"Applied mathematics",
"Statistical mechanics",
"Applied mathematics stubs"
] |
3,045,792 | https://en.wikipedia.org/wiki/Foresight%20%28futures%20studies%29 | In futurology, especially in Europe, the term foresight has become widely used to describe activities such as:
critical thinking concerning long-term developments,
debate,
wider participatory democracy, and
shaping the future, especially by influencing public policy.
In the last decade, scenario methods, for example, have become widely used in some European countries in policy-making. The FORSOCIETY network brings together national Foresight teams from most European countries, and the European Foresight Monitoring Project is collating material on Foresight activities around the world. Foresight methods are used more and more in regional planning and decision-making ("regional foresight"). Several non-European think tanks, like Strategic Foresight Group, also engage in foresight studies.
The foresight of futurology is also known as strategic foresight. Strategic foresight, as practiced by professional futurists trained in Master's programs, is the research-driven practice of exploring expected and alternative futures and guiding preferred futures to inform strategy. Foresight includes understanding the relevant recent past; scanning to collect insight about the present; futuring to describe the understood future, including trend research; environment research to explore possible trend breaks from developments on the fringe and other divergences that may lead to alternative futures; visioning to define preferred future states; designing strategies to craft this future; and adapting the present forces to implement this plan. There is notable but not complete overlap between foresight and strategic planning, change management, forecasting, and design thinking.
At the same time, the use of foresight in companies ("corporate foresight") is becoming more professional and widespread. Corporate foresight is used to support strategic management, identify new business fields and increase the innovation capacity of a firm.
Foresight is not the same as futures research or strategic planning. It encompasses a range of approaches that combine the three components mentioned above, which may be recast as:
futures (forecasting, forward thinking, prospectives),
planning (strategic analysis, priority setting), and
networking (participatory, dialogic) tools and orientations.
Much futurology research has been rather ivory-tower work, but Foresight programmes were designed to influence policy - often R&D policy. Much technology policy had been very elitist; Foresight attempts to go beyond the "usual suspects" and gather widely distributed intelligence. These three lines of work were already common in Francophone futures studies going by the name la prospective. In the 1990s, an explosion of systematic organisation of these methods began in large-scale Technology Foresight programmes in Europe and elsewhere.
Foresight thus draws on traditions of work in long-range planning and strategic planning, horizontal policymaking and democratic planning, and participatory futurology - but was also highly influenced by systemic approaches to innovation studies, science and technology policy, and analysis of "critical technologies".
Many of the methods that are commonly associated with Foresight - Delphi surveys, scenario workshops, etc. - derive from futurology. So does the fact that Foresight is concerned with:
The longer-term - futures that are usually at least 10 years away (though there are some exceptions to this, especially in its use in private business). Since Foresight is action-oriented (the planning link) it will rarely be oriented to perspectives beyond a few decades out (though where decisions like aircraft design, power station construction or other major infrastructural decisions are concerned, then the planning horizon may well be half a century).
Alternative futures: it is helpful to examine alternative paths of development, not just what is currently believed to be most likely or business as usual. Often Foresight will construct multiple scenarios. These may be an interim step on the way to creating what may be known as positive visions, success scenarios, aspirational futures. Sometimes alternative scenarios will be a major part of the output of Foresight work, with the decision about what future to build being left to other mechanisms.
See also
Accelerating change
Emerging technologies
Foresight Institute
Forecasting
Horizon scanning
Optimism bias
Reference class forecasting
Scenario planning
Strategic foresight
Strategic Foresight Group
Technology forecasting
Technology Scouting
References
Further reading
There are numerous journals that deal with research on foresight:
Technological Forecasting and Social Change
Futures
Futures & Foresight Science
European Journal of Futures Research
Foresight
Research focusing more on the combination of foresight and national R&D policy can be found in International Journal of Foresight and Innovation Policy
External links
The FORLEARN Online Guide developed by the Institute for Prospective Technological Studies of the European Commission
The Foresight Programme of UNIDO, the Investment and Technology Promotion Branch of the United Nations Industrial Development Organization.
Handbook of Knowledge Society Foresight published by the European Foundation, Dublin
Foresight (futures studies)
Transhumanism | Foresight (futures studies) | [
"Technology",
"Engineering",
"Biology"
] | 968 | [
"Genetic engineering",
"Transhumanism",
"Ethics of science and technology"
] |
3,045,799 | https://en.wikipedia.org/wiki/Capacity%20building | Capacity building (or capacity development, capacity strengthening) is the improvement in an individual's or organization's facility (or capability) "to produce, perform or deploy". The terms capacity building and capacity development have often been used interchangeably, although a publication by OECD-DAC stated in 2006 that capacity development was the preferable term. Since the 1950s, international organizations, governments, non-governmental organizations (NGOs) and communities use the concept of capacity building as part of "social and economic development" in national and subnational plans. The United Nations Development Programme defines itself by "capacity development" in the sense of "'how UNDP works" to fulfill its mission. The UN system applies it in almost every sector, including several of the Sustainable Development Goals to be achieved by 2030. For example, the Sustainable Development Goal 17 advocates for enhanced international support for capacity building in developing countries to support national plans to implement the 2030 Agenda.
Under the codification of international development law, capacity building is a "cross cutting modality of international intervention". It often overlaps or is part of interventions in public administration reform, good governance and education in line sectors of public services.
The consensus approach of the international community for the components of capacity building as established by the World Bank, United Nations and European Commission consists of five areas: a clear policy framework, institutional development and legal framework, citizen participation and oversight, human resources improvements including education and training, and sustainability. Some of these overlap with other interventions and sectors. Much of the actual focus has been on training and educational inputs where it may be a euphemism for education and training. For example, UNDP focuses on training needs in its assessment methodology rather than on actual performance goals.
The pervasive use of the term for these multiple sectors and elements and the huge amount of development aid funding devoted to it has resulted in controversy over its true meaning. There is also concern over its use and impacts. In international development funding, evaluations by the World Bank and other donors have consistently revealed problems in this overall category of funding dating back to the year 2000. Since the arrival of capacity building as a dominant subject in international aid, donors and practitioners have struggled to create a concise mechanism for determining the effectiveness of capacity building initiatives. An independent public measurement indicator for improvement and oversight of the large variety of capacity building initiatives was published in 2015. This scoring system is based on international development law and professional management principles.
Definitions
Capacity development
A "good practice paper" by OECD-DAC defined capacity development as follows: "Capacity development is understood as the process whereby people, organizations and society as a whole unleash, strengthen, create, adapt and maintain capacity over time." Capacity is understood as "the ability of people, organizations and society as a whole to manage their affairs successfully".
The OECD-DAC stated in 2006 that the term "capacity development" should be used rather than the term "capacity building". This is because "capacity building" would imply starting from a plain surface and a step-by-step erection of a new structure - which is not how it works.
The European Commission Toolkit defines capacity development in the same way and stresses that capacity relates to "abilities", "attributes" and a "process". It is an attribute of people, individual organizations and groups of organizations. Capacity is shaped by, adapting to and reacting to external factors and actors, but it is not something external — it is internal to people, organizations and groups or systems of organizations. Thus, capacity development is a change process internal to organizations and people.
The United Nations Office for Disaster Risk Reduction (UNDRR), formerly the United Nations International Strategy for Disaster Reduction (UNISDR), defines capacity development in the disaster risk reduction domain as "the process by which people, organizations and society systematically stimulate and develop their capability over time to achieve social and economic goals, including through improvement of knowledge, skills, systems, and institutions – within a wider social and cultural enabling environment."
Outside of international interventions, capacity building can refer to strengthening the skills of people and communities, in small businesses and local grassroots movements. Organizational capacity building is used by NGOs and governments to guide their internal development and activities as a form of managerial improvements following administrative practices.
Community capacity building
The United Nations Committee of Experts on Public Administration in 2006 offered an additional term, "community capacity building". It is defined as a long-term continual process of development that involves all stakeholders, as opposed to practices which limit oversight and involvement in interventions with governments. The list of parties that it defines as "community" includes ministries, local authorities, non-governmental organizations, professionals, community members, academics and more. According to the Committee, capacity building takes place at the individual, institutional and societal levels, as well as at a "non-training" level.
The term "community capacity building" (CCB) began to be used in 1995 and since then became popular for example within the policy literature in the United Kingdom, particularly in the context of urban policy, regeneration and social development. It is, however, difficult to distinguish it from the practice of "community development". It is "built on a deficit model of communities which fails to engage properly with their own skills, knowledge and interests". Therefore, it does not properly address structural reasons for poverty and inequality.
Components
The World Bank, United Nations and European Commission describe capacity building to consist of five areas: a clear policy framework, institutional development and legal framework, citizen/democratic participation and oversight, human resources improvements including education and training, and sustainability.
The United Nations Development Group Capacity Development Guidelines presents a framework of capacity development comprising three interconnected levels of capacity: Individual, Institutional and Enabling Policy.
Thinking of capacity building as simply training or human resource development is insufficient.
Evolution
History
The discourse on and concept of capacity development has traditionally been closely associated with development cooperation.
The UNDP was one of the forerunners in designing international interventions in the category of capacity building and development. In the early 1970s, the UNDP offered guidance to its staff and governments on what it called "institution-building" which is one of the pillars of its current work and is part of a category of "public administration reform".
In the 1970s, international organizations emphasized building capacity through technical skills training in rural areas, and also in the administrative sectors of developing countries. In the 1980s they expanded the concept of institutional development further. "Institutional development" was viewed as a long-term process of interventions in a developing country's government, public and private sector institutions, and NGOs.
Under the UNDP's 2008–2013 "strategic plan for development", capacity building is the "organization's core contribution to development". The UNDP focused on building capacity at an institutional level and offers a six-step process for systematic capacity building. The six steps are: conduct a training needs assessment; engage stakeholders on capacity development; assess capacity needs and assets; formulate a capacity development response; implement the response; and evaluate capacity development.
Trends
Since about 2005, the capacity development agenda has also been adopted beyond the traditional aid community. This is particularly true for Africa: for example the African Union has developed a Capacity Development Strategic Framework and is using capacity development as one of three themes to structure its Development Effectiveness internet portal.
Trends in development cooperation shape how capacity development is discussed. These include for example: new forms of financing and less of a North–South dichotomy; more in-country leadership and less donor power; resilience as a framework in fragile environments; increasing private sector engagement.
Global goals
The UNDP integrated this capacity-building system into its work on reaching the Millennium Development Goals (MDGs) by the year 2015. The UNDP states that it focused on building capacity at the institutional level because it believed that "institutions are at the heart of human development, and that when they are able to perform better, [...] they can contribute more meaningfully to the achievement of national human development goals."
The United Nations Sustainable Development Goals mention capacity building (rather than capacity development) in several places: Sustainable Development Goal 17 is to "Strengthen the means of implementation and revitalize the Global Partnership for Sustainable Development". Target 9 of that goal is formulated as "Enhance international support for implementing effective and targeted capacity-building in developing countries to support national plans to implement all the Sustainable Development Goals, including through north–south, South-South and triangular cooperation."
Sustainable Development Goal 6 also includes capacity building in its Target 6a which is to "By 2030, expand international cooperation and capacity-building support to developing countries in water- and sanitation-related activities and programmes, including water harvesting, desalination, water efficiency, wastewater treatment, recycling and reuse technologies". Similarly, Sustainable Development Goal 8 Target 8.10 states "Strengthen the capacity of domestic financial institutions to encourage and expand access to banking, insurance and financial services for all".
Scale
As of 2009, some $20 billion per year of international development intervention funding went for capacity development, roughly 20% of total funding in this category. The World Bank committed more than $1 billion per year to this service in loans or grants (more than 10% of its portfolio of nearly $10 billion).
A publication by OECD-DAC in 2005 estimated that "about a quarter of donor aid, or more than $15 billion a year, has gone into "Technical Cooperation", the bulk of which is ostensibly aimed at capacity development".
Processes for different entities
Governments
One of the most fundamental ideas associated with capacity building is the idea of building the capacities of governments in developing countries so they are able to handle the problems associated with environmental protection, economic and social needs. Developing a government's capacity whether at the local, regional or national level can improve governance and can lead to sustainable development and political reform. Capacity building in governments often targets a government's ability to budget, collect revenue, create and implement laws, promote civic engagement.
Local communities and NGOs
International donors often include capacity building as a form of interventions with local governments or NGOs working in developing areas. A study in 2001 observed that "the act of resetting aspirations and strategy is often the first step in improving an organization's capacity". Secondly good management is important (committed people in senior positions to make capacity building happen). Thirdly, patience is required: "there are few quick fixes when it comes to building capacity".
Some methods of capacity building for NGOs might include visiting training centers, organizing exposure visits, office and documentation support, on-the-job training, learning centers, and consultations.
Private sector organizations
For private sector organizations, capacity building may go beyond the improvement of services for public organizations and include fund-raising and income generation, diversity, partnerships and collaboration, marketing, positioning, planning and other activities relating to production and performance. Capacity development of private organizations involves the build-up of an organization's tangible and intangible assets. Organization development (OD) is the study and implementation of practices, systems, and techniques that affect organizational change, the goal of which is to modify an organization's performance and/or culture.
Evaluation
Challenges with evaluations
The difficulties with achieving results from capacity development projects have regularly been described in a range of publications. For example, in 2006, a document by OECD-DAC stated that: "evaluation results confirm that development of sustainable capacity remains one of the most difficult areas of international development practice. Capacity development has been one of the least responsive targets of donor assistance, lagging behind progress in infrastructure development or improving health and child mortality".
Since the arrival of capacity building as a dominant subject in international aid, donors and practitioners have struggled to create a concise mechanism for determining the effectiveness of capacity building initiatives.
Recognition of problems in capacity building interventions in evaluations funded and managed by international organizations dates back to the year 1999. A World Bank review in the year 2000 found many examples where capacity building interventions undermined public management efforts. In these cases, public sector reform and institution-building were hindered. In 2005, the Bank noted again in its evaluations that business practices to its capacity building work are not as rigorous as they are in other areas. For example, standard quality assurance processes were missing at the design stage. Similar problems were reported by UNDP in 2002 when they reviewed their capacity building projects.
Effective evaluation and monitoring
In 2007, specific criteria for effective evaluation and monitoring of the capacity building of NGOs were proposed, though only in generalities without clear measures for the tool. The proposal suggested only that evaluating the capacity building ability of NGOs should be based on a combination of monitoring the results of their activities and also a more open flexible way of monitoring that also takes into consideration, self-improvement and cooperation. Other wishes were that monitoring for capacity building effectiveness should include an organization's clarity of mission, an organization's leadership, an organization's learning, an organization's emphasis on on-the-job-development, an organization's monitoring processes.
In 2007, USAID published a report on its approach to monitoring and evaluating the capacity building. According to the report, USAID monitors program objectives, the links between projects and activities of an organization and its objectives, a program or organization's measurable indicators, data collection, and progress reports. USAID noted two types of indicators for progress: "output indicators" and "outcome indicators." Output indicators measure immediate changes or results such as the number of people trained. Outcome indicators measure the impact, such as laws changed due to trained advocates. Both the "numbers of people trained" and "laws changed" are, however, just inputs or intermediate inputs and do not measure actual improvements in "performance" in terms of measurable outcomes of public agencies that are the definition of capacity building.
Despite these claims of existence of these evaluation approaches, there was little more than lists of inputs and outputs without use of professional management standards or any kind of real oversight, and a report for the World Bank in 2009 noted that the failures were deep and systemic, where the measures used are "smile sheets", asking beneficiaries if they are "happy" or "better off" and measuring things like "raised awareness", "enhanced skills", and "improved teamwork" that are "locally driven", rather than on whether the underlying problems are solved, and refraining from asking whether there may be hidden agendas to buy influence, subsidize elites, and continue dependency.
An independent public measurement indicator for improvement and oversight of the large variety of capacity building initiatives was published in 2015, with scoring, and based on international development law and professional management principles. This comprehensive indicator for capacity building was proposed as part of the elements codifying international development law in a treatise. It consists of 20 specific elements that apply law, administrative principles, social science concepts, and education concepts, to troubleshoot the actual problems that occur and to promote public oversight and accountability. The indicator has two sections: one with 11 questions to assure proper application of the five recognized principles of capacity building, analyzing their application in diagnosis and design of an intervention (7 questions), sustainability of reform (2 questions), and good governance (2 questions), and second, with 9 questions to assure professionalism and safeguards against conflicts of interest, unintended consequences, and distortion of public and private systems. This indicator is one of 13 that is part of the treatise of international development law and can be applied with the other indicators for specific sectors and development principles, as well as assurance of quality of evaluation systems.
Critique
Critique of capacity development has centered on the ambiguity surrounding it in terms of its anticipated focus, its effectiveness, the role of infrastructure organisations (such as empowerment networks), and the unwillingness or inability of public agencies to apply their own principles and international law.
Capacity building has been called a buzzword within development which comes with a heavy normative load but little critical interrogation and appropriate review. The term capacity building is usually "loaded with positive value".
Despite some 20 years recognizing the problems, practitioners continue to note that some capacity development projects are just "throwing money at symptoms with no logic or analysis". Others are "disguised bribes to government officials and attempts to undermine entire government structures by setting up foreign run Ministries and foreign influenced political parties or civil society to lobby for foreign interests" using the interventions as a form of "soft power". One common problem of interventions that focus on education and training of foreign government officials is that they are akin to trying to "teach elephants to fly" or to "teach wolves not to eat sheep" while avoiding the actual changes needed for impact.
Under international development law, there is also concern that much of the implementation of capacity building has been and continues to be in violation of existing international treaties such as the U.N. Declaration Against Corruption and Bribery, Articles 15, 16, 18, and 19.
Examples
Below are examples of capacity building in developing countries:
At state government level: In 1999, the UNDP supported capacity building of the state government in Bosnia and Herzegovina. The program focused on strengthening the state's government by fostering new organizational, leadership and management skills in government figures, improved the government's technical abilities to communicate with the international community and civil society within the country.
In India the Sanitation Capacity Building platform (SCBP) was designed to "support and build the capacity of town/cities to plan and implement decentralized sanitation solutions" with funding by the Bill & Melinda Gates Foundation from 2015 to 2022.
References
Community development
International development
Non-profit technology
Assistance | Capacity building | [
"Technology"
] | 3,587 | [
"Information technology",
"Non-profit technology"
] |
3,045,894 | https://en.wikipedia.org/wiki/Nortel%20Meridian | Nortel Meridian is a private branch exchange telephone switching system. It provides advanced voice features, data connectivity, LAN communications, computer telephony integration (CTI), and information services for communication applications ranging from 60 to 80,000 lines.
History
Exploratory development on digital technology, common for the SL-1 (PBX) and the DMS (public switch) product lines, began in 1969 at Northern Telecom, while R&D activities related to the SL-1 started in 1971. SL stands for Stored Logic.
The original products were developed with a proprietary Bell-Northern Research toolset and a Pascal-like language called Protel, and ran without a specific operating system. In the 1990s the platform was moved onto VxWorks, a commercial real-time embedded operating system, at which time the model numbers were extended with the letter C at the end of the option numbers.
It was introduced by Northern Telecom in December 1974 at the USITA convention in San Francisco, with an original capacity from 100 to 7,600 lines, and became the first fully digital PBX announced on the global market aimed at the smaller PBX market. In the early 1970s, most PBXs were either electromechanical (e.g. cross-bar) or based on a hybrid technology (e.g. switching matrix made from a two-dimensional array of contacts but control performed by an electronic logic). For this reason, the SL-1 enjoyed a great success on the enterprise market both in North-America and globally.
Its success went on to power the company into a leadership position in the telephony world, and led to expanded designs "up and down" to provide products at all sizes, including the DMS series high-end machines, and the Meridian Norstar for smaller installations up to 200 users. The SL-1 was gradually enhanced (peripheral hardware, packaging, etc.) and renamed Meridian-1 in the late 1980s. The Meridian-1 has evolved to support IP telephony and other next generation IP services.
Impact
The Meridian has 43 million installed users worldwide, making it the most widely used PBX.
The Meridian was one of the few PBXs still available from a major communications supplier that could be configured as a non-VoIP PBX and upgraded to a hybrid system with VoIP added.
Models
The Meridian 1 range currently consists of several models:
Meridian 1 Option 11C (60-800 lines)
Meridian 1 Option 11C Mini (60-128 lines)
Meridian 1 Option 61C (600-2000 lines)
Meridian 1 Option 81C (200-16,000 lines)
Additionally, other products have been sold using the Meridian brand:
The Norstar key telephone system was sold as the Meridian Norstar in some markets, until the mid-1990s
A large-scale switch based on the DMS-100 is sold as the Meridian SL-100
Resellers, and accessory manufacturers frequently but erroneously use the phrase "Meridian Option" to refer to the Meridian 1 range, to distinguish it from the smaller and larger Norstar and SL-100
Digital Line Card
A digital line card is an intelligent peripheral equipment (IPE) device which can be installed in the IPE module. It provides 16 voice and 16 data communication links between a Meridian 1 switch and modular digital telephones.
The digital line card supports voice only or simultaneous voice and data service over a single twisted pair of standard telephone wiring. When a Meridian digital telephone is equipped with the data option, an asynchronous ASCII terminal, or a PC acting as an asynchronous ASCII terminal, can be connected to the system through the digital telephone.
Physical description
Digital line cards are housed in the Intelligent Peripheral Equipment (IPE) Modules. Up to 16 cards are supported.
The digital line card circuitry is mounted on a double-sided printed circuit board. The card connects to the backplane through a 160-pin edge connector. The faceplate of the digital line card is equipped with a red LED that lights when the card is disabled.
When the card is installed, the LED remains lit for two to five seconds as a self-test runs. If the self-test completes successfully, the LED flashes three times and remains lit until the card is configured and enabled in software, then the LED goes out. If the LED continually flashes or remains weakly lit, there is a fault detected in the card.
Functional description
The digital line card is equipped with 16 identical digital line interfaces. Each interface provides a multiplexed voice, data, and signaling path to and from a digital terminal (telephone) over a 2-wire full duplex 512kHz time-division multiplexing digital link. Each digital telephone and associated data terminal is assigned a separate Terminal Number (TN) in the system database, giving a total of 32 addressable units per card.
References
Extracts from Nortel IP Telephony
TCM Loop
Time Compression Multiplexing (TCM) is the standard communication protocol used by Nortel digital telephones on a 2-wire tip-and-ring circuit. Each digital phone line terminates at a station port on a Digital Voice Card (DVC) in the PBX. The circuit interface operates through transformer coupling, which provides foreign-voltage protection between the TCM loop and the digital line. The maximum length of a normal TCM loop is 300 m (1000 ft). The minimum voltage at the telephone is 10 V DC, and a TCM port should show between 15 and 20 V DC across Tip and Ring with the phone disconnected. The port connection may be through a punch-down block or a modular connector called a TELADAPT plug (or socket), which is Nortel parlance for a phone connector.
Evolution Path
The Nortel Meridian 1 can be upgraded to support VoIP in two forms:
VoIP Trunking, the Meridian 1 can have ITG-Trunk cards added to it to support PBX-to-PBX voice trunking using H.323
VoIP Line (VoIP Sets), the Meridian 1 can have IP Line cards added to it to support VoIP sets.
The introduction of Release 3.0 for the Meridian 1, otherwise known as the CS1000 Release 3.0 also provides an upgrade path for the existing customer base to upgrade a Meridian 1 to an IP-PBX
The introduction of the E-MetroTel UCX 4.5 MDSE package, leveraging the existing MGC card, line cards and cabinets, provides a further upgrade path for existing customers
See also
List of telephone switches
Nortel business phones
Portico TVA
References
External links
Network World. 19 November. 1990
Meridian product group home page
Meridian Product Image Library
Nortel Meridian / Avaya CS1K
Meridian (CS1000) Documentation
Telephone exchanges
Meridian
Computer telephony integration | Nortel Meridian | [
"Technology"
] | 1,383 | [
"Information technology",
"Computer telephony integration"
] |
3,046,012 | https://en.wikipedia.org/wiki/Credit%20card%20interest | Credit card interest is a way in which credit card issuers generate revenue. A card issuer is a bank or credit union that gives a consumer (the cardholder) a card or account number that can be used with various payees to make payments and borrow money from the bank simultaneously. The bank pays the payee and then charges the cardholder interest over the time the money remains borrowed. Banks suffer losses when cardholders do not pay back the borrowed money as agreed. As a result, optimal calculation of interest based on any information they have about the cardholder's credit risk is key to a card issuer's profitability. Before determining what interest rate to offer, banks typically check national, and international (if applicable), credit bureau reports to identify the borrowing history of the card holder applicant with other banks and conduct detailed interviews and documentation of the applicant's finances.
Interest rates
Interest rates vary widely. Some credit card loans are secured by real estate, and can be as low as 6 to 12% in the U.S. (2005). Typical credit cards have interest rates between 7 and 36% in the U.S., depending largely upon the bank's risk evaluation methods and the borrower's credit history. Brazil has much higher interest rates, about 50% over those of most developing countries, which average about 200% (Economist, May 2006). A Brazilian bank-issued Visa or MasterCard to a new account holder can have annual interest as high as 240%, far above the country's rate of inflation (Economist, May 2006). Banco do Brasil offered its new checking account holders Visa and MasterCard credit accounts for 192% annual interest, with somewhat lower interest rates reserved for people with dependable income and assets (July 2005). These high-interest accounts typically offer very low credit limits (US$40 to $400). They also often offer a grace period with no interest until the due date, which makes them more popular for use as liquidity accounts, which means that the majority of consumers use them only for convenience to make purchases within the monthly budget, and then (usually) pay them off in full each month. As of August 2016, Brazilian rates can get as high as 450% per year.
Calculation of interest rates
Most U.S. credit cards are quoted in terms of nominal annual percentage rate (APR) compounded daily, or sometimes (and especially formerly) monthly, which in either case is not the same as the effective annual rate (EAR). Despite the "annual" in APR, it is not necessarily a direct reference for the interest rate paid on a stable balance over one year.
The more direct reference for the one-year rate of interest is EAR. The general conversion factor for APR to EAR is EAR = (1 + APR/n)^n − 1, where n represents the number of compounding periods of the APR per EAR period.
For a common credit card quoted at 12.99% APR compounded daily, the one-year EAR is (1 + 0.1299/365)^365 − 1, or 13.87%; and if it is compounded monthly, the one-year EAR is (1 + 0.1299/12)^12 − 1, or 13.79%. On an annual basis, the one-year EAR for compounding monthly is always less than the EAR for compounding daily. However, the relationship of the two in individual billing periods depends on the APR and the number of days in the billing period.
For example, given twelve billing periods a year, 365 days, and an APR of 12.99%, if a billing period is 28 days then the rate charged by monthly compounding is greater than that charged by daily compounding (0.1299/12 is greater than (1 + 0.1299/365)^28 − 1). But for a billing period of 31 days, the order is reversed (0.1299/12 is less than (1 + 0.1299/365)^31 − 1).
In general, credit cards available to middle-class cardholders that range in credit limit from $1,000 to $30,000 calculate the finance charge by methods that are exactly equal to compound interest compounded daily, although the interest is not posted to the account until the end of the billing cycle. A high U.S. APR of 29.99% carries an effective annual rate of 34.96% for daily compounding and 34.48% for monthly compounding, given a year with twelve billing periods and 365 days.
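These conversions are straightforward to verify numerically. The following Python sketch reproduces the figures quoted above:

def ear(apr, n):
    # Effective annual rate for a nominal APR compounded n times/year:
    # EAR = (1 + apr/n)**n - 1
    return (1.0 + apr / n) ** n - 1.0

for apr in (0.1299, 0.2999):
    print(f"APR {apr:.2%}: daily {ear(apr, 365):.2%}, "
          f"monthly {ear(apr, 12):.2%}")
# APR 12.99%: daily 13.87%, monthly 13.79%
# APR 29.99%: daily 34.96%, monthly 34.48%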
Table 1 below, given by Prosper (2005), shows data from Experian, one of the three main U.S. and UK credit bureaus (along with Equifax in the UK and TransUnion in the U.S. and internationally). (The data actually come from installment loans [closed end loans], but can also be used as a fair approximation for credit card loans [open end loans]). The table shows the loss rates from borrowers with various credit scores. To get a desired rate of return, a lender would add the desired rate to the loss rate to determine the interest rate. Though individual borrowers differ, lenders predict that, as an aggregate, borrowers will tend to exhibit the same payment behavior that others with similar credit scores have shown in the past. Banks then compete on details by making analyses of how to use data such as these along with any other data they gather from the application and history with the cardholder, to determine an interest rate that will attract borrowers by remaining competitive with other banks and still assure a profit. Debt-to-income ratio (DTI) is another important factor for determining interest rates. The bank calculates it by adding up the borrower's obligated minimum payments on loans, and dividing by the cardholder's income. If it is more than a set point (such as 20% in this example) then loans to that applicant are considered a higher risk than given by this table. These loss rates already include incomes the lenders receive from payments in collection, either from debt collection efforts after default or from selling the loans to third parties for further collection attempts, at a fraction of the amount owed.
To use the chart to make a loan, determine an expected rate of return on the investment (X) and add that to the expected loss rate from the chart. The sum is an approximation of the interest rate that should be contracted with the borrower in order to achieve the expected rate of return.
Interrelated fees
Banks make many other fees that interrelate with interest charges in complex ways (since they make a profit from the whole combination), including transactions fees paid by merchants and cardholders, and penalty fees, such as for borrowing over the established credit limit, or for failing to make a minimum payment on time.
Banks vary widely in the proportion of credit card account income that comes from interest (depending upon their marketing mix). In a typical UK card issuer, between 80% and 90% of cardholder generated income is derived from interest charges. A further 10% is made up from default fees.
Laws
Usury
Many nations limit the amount of interest that can be charged (often called usury laws). Most countries strictly regulate the manner in which interest rates are agreed, calculated, and disclosed. Some countries (especially with Muslim influence) prohibit interest being charged at all (and other methods are used, such as an ownership interest taken by the bank in the cardholder's business profits based upon the purchase amount).
United States
Credit CARD Act of 2009
This statute covers several aspects of credit card contracts, including the following:
Limits over-the-limit fees to cases where the consumer has given permission.
Limits interest rate increases on past balances to cases in which the account has been over 60 days late.
Limits general interest rate increases to 45 days after a written notice is given, allowing the consumer to opt out.
Requires extra payments to be applied to the highest-interest-rate sub-balance (see the sketch after this list).
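The allocation rule in the last item can be sketched as follows; the sub-balances and rates are invented for illustration:

def allocate_extra(sub_balances, extra):
    # sub_balances: list of [name, apr, balance]. Any payment beyond
    # the minimum is applied to the highest-APR sub-balance first.
    for sb in sorted(sub_balances, key=lambda s: s[1], reverse=True):
        pay = min(extra, sb[2])
        sb[2] -= pay
        extra -= pay
        if extra <= 0:
            break
    return sub_balances

account = [["purchases", 0.1499, 800.0],
           ["cash advance", 0.2499, 300.0],
           ["promotional", 0.0000, 1200.0]]
print(allocate_extra(account, 500.0))
# The 24.99% cash-advance balance is cleared first; the remaining
# $200 reduces the 14.99% purchase balance.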
Truth in Lending Act
In the United States, there are four commonly accepted methods of charging interest, which are listed in the section below, "Methods of Charging Interest". These are detailed in Regulation Z of the Truth in Lending Act. There is a legal obligation on U.S. issuers that the method of charging interest is disclosed and is sufficiently transparent to be fair. This is typically done in the Schumer box, which lists rates and terms in writing to the cardholder applicant in a standard format. Regulation Z details four principal methods of calculating interest. For purposes of comparison between rates, the "expected rate" is the APR applied to the average daily balance for a year, or in other words, the interest charged on the actual balance left lent out by the bank at the close of each business day.
That said, there are not just four prescribed ways to charge interest, to wit those specified in Regulation Z. U.S. issuers can charge interest according to any reasonable method to which the card holder agrees. The four (or arguably six) "safe-harbour" ways to describe and charge interest are detailed in Regulation Z. If an issuer charges interest in one of these ways then there is a shorthand description of that method in Regulation Z that can be used. If a lender uses that description, and charges interest in that way, then their disclosure is deemed to be sufficiently transparent and fair. If not, then they must provide an explanation of the method used. Because of the safe-harbour definitions, U.S. lenders have tended to gravitate towards these methods of charging and describing the way interest is charged, both because it is easy and because legal compliance is guaranteed. Arguably, the approach also provides flexibility for issuers, enhancing the profile of the way in which interest is charged, and therefore increasing the scope for product differentiation on what is, after all, a key product feature.
Pre-payment penalties
Clauses calling for a penalty for paying more than the contracted regular payment were once common in another type of loan, the installment loan, and they are of great concern to governments regulating credit card loans. Today, in many cases because of strict laws, most card issuers do not charge any pre-payment penalties at all (except those that come naturally from the interest calculation method; see the section below). That means cardholders can "cancel" the loan at any time by simply paying it off, and be charged no more interest than that calculated on the time the money was borrowed.
Cancelling loans
Cardholders are often surprised in situations where the bank cancels or changes the terms on their loans. Most card issuers are allowed to raise the interest rate (within legal guidelines) at any time. Usually they have to give some notice, such as 30 or 60 days, in writing. If the cardholder does not agree to the new rate or terms, then it is expected that the account will be paid off. That can be difficult for a cardholder with a large loan who expected to make payments over many years. Banks can also cancel a loan and request that all amounts be paid back immediately for any default on the contract whatsoever, which could be as simple as a late payment or even a default on a loan to another bank (the so-called "Universal default") if the contract states it. In some cases, a borrower may cancel the account within the time allowed without paying off the account. As long as the borrower makes no new charges on the account, then the borrower has not "agreed" to the new terms, and may pay off the account under the old terms.
Average daily balance
The sum of the daily outstanding balances is divided by the number of days covered in the cycle to give an average balance for that period. This amount is multiplied by a constant factor to give an interest charge. The resultant interest is the same as if interest was charged at the close of each day, except that it compounds (gets added to the principal) only once per month. It is the simplest of the four methods in the sense that it produces an interest rate approximating, if not exactly equal to, the expected rate.
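A compact sketch of this method (illustrative figures; a 19.99% APR and a 30-day cycle are assumptions):

def average_daily_balance_interest(daily_balances, apr):
    """One cycle's charge under the average-daily-balance method: the mean
    of the daily balances times the APR, prorated for the cycle length."""
    days = len(daily_balances)
    average = sum(daily_balances) / days
    return average * apr * days / 365

# 30-day cycle: $1,000 carried for 10 days, then $400 after a payment
charge = average_daily_balance_interest([1000.0] * 10 + [400.0] * 20, 0.1999)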
Adjusted balance
The balance at the end of the billing cycle is multiplied by a factor in order to give the interest charge. This can result in an actual interest rate lower or higher than the expected one, since it does not take into account the average daily balance, i.e. the time value of money actually lent by the bank. It does, however, take into account money that is left lent out over several months.
Previous balance
The reverse happens: the balance at the start of the previous billing cycle is multiplied by the interest factor in order to derive the charge. As with the Adjusted Balance method, this method can result in an interest rate higher or lower than the expected one, but the part of the balance that carries over more than two full cycles is charged at the expected rate.
Two-cycle average daily balance
The sum of the daily balances of the previous two cycles is used, but interest is charged on that amount only over the current cycle. This can result in an actual interest charge that applies the advertised rate to an amount that does not represent the actual amount of money borrowed over time, much different from the expected interest charge. The interest charged on the actual money borrowed over time can vary radically from month to month (rather than the APR remaining steady). For example, a cardholder with average daily balances of $100, $1,000, and $100 for the June, July, and August cycles will have interest calculated on $550 for July, which is only 55% of the expected interest on $1,000, and will have interest calculated on $550 again in August, which is 5.5 times the expected interest on the money actually borrowed over that month, $100.
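A sketch reproducing the example just given (function name and the 19.99% APR are illustrative assumptions):

def two_cycle_interest(previous_cycle, current_cycle, apr):
    """Average the daily balances of the previous two cycles, then charge
    one cycle's interest on that two-cycle average."""
    combined = previous_cycle + current_cycle
    average = sum(combined) / len(combined)
    return average * apr * len(current_cycle) / 365

june = [100.0] * 30    # average daily balance $100
july = [1000.0] * 30   # average daily balance $1,000
july_charge = two_cycle_interest(june, july, 0.1999)  # charged on a $550 average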
However, when analyzed, the interest on the balance that stays borrowed over the whole time period ($100 in this case) actually does approximate the expected interest rate, just like the other methods, so the variability is only on the balance that varies month-to-month. Therefore, the key to keeping the interest rate stable and close to the "expected rate" (as given by average daily balance method) is to keep the balance close to the same every month. The strategic consumer who has this type of account either pays it all off each month, or makes most charges towards the end of the cycle and payments at the beginning of the cycle to avoid paying too much interest above the expected interest given the interest rate; whereas business cardholders have more sophisticated ways of analyzing and using this type of account for peak cash-flow needs, and willingly pay the "extra" interest to do better business.
This method of calculating interest causes much confusion and is the subject of much misinformation. Because of its complexity for consumers, advisors from Motley Fool (2005) to Credit Advisors (2005) advise consumers to be very wary of this method (unless they can analyze it and achieve true value from it). Despite the confusion of variable interest rates, the bank using this method does have a rationale: it costs the bank in strategic opportunity costs to vary the amount loaned from month to month, because they have to adjust assets to find the money to loan when it is suddenly borrowed, and find something to do with the money when it is paid back. In that sense, the two-cycle average daily balance can be likened to electric charges for industrial clients, in which the charge is based upon the peak usage rather than the actual usage. And, in fact, this method of charging interest is often used for business cardholders as stated above. These accounts often have much higher credit limits than typical consumer accounts (perhaps tens or hundreds of thousands instead of just thousands).
Daily accrual
The daily accrual method is commonly used in the UK. The annual rate is divided by 365 to give a daily rate. Each day, the balance of the account is multiplied by this rate, and at the end of the cycle the total interest is billed to the account. The effect of this method is theoretically mathematically the same over one year as the average daily balance method, because the interest is compounded monthly, but calculated on daily balances. Although a detailed analysis can be done that shows that the effective interest can be slightly lower or higher each month than with the average daily balance method, depending upon the detailed calculation procedure used and the number of days in each month, the effect over the entire year provides only a trivial opportunity for arbitrage.
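A sketch of the daily accrual calculation (illustrative; the function name and inputs are assumptions):

def daily_accrual_interest(daily_balances, apr):
    """UK-style daily accrual: each day's balance accrues one day of
    interest at apr/365; the total is billed at the cycle's end."""
    daily_rate = apr / 365
    return sum(balance * daily_rate for balance in daily_balances)

Summing balance times daily rate over the cycle gives the same cycle total as the average-daily-balance function above, which is the equivalence noted in the text.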
Methods and marketing
In effect, differences in methods mostly act upon the fluctuating balance of the most recent cycle (and are almost the same for balances carried over from cycle to cycle). Banks and consumers are aware of transaction costs, and banks actually receive income in the form of per-transaction payments from the merchants, besides gaining a new loan, which is more business for the bank. Therefore, the interest charged in the most recent cycle interrelates with other incomes and benefits to the cardholder and bank, such as transaction cost, transaction fees to the bank, marketing costs for gaining each new loan (which is like a sale for the bank) and marketing costs for overall cardholder perception, which can increase market share. Therefore, the rate charged on the most recent cycle is largely a matter of marketing preference based upon cardholder perceptions, rather than a matter of maximizing the rate.
Bank fee arbitrage and its limits
In general, differences between methods represent a degree of precision over charging the expected interest rate. Precision is important because any detectable difference from the expected rate can theoretically be taken advantage of (through arbitrage) by cardholders (who have control over when to charge and when to pay), to the possible loss of profitability of the bank. However, in effect, the differences between methods are trivial except in terms of cardholder perceptions and marketing, because of transaction costs, transaction income, cash advance fees, and credit limits. While cardholders can certainly affect their overall costs by managing their daily balances (for example, by buying or paying early or late in the month depending upon the calculation method), their opportunities for scaling this arbitrage to make large amounts of money are very limited. For example, in order to charge the maximum on the card, to take maximum advantage of any arbitrageable difference in calculation methods, cardholders must actually buy something of that value at the right time, and doing so only to take advantage of a small mathematical discrepancy from the expected rate could be very inconvenient. That adds a cost to each transaction which obscures any benefit that can be gained. Credit limits limit how much can be charged, and thus how much advantage can be taken (trivial amounts), and cash advance fees are charged by banks partially to limit the amount of free movement that can be accomplished. (With no fee, cardholders could create any daily balances advantageous to them through a series of cash advances and payments.)
Cash rate
Most banks charge a separate, higher interest rate, and a cash advance fee (ranging from 1 to 5% of the amount of cash taken) on cash or cash-like transactions (called "quasi-cash" by many banks). These transactions are usually the ones for which the bank receives no transaction fee from the payee, such as cash from a bank or ATM, casino chips, and some payments to the government (and any transaction that looks in the bank's discretion like a cash swap, such as a payment on multiple invoices). In effect, the interest rate charged on purchases is subsidized by other profits to the bank.
Default rate
Many US banks between 2000 and 2009 had a contractual default rate (in the U.S., in 2005, ranging from 10% to 36%), which is typically much higher than the regular APR. The rate took effect automatically if any of the listed conditions occurred, which can include the following: one or two late payments, any amount overdue beyond the due date or one more cycle, any returned payment (such as an NSF check), any charging over the credit limit (sometimes including the bank's own fees), and, in some cases, any reduction of credit rating or default with another lender, at the discretion of the bank. In effect, the cardholder is agreeing to pay the default rate on the balance owed unless all the listed events can be guaranteed not to happen. A single late payment, or even a non-reconciled mistake on any account, could result in charges of hundreds or thousands of dollars over the life of the loan. These high effective fees create a great incentive for cardholders to keep track of all their credit card and checking account balances (from which credit card payments are made) and to keep wide margins (extra money or money available). However, the current lack of provable "account balance ownership" in most credit card and checking account designs (studied between 1990 and 2005) makes these kinds of "penalty fees" a complex problem, indeed. New US statutes passed in 2009 limit the use of default rates by allowing an increase in rate on purchases already made only for accounts that have been over 60 days late.
Variable rate
Many credit card issuers give a rate that is based upon an economic indicator published by a respected journal. For example, most banks in the U.S. offer credit cards based upon the lowest U.S. prime rate as published in the Wall Street Journal on the previous business day to the start of the calendar month. For example, a rate given as 9.99% plus the prime rate will be 16.99% when the prime rate is 7.00% (such as the end of 2005). These rates usually also have contractual minimums and maximums to protect the consumer (or the bank, as it may be) from wild fluctuations of the prime rate. While these accounts are harder to budget for, they can theoretically be a little less expensive since the bank does not have to accept the risk of fluctuation of the market (since the prime rate follows inflation rates, which affect the profitability of loans). A fixed rate can be better for consumers who have fixed incomes or need control over their payments budgets. These rates vary depending upon the policies of the issuing organisation.
Grace period
Many banks provide an exception to their normal method of calculating interest, in which no interest is charged on an ending statement balance that is paid by the due date. Banks have various rules. In some cases the account must be paid off for two months in a row to obtain the discount. If the required amount is not paid, then the normal interest rate calculation method is still used. This allows cardholders to use credit cards for the convenience of the payment method (to have one invoice payable with one check per month rather than many separate cash or check transactions), which allows them to keep money invested at a return until it must be moved to pay the balance, and allows them to keep the float on the money borrowed during each month. The bank, in effect, is marketing the convenience of the payment method (to receive fees and possible new lending income, when the cardholder does not pay), as well as the loans themselves.
Promotional interest rates
Many banks offer very low interest, often 0%, for a certain number of statement cycles on certain sub-balances ranging from the entire balance to purchases or balance transfers (used to pay off other accounts), or only for buying certain merchandise in stores owned or contracted with by the lender. Such "zero interest" credit cards allow participating retailers to generate more sales by encouraging consumers to make more purchases on credit. Additionally, the bank gets a chance to increase income by having more money lent out, and possibly an extra marketing transaction payment, either from the payee or sales side of the business, for contributing to the sale (in some cases as much as the entire interest payment, charged to the payee instead of the cardholder).
These offers are often complex, requiring the cardholder to work to understand the terms of the offer, and possibly to pay off the sub-balance by a certain date or have interest charged retroactively, or to pay a certain amount per month over the minimum due (an "interest free" minimum payment) in order to pay down the sub-balance. Methods for communicating the sub-balances and rules on statements vary widely and do not usually conform to any standard. For example, sub-balances are not always reconcilable with the bank (due to lack of debit and credit statements on those balances), and even the term "cycle" (for number of cycles) is not often defined in writing by the bank. Banks also allocate payments automatically to sub-balances in various, often obscure ways. For example, they may contractually pay off promotional balances before higher-interest balances (causing the higher interest to be charged until the account is paid off in full). These methods, besides possibly saving the cardholder money over the expected interest rate, serve to obscure the actual rate charged by the bank. For example, consumers may think they are paying zero percent, when the actual calculated amount on their daily balances is much more.
When a "promotional" rate expires, the normal balance transfer rate applies, and significantly increased interest charges may accrue, possibly exceeding those in effect before the balance transfer was initiated.
Stoozing
Stoozing is the act of borrowing money at an interest rate of 0%, a rate typically offered by credit card companies as an incentive for new customers. The money is then placed in a high interest bank account to make a profit from the interest earned. The borrower (or "stoozer") then pays the money back before the 0% period ends. The borrower does not typically have a real debt to service, but instead uses the money loaned to them to earn interest. Stoozing can also be viewed as a form of arbitrage.
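Back-of-envelope arithmetic for a stooze (all figures illustrative; balance-transfer fees, tax and required minimum payments are ignored):

# borrow at a 0% promotional rate, park the money, repay before the window ends
borrowed = 5000.0        # drawn at 0%
savings_rate = 0.05      # annual rate earned while the money is parked
months = 12              # length of the 0% promotional window
profit = borrowed * savings_rate * months / 12   # about $250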
Rewards programs
"Rewards program" is a term used by card issuers for offers (first used by Discover Card in 1985) to share transaction fees with the cardholder through various games and bonus programs. Cardholders typically receive one "point", "mile" or actual penny (1% of the transaction) for each dollar spent on the card, and more points for buying from certain types of merchants or cooperating merchants, and can then pay down the loan, or trade points for airline flights, catalog merchandise, lower interest rates, gift cards, or cash. The points can also sometimes be exchanged between cooperating programs of different banks, making them more and more currency-like. These programs represent such a large value that they are not-completely-jokingly considered a set of currencies. These combined "currencies" have accumulated to the point that they hold more value worldwide than U.S. (paper) dollars, and are the subject of company liquidation disputes and divorce settlements (Economist, 2005). They are criticized for being highly inflationary, and subject to the whims of the card issuers (raising the prices after the points are earned). Many cardholders get a new card or use a card for the points, but later forget or decline to use the points, anyway. While opening new avenues for marketing and competition, rewards programs are criticized for making it effectively impossible for consumers to compare one competitive interest rate offer to another through any standard means, such as under the U.S. Truth in Lending Act, because of the extra value offered by the bonus program, along with other terms, costs, and benefits created by other marketing gimmicks such as the ones cited in this article.
References
Credit card terminology
Interest rates
Mathematical finance | Credit card interest | [
"Mathematics"
] | 5,621 | [
"Applied mathematics",
"Mathematical finance"
] |
3,046,323 | https://en.wikipedia.org/wiki/Gibbs%20algorithm | In statistical mechanics, the Gibbs algorithm, introduced by J. Willard Gibbs in 1902, is a criterion for choosing a probability distribution for the statistical ensemble of microstates of a thermodynamic system by minimizing the average log probability

$$\langle \ln p_i \rangle = \sum_i p_i \ln p_i$$
subject to the probability distribution satisfying a set of constraints (usually expectation values) corresponding to the known macroscopic quantities. In 1948, Claude Shannon interpreted the negative of this quantity, which he called information entropy, as a measure of the uncertainty in a probability distribution. In 1957, E. T. Jaynes realized that this quantity could be interpreted as missing information about anything, and generalized the Gibbs algorithm to non-equilibrium systems with the principle of maximum entropy and maximum entropy thermodynamics.
Physicists call the result of applying the Gibbs algorithm the Gibbs distribution for the given constraints, most notably Gibbs's grand canonical ensemble for open systems when the average energy and the average number of particles are given. (See also partition function).
This general result of the Gibbs algorithm is then a maximum entropy probability distribution. Statisticians identify such distributions as belonging to exponential families.
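As a rough illustration (not from the article; the three-level system, bracketing interval and bisection solver are illustrative choices), the Gibbs distribution for a fixed expected energy can be found by solving for the Lagrange multiplier numerically:

import math

def gibbs_distribution(energies, mean_energy, tol=1e-10):
    """Maximum-entropy (Gibbs) distribution over microstates with the given
    energies, constrained to a fixed expected energy: solve for the Lagrange
    multiplier beta by bisection, then normalize exp(-beta * E_i)."""
    def avg_energy(beta):
        weights = [math.exp(-beta * e) for e in energies]
        z = sum(weights)
        return sum(w * e for w, e in zip(weights, energies)) / z

    lo, hi = -50.0, 50.0  # bracket for beta; assumes the target is attainable
    while hi - lo > tol:
        mid = (lo + hi) / 2
        # <E> decreases as beta increases, so move the bracket accordingly
        if avg_energy(mid) > mean_energy:
            lo = mid
        else:
            hi = mid
    beta = (lo + hi) / 2
    weights = [math.exp(-beta * e) for e in energies]
    z = sum(weights)
    return [w / z for w in weights], beta

# Three-level system with target mean energy 0.8
probs, beta = gibbs_distribution([0.0, 1.0, 2.0], 0.8)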
References
Statistical mechanics
Particle statistics
Entropy and information | Gibbs algorithm | [
"Physics",
"Mathematics"
] | 234 | [
"Statistical mechanics stubs",
"Physical quantities",
"Particle statistics",
"Entropy and information",
"Entropy",
"Statistical mechanics",
"Dynamical systems"
] |
3,046,549 | https://en.wikipedia.org/wiki/Moishezon%20manifold | In mathematics, a Moishezon manifold is a compact complex manifold such that the field of meromorphic functions on each component has transcendence degree equal to the complex dimension of the component:

$$\dim_{\mathbf{C}} M = \operatorname{tr.deg}_{\mathbf{C}} \mathbf{C}(M).$$
Complex algebraic varieties have this property, but the converse is not true: Hironaka's example gives a smooth 3-dimensional Moishezon manifold that is not an algebraic variety or scheme. Moishezon showed that a Moishezon manifold is a projective algebraic variety if and only if it admits a Kähler metric. Artin showed that any Moishezon manifold carries an algebraic space structure; more precisely, the category of Moishezon spaces (similar to Moishezon manifolds, but allowed to have singularities) is equivalent to the category of algebraic spaces that are proper over $\operatorname{Spec}(\mathbf{C})$.
References
Algebraic geometry
Analytic geometry | Moishezon manifold | [
"Mathematics"
] | 165 | [
"Fields of abstract algebra",
"Algebraic geometry"
] |
3,046,584 | https://en.wikipedia.org/wiki/Kraton%20%28polymer%29 | Kraton is the trade name given to a number of high-performance elastomers manufactured by Kraton Polymers, and used as synthetic replacements for rubber. Kraton polymers offer many of the properties of natural rubber, such as flexibility, high traction, and sealing abilities, but with increased resistance to heat, weathering, and chemicals.
Company
The origin of Kraton polymers goes back to the synthetic rubber (GR-S) program funded by the U.S. government during World War II to develop and establish a domestic supply capability for synthetic styrene butadiene rubber (SBR) as an alternative to natural rubber.
Shell Oil Company purchased the Torrance, California facility from the U.S. government that was built to make synthetic styrene butadiene rubber. The company formed an Elastomers Division that eventually became Kraton Corporation. Shell Oil Company broadened the product portfolio of elastomers in the 1950s, under the technical leadership of Murray Luftglass and Norman R. Legge.
As part of the divestment program that was announced by Shell in December 1998, the Kraton elastomers business was sold to the private equity firm Ripplewood Holdings in 2000. Kraton completed its IPO on December 17, 2009, to become a separate publicly traded company. In 2021 Kraton employees won an ASC Innovation Award for "Next Generation of Biobased Tackifiers REvolution™".
Properties
Kraton polymers are styrenic block copolymers (SBCs) consisting of polystyrene blocks and rubber blocks. The rubber blocks consist of polybutadiene, polyisoprene, or their hydrogenated equivalents. The tri-block, with polystyrene blocks at both extremities linked together by a rubber block, is the most important polymer structure observed in SBCs. If the rubber block consists of polybutadiene, the corresponding triblock structure is poly(styrene-block-butadiene-block-styrene), usually abbreviated as SBS. Kraton D (SBS and SIS) and their selectively hydrogenated versions Kraton G (SEBS and SEPS) are the major Kraton polymer structures. The microstructure of SBS consists of domains of polystyrene arranged regularly in a matrix of polybutadiene, as shown in TEM micrographs obtained on a thin film of polymer cast onto mercury from solution and then stained with osmium tetroxide.
The glass transition temperature (Tg) of the polybutadiene blocks is typically −90 °C and the Tg of the polystyrene blocks is +100 °C. So, at any temperature between about −90 °C and +100 °C, Kraton SBS will act as a physically crosslinked elastomer. If Kraton polymers are heated substantially above the Tg of the styrene-derived blocks, that is, above about 100 °C (for example, to 170 °C), the physical cross-links change from rigid glassy regions to flowable melt regions and the entire material flows, and therefore can be cast, molded, or extruded into any desired form. On cooling, this new form resumes its elastomeric character. This is the reason such a material is called a thermoplastic elastomer (TPE). The polystyrene blocks form domains of nanometre size in the microstructure, and they stabilize the form of the molded material. Depending on the rubber-to-polystyrene ratio in the material, the polystyrene domains can be spherical or form cylinders or lamellae. The hydrogenated Kraton polymers, named Kraton G, exhibit improved resistance to temperature (processing at 200–230 °C is common), to oxidation, and to UV. SEBS and SEPS, due to their polyolefinic rubber nature, present excellent compatibility with polyolefins and paraffinic oils.
Applications
Kraton polymers are always used in blends with various other ingredients like paraffinic oils, polyolefins, polystyrene, bitumen, tackifying resins, and fillers to provide a very large range of end-use products ranging from hot melt adhesives to impact-modified transparent polypropylene bins, from medical TPE compounds to modified bitumen roofing felts or from oil gel toys (including sex toys) to elastic attachments in diapers.
It can make asphalt flexible, which is necessary if the asphalt is to be used to coat a surface that is below grade or for highly demanding paving applications like F1 racing tracks.
Kraton-based compounds are also used in non-slip knife handles.
The earliest commercial components using Kraton G (thermoplastic rubber) in the automobile industry appeared in the 1970s. The implementation of U.S. requirements for automobile bumpers to absorb impacts with no damage to the car's safety equipment led to the first successful commercial automotive application of specialized flexible polymers as fascia for the 1974 AMC Matador.
American Motors Corporation (AMC) also used this polymer plastic on the AMC Eagle for the color-matched flexible wheel arch flares that flowed into rocker panel extensions. This was needed because of the Eagle's 2-inch wider track compared to the AMC Concord platform on which the AWD cars were based. The Eagle's Kraton bodywork was lightweight, flexible, and did not crack in cold weather as is typical of fiberglass automobile body components.
Some grades of Kraton can also be dissolved into hydrocarbon oils to create "shear thinning" grease-type products that are used in the manufacture of telecommunications cables containing optical fibers.
References
Polymers
Copolymers
Brand name materials | Kraton (polymer) | [
"Chemistry",
"Materials_science"
] | 1,198 | [
"Polymers",
"Polymer chemistry"
] |
3,046,828 | https://en.wikipedia.org/wiki/Savas%20Dimopoulos | Savas Dimopoulos (; ; born 1952) is a particle physicist at Stanford University. He worked at CERN from 1994 to 1997. Dimopoulos is well known for his work on constructing theories beyond the Standard Model.
Life
He was born an ethnic Greek in Istanbul, Turkey and later moved to Athens due to ethnic tensions in Turkey during the 1950s and 1960s.
Education and career
Dimopoulos studied as an undergraduate at the University of Houston. He went to the University of Chicago and studied under Yoichiro Nambu for his doctoral studies. After completing his Ph.D. in 1979, he briefly went to Columbia University before taking a faculty position at Stanford University in 1980. During 1981 and 1982 he was also affiliated with the University of Michigan, Harvard University and the University of California, Santa Barbara. From 1994 to 1997 he was on leave from Stanford University and was employed by CERN.
Dimopoulos is well known for his work on constructing theories beyond the Standard Model, which are currently being searched for and tested at particle colliders like LHC at CERN and at experiments all over the world. For example, in 1981 he proposed a softly broken SU(5) GUT model with Howard Georgi, which is one of the foundational papers of the Minimal Supersymmetric Standard Model (MSSM). He also proposed the ADD model of large extra dimensions with Nima Arkani-Hamed and Gia Dvali.
Awards
In 2006, the American Physical Society awarded Dimopoulos the Sakurai Prize, "For his creative ideas on dynamical symmetry breaking, supersymmetry, and extra spatial dimensions, which have shaped theoretical research on TeV-scale physics, thereby inspiring a wide range of experiments." In 2006, he received the Caterina Tomassoni and Felice Pietro Chisesi Prize at the University of Rome, Italy. The prize recognizes and encourages outstanding achievements in physics. Dimopoulos was lauded by the Tomassoni Committee as "one of the leading figures in theoretical particle physics. His proposal of the supersymmetric standard model has aided the understanding of high-energy physics mechanisms."
He appeared in the 2013 documentary film Particle Fever, about the work of the Large Hadron Collider.
He is a member of the U. S. National Academy of Sciences.
Work
Baryogenesis at the GUT scale
Early work on technicolor
Early work on soft supersymmetry breaking and Gauge coupling unification in the MSSM
Moduli-mediated millimeter scale forces
"ADD model" of large extra dimensions, with Nima Arkani-Hamed and Gia Dvali
Split supersymmetry
References
External links
Papers in the INSPIRE-HEP database.
Faculty page at Stanford.
1952 births
People associated with CERN
Greek academics
Greek emigrants to the United States
20th-century Greek physicists
Constantinopolitan Greeks
Harvard University staff
J. J. Sakurai Prize for Theoretical Particle Physics recipients
Living people
Particle physicists
Scientists from Istanbul
Stanford University Department of Physics faculty
Turkish people of Greek descent
University of Chicago alumni
University of Houston alumni
University of Michigan alumni
Members of the United States National Academy of Sciences
Scientists from Athens
Academics from Istanbul | Savas Dimopoulos | [
"Physics"
] | 644 | [
"Particle physicists",
"Particle physics"
] |
3,047,078 | https://en.wikipedia.org/wiki/VMDS | VMDS (Version Managed Data Store) is a relational database technology provided by GE Energy as part of its Smallworld technology platform. It was designed from the outset to store and analyse the highly complex spatial and topological networks typically used by enterprise utilities such as power distribution and telecommunications.
VMDS was originally introduced in 1990 and has been improved and updated over the years. Its current version is 6.0.
VMDS has been designed as a spatial database. This gives VMDS a number of distinctive characteristics when compared to conventional attribute only relational databases.
Distributed server processing
VMDS is composed of two parts: a simple, highly scalable data block server called SWMFS (Smallworld Master File Server) and an intelligent client API written in C and Magik. Spatial and attribute data are stored in data blocks that reside in special files called data store files on the server. When the client application requests data it has sufficient intelligence to work out the optimum set of data blocks that are required. This request is then made to SWMFS which returns the data to the client via the network for processing.
This approach is particularly efficient and scalable when dealing with spatial and topological data, which tends to flow in larger volumes and require more processing than plain attribute data (for example, during a map redraw operation). This approach makes VMDS well suited to enterprise deployments that might involve hundreds or even thousands of concurrent clients.
Support for long transactions
Relational databases support short transactions, in which changes to data are relatively small and brief in duration (the maximum period between the start and the end of a transaction is typically a few seconds or less).
VMDS supports long transactions in which the volume of data involved in the transaction can be substantial and the duration of the transaction can be significant (days, weeks or even months). These types of transaction are common in advanced network applications used by, for example, power distribution utilities.
Due to the time span of a long transaction in this context the amount of change can be significant (not only within the scope of the transaction, but also within the context of the database as a whole). Accordingly, it is likely that the same record might be changed more than once. To cope with this scenario VMDS has inbuilt support for automatically managing such conflicts and allows applications to review changes and accept only those edits that are correct.
Spatial and topological capabilities
As well as conventional relational database features such as attribute querying, join fields, triggers and calculated fields, VMDS has numerous spatial and topological capabilities. This allows spatial data such as points, texts, polylines, polygons and raster data to be stored and analysed.
Spatial functions include: find all features within a polygon, calculate the Voronoi polygons of a set of sites and perform a cluster analysis on a set of points.
Vector spatial data such as points, polylines and polygons can be given topological attributes that allow complex networks to be modelled. Network analysis engines are provided to answer questions such as find the shortest path between two nodes or how to optimize a delivery route (the travelling salesman problem). A topology engine can be configured with a set of rules that define how topological entities interact with each other when new data is added or existing data edited.
Data abstraction
In VMDS all data is presented to the application as objects. This is different from many relational databases that present the data as rows from a table or query result using say JDBC. VMDS provides a data modelling tool and underlying infrastructure as part of the Smallworld technology platform that allows administrators to associate a table in the database with a Magik exemplar (or class). Magik get and set methods for the Magik exemplar can be automatically generated that expose a table's field (or column). Each VMDS row manifests itself to the application as an instance of a Magik object and is known as an RWO (or real world object). Tables are known as collections in Smallworld parlance.
# all_rwos hold all the rwos in the database and is heterogeneous
all_rwos << my_application.rwo_set()
# valve_collection holds the valve collection
valves << all_rwos.select(:collection, {:valve})
number_of_valves << valves.size
Queries are built up using predicate objects:
# find 'open' valves.
open_valves << valves.select(predicate.eq(:operating_status, "open"))
number_of_open_valves << open_valves.size
_for valve _over open_valves.elements()
_loop
write(valve.id)
_endloop
Joins are implemented as methods on the parent RWO. For example, a manager might have several employees who report to him:
# get the employee collection.
employees << my_application.database.collection(:gis, :employees)
# find a manager called 'Steve' and get the first matching element
steve << employees.select(predicate.eq(:name, "Steve").and(predicate.eq(:role, "manager"))).an_element()
# display the names of his direct reports. name is a field (or column)
# on the employee collection (or table)
_for employee _over steve.direct_reports.elements()
_loop
write(employee.name)
_endloop
Performing a transaction:
# each key in the hash table corresponds to the name of the field (or column) in
# the collection (or table)
valve_data << hash_table.new_with(
:asset_id, 57648576,
:material, "Iron")
# get the valve collection directly
valve_collection << my_application.database.collection(:gis, :valve)
# create an insert transaction to insert a new valve record into the collection;
# a comment can be provided that describes the transaction
transaction << record_transaction.new_insert(valve_collection, valve_data, "Inserted a new valve")
transaction.run()
See also
Smallworld Technical Paper No. 8 - GIS Databases Are Different
List of relational database management systems
List of object-relational database management systems
Spatial database
Multiversion concurrency control
Data management
GIS software | VMDS | [
"Technology"
] | 1,327 | [
"Data management",
"Data"
] |
3,047,387 | https://en.wikipedia.org/wiki/Pattinson%27s%20process | Pattinson's process or pattinsonisation is a method for removing silver from lead, discovered by Hugh Lee Pattinson in 1829 and patented in 1833.
The process is dependent on the fact that lead which has least silver in it solidifies first on liquefaction, leaving the remaining liquid richer in silver.
In practice, several crystallisations were required, so Pattinson's equipment consisted of little more than a row of up to 13 iron pots, which were heated from below. Some lead, naturally containing a small percentage of silver, was loaded into the central pot and melted. This was then allowed to cool. As the lead solidified, it was removed using large perforated iron ladles and moved to the next pot in one direction, and the remaining metal, now richer in silver, was then transferred to the next pot in the opposite direction. The process was repeated from one pot to the next, lead accumulating in the pot at one end and metal enriched in silver in the pot at the other.
The level of enrichment possible is limited by the lead-silver eutectic and typically the silver content of the silver-rich melt could not be raised above 2% (around 600 to 700 ounces per ton), so further separation is carried out by cupellation.
The process was economic for lead containing at least 250 grams of silver per ton. Being the first process applicable to low-grade lead, it supplemented earlier patio process and pan amalgamation.
It was replaced by the Parkes process in the mid-19th century.
See also
Parkes process - a method for separation of metals from lead through precipitation.
References
External links
Reference to Pattinson's Process in mining.
Pattinson's process.
Pattinson's white lead.
Lead
Silver
Metallurgical processes | Pattinson's process | [
"Chemistry",
"Materials_science"
] | 367 | [
"Metallurgical processes",
"Metallurgy"
] |
3,047,469 | https://en.wikipedia.org/wiki/Plant%20variety%20%28law%29 | Plant variety is a legal term, following the International Union for the Protection of New Varieties of Plants (UPOV) Convention. Recognition of a cultivated plant (a cultivar) as a "variety" in this particular sense provides its breeder with some legal protection, so-called plant breeders' rights, depending to some extent on the internal legislation of the UPOV signatory countries, such as the Plant Variety Protection Act in the US.
This "variety" (which will differ in status according to national law) should not be confused with the international taxonomic rank of "variety" (regulated by the International Code of Nomenclature for algae, fungi, and plants), nor with the term "cultivar" (regulated by the International Code of Nomenclature for Cultivated Plants). Some horticulturists use "variety" imprecisely; for example, viticulturists almost always refer to grape cultivars as "grape varieties".
The EU has established a system that grants intellectual property rights to new plant varieties called Community plant variety right. It is valid throughout the EU and is in line with TRIPS/WTO agreements and the UPOV 1991 convention.
Around 27,260 plant variety applications were filed worldwide in 2022, up 8.2% on 2021 – a seventh consecutive year of growth. China contributed the majority of global growth, followed by the United Kingdom.
See also
Lists of cultivars
Protection of Plant Varieties and Farmers' Rights Act, 2001 of India
References
External links
International Union for the Protection of New Varieties of Plants (UPOV)
Community Plant Variety Office (CPVO)
Legal terminology
Biological patent law
Agricultural law | Plant variety (law) | [
"Biology"
] | 333 | [
"Biotechnology law",
"Biological patent law"
] |
3,047,554 | https://en.wikipedia.org/wiki/Kelly%20criterion | In probability theory, the Kelly criterion (or Kelly strategy or Kelly bet) is a formula for sizing a sequence of bets by maximizing the long-term expected value of the logarithm of wealth, which is equivalent to maximizing the long-term expected geometric growth rate. John Larry Kelly Jr., a researcher at Bell Labs, described the criterion in 1956.
The practical use of the formula has been demonstrated for gambling, and the same idea was used to explain diversification in investment management. In the 2000s, Kelly-style analysis became a part of mainstream investment theory and the claim has been made that well-known successful investors including Warren Buffett and Bill Gross use Kelly methods. Also see intertemporal portfolio choice. It is also the standard replacement of statistical power in anytime-valid statistical tests and confidence intervals, based on e-values and e-processes.
Kelly criterion for binary return rates
In a system where the return on an investment or a bet is binary, so an interested party either wins or loses a fixed percentage of their bet, the expected growth rate coefficient yields a very specific solution for an optimal betting percentage.
Gambling Formula
Where losing the bet involves losing the entire wager, the Kelly bet is:

$$f^* = p - \frac{q}{b} = p - \frac{1 - p}{b}$$
where:
$f^*$ is the fraction of the current bankroll to wager.
$p$ is the probability of a win.
$q$ is the probability of a loss ($q = 1 - p$).
$b$ is the proportion of the bet gained with a win. E.g., if betting $10 on a 2-to-1 odds bet (upon a win you are returned $30, winning you $20), then $b = 2$.
As an example, if a gamble has a 60% chance of winning ($p = 0.6$, $q = 0.4$), and the gambler receives 1-to-1 odds on a winning bet ($b = 1$), then to maximize the long-run growth rate of the bankroll, the gambler should bet 20% of the bankroll at each opportunity ($f^* = 0.6 - 0.4/1 = 0.2$).
If the gambler has zero edge (i.e., if $bp = q$), then the criterion recommends the gambler bet nothing.
If the edge is negative ($bp < q$), the formula gives a negative result, indicating that the gambler should take the other side of the bet.
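A direct transcription of the formula (an illustrative sketch; the function name is an assumption):

def kelly_bet(p, b):
    """Kelly fraction for a simple wager: win probability p, proportional
    gain b on a win, full loss of the stake on a loss."""
    q = 1.0 - p
    return p - q / b

print(kelly_bet(0.6, 1.0))   # 0.2 -> bet 20% of the bankroll
print(kelly_bet(0.5, 1.0))   # 0.0 -> no edge, bet nothing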
Investment formula
A more general form of the Kelly formula allows for partial losses, which is relevant for investments:

$$f^* = \frac{p}{a} - \frac{q}{b}$$
where:
$f^*$ is the fraction of the assets to apply to the security.
$p$ is the probability that the investment increases in value.
$q$ is the probability that the investment decreases in value ($q = 1 - p$).
$b$ is the fraction that is gained in a positive outcome. If the security price rises 10%, then $b = 0.1$.
$a$ is the fraction that is lost in a negative outcome. If the security price falls 10%, then $a = 0.1$.
Note that the Kelly criterion is perfectly valid only for fully known outcome probabilities, which is almost never the case with investments. In addition, risk-averse strategies invest less than the full Kelly fraction.
The general form can be rewritten as follows:

$$f^* = \frac{p}{a} - \frac{q}{b} = \frac{q}{a}\left(\mathrm{WLP} - \frac{1}{\mathrm{WLR}}\right)$$
where:
$\mathrm{WLP} = p/q$ is the win-loss probability (WLP) ratio, which is the ratio of winning to losing probabilities.
$\mathrm{WLR} = b/a$ is the win-loss ratio (WLR) of bet outcomes, which is the winning skew.
It is clear that at least one of the factors $\mathrm{WLP}$ or $\mathrm{WLR}$ needs to be larger than 1 for having an edge (so that $f^* > 0$, which requires $\mathrm{WLP} \cdot \mathrm{WLR} > 1$). It is even possible that the win-loss probability ratio is unfavorable ($\mathrm{WLP} < 1$), but one has an edge as long as $\mathrm{WLP} \cdot \mathrm{WLR} > 1$.
The Kelly formula can easily result in a fraction higher than 1, such as when the losing size is small (see the above expression: the $p/a$ term grows as $a$ shrinks). This happens somewhat counterintuitively, because the Kelly fraction formula compensates for a small losing size with a larger bet. However, in most real situations, there is high uncertainty about all parameters entering the Kelly formula. In the case of a Kelly fraction higher than 1, it is theoretically advantageous to use leverage to purchase additional securities on margin.
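A sketch of the general formula, reproducing the possibility of a fraction above 1 (figures are illustrative):

def kelly_investment(p, a, b):
    """General Kelly fraction with partial losses: p is the probability of
    a gain, b the fractional gain on a win, a the fractional loss."""
    q = 1.0 - p
    return p / a - q / b

# 60% chance of a 20% gain against a 40% chance of a 10% loss:
f = kelly_investment(0.6, a=0.10, b=0.20)   # 6.0 - 2.0 = 4.0, i.e. leverage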
Betting example – behavioural experiment
In a study, each participant was given $25 and asked to place even-money bets on a coin that would land heads 60% of the time. Participants had 30 minutes to play, so could place about 300 bets, and the prizes were capped at $250. The behavior of the test subjects, however, was far from optimal.
Using the Kelly criterion and based on the odds in the experiment (ignoring the cap of $250 and the finite duration of the test), the right approach would be to bet 20% of one's bankroll on each toss of the coin, which works out to a 2.034% average gain each round. This is a geometric mean, not the arithmetic rate of 4% (r = 0.2 × (0.6 − 0.4) = 0.04). The theoretical expected wealth after 300 rounds works out to $10,505 ($25 × 1.02034^300) if it were not capped.
In this particular game, because of the cap, a strategy of betting only 12% of the pot on each toss would have even better results (a 95% probability of reaching the cap and an average payout of $242.03).
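A small simulation of the experiment (an illustrative sketch; the seed handling and run count are arbitrary choices):

import random

def play(f, p=0.6, rounds=300, bankroll=25.0, cap=250.0, rng=None):
    """Simulate the biased-coin, even-money game betting a fixed fraction
    f of the bankroll each round, with the study's $250 cap."""
    rng = rng or random.Random(0)
    for _ in range(rounds):
        stake = f * bankroll
        bankroll += stake if rng.random() < p else -stake
        if bankroll >= cap:
            return cap
        if bankroll <= 0.0:
            return 0.0
    return bankroll

# Compare full Kelly (f = 0.2) with the cap-aware f = 0.12 over many runs
runs = [play(0.12, rng=random.Random(seed)) for seed in range(10000)]
average_payout = sum(runs) / len(runs)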
Proof
Heuristic proofs of the Kelly criterion are straightforward. The Kelly criterion maximizes the expected value of the logarithm of wealth (the expectation value of a function is given by the sum, over all possible outcomes, of the probability of each particular outcome multiplied by the value of the function in the event of that outcome). We start with 1 unit of wealth and bet a fraction $f$ of that wealth on an outcome that occurs with probability $p$ and offers odds of $b$. The probability of winning is $p$, and in that case the resulting wealth is equal to $1 + fb$. The probability of losing is $q = 1 - p$, and the fraction lost on a negative outcome is $a$. In that case the resulting wealth is equal to $1 - fa$. Therefore, the expected geometric growth rate is:

$$r = (1 + fb)^p \cdot (1 - fa)^q$$

We want to find the maximum r of this curve (as a function of f), which involves finding the derivative of the equation. This is more easily accomplished by taking the logarithm of each side first; because the logarithm is monotonic, it does not change the locations of function extrema. The resulting equation is:

$$E = \log r = p \log(1 + fb) + q \log(1 - fa)$$

with $E$ denoting logarithmic wealth growth. To find the value of $f$ for which the growth rate is maximized, denoted as $f^*$, we differentiate the above expression and set this equal to zero. This gives:

$$\frac{pb}{1 + f^* b} - \frac{qa}{1 - f^* a} = 0$$

Rearranging this equation to solve for the value of $f^*$ gives the Kelly criterion:

$$f^* = \frac{p}{a} - \frac{q}{b}$$

Notice that this expression reduces to the simple gambling formula when $a = 1$, when a loss results in full loss of the wager.
Kelly criterion for non-binary return rates
If the return rates on an investment or a bet are continuous in nature the optimal growth rate coefficient must take all possible events into account.
Application to the stock market
In mathematical finance, if security weights maximize the expected geometric growth rate (which is equivalent to maximizing log wealth), then a portfolio is growth optimal.
The Kelly Criterion shows that for a given volatile security this is satisfied when

$$f^* = \frac{\mu - r}{\sigma^2}$$

where $f^*$ is the fraction of available capital invested that maximizes the expected geometric growth rate, $\mu$ is the expected growth rate coefficient, $\sigma^2$ is the variance of the growth rate coefficient and $r$ is the risk-free rate of return. Note that a symmetric probability density function was assumed here.
Computations of growth optimal portfolios can suffer tremendous garbage in, garbage out problems. For example, the cases below take as given the expected return and covariance structure of assets, but these parameters are at best estimates or models that have significant uncertainty. If portfolio weights are largely a function of estimation errors, then Ex-post performance of a growth-optimal portfolio may differ fantastically from the ex-ante prediction. Parameter uncertainty and estimation errors are a large topic in portfolio theory. An approach to counteract the unknown risk is to invest less than the Kelly criterion.
Rough estimates are still useful. If we take excess return 4% and volatility 16%, then the yearly Sharpe ratio and Kelly fraction are calculated to be 25% and roughly 150% ($0.04/0.16 = 0.25$ and $0.04/0.16^2 \approx 1.56$). The daily Sharpe ratio is about 1.7% while the Kelly fraction remains about 150%. The Sharpe ratio implies a daily win probability of p = (50% + 1.7%/4), where we assumed a fixed probability bandwidth. Applying the discrete Kelly formula to these daily figures gives another rough estimate of the Kelly fraction. Both of these estimates of the Kelly fraction appear quite reasonable, yet a prudent approach suggests a further multiplication of the Kelly ratio by 50% (i.e. half-Kelly).
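The arithmetic of this paragraph, spelled out (values from the text):

mu_excess = 0.04           # annual excess return over the risk-free rate
sigma = 0.16               # annual volatility

sharpe = mu_excess / sigma            # 0.25, i.e. 25%
kelly = mu_excess / sigma ** 2        # 1.5625, i.e. roughly 150%
half_kelly = kelly / 2                # the prudent multiplier from the text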
A detailed paper by Edward O. Thorp and a co-author estimates Kelly fraction to be 117% for the American stock market SP500 index.
Significant downside tail-risk for equity markets is another reason to reduce Kelly fraction from naive estimate (for instance, to reduce to half-Kelly).
Proof
A rigorous and general proof can be found in Kelly's original paper or in some of the other references listed below. Some corrections have been published.
We give the following non-rigorous argument for the case with $b = 1$ (a 50:50 "even money" bet) to show the general idea and provide some insights.

When $b = 1$, a Kelly bettor bets $2p - 1$ times their initial wealth $W$, as shown above. If they win, they have $2pW$ after one bet. If they lose, they have $2(1 - p)W$. Suppose they make $N$ bets like this, and win $K$ times out of this series of bets. The resulting wealth will be:

$$(2p)^K (2(1 - p))^{N - K} W$$

The ordering of the wins and losses does not affect the resulting wealth. Suppose another bettor bets a different amount, $(2p - 1)(1 + \Delta)$ for some value of $\Delta$ (where $\Delta$ may be positive or negative). They will have $(1 + (2p - 1)(1 + \Delta))W$ after a win and $(1 - (2p - 1)(1 + \Delta))W$ after a loss. After the same series of wins and losses as the Kelly bettor, they will have:

$$(1 + (2p - 1)(1 + \Delta))^K (1 - (2p - 1)(1 + \Delta))^{N - K} W$$

Take the derivative of the logarithm of this with respect to $\Delta$ (the logarithm is monotonic, so the maxima coincide) and get:

$$\frac{K(2p - 1)}{1 + (2p - 1)(1 + \Delta)} - \frac{(N - K)(2p - 1)}{1 - (2p - 1)(1 + \Delta)}$$

The function is maximized when this derivative is equal to zero, which occurs at:

$$K\left(1 - (2p - 1)(1 + \Delta)\right) = (N - K)\left(1 + (2p - 1)(1 + \Delta)\right)$$

which implies that

$$(2p - 1)(1 + \Delta) = \frac{2K}{N} - 1$$

but the proportion of winning bets will eventually converge to

$$\frac{K}{N} \to p$$

according to the weak law of large numbers.

So in the long run, final wealth is maximized by setting $\Delta$ to zero, which means following the Kelly strategy.
This illustrates that Kelly has both a deterministic and a stochastic component. If one knows K and N and wishes to pick a constant fraction of wealth to bet each time (otherwise one could cheat and, for example, bet zero after the Kth win knowing that the rest of the bets will lose), one will end up with the most money if one bets:

$$f = 2\frac{K}{N} - 1$$

each time. This is true whether $N$ is small or large. The "long run" part of Kelly is necessary because K is not known in advance, just that as $N$ gets large, $K/N$ will approach $p$. Someone who bets more than Kelly can do better if $K/N > p$ for a stretch; someone who bets less than Kelly can do better if $K/N < p$ for a stretch, but in the long run, Kelly always wins.
The heuristic proof for the general case proceeds as follows.
In a single trial, if one invests the fraction $f$ of their capital, if the strategy succeeds, the capital at the end of the trial increases by the factor $1 + fb$, and, likewise, if the strategy fails, the capital is decreased by the factor $1 - fa$. Thus at the end of $N$ trials (with $pN$ successes and $qN$ failures), the starting capital of $1 yields

$$C_N = (1 + fb)^{pN} (1 - fa)^{qN}.$$

Maximizing $\log(C_N)/N$, and consequently $C_N$, with respect to $f$ leads to the desired result

$$f^* = \frac{p}{a} - \frac{q}{b}.$$
Edward O. Thorp provided a more detailed discussion of this formula for the general case. There, it can be seen that the substitution of $p$ for the ratio of the number of "successes" to the number of trials implies that the number of trials must be very large, since $p$ is defined as the limit of this ratio as the number of trials goes to infinity. In brief, betting $f^*$ each time will likely maximize the wealth growth rate only in the case where the number of trials is very large, and $b$ and $p$ are the same for each trial. In practice, this is a matter of playing the same game over and over, where the probability of winning and the payoff odds are always the same. In the heuristic proof above, proportions of successes close to $p$ and of failures close to $q$ are highly likely only for a very large number of trials $N$.
Multiple outcomes
Kelly's criterion may be generalized to gambling on many mutually exclusive outcomes, such as in horse races. Suppose there are several mutually exclusive outcomes. The probability that the $k$-th horse wins the race is $p_k$, the total amount of bets placed on the $k$-th horse is $B_k$, and

$$\beta_k = \frac{B_k}{\sum_i B_i},$$

where $Q_k = D/\beta_k$ are the pay-off odds, $D = 1 - tt$ is the dividend rate, where $tt$ is the track take or tax, and $D/\beta_k$ is the revenue rate after deduction of the track take when the $k$-th horse wins. The fraction of the bettor's funds to bet on the $k$-th horse is $f_k$. Kelly's criterion for gambling with multiple mutually exclusive outcomes gives an algorithm for finding the optimal set $S^o$ of outcomes on which it is reasonable to bet, and it gives an explicit formula for finding the optimal fractions $f^o_k$ of the bettor's wealth to be bet on the outcomes included in the optimal set $S^o$.
The algorithm for the optimal set of outcomes consists of four steps:
Calculate the expected revenue rate for all possible (or only for several of the most promising) outcomes: $er_k = \frac{D}{\beta_k} p_k = Q_k p_k$.
Reorder the outcomes so that the new sequence $er_1 \ge er_2 \ge \dots$ is non-increasing. Thus $er_1$ will be the best bet.
Set $S = \varnothing$ (the empty set), $k = 1$, $R(S) = 1$. Thus the best bet $er_1$ will be considered first.
Repeat:
If $er_k > R(S)$ then insert the $k$-th outcome into the set: $S = S \cup \{k\}$, recalculate $R(S)$ according to the formula

$$R(S) = \frac{1 - \sum_{i \in S} p_i}{1 - \sum_{i \in S} \frac{1}{Q_i}}$$

and then set $k = k + 1$. Otherwise, set $S^o = S$ and stop the repetition.
If the optimal set $S^o$ is empty then do not bet at all. If the set of optimal outcomes is not empty, then the optimal fraction $f^o_k$ to bet on the $k$-th outcome may be calculated from this formula:

$$f^o_k = \frac{er_k - R(S^o)}{Q_k} = p_k - \frac{R(S^o)}{Q_k}.$$
One may prove that

$$1 - \sum_{k \in S^o} f^o_k = R(S^o),$$

where the right-hand side is the reserve rate. Therefore, the requirement $er_k > R(S)$ may be interpreted as follows: the $k$-th outcome is included in the set of optimal outcomes if and only if its expected revenue rate is greater than the reserve rate. The formula for the optimal fraction may be interpreted as the excess of the expected revenue rate of the $k$-th horse over the reserve rate divided by the revenue after deduction of the track take when the $k$-th horse wins, or as the excess of the probability of the $k$-th horse winning over the reserve rate divided by the revenue after deduction of the track take when the $k$-th horse wins. The binary growth exponent is

$$G = \sum_{k \in S^o} p_k \log_2(p_k Q_k) + \left(1 - \sum_{k \in S^o} p_k\right)\log_2 R(S^o),$$

and the doubling time is

$$T_d = \frac{1}{G}.$$
This method of selection of optimal bets may be applied also when probabilities are known only for several most promising outcomes, while the remaining outcomes have no chance to win. In this case it must be that
and
.
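A compact sketch of one common formulation of this algorithm (after Smoczynski and Tomkins; the decimal-odds convention and the helper below are assumptions, not necessarily the article's exact notation):

def kelly_multi(probs, odds):
    """Kelly betting on mutually exclusive outcomes. `odds` are decimal
    odds, i.e. the total payout per unit staked after the track take.
    Returns a mapping from outcome index to the fraction of wealth to bet."""
    order = sorted(range(len(probs)),
                   key=lambda i: probs[i] * odds[i], reverse=True)
    sum_p = sum_inv = 0.0
    reserve, chosen = 1.0, []
    for i in order:
        if probs[i] * odds[i] <= reserve:
            break   # expected revenue rate no longer beats the reserve rate
        chosen.append(i)
        sum_p += probs[i]
        sum_inv += 1.0 / odds[i]
        reserve = (1.0 - sum_p) / (1.0 - sum_inv) if sum_inv < 1.0 else 0.0
    return {i: probs[i] - reserve / odds[i] for i in chosen}

# A single favorable outcome reproduces the binary formula: bet 20%
print(kelly_multi([0.6, 0.4], [2.0, 1.0]))   # approximately {0: 0.2}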
Stock investments
The second-order Taylor polynomial can be used as a good approximation of the main criterion. Primarily, it is useful for stock investment, where the fraction devoted to investment is based on simple characteristics that can be easily estimated from existing historical data – expected value and variance. This approximation may offer similar results to the original criterion, but in some cases the solution obtained may be infeasible.
For single assets (stock, index fund, etc.), and a risk-free rate, it is easy to obtain the optimal fraction to invest through geometric Brownian motion.
The stochastic differential equation governing the evolution of a lognormally distributed asset $S$ at time $t$ is

$$\frac{dS_t}{S_t} = \mu\,dt + \sigma\,dW_t,$$

whose solution is

$$S_t = S_0 \exp\left(\left(\mu - \frac{\sigma^2}{2}\right)t + \sigma W_t\right),$$

where $W_t$ is a Wiener process, and $\mu$ (percentage drift) and $\sigma$ (the percentage volatility) are constants. Taking expectations of the logarithm:

$$\mathbb{E}[\log S_t] = \log S_0 + \left(\mu - \frac{\sigma^2}{2}\right)t.$$

Then the expected log return $G$ per unit time is

$$G = \mu - \frac{\sigma^2}{2}.$$

Consider a portfolio made of an asset $S$ and a bond paying risk-free rate $r$, with fraction $f$ invested in $S$ and $(1 - f)$ in the bond. The aforementioned equation for the portfolio value $V$ must be modified by this fraction, i.e. $\frac{dV_t}{V_t} = (f\mu + (1 - f)r)\,dt + f\sigma\,dW_t$, with associated solution

$$V_t = V_0 \exp\left(\left(f\mu + (1 - f)r - \frac{f^2\sigma^2}{2}\right)t + f\sigma W_t\right),$$

so the expected log return per unit time is given by

$$G(f) = f(\mu - r) + r - \frac{f^2\sigma^2}{2}.$$

Setting the derivative $G'(f) = \mu - r - f\sigma^2$ equal to zero and solving, we obtain

$$f^* = \frac{\mu - r}{\sigma^2}.$$

$f^*$ is the fraction that maximizes the expected logarithmic return, and so, is the Kelly fraction.
Thorp arrived at the same result but through a different derivation.
Remember that $\mu$ is different from the asset log return $\mu - \sigma^2/2$. Confusing this is a common mistake made by websites and articles talking about the Kelly Criterion.
For multiple assets, consider a market with $n$ correlated stocks $S_k$ with stochastic returns $r_k$ ($k = 1, \dots, n$), and a riskless bond with return $r$. An investor puts a fraction $f_k$ of their capital in $S_k$ and the rest is invested in the bond. Without loss of generality, assume that the investor's starting capital is equal to 1.
According to the Kelly criterion one should maximize

$$\mathbb{E}\left[\log\left((1 + r) + \sum_{k=1}^{n} f_k (r_k - r)\right)\right].$$

Expanding this with a Taylor series around $\vec{f}_0 = (0, \ldots, 0)$, we obtain a quadratic approximation in the fractions $f_k$. Thus we reduce the optimization problem to quadratic programming, and the unconstrained solution is

$$\vec{f}^* = (1 + r)\,\widehat{\Sigma}^{-1}\left(\hat{\vec{r}} - r\vec{1}\right),$$

where $\hat{\vec{r}} - r\vec{1}$ and $\widehat{\Sigma}$ are the vector of means and the matrix of second mixed noncentral moments of the excess returns.
There is also a numerical algorithm for the fractional Kelly strategies and for the optimal solution under no leverage and no short selling constraints.
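A sketch of both results (all figures illustrative; the covariance matrix is used as a stand-in for the second-moment matrix of excess returns):

import numpy as np

# Single asset: f* = (mu - r) / sigma^2
mu, r, sigma = 0.09, 0.02, 0.20
f_single = (mu - r) / sigma ** 2              # 1.75

# Several correlated assets: f* = (1 + r) C^{-1} (mu - r); the (1 + r)
# factor is close to 1 and is often dropped in practice.
mu_vec = np.array([0.08, 0.05])
cov = np.array([[0.04, 0.01],
                [0.01, 0.02]])
f_multi = (1 + r) * np.linalg.solve(cov, mu_vec - r)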
Bernoulli
In a 1738 article, Daniel Bernoulli suggested that, when one has a choice of bets or investments, one should choose that with the highest geometric mean of outcomes. This is mathematically equivalent to the Kelly criterion, although the motivation is different (Bernoulli wanted to resolve the St. Petersburg paradox).
An English translation of the Bernoulli article was not published until 1954, but the work was well known among mathematicians and economists.
Criticism
Although the Kelly strategy's promise of doing better than any other strategy in the long run seems compelling, some economists have argued strenuously against it, mainly because an individual's specific investing constraints may override the desire for optimal growth rate. The conventional alternative is expected utility theory which says bets should be sized to maximize the expected utility of the outcome (to an individual with logarithmic utility, the Kelly bet maximizes expected utility, so there is no conflict; moreover, Kelly's original paper clearly states the need for a utility function in the case of gambling games which are played finitely many times). Even Kelly supporters usually argue for fractional Kelly (betting a fixed fraction of the amount recommended by Kelly) for a variety of practical reasons, such as wishing to reduce volatility, or protecting against non-deterministic errors in their advantage (edge) calculations. In colloquial terms, the Kelly criterion requires accurate probability values, which isn't always possible for real-world event outcomes. When a gambler overestimates their true probability of winning, the criterion value calculated will diverge from the optimal, increasing the risk of ruin.
The Kelly formula can be thought of as 'time diversification': taking equal risk during different sequential time periods (as opposed to taking equal risk in different assets for asset diversification). There is clearly a difference between time diversification and asset diversification, a point raised by Paul A. Samuelson. There is also a difference between ensemble-averaging (utility calculation) and time-averaging (Kelly multi-period betting over a single time path in real life). The debate was renewed by invoking ergodicity breaking. Yet the difference between ergodicity breaking and Knightian uncertainty should be recognized.
See also
Risk of ruin
Gambling and information theory
Proebsting's paradox
Merton's portfolio problem
References
External links
Optimal decisions
Gambling mathematics
Information theory
Wagering
Articles containing proofs
1956 introductions
Portfolio theories
Formulas | Kelly criterion | [
"Mathematics",
"Technology",
"Engineering"
] | 3,902 | [
"Telecommunications engineering",
"Applied mathematics",
"Computer science",
"Information theory",
"Articles containing proofs"
] |
3,048,284 | https://en.wikipedia.org/wiki/Moons%20of%20Pluto | The dwarf planet Pluto has five natural satellites. In order of distance from Pluto, they are Charon, Styx, Nix, Kerberos, and Hydra. Charon, the largest, is mutually tidally locked with Pluto, and is massive enough that Pluto and Charon are sometimes considered a binary dwarf planet.
History
The innermost and largest moon, Charon, was discovered by James Christy on 22 June 1978, nearly half a century after Pluto was discovered. This led to a substantial revision in estimates of Pluto's size, which had previously assumed that the observed mass and reflected light of the system were all attributable to Pluto alone.
Two additional moons were imaged by astronomers of the Pluto Companion Search Team preparing for the New Horizons mission and working with the Hubble Space Telescope on 15 May 2005, which received the provisional designations S/2005 P 1 and S/2005 P 2. The International Astronomical Union officially named these moons Nix (Pluto II, the inner of the two moons, formerly P 2) and Hydra (Pluto III, the outer moon, formerly P 1), on 21 June 2006. Kerberos, announced on 20 July 2011, was discovered while searching for Plutonian rings. The discovery of Styx was announced on 7 July 2012 while looking for potential hazards for New Horizons.
Charon
Charon is about half the diameter of Pluto and is massive enough (nearly one eighth of the mass of Pluto) that the system's barycenter lies between the two bodies, above Pluto's surface. Charon and Pluto are also tidally locked, so that they always present the same face toward each other. The IAU General Assembly in August 2006 considered a proposal that Pluto and Charon be reclassified as a double planet, but the proposal was abandoned.
Like Pluto, Charon is a perfect sphere to within measurement uncertainty.
Circumbinary moons
Pluto's four small circumbinary moons orbit Pluto at two to four times the distance of Charon, ranging from Styx at 42,700 kilometres to Hydra at 64,800 kilometres from the barycenter of the system. They have nearly circular prograde orbits in the same orbital plane as Charon.
All are much smaller than Charon. Nix and Hydra, the two larger, are roughly 42 and 55 kilometers on their longest axis respectively, and Styx and Kerberos are 7 and 12 kilometers respectively. All four are irregularly shaped.
Characteristics
The Pluto system is highly compact and largely empty: prograde moons could stably orbit Pluto out to 53% of the Hill radius (the gravitational zone of Pluto's influence) of 6 million km, or out to 69% for retrograde moons. However, only the inner 3% of the region where prograde orbits would be stable is occupied by satellites, and the region from Styx to Hydra is packed so tightly that there is little room for further moons with stable orbits within this region.
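The quoted 6-million-km figure is consistent with the standard Hill-radius formula $r_H = d\,(m/3M)^{1/3}$ evaluated near Pluto's perihelion; a back-of-the-envelope check (the masses and distances below are approximate values assumed for illustration):

```python
import math

M_SUN = 1.989e30          # kg
M_PLUTO_SYSTEM = 1.46e22  # kg, Pluto + Charon + minor moons (approximate)
AU_KM = 1.496e8           # km per astronomical unit

for label, d_au in [("perihelion", 29.7), ("aphelion", 49.3)]:
    r_hill = d_au * AU_KM * (M_PLUTO_SYSTEM / (3 * M_SUN)) ** (1 / 3)
    print(f"{label}: r_H ~ {r_hill / 1e6:.1f} million km,"
          f" stable prograde zone ~ {0.53 * r_hill / 1e6:.1f} million km")
# ~6 million km at perihelion, matching the figure quoted above.
```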
An intense search conducted by New Horizons confirmed that no moons larger than 4.5 km in diameter exist out to distances of up to 180,000 km from Pluto (6% of the stable region for prograde moons), assuming Charon-like albedos of 0.38 (at smaller distances, this detection threshold is smaller still).
The orbits of the moons are confirmed to be circular and coplanar, with inclinations differing less than 0.4° and eccentricities less than 0.005.
The discovery of Nix and Hydra suggested that Pluto could have a ring system. Small-body impacts could eject debris off of the small moons which can form into a ring system. However, data from a deep-optical survey by the Advanced Camera for Surveys on the Hubble Space Telescope, by occultation studies, and later by New Horizons, suggest that no ring system is present.
Resonances
Styx, Nix, and Hydra are thought to be in a 3-body Laplace orbital resonance with orbital periods in a ratio of 18:22:33. The ratios should be exact when orbital precession is taken into account. Nix and Hydra are in a simple 2:3 resonance; Styx and Nix are in a 9:11 resonance, while the resonance between Styx and Hydra has a ratio of 6:11. The Laplace resonance also means that the ratios of synodic periods are such that there are 5 Styx–Hydra conjunctions and 3 Nix–Hydra conjunctions for every 2 conjunctions of Styx and Nix. If $\lambda$ denotes the mean longitude of a moon, the resonance can be formulated in terms of the libration angle $\Phi = 3\lambda_{\text{Styx}} - 5\lambda_{\text{Nix}} + 2\lambda_{\text{Hydra}}$. As with the Laplace resonance of the Galilean satellites of Jupiter, triple conjunctions never occur: $\Phi$ librates about 180° with an amplitude of at least 10°.
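A quick numerical check that the 18:22:33 period ratio makes the combination $3\lambda_{\text{Styx}} - 5\lambda_{\text{Nix}} + 2\lambda_{\text{Hydra}}$ stationary (the observed periods, in days, are approximate published values assumed here):

```python
from fractions import Fraction

# A period ratio of 18:22:33 gives mean motions proportional to 1/18, 1/22, 1/33.
# Phi = 3*lam_Styx - 5*lam_Nix + 2*lam_Hydra is stationary exactly when
# 3*n_Styx - 5*n_Nix + 2*n_Hydra = 0.
coeffs = (3, -5, 2)
print(sum(c / p for c, p in zip(coeffs, (Fraction(18), Fraction(22), Fraction(33)))))
# -> 0: the argument is exactly resonant for an 18:22:33 ratio.

observed_days = (20.16, 24.85, 38.20)  # approximate orbital periods
print(sum(c / p for c, p in zip(coeffs, observed_days)))
# -> ~ -4e-5 cycles/day: very nearly stationary for the observed periods.
```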
All of the outer circumbinary moons are also close to mean motion resonance with the Charon–Pluto orbital period. Taking that period as 1, Styx, Nix, Kerberos, and Hydra are in a 1:3:4:5:6 sequence of near resonances, with Styx approximately 5.4% from its resonance, Nix approximately 2.7%, Kerberos approximately 0.6%, and Hydra approximately 0.3%. It may be that these orbits originated as forced resonances when Charon was tidally boosted into its current synchronous orbit, and then released from resonance as Charon's orbital eccentricity was tidally damped. The Pluto–Charon pair creates strong tidal forces, with the gravitational field at the outer moons varying by 15% peak to peak.
However, it was calculated that a resonance with Charon could boost either Nix or Hydra into its current orbit, but not both: boosting Hydra would have required a near-zero Charonian eccentricity of 0.024, whereas boosting Nix would have required a larger eccentricity of at least 0.05. This suggests that Nix and Hydra were instead captured material, formed around Pluto–Charon, and migrated inward until they were trapped in resonance with Charon. The existence of Kerberos and Styx may support this idea.
Rotation
Prior to the New Horizons mission, Nix, Hydra, Styx, and Kerberos were predicted to rotate chaotically or tumble. However, New Horizons imaging found that they had not tidally spun down to near a spin-synchronous state, where chaotic rotation or tumbling would be expected. The imaging also found that all four moons are at high obliquity: either they were born that way, or they were tipped by a spin precession resonance.
Styx may be experiencing intermittent and chaotic obliquity variations.
Mark R. Showalter had speculated that, "Nix can flip its entire pole. It could actually be possible to spend a day on Nix in which the sun rises in the east and sets in the north. It is almost random-looking in the way it rotates."
Only one other moon, Saturn's moon Hyperion, is known to tumble, though it is likely that Haumea's moons do so as well.
Origin
It is suspected that Pluto's satellite system was created by a massive collision, similar to the Theia impact thought to have created the Moon. In both cases, the high angular momenta of the moons can only be explained by such a scenario. The nearly circular orbits of the smaller moons suggest that they were also formed in this collision, rather than being captured Kuiper Belt objects. This and their near orbital resonances with Charon (see above) suggest that they formed closer to Pluto than they are at present and migrated outward as Charon reached its current orbit. Their grey color is different from that of Pluto, one of the reddest bodies in the Solar System. This is thought to be due to a loss of volatiles during the impact or subsequent coalescence, leaving the surfaces of the moons dominated by water ice. However, such an impact should have created additional debris (more moons), yet no moons or rings were discovered by New Horizons, ruling out any more moons of significant size orbiting Pluto. An alternative hypothesis is that the collision happened at about 2,000 miles per hour, not powerful enough to destroy Charon or Pluto. Instead, the two bodies remained attached to each other for up to ten hours before separating again. Pluto's faster rotation at the time, one rotation every three hours, would have created a centrifugal force stronger than the gravitational attraction between the two bodies; this separated Charon from Pluto, though the pair remained gravitationally bound to each other. The same process could have created the other four known moons from material that escaped Pluto and Charon.
List
Pluto's moons are listed here by orbital period, from shortest to longest. Charon, which is massive enough to have collapsed into a spheroid under its own gravitation, is highlighted in light purple. As the system barycenter lies far above Pluto's surface, Pluto's barycentric orbital elements have been included as well. All elements are with respect to the Pluto-Charon barycenter. The mean separation distance between the centers of Pluto and Charon is 19,596 km.
Scale model of the Pluto system
Mutual events
Transits occur when one of Pluto's moons passes between Pluto and the Sun. This occurs when one of the satellites' orbital nodes (the points where their orbits cross Pluto's ecliptic) lines up with Pluto and the Sun. This can only occur at two points in Pluto's orbit; coincidentally, these points are near Pluto's perihelion and aphelion. Occultations occur when Pluto passes in front of and blocks one of Pluto's satellites.
Charon has an angular diameter of about 4 degrees of arc as seen from the surface of Pluto; the Sun appears much smaller, only 39 to 65 arcseconds. By comparison, the Moon as viewed from Earth has an angular diameter of only 31 minutes of arc, or just over half a degree. Charon would therefore appear roughly seven times the Moon's apparent diameter, covering about fifty times its apparent area. This is due to Charon's proximity to Pluto rather than its size: despite having just over one-third of the Moon's radius, Charon orbits about 20 times closer to Pluto's surface than the Moon does to Earth's. This proximity further ensures that a large proportion of Pluto's surface can experience an eclipse. Because Pluto always presents the same face towards Charon due to tidal locking, only the Charon-facing hemisphere experiences solar eclipses by Charon.
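These angular sizes follow from elementary trigonometry; a short sketch using round published figures (the radii below are approximate values assumed for illustration, and the separation is the mean center-to-center distance given above):

```python
import math

def angular_diameter_deg(radius_km, distance_km):
    """Full angular diameter, in degrees, of a sphere seen from a given distance."""
    return math.degrees(2 * math.asin(radius_km / distance_km))

PLUTO_RADIUS, CHARON_RADIUS = 1188, 606   # km, approximate
SEPARATION = 19596                        # km, mean center-to-center distance

# Charon seen from the sub-Charon point on Pluto's surface:
print(angular_diameter_deg(CHARON_RADIUS, SEPARATION - PLUTO_RADIUS))  # ~3.8 deg

# The Moon seen from Earth's surface, for comparison:
print(angular_diameter_deg(1737, 384400 - 6371))                       # ~0.53 deg
```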
The smaller moons can cast shadows elsewhere. The angular diameters of the four smaller moons (as seen from Pluto) are uncertain. Nix's is 3–9 minutes of arc and Hydra's is 2–7 minutes. These are much larger than the Sun's angular diameter, so total solar eclipses are caused by these moons.
Eclipses by Styx and Kerberos are more difficult to estimate, as both moons are very irregular, with angular dimensions of 76.9 × 38.5 to 77.8 × 38.9 arcseconds for Styx, and 67.6 × 32.0 to 68.0 × 32.2 arcseconds for Kerberos. Styx has no annular eclipses, its widest axis being more than 10 arcseconds larger than the Sun at its largest. Kerberos, although physically slightly larger, cannot produce total eclipses, as its largest minor axis is a mere 32 arcseconds. Eclipses by Kerberos and Styx will therefore consist almost entirely of partial and hybrid eclipses, with total eclipses being extremely rare.
The next period of mutual events due to Charon will begin in October 2103, peak in 2110, and end in January 2117. During this period, solar eclipses will occur once each Plutonian day, with a maximum duration of 90 minutes.
Exploration
The Pluto system was visited by the New Horizons spacecraft in July 2015. Images with resolutions of up to 330 meters per pixel were returned of Nix and up to 1.1 kilometers per pixel of Hydra. Lower-resolution images were returned of Styx and Kerberos.
Notes
References
Sources
Codex Regius (2016), Pluto & Charon, CreateSpace Independent Publishing Platform
IAU Circular No. 8625, describing the discovery of 2005 P1 and P2
IAU Circular No. 8686, reporting a more neutral color for 2005 P2
IAU Circular No. 8723 announcing the names of Nix and Hydra
Background Information Regarding Our Two Newly Discovered Satellites of Pluto – The website of the discoverers of Nix and Hydra
External links
Scott S. Sheppard: Pluto Moons
Interactive 3D visualisation of the Plutonian system
Animation of the Plutonian system
Hubble Spots Possible New Moons Around Pluto (NASA)
Two More Moons Discovered Orbiting Pluto (SPACE.com)
New Horizons Mission Site
Lists of moons
Articles containing video clips
Solar System | Moons of Pluto | [
"Astronomy"
] | 2,625 | [
"Outer space",
"Solar System"
] |
3,048,571 | https://en.wikipedia.org/wiki/Reverse%20learning | Reverse learning is a neurobiological theory of dreams. In a 1983 paper published in the science journal Nature, Crick and Mitchison's reverse learning model likened the dreaming brain to a computer that is "off-line" during dreaming, or the REM phase of sleep. During this phase, the brain sifts through information gathered throughout the day and throws out all unwanted material. According to the model, we dream in order to forget, and this involves a process of 'reverse learning' or 'unlearning'.
The cortex cannot cope with the vast amount of information received throughout the day without developing "parasitic" thoughts that would disrupt the efficient organisation of memory. During REM sleep, these unwanted connections in cortical networks are wiped out or damped down by the Crick-Mitchison process making use of impulses bombarding the cortex from sub-cortical areas.
The Crick-Mitchison theory is a variant of Hobson and McCarley's activation-synthesis hypothesis, published in December 1977. Hobson and McCarley hypothesized that a brain stem neuronal mechanism sends pontine-geniculo-occipital (or PGO) waves that automatically activate the mammalian forebrain. By comparing information generated in specific brain areas with information stored in memory, the forebrain synthesizes dreams during REM sleep.
Crick verbatim on the function of REM sleep
Suppose one did not have REM, then one would mix things up. That is not necessarily a bad thing — it is the basis of fantasy, imagination, and so forth. Imagination means seeing a connection between two things that are different but which have something in common which you had not noticed before. If one had too much REM, one would predict one would be a rather prosaic person without too much imagination. But the process is not 100% efficient. If one goes on too far, one begins to wipe out everything.
Another way to look at it is to say "How could you prevent the brain being overloaded"? One way would be to make it bigger, to have more neurons. So perhaps the important thing to say is "The function of REM is to allow your brain or your cortex to be smaller".
Support for the theory of reverse learning
In the echidna, a primitive egg-laying mammal that has no REM sleep, there is a very enlarged frontal cortex. Crick and Mitchison argue that this excessive cortical development is necessary to store both adaptive memories and parasitic memories, which in more highly evolved animals are disposed of during REM sleep.
This theory addresses the brain's information-storage problem: without such unlearning, the cortex would need to be much larger to accommodate the inefficient storage of information. It also explains why we forget dreams extremely easily.
Objections to the theory of reverse learning
One problem for reverse-learning theory is that dreams are often organized into clear narratives (stories). It is unclear why dreams would be organized in a systematic way if they consisted only of disposable parasitic thoughts. It is also unclear why babies sleep so much, because it seems they would have less to forget. Additionally, the brain of the echidna has far less folding than the brains of other mammals, and so has less surface area (the location of the neocortex). It may actually have less capacity for higher thought than that of other mammals, rather than more, as its greater mass suggests.
In response to these objections, Crick and Mitchison proposed that the principal target for the unlearning process could be obsessive memories (strong attractors) and that the dream/REM purpose is to equalize the strength of memories.
A computational model by Kinouchi and Kinouchi (2002) implementing a chaotic itinerancy dynamics in a Hopfield net shows that the Crick-Mitchison unlearning mechanism produces a trajectory of associated attractors ("a narrative") where the strong ("emotional", "obsessive" or "overplastic") memories have their dominance downplayed and an equalization between memory basins produces a better recovery of memories not recalled during the "dream".
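The mechanism is easy to demonstrate in a toy Hopfield network. The sketch below is not the Kinouchi and Kinouchi model; it is a minimal illustration of Crick-Mitchison-style unlearning (random "dream" states are settled into attractors and then weakened anti-Hebbianly), with all sizes and rates chosen arbitrarily:

```python
import numpy as np

rng = np.random.default_rng(42)
N, P = 64, 5                                  # neurons, stored patterns
patterns = rng.choice([-1, 1], size=(P, N))

# Hebbian storage; pattern 0 gets extra weight to mimic an "obsessive" memory.
strengths = np.array([3.0, 1.0, 1.0, 1.0, 1.0])
W = np.einsum("p,pi,pj->ij", strengths, patterns, patterns) / N
np.fill_diagonal(W, 0)

def settle(state, W, max_sweeps=20):
    """Asynchronous updates until the network relaxes into an attractor."""
    state = state.copy()
    for _ in range(max_sweeps):
        changed = False
        for i in rng.permutation(len(state)):
            s = 1 if W[i] @ state >= 0 else -1
            changed |= (s != state[i])
            state[i] = s
        if not changed:
            break
    return state

def pattern_energies(W):
    """Hopfield energy of each stored pattern (more negative = deeper basin)."""
    return np.array([-0.5 * p @ W @ p for p in patterns])

print(np.round(pattern_energies(W), 2))   # before: pattern 0's basin is much deeper

# "Dreaming": settle random states into attractors, then weaken them slightly.
for _ in range(200):
    x = settle(rng.choice([-1, 1], size=N), W)
    W -= 0.01 * np.outer(x, x) / N            # anti-Hebbian (un)learning step
    np.fill_diagonal(W, 0)

print(np.round(pattern_energies(W), 2))
# The over-weighted memory is visited (and weakened) most often, so the
# energies of the stored patterns end up more nearly equalized.
```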
It is argued that fetuses and babies sleep so much in order to downgrade ("unlearn") the strength of the synapses formed during these developmental phases.
See also
Francis Crick: Neuroscience and other interests
References
External links
Sleep | Reverse learning | [
"Biology"
] | 916 | [
"Behavior",
"Sleep"
] |
3,048,699 | https://en.wikipedia.org/wiki/Moist%20desquamation | Moist desquamation is a description of the clinical pattern seen as a consequence of radiation exposure where the skin thins and then begins to weep because of loss of integrity of the epithelial barrier and decreased oncotic pressure. Moist desquamation is a rare complication for most forms of radiology; however, it is far more common in fluoroscopy, where threshold doses lie between 10 and 15 Gy, and it becomes increasingly common above 15 Gy. It has been noted that fractionation of fluoroscopic procedures significantly reduces the likelihood of moist desquamation occurring. In animal studies done on pig skin, moist desquamation was found to occur 50% of the time after a single dose of 28 Gy, whereas a 2 × 18 Gy fractionation scheme (36 Gy total dose) was needed to produce the same 50% occurrence.
Moist desquamation is a common side effect of radiotherapy treatment, where approximately 36% of radiotherapy patients will present with symptoms of moist desquamation. While modern megavoltage external beam radiotherapy delivers its peak radiation dose below the skin, older orthovoltage systems deposited their peak radiation dose at the skin of a patient. As such, moist desquamation and other skin-related radiotherapy complications were significantly more commonplace before the introduction of higher-energy cobalt therapy and linear accelerator systems between the 1950s and 1970s.
Historically, this was a common phenomenon in Hiroshima and Nagasaki during World War II with the atomic bomb attacks from the United States. The phenomenon was described by John Hersey in his 1946 article, and later book, Hiroshima.
Clinical characteristics
Sloughing of the epidermis and exposure of the dermal layer clinically characterize moist desquamation. Moist desquamation presents as tender, red skin associated with serous exudate, hemorrhagic crusting, and has the potential for development of bullae.
Treatment
Due to the deterministic nature of moist desquamation, once symptoms occur the condition itself cannot be reversed, and a patient must wait for the condition to subside. Management of these partial-thickness wounds has been influenced by the Winter principle of moist wound healing, which suggests that wounds heal more rapidly in a moist environment. Hydrocolloid dressings applied directly to these wounds prevent the evaporation of moisture from the exposed dermis and create a moist environment at the wound site that promotes cell migration. As additional radiation exposure may either exacerbate or cause the recurrence of moist desquamation, patients are advised to use sunscreen over the irradiated area after completion of treatment.
References
Radiation health effects | Moist desquamation | [
"Chemistry",
"Materials_science"
] | 533 | [
"Radiation effects",
"Radiation health effects",
"Radioactivity"
] |
3,048,803 | https://en.wikipedia.org/wiki/Dome%20Mine | Dome Mine is situated in the City of Timmins, Ontario, Canada; and was developed during the Porcupine Gold Rush. Last operated by Canadian company Goldcorp, before it became a subsidiary of American company Newmont, it is one of three mines (along with Hoyle Pond underground and Hollinger Open Pit, both still active) owned by Newmont in the Porcupine district in and around Timmins.
The original Dome Mine was discovered by Jack Wilson of the Harry Preston crew in 1909, one of the crews whose successful finds launched the Porcupine Gold Rush. The vein Preston discovered dripped with gold and was referred to as the "Golden Staircase".
A new company, Dome Mines Limited, was capitalized in 1910 to develop the namesake Dome Mine, producing 247 tons of high-grade ore its first year. The company built a community, also called Dome, of approximately 60 houses leased to miners with families. The mine was developed using open-pit mining for the first 200 feet, then resorted to underground mining methods.
After the Great Porcupine Fire of 1911 ravaged communities and mining infrastructure throughout the region, the mine was rebuilt such that by March 1912, a 40-stamp mill was processing 400 tons a day. The mine was incorporated in 1912 and acquired Dome Extension in 1916. Through its early years, Ambrose Monell, Joseph Delamar, and Jules Bache served as presidents of Dome Mines Limited. A rich ore body was discovered at the 23-level of the Dome Extension in 1933.
Goldcorp ceased mining operations on December 31, 2017, after 107 years of production, at that time "the longest continuously operating mine in Canada". An enormous disused open pit (the Super Pit), a huge man-made mountain of waste rock, and a working mill remain in place at the location.
See also
List of mines in Ontario
References
External links
Goldcorp - Porcupine Gold Mines (copy archived May 9, 2012)
Dome Mine at Mindat.org
Gold mines in Ontario
Mines in Timmins
Surface mines in Canada
Underground mines in Canada
Stamp mills | Dome Mine | [
"Chemistry",
"Engineering"
] | 420 | [
"Stamp mills",
"Metallurgical facilities",
"Mining equipment"
] |
3,048,845 | https://en.wikipedia.org/wiki/Christopher%20Kelk%20Ingold | Sir Christopher Kelk Ingold (28 October 1893 – 8 December 1970) was a British chemist based in Leeds and London. His groundbreaking work in the 1920s and 1930s on reaction mechanisms and the electronic structure of organic compounds was responsible for the introduction into mainstream chemistry of concepts such as nucleophile, electrophile, inductive and resonance effects, and such descriptors as SN1, SN2, E1, and E2. He also was a co-author of the Cahn–Ingold–Prelog priority rules. Ingold is regarded as one of the chief pioneers of physical organic chemistry.
Early life and education
Born in London to a silk merchant who died of tuberculosis when Ingold was five years old, Ingold began his scientific studies at Hartley University College at Southampton (now Southampton University), taking an external BSc in 1913 with the University of London. He then joined the laboratory of Jocelyn Field Thorpe at Imperial College, London, with a brief hiatus from 1918 to 1920 during which he conducted research into chemical warfare and the manufacture of poison gas with Cassel Chemical at Glasgow. Ingold received an MSc from the University of London and returned to Imperial College in 1920 to work with Thorpe. He was awarded a PhD in 1918 and a DSc in 1921.
Academic career
In 1924 Ingold moved to the University of Leeds where he spent six years as Professor of Organic Chemistry working alongside his wife, Dr. Edith Hilda Ingold (Usherwood). He returned to London in 1930, and served for 24 years as head of the chemistry department at University College London, from 1937 until his retirement in 1961.
During his study of alkyl halides, Ingold found evidence for two possible reaction mechanisms for nucleophilic substitution reactions. He found that tertiary alkyl halides underwent a two-step mechanism (SN1) while primary and secondary alkyl halides underwent a one-step mechanism (SN2). This conclusion was based on the finding that reactions of tertiary alkyl halides with nucleophiles were dependent on the concentration of the alkyl halide only. Meanwhile, he discovered that primary and secondary alkyl halides, when reacting with nucleophiles, depend on both the concentration of the alkyl halide and the concentration of the nucleophile.
Starting around 1926, Ingold and Robert Robinson carried out a heated debate on the electronic theoretical approaches to organic reaction mechanisms. See, for example, the summary by Saltzman.
Ingold authored and co-authored 443 papers. Notable students include Peter de la Mare, Ronald Gillespie and Ronald Nyholm.
Honours
In 1920, Ingold was awarded the British Empire Medal (BEM) for his wartime research involving "great courage in carrying out work in a poisonous atmosphere, and risking his life on several occasions in preventing serious accidents," though he subsequently never discussed the award or this period in his life.
Ingold was elected a Fellow of the Royal Society (FRS) in 1924. He received the Longstaff Medal of the Royal Society of Chemistry in 1951, the Royal Medal of the Royal Society in 1952, and was knighted in 1958.
The chemistry department of University College London is now housed in the Sir Christopher Ingold building, opened in 1969.
Personal life
Ingold married Dr. Edith Hilda Ingold (Usherwood) in 1923. She was a fellow chemist with whom he collaborated. They had two daughters and a son, the chemist Keith Ingold.
Death
Ingold died in London in 1970, aged 77.
References
Further reading
Dr. Malmberg's class: K.P.
Review of Leffek's book by John D. Roberts
External links
Biography at Michigan State University
Biography and history at University College London.
British organic chemists
Academics of the University of Leeds
Academics of University College London
Knights Bachelor
Recipients of the British Empire Medal
Royal Medal winners
Fellows of the Royal Society
1893 births
1970 deaths
Alumni of the University of Southampton
Alumni of Imperial College London
Stereochemists
People from Edgware | Christopher Kelk Ingold | [
"Chemistry"
] | 823 | [
"Organic chemists",
"British organic chemists",
"Stereochemistry",
"Stereochemists"
] |
3,048,850 | https://en.wikipedia.org/wiki/Sodium%20trimetaphosphate | Sodium trimetaphosphate (also STMP), with formula Na3P3O9, is one of the metaphosphates of sodium. It is usually encountered as the anhydrous salt, but the hexahydrate Na3P3O9·6H2O is also well known. It is the sodium salt of trimetaphosphoric acid. It is a colourless solid that finds specialised applications in the food and construction industries.
Although drawn with a particular resonance structure, the trianion has high symmetry.
Synthesis and reactions
Trisodium trimetaphosphate is produced industrially by heating sodium dihydrogen phosphate to 550 °C, a method first developed in 1955:
3 NaH2PO4 → Na3P3O9 + 3 H2O
The trimetaphosphate dissolves in water and is precipitated by the addition of sodium chloride (common-ion effect), affording the hexahydrate. STMP can also be prepared by heating samples of sodium polyphosphate, or by a thermal reaction of orthophosphoric acid and sodium chloride at 600 °C.
Hydrolysis of the ring leads to the acyclic sodium triphosphate:
Na3P3O9 + H2O → H2Na3P3O10
The analogous reaction of the metatriphosphate anion involves ring-opening by amine nucleophiles.
References
Food additives
Sodium compounds
Metaphosphates | Sodium trimetaphosphate | [
"Chemistry"
] | 275 | [
"Inorganic compounds",
"Inorganic compound stubs"
] |
3,049,349 | https://en.wikipedia.org/wiki/Lignosulfonates | Lignosulfonates (LS) are water-soluble anionic polyelectrolyte polymers: they are byproducts from the production of wood pulp using sulfite pulping. Most delignification in sulfite pulping involves acidic cleavage of ether bonds, which connect many of the constituents of lignin. Sulfonated lignin (SL) refers to other forms of lignin by-product, such as those derived from the much more popular Kraft process, that have been processed to add sulfonic acid groups. The two have similar uses and are commonly confused with each other, with SL being much cheaper. LS and SL both appear as free-flowing powders; the former is light brown while the latter is dark brown.
Lignosulfonates have very broad ranges of molecular mass (they are very polydisperse). A range of from 1,000 to 140,000 Da has been reported for softwood lignosulfonates with lower values reported for hardwoods. Sulfonated Kraft lignin tends to have smaller molecules at 2,000–3,000 Da. SL and LS are non-toxic, non-corrosive, and biodegradable. A range of further modifications may be applied to LS and SL, including oxidation, hydroxymethylation, sulfomethylation, and a combination thereof.
Preparation
Lignosulfonates
Lignosulfonates are recovered from the spent pulping liquids (red or brown liquor) from sulfite pulping. Ultrafiltration can also be used to separate lignosulfonates from the spent pulping liquid. A list of CAS numbers for the various metal salts of lignosulfonate is available.
The electrophilic carbocations produced during ether cleavage react with bisulfite ions (HSO3−) to give sulfonates.
R-O-R' + H+ → R+ + R'OH
R+ + HSO3− → R-SO3H
The primary site for ether cleavage is the α-carbon (the carbon atom attached to the aromatic ring) of the propyl (linear three-carbon) side chain. Any depicted structure is only representative, since lignin and its derivatives are complex mixtures; the purpose is to give a general idea of the structure of lignosulfonates. The groups R1 and R2 can be a wide variety of groups found in the structure of lignin. Sulfonation occurs on the side chains, not on the aromatic ring as in p-toluenesulfonic acid.
Sulfonated Kraft lignin
Kraft lignin from black liquor, which is produced in much higher amounts, may be processed into sulfonated lignin. The lignin is first precipitated by acidifying the liquor and is then washed (other methods for isolation exist). Reaction with sodium sulfite or sodium bisulfite and an aldehyde under basic conditions completes the sulfonation. Here the sulfonic acid groups end up on the aromatic ring instead of the aliphatic side chain.
Uses
LS and SL have a wide variety of applications. They are used to stably disperse pesticides, dyes, carbon black, and other insoluble solids and liquids into water. As binders they suppress dust on unpaved roads. They are also humectants and find use in water treatment. Chemically, lignosulfonates may be used as tannins for tanning leather and as a feedstock for a variety of products.
Dispersant
The single largest use for lignosulfonates is as plasticizers in making concrete, where they allow concrete to be made with less water (giving stronger concrete) while maintaining the ability of the concrete to flow. Lignosulfonates are also used during the production of cement, where they act as grinding aids in the cement mill and as a rawmix slurry deflocculant (that reduces the viscosity of the slurry).
Lignosulfonates are also used for the production of plasterboard to reduce the amount of water required to make the stucco flow and form the layer between two sheets of paper. The reduction in water content allows lower kiln temperatures to dry the plasterboard, saving energy.
The ability of lignosulfonates to reduce the viscosity of mineral slurries (deflocculation) is used to advantage in oil drilling mud, where they replaced tannic acids from quebracho (a tropical tree). Furthermore, lignosulfonates are being researched for use in enhanced oil recovery (EOR) due to their ability to reduce interfacial tension in foams, allowing for improved sweep efficiency and hence an increased recovery factor.
Binder
Besides their use as dispersants, lignosulfonates are also good binders. They are used as binders in wallpaper, particle boards, linoleum flooring, coal briquettes, and roads.
They also form a constituent of the paste used to coat the lead-antimony-calcium or lead-antimony-selenium grids in a lead-acid battery.
Aqueous lignosulfonate solutions are also widely used as a non-toxic dust suppression agent for unpaved road surfaces, where they are popularly, if erroneously, called "tree sap". Roads treated with lignosulfonates can be distinguished from those treated with calcium chloride by color: lignosulfonates give the road surface a dark grey color, while calcium chloride lends the road surface a distinctive tan or brown color. As lignosulfonates do not rely on water to provide their binding properties, they tend to be more useful in arid locations.
It is used as a soil stabilizer.
Chemical feedstock
Oxidation of lignosulfonates from softwood trees produces vanillin (artificial vanilla flavor).
Dimethyl sulfide and dimethyl sulfoxide (an important organic solvent) are produced from lignosulfonates. The first step involves heating lignosulfonates with sulfides or elemental sulfur to produce dimethyl sulfide. The methyl groups come from methyl ethers present in the lignin. Oxidation of dimethyl sulfide with nitrogen dioxide produces dimethyl sulfoxide (DMSO).
Other uses
The anti-oxidant effect of lignosulfonates is utilized in feeds, ensilage and flame retardants.
The UV absorbance of lignosulfonates is utilized in sun screens and bio-pesticides.
Lignosulfonate is used in agriculture as an analogue of humic substances. As a soil conditioner, it is mainly used to enhance the absorption and retention of fertilizers and other nutrients. It is able to chelate minerals while remaining bio-degradable, an improvement compared to EDTA. Further hydrolysis and oxidation produces a product even more similar to humus, marketed as "lignohumate".
References
See also
Sodium lignosulfonate
Papermaking
Petroleum production
Organic polymers
Polyelectrolytes
Concrete admixtures | Lignosulfonates | [
"Chemistry"
] | 1,501 | [
"Organic compounds",
"Organic polymers"
] |