id int64 580 79M | url stringlengths 31 175 | text stringlengths 9 245k | source stringlengths 1 109 | categories stringclasses 160 values | token_count int64 3 51.8k |
|---|---|---|---|---|---|
2,276,235 | https://en.wikipedia.org/wiki/Gevil | Gevil or gewil is a type of parchment made from full-grain animal hide that has been prepared as a writing material for Jewish scribal documents, in particular a Sefer Torah (Torah scroll).
Etymology
Related to גויל (gewil), a rolling (i.e. unhewn) stone, from a root meaning "to roll" (Jastrow).
Definition and production
Gevil is a form of hide prepared for safrut (halakhic writing) that is made of tanned, whole (unsplit) hide. The precise requirements for processing gevil are laid down by the Talmud, Geonim and Rishonim.
Rabbi Ḥiyya bar Ami said in the name of Ulla: There are three [untanned] hide [stages before it is tanned into gevil]: matza, ḥifa, and diftera.
According to Jewish law, the preparation of gevil follows a procedure of salting, flouring and tanning with afatzim (lit. "tannin"), which are derived from gallnuts or similar substances containing tannic acid.
Maimonides required rubbing down the raw hide with flour (presumably barley flour), whereas Simeon Kayyara, in his Halachot Gedolot, required that flour be placed in a tub of water, into which the raw hide was inserted and left for a few days. The action of the flour-based liquor served to soften the hide.
These requirements were reconfirmed as a Law given to Moses at Sinai by Maimonides, in his Mishneh Torah. Gallnuts are rich in tannic acid and are the product of a tree's reaction to an invasive parasitic wasp's egg. The pure black tint of the ink used in writing Torah scrolls results from the reaction between the tannic acid and iron sulfate (a powder used to make the ink).
The three types of tanned skin
There are three forms of tanned skin known to Jewish law. The other two forms (klaf and dukhsustus) result from splitting the hide into two layers. The rabbinic scholars are divided upon which is the inner and which is the outer of the two halves. Maimonides is of the opinion that dukhsustus was the inner layer and that klaf was the outer layer.
The Shulchan Aruch rules in the reverse, that dukhsustus was the outer layer and klaf was the inner layer. The opinion of the Shulchan Aruch is the accepted ruling in all Jewish communities.
Recently a small group has advocated a return to using the full hide, known as gevil, for Sifrei Torah, since it avoids this issue; however, this solution does not work for tefillin, which must be written on klaf and are not kosher if written on gevil.
Maimonides' rules for use
According to most views of Jewish law, a Sefer Torah (Torah scroll) should be written on gevil parchment, as was done by Moses for the original Torah scroll he transcribed. Further, a reading of the earliest extant manuscripts of the Mishneh Torah indicates that gevil was a halakha derived from Moses and thus required for Torah scrolls.
Maimonides wrote that it is a law given to Moses at Sinai that a Torah scroll must be written on either gevil or klaf (in Maimonides' interpretation, contrary to that of the "Shulchan Aruch": the half-skin from the hair side) in order to be valid, and that it is preferable that they be written on gevil. To this end, hides procured from sheep or goats and calves were mostly used. The hide of a fully-grown cow, being so thick that it requires being shaved down to half its thickness on its fleshy side before it can be used (in order to remove the epidermis from the hide to make it thinner), was less common.
Maimonides made further prescriptions for the use of each of the three types of processed skin. Torah scrolls must be written on g'vil only on the side on which the hair had grown, and never on duchsustos (understood as the half-skin from the flesh side). Phylacteries, if written on k'laf, must be written on the flesh side. A mezuzah, when written on duchsustos, must be written on the hair side. It is unacceptable to write on k'laf on the hair side or on the split skin (either g'vil or duchsustos) on the flesh side.
Today's practice
According to the Talmud, Moses used gevil for the Torah scroll he placed into the Ark of the Covenant. Elsewhere in the Talmud, there is testimony that Torah scrolls were written on gevil.
Today, a handful of Jewish scribes and artisans continue to make scroll material in this way. However, the majority of Torah scrolls are written on klaf, in the belief that the Talmud merely recommends (as opposed to requires) gevil, and that the recommendation relates to the optimal beautification of the scrolls rather than an essential halachic requirement. Given the uncertainty about which layer of the hide is in fact the klaf, there is a growing movement insisting on a return to gevil for Torah scrolls in order to avoid all doubts.
Most of the Dead Sea Scrolls (written around 200 BCE), found in and around the caves of Qumran near the Dead Sea, are written on gevil.
Properly, klaf should be used for tefillin and duchsustus for mezuzot. However, this rule is only a preference, not an obligation; klaf is used for mezuzot today, though a minority seeks to return to the original rule.
See also
Ktav Stam
References
External links
The Gevil Institute (Machon Gevil), the only online organization dedicated to the preservation of gevil.
https://web.archive.org/web/20080410134250/http://www.ccdesigninc.com/MishmeresStam/Leaflet.pdf
Hides (skin)
Book design
Jewish law and rituals
Writing media
Leather in Judaism
Torah
Hebrew words and phrases in Jewish law | Gevil | Engineering | 1,280 |
3,479,393 | https://en.wikipedia.org/wiki/Firefly%20%28website%29 | Firefly.com (1995–1999) was a community website featuring collaborative filtering.
History
The Firefly website was created by Firefly Network, Inc. (originally known as Agents Inc.). The company was founded in March 1995 by a group of engineers from the MIT Media Lab and some business people from Harvard Business School, including Pattie Maes (Media Lab professor), Upendra Shardanand, Nick Grouf, Max Metral, David Waxman and Yezdi Lashkari. At the Media Lab, under the supervision of Maes, some of the engineers built a music recommendation system called HOMR (Helpful Online Music Recommendation Service; preceded by RINGO, an email-based system), which used collaborative filtering to help users navigate the music domain and find other artists and albums that they might like. With Matt Bruck and Khinlei Myint-U, the team wrote a business plan, and Agents Inc took second place in the 1995 MIT 10K student business plan competition. Firefly's core technology was based on the work done on HOMR.
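As background on the technique named here, the sketch below shows user-based collaborative filtering in the spirit of RINGO/HOMR. It is illustrative only, not Firefly's actual algorithm or code, and all user names, artists, and ratings are invented:

```python
# Illustrative user-based collaborative filtering, in the spirit of RINGO/HOMR:
# predict a user's rating of an artist from the ratings of like-minded users,
# weighted by Pearson correlation over commonly rated artists.
from math import sqrt

ratings = {  # hypothetical user -> {artist: rating on a 1-7 scale}
    "ana":  {"Miles Davis": 7, "Nirvana": 2, "Bjork": 6},
    "ben":  {"Miles Davis": 6, "Nirvana": 1, "Bjork": 7, "Portishead": 6},
    "cara": {"Miles Davis": 1, "Nirvana": 7, "Portishead": 2},
}

def pearson(u: str, v: str) -> float:
    common = set(ratings[u]) & set(ratings[v])
    if len(common) < 2:
        return 0.0
    xs = [ratings[u][a] for a in common]
    ys = [ratings[v][a] for a in common]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sqrt(sum((x - mx) ** 2 for x in xs) * sum((y - my) ** 2 for y in ys))
    return num / den if den else 0.0

def predict(user: str, artist: str) -> float | None:
    num = den = 0.0
    for other in ratings:
        if other == user or artist not in ratings[other]:
            continue
        w = pearson(user, other)
        if w > 0:  # only count users with positively correlated taste
            num += w * ratings[other][artist]
            den += w
    return num / den if den else None

print(predict("ana", "Portishead"))  # 6.0: ana's ratings track ben's closely
```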
The Firefly website was launched in October 1995. It went through several iterations but remained a community throughout. It was initially created as a community for users to navigate and discover new musical artists and albums. Later it was changed to allow users to discover movies, websites, and communities as well.
Firefly technology was adopted by a number of well-known businesses, powering the recommendation engines for barnesandnoble.com, ZDnet, launch.com (later purchased by Yahoo), and MyYahoo.
Since Firefly was amassing large amounts of profile data from its users, privacy became a big concern of the company. They worked with the Federal Government to help define consumer privacy protection in the digital age. They were also key contributors to the Open Profiling Standard (OPS), a recommendation to the W3C (along with Netscape and VeriSign) that eventually became known as P3P (the Platform for Privacy Preferences).
In April 1998, Microsoft purchased Firefly, presumably because of their innovations in privacy, and their long-term goal of creating a safe marketplace for consumers' profile data which the consumer controlled. The Firefly team at Microsoft was largely responsible for the first versions of Microsoft Passport.
Microsoft shut down the website in August 1999.
Homepages
The Firefly website had distinctive design and graphics. Early designs featured bright colors and a fun and eclectic look. Later redesigns reflected the company's push towards corporate customers and desire to de-emphasize the Firefly community website.
See also
Collaborative filtering
References
External links
Spanish Firefly - a Suck.com parody
HBS bulletin on Firefly
American social networking websites
Social information processing
Social software
Internet properties established in 1995
Defunct websites | Firefly (website) | Technology | 562 |
41,414,139 | https://en.wikipedia.org/wiki/Hinged%20dissection | In geometry, a hinged dissection, also known as a swing-hinged dissection or Dudeney dissection, is a kind of geometric dissection in which all of the pieces are connected into a chain by "hinged" points, such that the rearrangement from one figure to another can be carried out by swinging the chain continuously, without severing any of the connections. Typically, it is assumed that the pieces are allowed to overlap in the folding and unfolding process; this is sometimes called the "wobbly-hinged" model of hinged dissection.
History
The concept of hinged dissections was popularised by the author of mathematical puzzles, Henry Dudeney. He introduced the famous hinged dissection of a square into a triangle (pictured) in his 1907 book The Canterbury Puzzles. The Wallace–Bolyai–Gerwien theorem, first proven in 1807, states that any two equal-area polygons must have a common dissection. However, the question of whether two such polygons must also share a hinged dissection remained open until 2007, when Erik Demaine et al. proved that there must always exist such a hinged dissection, and provided a constructive algorithm to produce them. This proof holds even under the assumption that the pieces may not overlap while swinging, and can be generalised to any pair of three-dimensional figures which have a common dissection (see Hilbert's third problem). In three dimensions, however, the pieces are not guaranteed to swing without overlap.
Other hinges
Other types of "hinges" have been considered in the context of dissections. A twist-hinge dissection is one which use a three-dimensional "hinge" which is placed on the edges of pieces rather than their vertices, allowing them to be "flipped" three-dimensionally. As of 2002, the question of whether any two polygons must have a common twist-hinged dissection remains unsolved.
References
Bibliography
External links
An applet demonstrating Dudeney's hinged square-triangle dissection
A gallery of hinged dissections
Geometric dissection
Recreational mathematics
Discrete geometry
Euclidean plane geometry | Hinged dissection | Mathematics | 459 |
788,561 | https://en.wikipedia.org/wiki/List%20of%20formal%20language%20and%20literal%20string%20topics | This is a list of formal language and literal string topics, by Wikipedia page.
Formal languages
Abstract syntax tree
Backus-Naur form
Categorial grammar
Chomsky hierarchy
Concatenation
Context-free grammar
Context-sensitive grammar
Context-sensitive language
Decidable language
ECLR-attributed grammar
Finite language
Formal grammar
Formal language
Formal system
Generalized star height problem
Kleene algebra
Kleene star
L-attributed grammar
LR-attributed grammar
Myhill-Nerode theorem
Parsing expression grammar
Prefix grammar
Pumping lemma
Recursively enumerable language
Regular expression
Regular grammar
Regular language
S-attributed grammar
Star height
Star height problem
Syntactic monoid
Syntax (logic)
Tree-adjoining grammar
Literal strings
Anagram
Case sensitivity
Infinite monkey theorem
Lexical analysis
Lexeme
Lexicography
Lexicon
Lipogram
The Library of Babel
Palindrome
Pangram
Sequence alignment
Classical cryptography
Atbash cipher
Autokey cipher
Bazeries cylinder
Bible code
Bifid cipher
Caesar cipher
Cardan grille
Enigma machine
Frequency analysis
Index of coincidence
Playfair cipher
Polyalphabetic substitution
Polybius square
ROT13, ROT47
Scytale
Steganography
Substitution cipher
Tabula recta
Transposition cipher
Vigenère cipher
Formal languages | List of formal language and literal string topics | Mathematics | 246 |
4,267,651 | https://en.wikipedia.org/wiki/Parable%20of%20the%20Workers%20in%20the%20Vineyard |
The Parable of the Workers in the Vineyard (also called the Parable of the Laborers in the Vineyard or the Parable of the Generous Employer) is a parable of Jesus which appears in chapter 20 of the Gospel of Matthew in the New Testament. It is not included in the other canonical gospels. It has been described as a difficult parable to interpret.
Text
Interpretations
The parable has often been interpreted to mean that even those who are converted late in life earn equal rewards along with those converted early, and also that people who convert early in life need not feel jealous of those later converts. An alternative interpretation identifies the early laborers as Jews, some of whom resent the late-comers (Gentiles) being welcomed as equals in God's Kingdom. Both of these interpretations are discussed in Matthew Henry's 1706 Commentary on the Bible.
An alternative interpretation is that all Christians can be identified with the eleventh-hour workers. Arland J. Hultgren writes: "While interpreting and applying this parable, the question inevitably arises: Who are the eleventh-hour workers in our day? We might want to name them, such as deathbed converts or persons who are typically despised by those who are longtime veterans and more fervent in their religious commitment. But it is best not to narrow the field too quickly. At a deeper level, we are all the eleventh-hour workers; to change the metaphor, we are all honored guests of God in the kingdom. It is not really necessary to decide who the eleventh-hour workers are. The point of the parable—both at the level of Jesus and the level of Matthew's Gospel—is that God saves by grace, not by our worthiness. That applies to all of us."
A USCCB interpretation is that the parable's "close association with Mt 19:30 suggests that its teaching is the equality of all the disciples in the reward of inheriting eternal life." The USCCB interpret Mt 19:30 as: "[A]ll who respond to the call of Jesus, at whatever time (first or last), will be the same in respect to inheriting the benefits of the kingdom, which is the gift of God." In giving himself via the beatific vision, God is the greatest reward.
Some commentators have used the parable to justify the principle of a "living wage", though generally conceding that this is not the main point of the parable. An example is John Ruskin in the 19th century, who quoted the parable in the title of his book Unto This Last. Ruskin did not discuss the religious meaning of the parable but rather its social and economic implications.
Parallels
Many details of the parable, including the workers receiving their pay at the end of the day, the complaints from those who worked a full day, and the response from the king/landowner, are paralleled in a similar parable found in tractate Berakhot in the Jerusalem Talmud:
To what can Rebbi Bun bar Ḥiyya be likened? To a king who hired many workers and there was one worker who was exceptionally productive in his work. What did the king do? He took him and walked with him the long and the short. In the evening, the workers came to receive their wages and he gave him his total wages with them. The workers complained and said: we were toiling the entire day and this one did toil only for two hours and he gave him his total wages with us! The king told them: This one produced in two hours more than what you produced all day long. So Rebbi Bun produced in Torah in 28 years what an outstanding student cannot learn in a hundred years. (Jerusalem Berakhot 2.8)
See also
Life of Jesus in the New Testament
Ministry of Jesus
References
Workers in the Vineyard, Parable of the
Gospel of Matthew
Works about labor
Fair division | Parable of the Workers in the Vineyard | Mathematics | 815 |
48,724,391 | https://en.wikipedia.org/wiki/The%20Erotic | The Erotic, is a concept of a source of power and resources that are available within all humans, which draws on feminine and spiritual approaches to introspection. The erotic was first described by Audre Lorde in her 1978 essay in Sister Outsider, "Uses of the Erotic: The Erotic as Power". The essay was later published in 1982 as a pamphlet by Out & Out Books.
Lorde's essay on the erotic conceptualizes the erotic as a subliminal power that all women possess, one that provides satisfaction and joy in ways beyond lust and carnal desire. Other feminist scholars have built on Lorde's argument about the erotic's purpose in daily life, extending the theory into a more contemporary understanding of everyday life and modern porn culture. Since the foundational work set forth by Lorde, feminist discourses on the nature of empowerment and human exchange have been inspired by her writings.
Conceptualization of the erotic
Audre Lorde's presentation at the Fourth Berkshire Conference on the History of Women in 1978 was pivotal in bringing the erotic into feminist discourse. Conferences such as this one at Mount Holyoke College opened a space to speak explicitly about women's history, but forbade any discourse concerning lesbian identity. Lorde and her panelists resisted by naming their panel "Lesbians and Power," leading conference organizers to eliminate the word "lesbian" and give the panelists a very small room. In response, Lorde and others mounted a flyer campaign to reclaim their title, ultimately leading to a venue that would accommodate around two thousand people.
During the event, Lorde read her essay, challenging societal norms by redefining the erotic as a source of strength and resistance, and making a critical contribution to feminist and queer discourses. As one of the first efforts of the time to remove negative connotations from the word "erotic," this thinking inspired people beyond Lorde to break down the walls and barriers around aspects of femininity considered taboo, and to embrace those aspects rather than live in shame of them. By showing how identities, including race and sexuality, could be a powerful way to express independence and personality, Lorde inspired women to express their identities openly rather than hide or disregard them. This became important in fostering more inclusive feminist spaces that recognized the importance of the connections between race, sexuality, and class.
In the essay, Lorde describes the erotic as "the nurturer or nursemaid of our deepest knowledge," meaning it is an important source of one's inner wisdom, comfort, and insight into one's self. Through this lens, the erotic becomes a powerful resource, enabling women to reclaim and honor parts of themselves that would otherwise be cast aside. When she says, "The erotic is a lens through which we can scrutinize all aspects of our existence," she means that the erotic is not just something related to sensation; it is a tool for deepening one's relationship with oneself, inspiring a fuller, more intentional engagement with life. When Lorde describes the erotic as "a well of replenishing and provocative force to the woman who does not fear its revelation, nor succumb to the belief that sensation is enough," she claims the erotic as a source of replenishment and creative vigor for one willing to open fully to it. The erotic is a deep and abiding force for women who do not fear its depth or deny it by forcing it into limited preconceptions of what it represents: mere physical pleasure. Lorde suggests that such a deeper understanding of the erotic moves beyond superficial feeling, allowing women access to a more complete sense of self that benefits them intellectually, emotionally, and spiritually.
Elements of the erotic
The scholar Caleb Ward argues that there are four essential facets of the erotic as described by Lorde that help remove some of the associated ambiguity surrounding the term:
The erotic is about feeling.
The erotic is a source of knowledge.
The erotic is a source of power in the face of oppression.
The erotic can catalyze concerted political action and coalition across differences.
The etymology of the erotic comes from the Greek word eros, which Audre Lorde describes as "the personification of love in all its aspects".
Misinterpretation of the erotic through pornography
Over time, Audre Lorde's idea of the erotic has been misconstrued and oversimplified into a concept focused on physical or sexual pleasure. This interpretation fails to represent the inner strength and creativity that Lorde attached to the erotic. When it is misread solely in relation to sexuality, the erotic loses the radical potential it could embody as a force for empowerment, connection, and resistance. More than sensual pleasure, the erotic for Lorde was deeply connected to one's work and relationships. That connection was powerful, carrying knowledge and inner strength that could empower people to resist and challenge multiple oppressive systems.
Lorde writes that "The erotic has often been misnamed by men and used against women…[confused] with its opposite, the pornographic… [and] pornography emphasizes sensation without feeling". Lorde proposes the erotic has often been confused with pornography, though they are inherently distinct. In addition, she states that "pornography is a direct denial of the power of the erotic, for it represents the suppression of true feeling. Pornography emphasizes sensation without feeling". Pornography suppresses true feelings and focuses on superficial sensations, while the erotic represents a deeply emotional connection and creativity. Lorde relates the erotic to feminine and spiritual creativity, describing it as "the lifeforce of women; of that creative energy empowered, the knowledge and use of which [women] are now reclaiming in our language, our history, our dancing, our loving, our work, our lives…[and exemplify] how acutely and fully we can feel in doing."
Additionally, the term "erotic" has often been misrepresented and used as a tool to over-sexualize and undervalue women in a patriarchal society. While the term is often treated as synonymous with pornography, it was meant to provide liberation for women, a freedom found in self-reflection and human connection with other women. Patriarchal and capitalist systems diminish the erotic by prioritizing profit over human needs and reducing both work and life to mere duties. This suppression disconnects people from the joy and creative power inherent in their work. The primary mechanism of oppression is found in the misuse and misunderstanding of systemic power structures that continue to suppress women's voice and expression of self. The erotic has the potential to be transformative: when women embrace the erotic, they resist societal oppression, including racism, sexism, and the patriarchal structures that dictate how they should live.
Erotic in everyday life
In "Uses of the Erotic," Lorde highlights the transformative potential of incorporating the erotic in daily life, drawing an analogy between the erotic and a deep source of joy and energy found in simple yet creative acts. The erotic transforms ordinary actions, such as dancing, writing, or creating, into profound satisfaction and fulfillment that empowers individuals to live more authentic and passionate lives. Engaging with the erotic enables women to gain more profound satisfaction and wholeness in their lives. It drives excellence and emotional fulfillment across all aspects of life, extending beyond sexual contexts. Embracing and utilizing the erotic appropriately empowers women to pursue greater depth and meaning in their lives, work, and relationships.
The etymological affiliation of the erotic, as eros, with notions of "life force" or "creative energies" underlines the presence of the erotic in daily life. Though it is tied to passion and sensuality, in Audre Lorde's terms the erotic is "far more than sexual or sensual contexts; it motivates excellence, survival, and delight through all of life's activities." This influence reveals itself in the small, meaningful moments that bring self-realization and dignity to simple, everyday acts. The erotic is expressed when a person invests deeply in what truly fulfills them, like cooking a meal with care or taking the time to savor nature. In these simple daily actions, joy and happiness emerge, emphasizing experiences that align with one's values and encourage a genuine, holistic way of living.
Lorde also suggests sharing the erotic promotes in-depth emotional connections and strengthens interpersonal bonds. Lorde proposes, "The sharing of joy [...] forms a bridge between the sharers which can be the basis for understanding much of what is not shared between them, and lessens the threat of their difference." This form of sharing involves mutual joy and recognition of each other's humanity rather than using each other as a means of superficial satisfaction.
Furthering the erotic: insights from other feminist scholars
The idea of the erotic as a source of power and agency has been furthered by a number of feminist scholars and activists of the late 20th century. For example, in All About Love: New Visions, bell hooks argues that love opens people to intimate connections in a way very similar to Lorde's idea of the erotic. For hooks, the erotic as love strengthens connections with others, grounding the solidarity needed to effectively resist systems of oppression and reclaim one's identity.
Similarly, writer and activist Adrienne Rich questioned the political practice of compulsory heterosexuality within her essay "Compulsory Heterosexuality and Lesbian Existence." She writes in the foreword that "the depth and breadth of woman identification and woman bonding... can become increasingly a politically activating impulse." This echoes the concept of the erotic: the idea of women in solidarity, connected in intimacy, to take back their power and resist oppressive systems.
In her book How Three Black Women Writers Combined Spiritual and Sexual Love, Cherie Ann Turpin examines how Audre Lorde, Dionne Brand, and Toni Morrison each conceptualize the erotic within their works. As Frank E. Dobson Jr. states in the foreword, Turpin discusses intellectual recovery and ownership of the Black woman's body through the evocation of the erotic: "The erotic is a disruption of the test that insists on sameness, the 'pattern' that gives the impression of totality" (4). In discussing this disruption, she suggests that each of the authors imagines and articulates erotic subjectivity in such a manner that tradition and stereotype are confronted and countered. This echoes Lorde's own call to feel the erotic's power firsthand: "The erotic cannot be felt secondhand. As a Black lesbian feminist, I have a particular feeling, knowledge, and understanding for those sisters with whom I have danced hard, played, or even fought. This deep participation has often been the forerunner for joint concerted actions not possible before."
Use in the critique of modern porn culture
Catharine MacKinnon, an American legal scholar, builds upon Lorde's concepts that frame the pornographic as a form of oppression, emphasizing in her piece "Pornography, Civil Rights, and Speech" that pornography not only works to suppress the erotic power of women, but also suppresses women's freedom of speech. Pornography eroticizes "the unspeakable abuse: the rape, the battery, the sexual harassment, the prostitution, and the sexual abuse of children. Only in the pornography it is called something else: sex, sex, sex, sex, and sex, respectively", which thus contributes to the perpetuation of inequality between men and women, normalizing these atrocities of abuse. The erotic power that Lorde describes, a resource that "lies in a deeply female and spiritual plane", becomes twisted, perverted, and used against women to maintain female subordination in the pornographic. In her work, MacKinnon draws connections between pornographic depictions of sexual acts and documented cases of sexual assault, in which the abusive actions of the male perpetrators demonstrate a direct correlation between pornographic depictions of sexuality and sexual acts of aggression. In the same work, she quotes a study detailing cases in which men who watched pornography depicting acts of sexual assault reported being more inclined toward aggressive behavior against women, including a greater likelihood of engaging in acts of sexual assault. These images create a desensitization to this particular type of aggressive behavior, constructing a reality that silences women and the violence committed against women's bodies. When women report instances of sexual assault or violent sexual behavior, their voices are dismissed, as pornography has distorted the reality of sexual aggression. Pornography becomes another way of silencing women, another way of distorting their experiences: the snatching away of credibility, sexual violence replaced with a westernized version of "eroticism".
Audre Lorde
Audre Lorde (1934–1992) is best known as an "American poet, essayist, and autobiographer known for her passionate writings on lesbian feminism and racial issues." Her powerful writing included over a dozen publications of poetry and essays, she won multiple national and international awards for her writing, and she was one of the primary founders of Kitchen Table: Women of Color Press. She has also been hailed as "the Black feminist, lesbian, poet, mother, and warrior." Other famous poems and essays written by Lorde include:
A Burst Of Light
The Black Unicorn
Between Ourselves
Cables To Rage
The Cancer Journals
The First Cities
From A Land Where Other People Live
I Am Your Sister: Black Women Organizing Across Sexualities
Lesbian Party: An Anthology
Need: A Chorale For Black Women Voices
The New York Head Shop And Museum
Our Dead Behind Us: Poems
Sister Outsider: Essays And Speeches
The Marvelous Arithmetics Of Distance: Poems
Undersong: Chosen Poems Old And New
Uses Of The Erotic: The Erotic As Power
Woman Poet—The East
Zami: A New Spelling of My Name
See also
Black lesbian literature in the United States
Berkshire Conference of Women Historians
Kitchen Table: Women of Color Press
Black Feminism
Black Feminist Thought
Sister Outsider
Audre Lorde Project
Community Organizing
References
Feminism
Erotica
Women-only spaces
Sexuality
Pornography
Audre Lorde | The Erotic | Biology | 2,904 |
75,520,475 | https://en.wikipedia.org/wiki/Diethofencarb | Diethofencarb is a carbamate fungicide which is used to control Botrytis infections on a variety of fruit and vegetable crops.
References
Fungicides
Ethoxy compounds
Isopropyl esters
Carbamates
Anilines | Diethofencarb | Chemistry,Biology | 52 |
10,797,278 | https://en.wikipedia.org/wiki/Cockade%20of%20Spain | The Cockade of Spain is a national symbol that arose after the French Revolution, by pleating a golden pin over the former red ribbon, colors of the ancient Royal Bend of Castile. The resulting insignia is a circle that symbolizes the colors of the Spanish flag: Red and Yellow, being carried as individual representation in case of distinctions or prizes or by other types of events. At the moment it is not used in Spain, except as a roundel for the identification of Spanish Armed Forces aircraft.
Gallery
See also
Roundel of the Spanish Republican Air Force
References
Antonio Cánovas del Castillo, De la escarapela roja y las banderas y divisas utilizadas en España
National symbols of Spain
Cockades | Cockade of Spain | Mathematics | 148 |
4,855,513 | https://en.wikipedia.org/wiki/Mark%20I%20%28detector%29 | The Mark I, also known as the SLAC-LBL Magnetic Detector, was a particle detector that operated at the interaction point of the SPEAR collider from 1973 to 1977. It was the first 4π detector, i.e. the first detector to uniformly cover as much of the 4π steradians (units of solid angle) around the interaction point as possible with different types of component particle detectors arranged in layers. This design proved quite successful, and the detector was used in the discoveries of the J/ψ particle and the tau lepton, which both resulted in Nobel Prizes (for Burton Richter in 1976 and Martin Lewis Perl in 1995). This basic design philosophy continues to be used in all modern collider detectors.
Details of the detector
The detector was enormous for the early 1970s, weighing approximately 150 tons, with a length of 12 feet and a height of 20 feet. The colliding electron and positron beams were contained within a vacuum chamber of about 6 inches in diameter. The beam pipe was constructed from a very thin (0.008 inch) corrugated stainless steel tube. The two counter-rotating beams were collided at the center of the detector.
A solenoid coil generated a magnetic field roughly parallel to the beam direction, which enabled measurement of the transverse momentum of particles emerging from the collision point.
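As a rough illustration of how the field enables momentum measurement (a standard textbook relation, not a detail from this article): a particle of unit charge following a track of curvature radius r in a field B has transverse momentum of roughly 0.3·B·r in GeV/c, with B in tesla and r in metres. A minimal sketch:

```python
# Illustrative: transverse momentum from track curvature in a solenoid field.
# p_T [GeV/c] ~= 0.3 * q * B [T] * r [m] for charge q in units of e.
def transverse_momentum_gev(b_tesla: float, radius_m: float, charge: float = 1.0) -> float:
    return 0.3 * charge * b_tesla * radius_m

# A track curving with a 1 m radius in the ~0.4 T field quoted below:
print(transverse_momentum_gev(0.4, 1.0))  # ~0.12 GeV/c
```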
The steel flux return was constructed from 8 pieces of steel arranged in an octagon around the detector, plus two removable steel end caps, one at each end of the detector. Construction of the original detector, designed by Bill-Davies White, took about a year, and was completed in 1973.
The original detector consisted of a series of components in cylindrical layers.
Inner Trigger Scintillation Counters
Four inner trigger scintillation counters were positioned around the beam pipe. Charged particles traversing these counters generated light pulses, detected by photo-multiplier tubes and associated electronics.
Multi-Wire Proportional Chambers
SLAC collaborators developed the MWPC system.
Cylindrical Wire Spark Chambers
There were 4 concentric sets of electronic read-out wire spark chambers. Design and construction of these detectors were overseen by Roy Schwitters of the SLAC collaboration.
Outer Trigger Counters
Sandwiched between the outermost cylindrical wire chamber and the magnetic coil were 48 scintillation counters. Again, light pulses generated by the passage of charged particles traversing these counters were detected by photo-multiplier tubes at each end and associated electronics. Time-of-arrival of the pulse was also recorded for each photomultiplier.
Magnet Coil
A solenoid coil was powered with DC current to produce a magnetic field of approximately 0.4 T, to bend charged particles in the plane perpendicular to the beam. This made it possible to detect tracks in three dimensions, measure the momenta of charged particles, and determine whether they originated from the interaction region of the beam pipe.
Lead-Scintillator Shower Detectors
Just outside of the magnet coil were 24 shower counters. Each counter was roughly 4 feet wide by 12 feet long; 10 plates of 0.25 inch lead were alternated with 10 plastic scintillators.
Electrons or photons passing through this sandwich detector produced electromagnetic cascade showers.
Light pulses from the scintillator plates were guided to a photomultiplier tube at each end, using plastic (lucite) light guides.
These counters, plus one spare, were designed and constructed at LBL, and transported to SLAC.
Iron Flux Return
Eight 8 inch (20 cm) iron plates, plus two endcap steel pieces, completed the magnetic flux return path. The eight iron plates form an octagon.
Muon Spark Chambers
References
Article on SPEAR history
Particle experiments | Mark I (detector) | Physics | 743 |
901,369 | https://en.wikipedia.org/wiki/Pet%20peeve | A pet peeve, pet aversion, or pet hate is a minor annoyance that an individual finds particularly irritating to a greater degree than the norm.
Origin of the concept
The noun peeve, meaning an annoyance, is believed to have originated in the United States early in the twentieth century, derived by back-formation from the adjective peevish, meaning "ornery or ill-tempered", which dates from the late 14th century.
The term pet peeve was introduced to a wide readership in the single-panel comic strip The Little Pet Peeve in the Chicago Tribune during the period 1916–1920. The strip was created by cartoonist Frank King, who also created the long-running Gasoline Alley strip. King's "little pet peeves" were humorous critiques of generally thoughtless behaviors and nuisance frustrations. Examples included people reading the inter-titles in silent films aloud, cracking an egg only to smell that it's gone rotten, back-seat drivers, and rugs that keep catching the bottom of the door and bunching up. King's readers submitted topics, including theater goers who unwrap candy in crinkly paper during a live performance, and (from a 12-year-old boy) having his mother come in to sweep when he has the pieces of a building toy spread out on the floor.
Current usage and examples
Pet peeves often involve specific behaviors of someone close, such as a spouse or significant other. These behaviors may involve disrespect, manners, personal hygiene, relationships, and family issues. A key aspect of a pet peeve is that it may well seem acceptable or insignificant to others, while the person bothered by it is likewise not bothered by things that might upset others. For example, a supervisor may have a pet peeve about people leaving the lid up on the copier, about being interrupted while speaking, or about subordinates having messy desks.
References
External links
Word Detective Origins of Pet Peeve
Human behavior
1910s neologisms | Pet peeve | Biology | 407 |
43,638,795 | https://en.wikipedia.org/wiki/Gas%20blending | Gas blending is the process of mixing gases for a specific purpose where the composition of the resulting mixture is defined, and therefore, controlled.
A wide range of applications includes scientific and industrial processes, food production and storage, and breathing gases.
Gas mixtures are usually specified in terms of molar gas fraction (which is closely approximated by volumetric gas fraction for many permanent gases): by percentage, parts per thousand or parts per million. Volumetric gas fraction converts trivially to partial pressure ratio, following Dalton's law of partial pressures. Partial pressure blending at constant temperature is computationally simple, and pressure measurement is relatively inexpensive, but maintaining constant temperature during pressure changes requires significant delays for temperature equalization. Blending by mass fraction is unaffected by temperature variation during the process, but requires accurate measurement of mass or weight, and calculation of constituent masses from the specified molar ratio. Both partial pressure and mass fraction blending are used in practice.
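As a minimal illustration of the volumetric relationship described above (an example calculation, not taken from the source): under Dalton's law, each constituent's partial pressure is simply its molar fraction times the total pressure.

```python
# Illustrative: convert a molar-fraction specification to partial pressures
# using Dalton's law (partial pressure = molar fraction x total pressure).
def partial_pressures(fractions: dict[str, float], total_bar: float) -> dict[str, float]:
    assert abs(sum(fractions.values()) - 1.0) < 1e-9, "fractions must sum to 1"
    return {gas: x * total_bar for gas, x in fractions.items()}

# A 32% oxygen / 68% nitrogen mixture filled to 200 bar:
print(partial_pressures({"O2": 0.32, "N2": 0.68}, 200.0))
# {'O2': 64.0, 'N2': 136.0}
```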
Applications
Shielding gases for welding
Shielding gases are inert or semi-inert gases used in gas metal arc welding and gas tungsten arc welding to protect the weld area from oxygen and water vapour, which can reduce the quality of the weld or make the welding more difficult.
Gas metal arc welding (GMAW), or metal inert gas (MIG) welding, is a process that uses a continuous wire feed as a consumable electrode and an inert or semi-inert gas mixture to protect the weld from contamination.
Gas tungsten arc welding (GTAW), or tungsten inert gas (TIG) welding, is a manual welding process that uses a nonconsumable tungsten electrode, an inert or semi-inert gas mixture, and a separate filler material.
Modified atmosphere packaging in the food industry
Modified atmosphere packaging preserves fresh produce to improve delivered quality of the product and extend its life. The gas composition used to pack food products depends on the product. A high oxygen content helps to retain the red colour of meat, while low oxygen reduces mould growth in bread and vegetables.
Gas mixtures for brewing
Sparging: An inert gas such as nitrogen is bubbled through the wine, which removes the dissolved oxygen. Carbon dioxide is also removed and to ensure that an appropriate amount of carbon dioxide remains, a mixture of nitrogen and carbon dioxide may be used for the sparging gas.
Purging and blanketing: The removal of oxygen from the headspace above the wine in a container by flushing with a similar gas mixture to that used for sparging is called purging, and if it is left there it is called blanketing or inerting.
Breathing gas mixtures for diving
A breathing gas is a mixture of gaseous chemical elements and compounds used for respiration. The essential component for any breathing gas is a partial pressure of oxygen of between roughly 0.16 and 1.60 bar at the ambient pressure. The oxygen is usually the only metabolically active component unless the gas is an anaesthetic mixture. Some of the oxygen in the breathing gas is consumed by the metabolic processes, and the inert components are unchanged, and serve mainly to dilute the oxygen to an appropriate concentration, and are therefore also known as diluent gases.
Scuba diving
Gas blending for scuba diving is the filling of diving cylinders with non-air breathing gases such as nitrox, trimix and heliox. Use of these gases is generally intended to improve overall safety of the planned dive, by reducing the risk of decompression sickness and/or nitrogen narcosis, and may improve ease of breathing.
Surface supplied and saturation diving
Gas blending for surface supplied and saturation diving may include the filling of bulk storage cylinders and bailout cylinders with breathing gases, but it also involves the mixing of breathing gases at lower pressure which are supplied directly to the diver or to the hyperbaric life-support system. Part of the operation of the life-support system is the replenishment of oxygen used by the occupants, and removal of the carbon dioxide waste product by the gas conditioning unit. This entails monitoring of the composition of the chamber gas and periodic addition of oxygen to the chamber gas at the internal pressure of the chamber.
The gas mixing unit is part of the life support equipment of a saturation system, along with other components which may include bulk gas storage, compressors, a helium recovery unit, bell and diver hot water supply, a gas conditioning unit and an emergency power supply.
Medical gas mixtures
The anesthetic machine is used to blend breathing gas for patients under anesthesia during surgery. The gas mixing and delivery system lets the anesthetist control oxygen fraction, nitrous oxide concentration and the concentration of volatile anesthetic agents.
The machine is usually supplied with oxygen (O2) and nitrous oxide (N2O) from low pressure lines and high pressure reserve cylinders, and the metered gas is mixed at ambient pressure, after which additional anesthetic agents may be added by a vaporizer, and the gas may be humidified. Air is used as a diluent to decrease oxygen concentration. In special cases other gases may also be added to the mixture. These may include carbon dioxide (CO2), used to stimulate respiration, and helium (He) to reduce resistance to flow or to enhance heat transfer.
Gas mixing systems may be mechanical, using conventional rotameter banks, or electronic, using proportional solenoids or pulsed injectors, and control may be manual or automatic.
Chemical production processes
Providing reactive gaseous materials for chemical production processes in the required ratio
Controlled atmosphere manufacture and storage
Protective gas mixtures may be used to exclude air or other gases from the surface of sensitive materials during processing.
Examples include melting of reactive metals such as magnesium, and heat treatment of steels.
Customized gas mixtures for analytical applications
Calibration gases:
Span gases are used for testing and calibrating gas detection equipment by exposing the sensor to a known concentration of a contaminant. The gases are used as a reference point to ensure correct readings after calibration and have very accurate composition, with a content of the gas to be detected close to the set value for the detector.
Zero gas is normally a gas free of the component to be measured, and as similar as practicable to the composition of the gas to be monitored, used to calibrate the zero point of the sensor.
Calibration gas mixtures are generally produced in batches by gravimetric or volumetric methods.
The gravimetric method uses sensitive and accurately calibrated scales to weigh the amounts of gases added into the cylinder. Precise measurement is required, as inaccuracy or impurities can result in incorrect calibration. The container for calibration gas must be as close to perfectly clean as practicable. The cylinders may be cleaned by purging with high-purity nitrogen, then evacuated. For particularly critical mixtures the cylinder may be heated while being evacuated to facilitate removal of any impurities adhering to the walls.
After filling, the gas mixture must be thoroughly mixed to ensure that all components are evenly distributed throughout the container, to prevent possible variations in composition within the container. This is commonly done by rolling the container horizontally for 2 to 4 hours.
Methods
Several methods are available for gas blending. These may be distinguished as batch methods and continuous processes.
Batch methods
Batch gas blending requires the appropriate amounts of the constituent gases to be measured and mixed together until the mixture is homogeneous. The amounts are based on the mole (or molar) fractions, but measured either by volume or by mass. Volume measurement may be done indirectly by partial pressure, as the gases are often sequentially decanted into the same container for mixing, and therefore occupy the same volume. Weight measurement is generally used as a proxy for mass measurement, as the local acceleration due to gravity can usually be considered constant.
The mole fraction is also called the amount fraction, and is the number of molecules of a constituent divided by the total number of all molecules in the mixture. For example, a 50% oxygen, 50% helium mixture will contain approximately the same number of molecules of oxygen and helium. As both oxygen and helium approximate ideal gases at pressures below 200 bar, each will occupy the same volume at the same pressure and temperature, so they can be measured by volume at the same pressure, then mixed, or by partial pressure when decanted into the same container.
The mass fraction can be calculated from the molar fraction by multiplying the molar fraction by the molecular mass for each constituent, to find a constituent mass, and comparing it to the summed masses of all the constituents. The actual mass of each constituent needed for a mixture is calculated by multiplying the mass fraction by the desired mass of the mixture.
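A minimal sketch of that calculation (the mixture and total mass are arbitrary illustrative values, and the molar masses are approximate):

```python
# Illustrative: constituent masses for a target mixture mass, from molar
# fractions and approximate molar masses (g/mol).
MOLAR_MASS_G = {"O2": 32.0, "N2": 28.0, "He": 4.0}

def constituent_masses(molar_fractions: dict[str, float], total_mass_g: float) -> dict[str, float]:
    # mass fraction of gas i = x_i * M_i / sum_j(x_j * M_j)
    weighted = {g: x * MOLAR_MASS_G[g] for g, x in molar_fractions.items()}
    total = sum(weighted.values())
    return {g: total_mass_g * w / total for g, w in weighted.items()}

# Hypothetical trimix of 21% O2, 35% He, 44% N2 (by moles), 1000 g total:
print(constituent_masses({"O2": 0.21, "He": 0.35, "N2": 0.44}, 1000.0))
# ~{'O2': 328.8, 'He': 68.5, 'N2': 602.7}
```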
Partial pressure blending
Also known as volumetric blending. This must be done at constant temperature for best accuracy, though it is possible to compensate for temperature changes in proportion to the accuracy of the temperature measured before and after each gas is added to the mixture.
Partial pressure blending is commonly used for breathing gases for diving. The accuracy required for this application can be achieved by using a pressure gauge which reads accurately to 0.5 bar, and allowing the temperature to equilibrate after each gas is added.
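As a worked example of the kind of calculation involved (a common diver's formula, sketched under ideal-gas, constant-temperature, empty-cylinder assumptions; illustrative, not prescriptive guidance):

```python
# Illustrative partial-pressure nitrox blend: fill pure O2 first, then top up
# with air, assuming ideal gases, constant temperature, and an empty cylinder.
AIR_FO2 = 0.21  # approximate oxygen fraction of air

def o2_fill_bar(target_fo2: float, final_bar: float) -> float:
    # Solve: target_fo2 * final = p_o2 + AIR_FO2 * (final - p_o2)
    return final_bar * (target_fo2 - AIR_FO2) / (1.0 - AIR_FO2)

# EAN36 (36% O2) to 200 bar:
p_o2 = o2_fill_bar(0.36, 200.0)
print(f"add {p_o2:.1f} bar of O2, then air to 200 bar")  # ~38.0 bar
```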
Mass fraction blending
Also known as gravimetric blending. This is relatively unaffected by temperature, and accuracy depends on the accuracy of mass measurement of the constituents.
Mass fraction blending is used where great accuracy of the mixture is critical, such as in calibration gases. The method is not suited to moving platforms where the accelerations can cause inaccurate measurement, and therefore is unsuitable for mixing diving gases on vessels.
Continuous processes
Additive
Constant flow blending – a controlled flow of the constituent gases is mixed to form the product. Blending may occur at ambient pressure or at a pressure setting above ambient but lower than supply gas pressures.
Constant mass flow supply: Precision mass flow controllers are used to control the flow rate of each gas for blending. Mass flow meters may be installed on the outputs of the mass flow controllers to monitor the output. The gases may be passed through a static mixer to ensure homogeneous output.
Continuous gas blending is used for some surface supplied diving applications, and for many chemical processes using reactive gas mixtures, particularly where there may be a need to alter the mixture during the operation or process.
Subtractive
These processes start with a mixture of gases, usually air, and reduce the concentration of one or more of the constituents. These processes can be used for the production of Nitrox for scuba diving and deoxygenated air for blanketing purposes.
Pressure swing adsorption – Selective adsorption of gas on a medium which is reversible and proportional to pressure. Gas is loaded onto the medium during the high pressure phase and is released during the low pressure phase.
Membrane gas separation – Gas is forced through a semi-permeable membrane by a pressure difference. Some of the constituent gases pass through the membrane more easily than the others, and the output from the low pressure side is enriched with the gases which pass through more easily. Gases which are slower to pass through the membrane accumulate on the high pressure side and are continuously discharged to retain a steady concentration. The process may be repeated in several stages to increase concentrations.
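A highly simplified sketch of the membrane enrichment just described (an idealized binary model assuming negligible permeate pressure and low stage cut; the selectivity value is illustrative, not a property of any particular membrane):

```python
# Illustrative, highly simplified single-stage membrane model: at low stage
# cut and negligible permeate pressure, the permeate composition of a binary
# feed follows y/(1-y) ~= alpha * x/(1-x), with alpha the O2/N2 selectivity.
def permeate_o2_fraction(feed_o2: float, alpha: float) -> float:
    ratio = alpha * feed_o2 / (1.0 - feed_o2)
    return ratio / (1.0 + ratio)

# Air feed (20.9% O2) through a membrane with assumed O2/N2 selectivity of 5:
print(permeate_o2_fraction(0.209, 5.0))  # ~0.57: O2-enriched permeate,
# leaving an O2-depleted (N2-enriched) retentate on the high pressure side
```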
Gas analysis
Gas mixtures must generally be analysed either in process or after blending for quality control. This is particularly important for breathing gas mixtures where errors can affect the health and safety of the end user.
Oxygen content is relatively simple to monitor using electro-galvanic cells and these are routinely used in the underwater diving industry for this purpose, though other methods may be more accurate and reliable.
References
See also
Gas blending for scuba diving
Industrial processes
Industrial gases | Gas blending | Chemistry | 2,382 |
3,650,941 | https://en.wikipedia.org/wiki/S%20phase%20index | The S-phase index (SPI) is a measure of cell growth and viability, especially the capacity of tumor cells to proliferate. It is defined as the number of BrdU-incorporating cells relative to the volume of DNA staining, determined from whole-mount confocal analyses.
Only cells in the S phase will incorporate BrdU into their DNA structure, which assists in determining length of the cell cycle.
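A minimal sketch of the ratio as defined above (the function name, counts, and volume are illustrative; units depend on the imaging pipeline):

```python
# Illustrative: S-phase index = BrdU-positive cells / DNA-stain volume,
# per the definition above (units depend on the imaging and staining method).
def s_phase_index(brdu_positive_cells: int, dna_stain_volume: float) -> float:
    return brdu_positive_cells / dna_stain_volume

print(s_phase_index(120, 4.0e4))  # 0.003 cells per unit of stained volume
```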
References
Murphy, Terence D. "Drosophila skpA, a component of SCF ubiquitin ligases, regulates centrosome duplication independently of cyclin E accumulation", Journal of Cell Science 116, 2321-2332 (2003).
Cellular processes | S phase index | Biology | 142 |
823,251 | https://en.wikipedia.org/wiki/Wildflower | A wildflower (or wild flower) is a flower that grows in the wild, the flower of a wild or uncultivated plant or the plant bearing it. Meaning it was not intentionally seeded or planted. The term implies that the plant is neither a hybrid nor a selected cultivar that is any different from the native plant, even if it is growing where it would not naturally be found. The term can refer to the whole plant, even when not in bloom, and not just the flower.
"Wildflower" is an imprecise term. More exact terms include:
native species naturally occurring in the area (see flora)
exotic or introduced species not native to the area, including
invasive species that out-compete other plants, whether native or not
imported (introduced to an area whether deliberately or accidentally)
naturalized species, which are imported but have come to be considered by the public as native
In the United Kingdom, the organization Plantlife International instituted the "County Flowers scheme" in 2002, see County flowers of the United Kingdom for which members of the public nominated and voted for a wildflower emblem for their county. The aim was to spread awareness of the heritage of native species and about the need for conservation, as some of these species are endangered. For example, Somerset has adopted the cheddar pink (Dianthus gratianopolitanus), London the rosebay willowherb (Chamerion angustifolium) and Denbighshire/Sir Ddinbych in Wales the rare limestone woundwort (Stachys alpina).
Examples
Adonis aestivalis, summer pheasant's-eye
Anagallis, pimpernel
Agrostemma githago, common corn-cockle
Alnus glutinosa, common alder
Anthemis arvensis, corn chamomile
Callirhoe involucrata, purple poppy-mallow
Centaurea cyanus, cornflower
Coreopsis tinctoria, plains coreopsis
Dianthus barbatus, sweet William
Digitalis purpurea, foxglove
Dimorphotheca aurantiaca, glandular Cape marigold
Eschscholzia californica, California poppy
Ficaria verna, lesser celandine
Glebionis segetum, corn marigold
Gypsophila elegans, annual baby's-breath
Lantana spp., shrub verbenas
Papaver rhoeas, common poppy
Petasites hybridus, butterbur
Phlox drummondii, annual phlox
Potentilla sterilis, strawberryleaf cinquefoil
Prunus padus, bird cherry
Silene latifolia, white campion
Tussilago farfara, coltsfoot
Ulmus sp., elm
Viola riviniana, common dog-violet
Viola tricolor, wild pansy
See also
List of San Francisco Bay Area wildflowers
Superbloom
Megaherb
Native plant
Naturalisation
Weed
Escaped plant
References
External links
Wildflower Magazine promotes the use and conservation of wildflowers and native plants, Lady Bird Johnson Wildflower Center. Formerly published by the North American Native Plant Society
Plantlife, UK organization
Wildflower in Cyprus Information on 1250 native plant species to North Cyprus.
Ontario Wildflowers Detailed information about wildflowers of Ontario (Canada) and Northeastern North America
Wild flowers of the north-eastern states. (1895) Being three hundred and eight individuals common to the north-eastern United States.
Western USA wildflower reports
NPIN: Native Plant Database
Native Plant Database from the North American Native Plant Society
Plants
Flowers | Wildflower | Biology | 731 |
50,964 | https://en.wikipedia.org/wiki/Liouville%20number | In number theory, a Liouville number is a real number $x$ with the property that, for every positive integer $n$, there exists a pair of integers $(p, q)$ with $q > 1$ such that
$$0 < \left|x - \frac{p}{q}\right| < \frac{1}{q^{n}}.$$
The inequality implies that Liouville numbers possess an excellent sequence of rational number approximations. In 1844, Joseph Liouville proved a bound showing that there is a limit to how well algebraic numbers can be approximated by rational numbers, and he defined Liouville numbers specifically so that they would have rational approximations better than the ones allowed by this bound. Liouville also exhibited examples of Liouville numbers thereby establishing the existence of transcendental numbers for the first time.
One of these examples is Liouville's constant,
$$L = \sum_{n=1}^{\infty} 10^{-n!} = 0.110001000000000000000001\ldots,$$
in which the $n$th digit after the decimal point is 1 if $n$ is the factorial of a positive integer and 0 otherwise. It is known that $\pi$ and $e$, although transcendental, are not Liouville numbers.
The existence of Liouville numbers (Liouville's constant)
Liouville numbers can be shown to exist by an explicit construction.
For any integer $b \ge 2$ and any sequence of integers $(a_1, a_2, \ldots)$ such that $a_k \in \{0, 1, 2, \ldots, b-1\}$ for all $k$ and $a_k \ne 0$ for infinitely many $k$, define the number
$$x = \sum_{k=1}^{\infty} \frac{a_k}{b^{k!}}.$$
In the special case when $b = 10$ and $a_k = 1$ for all $k$, the resulting number $x$ is called Liouville's constant:
$$L = 0.110001000000000000000001000\ldots$$
It follows from the definition of $x$ that its base-$b$ representation is
$$x = \left(0.a_1 a_2 000 a_3 00000000000000000 a_4 \ldots\right)_b,$$
where the $n$th term $a_n$ is in the $n!$th place.
Since this base-$b$ representation is non-repeating, it follows that $x$ is not a rational number. Therefore, for any rational number $p/q$, $\left|x - \frac{p}{q}\right| > 0$.
Now, for any integer $n \ge 1$, $q_n$ and $p_n$ can be defined as follows:
$$q_n = b^{n!}; \qquad p_n = q_n \sum_{k=1}^{n} \frac{a_k}{b^{k!}}.$$
Then,
$$0 < \left|x - \frac{p_n}{q_n}\right| = \sum_{k=n+1}^{\infty} \frac{a_k}{b^{k!}} \le \sum_{k=n+1}^{\infty} \frac{b-1}{b^{k!}} < \sum_{k=(n+1)!}^{\infty} \frac{b-1}{b^{k}} = \frac{b-1}{b^{(n+1)!}} \sum_{k=0}^{\infty} \frac{1}{b^{k}} = \frac{b-1}{b^{(n+1)!}} \cdot \frac{b}{b-1} = \frac{b}{b^{(n+1)!}} \le \frac{b^{n!}}{b^{(n+1)!}} = \frac{1}{b^{(n+1)!-n!}} = \frac{1}{b^{(n!)\,n}} = \frac{1}{q_n^{\,n}}.$$
Therefore, any such $x$ is a Liouville number.
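As a sanity check of this construction (an illustrative numerical verification using exact rational arithmetic; not part of the proof):

```python
# Illustrative check: for L = sum of 10^(-k!), the truncations p_n/q_n with
# q_n = 10^(n!) satisfy 0 < |L - p_n/q_n| < 1/q_n^n for small n.
from fractions import Fraction
from math import factorial

def liouville_partial(terms: int) -> Fraction:
    return sum(Fraction(1, 10 ** factorial(k)) for k in range(1, terms + 1))

L = liouville_partial(7)  # 7 terms: exact far beyond the scales tested below

for n in range(1, 5):
    q = 10 ** factorial(n)
    p_over_q = liouville_partial(n)  # this truncation equals p_n / q_n
    err = abs(L - p_over_q)
    assert 0 < err < Fraction(1, q ** n), n
print("definition inequalities hold for n = 1..4")
```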
Notes on the proof
The inequality $\sum_{k=n+1}^{\infty} \frac{a_k}{b^{k!}} \le \sum_{k=n+1}^{\infty} \frac{b-1}{b^{k!}}$ follows since $a_k \in \{0, 1, 2, \ldots, b-1\}$ for all $k$, so at most $a_k = b-1$. The largest possible sum would occur if the sequence of integers $(a_1, a_2, \ldots)$ were $(b-1, b-1, \ldots)$, i.e. $a_k = b-1$ for all $k$; $\sum_{k=n+1}^{\infty} \frac{a_k}{b^{k!}}$ will thus be less than or equal to this largest possible sum.
The strong inequality
$$\sum_{k=n+1}^{\infty} \frac{b-1}{b^{k!}} < \sum_{k=(n+1)!}^{\infty} \frac{b-1}{b^{k}}$$
follows from the motivation to eliminate the series $\sum_{k=n+1}^{\infty} \frac{b-1}{b^{k!}}$ by way of reducing it to a series for which a formula is known. In the proof so far, the purpose for introducing the inequality in #1 comes from the intuition that $\sum_{k=0}^{\infty} \frac{1}{b^{k}} = \frac{b}{b-1}$ (the geometric series formula); therefore, if an inequality can be found that introduces a series with $b-1$ in the numerator, and if the denominator term can be further reduced from $b^{k!}$ to $b^{k}$, as well as shifting the series indices to start from 0, then both the series and the $b-1$ terms will be eliminated, getting closer to a fraction of the form $\frac{1}{b^{(\text{exponent}) \cdot n}}$, which is the end-goal of the proof. This motivation is increased here by selecting from the sum a partial sum. Observe that, for any term in $\sum_{k=n+1}^{\infty} \frac{b-1}{b^{k!}}$, since $b \ge 2$, then $\frac{b-1}{b^{k!}} < \frac{b-1}{b^{k}}$ for all $k$ (except for when $n = 1$). Therefore,
$$\sum_{k=n+1}^{\infty} \frac{b-1}{b^{k!}} < \sum_{k=n+1}^{\infty} \frac{b-1}{b^{k}}$$
(since, even if $n = 1$, all subsequent terms are smaller). In order to manipulate the indices so that $k$ starts at 0, a partial sum is selected from within $\sum_{k=n+1}^{\infty} \frac{b-1}{b^{k}}$ (also less than the total value since it is a partial sum of a series whose terms are all positive). Choose the partial sum formed by starting at $k = (n+1)!$, which follows from the motivation to write a new series with $k = 0$, namely by noticing that $\sum_{k=(n+1)!}^{\infty} \frac{b-1}{b^{k}} = \sum_{k=0}^{\infty} \frac{b-1}{b^{k+(n+1)!}}$.
For the final inequality $\frac{b}{b^{(n+1)!}} \le \frac{b^{n!}}{b^{(n+1)!}}$, this particular inequality has been chosen (true because $b \ge 2$, where equality follows if and only if $n = 1$) because of the wish to manipulate $\frac{b}{b^{(n+1)!}}$ into something of the form $\frac{1}{b^{n \cdot n!}}$. This particular inequality allows the elimination of $(n+1)!$ and the numerator, using the property that $(n+1)! - n! = n! \cdot n$, thus putting the denominator in ideal form for the substitution $q_n = b^{n!}$.
Irrationality
Here the proof will show that the number $x = c/d$, where $c$ and $d$ are integers and $d > 0$, cannot satisfy the inequalities that define a Liouville number. Since every rational number can be represented as such $c/d$, the proof will show that no Liouville number can be rational.
More specifically, this proof shows that for any positive integer $n$ large enough that $2^{n-1} > d > 0$ [equivalently, for any positive integer $n > 1 + \log_2 d$], no pair of integers $(p, q)$ with $q > 1$ exists that simultaneously satisfies the pair of bracketing inequalities
$$0 < \left| x - \frac{p}{q} \right| < \frac{1}{q^n}.$$
If the claim is true, then the desired conclusion follows.
Let $p$ and $q$ be any integers with $q > 1$. Then,
$$\left| x - \frac{p}{q} \right| = \left| \frac{c}{d} - \frac{p}{q} \right| = \frac{|cq - dp|}{dq}.$$
If $|cq - dp| = 0$, then
$$\left| x - \frac{p}{q} \right| = \frac{|cq - dp|}{dq} = 0,$$
meaning that such a pair of integers $(p, q)$ would violate the first inequality in the definition of a Liouville number, irrespective of any choice of $n$.
If, on the other hand, $|cq - dp| > 0$, then, since $cq - dp$ is an integer, we can assert the sharper inequality $|cq - dp| \ge 1$. From this it follows that
$$\left| x - \frac{p}{q} \right| = \frac{|cq - dp|}{dq} \ge \frac{1}{dq}.$$
Now for any integer $n > 1 + \log_2 d$, the last inequality above implies
$$\left| x - \frac{p}{q} \right| \ge \frac{1}{dq} > \frac{1}{2^{n-1} q} \ge \frac{1}{q^n}.$$
Therefore, in the case $|cq - dp| > 0$, such a pair of integers $(p, q)$ would violate the second inequality in the definition of a Liouville number for some positive integer $n$.
Therefore, to conclude, there is no pair of integers $(p, q)$ with $q > 1$ that would qualify such an $x = c/d$ as a Liouville number.
Hence a Liouville number cannot be rational.
Liouville numbers and transcendence
No Liouville number is algebraic. The proof of this assertion proceeds by first establishing a property of irrational algebraic numbers. This property essentially says that irrational algebraic numbers cannot be well approximated by rational numbers, where the condition for "well approximated" becomes more stringent for larger denominators. A Liouville number is irrational but does not have this property, so it cannot be algebraic and must be transcendental. The following lemma is usually known as Liouville's theorem (on diophantine approximation), there being several results known as Liouville's theorem.
Lemma: If $x$ is an irrational root of an irreducible polynomial of degree $n > 1$ with integer coefficients, then there exists a real number $A > 0$ such that, for all integers $p, q$ with $q > 0$,
$$\left| x - \frac{p}{q} \right| > \frac{A}{q^n}.$$
Proof of Lemma: Let $f(y) = \sum_{k=0}^{n} a_k y^k$ be a minimal polynomial with integer coefficients, such that $f(x) = 0$.
By the fundamental theorem of algebra, $f$ has at most $n$ distinct roots.
Therefore, there exists $\delta_1 > 0$ such that for all $y$ with $0 < |x - y| < \delta_1$ we get $f(y) \ne 0$.
Since $f$ is a minimal polynomial of $x$, we get $f(p/q) \ne 0$ for every rational number $p/q$, and also $f'$ is continuous.
Therefore, by the extreme value theorem there exist $\delta_2 > 0$ and $M > 0$ such that for all $y$ with $|x - y| \le \delta_2$ we get $|f'(y)| \le M$.
Both conditions are satisfied for $\delta = \min(\delta_1, \delta_2)$.
Now let $p/q$ be a rational number. Without loss of generality we may assume that $|x - p/q| < \delta$ (otherwise the claimed inequality holds trivially for any $A < \delta$). By the mean value theorem, there exists $x_0$ between $p/q$ and $x$ such that
$$f(x) - f\!\left( \frac{p}{q} \right) = \left( x - \frac{p}{q} \right) f'(x_0).$$
Since $f(x) = 0$ and $f(p/q) \ne 0$, both sides of that equality are nonzero. In particular $f'(x_0) \ne 0$ and we can rearrange:
$$\left| x - \frac{p}{q} \right| = \frac{\left| f(p/q) \right|}{\left| f'(x_0) \right|} \ge \frac{\left| f(p/q) \right|}{M} \ge \frac{1}{M q^n} > \frac{A}{q^n}$$
for any $A$ with $0 < A < \min(\delta, 1/M)$, where the middle step uses $\left| f(p/q) \right| = \frac{\left| \sum_{k=0}^{n} a_k p^k q^{n-k} \right|}{q^n} \ge \frac{1}{q^n}$, the numerator being a nonzero integer.
Proof of assertion: As a consequence of this lemma, let x be a Liouville number; as noted in the article text, x is then irrational. If x is algebraic, then by the lemma, there exist some integer n and some positive real A such that for all p, q with q > 0
$$\left| x - \frac{p}{q} \right| > \frac{A}{q^n}.$$
Let r be a positive integer such that $1/2^r \le A$ and define m = r + n. Since x is a Liouville number, there exist integers a, b with b > 1 such that
$$\left| x - \frac{a}{b} \right| < \frac{1}{b^m} = \frac{1}{b^{r+n}} = \frac{1}{b^r b^n} \le \frac{1}{2^r b^n} \le \frac{A}{b^n},$$
which contradicts the lemma. Hence a Liouville number cannot be algebraic, and therefore must be transcendental.
Establishing that a given number is a Liouville number proves that it is transcendental. However, not every transcendental number is a Liouville number. The terms in the continued fraction expansion of every Liouville number are unbounded; using a counting argument, one can then show that there must be uncountably many transcendental numbers which are not Liouville. Using the explicit continued fraction expansion of e, one can show that e is an example of a transcendental number that is not Liouville. Mahler proved in 1953 that $\pi$ is another such example.
Uncountability
Consider the number
3.1400010000000000000000050000....
3.14(3 zeros)1(17 zeros)5(95 zeros)9(599 zeros)2(4319 zeros)6...
where the digits are zero except in positions $n!$, where the digit equals the $n$th digit following the decimal point in the decimal expansion of $\pi$.
As shown in the section on the existence of Liouville numbers, this number, as well as any other non-terminating decimal with its non-zero digits similarly situated, satisfies the definition of a Liouville number. Since the set of all sequences of non-null digits has the cardinality of the continuum, the same is true of the set of all Liouville numbers.
Moreover, the Liouville numbers form a dense subset of the set of real numbers.
Liouville numbers and measure
From the point of view of measure theory, the set of all Liouville numbers $L$ is small. More precisely, its Lebesgue measure, $\mu(L)$, is zero. The proof given follows some ideas by John C. Oxtoby.
For positive integers $n > 2$ and $q \ge 2$, set:
$$V_{n,q} = \bigcup_{p=-\infty}^{\infty} \left( \frac{p}{q} - \frac{1}{q^{n}},\ \frac{p}{q} + \frac{1}{q^{n}} \right);$$
then
$$L \subseteq \bigcup_{q=2}^{\infty} V_{n,q}.$$
Observe that for each positive integer $n > 2$ and $m \ge 1$, then
$$L \cap (-m, m) \subseteq \bigcup_{q=2}^{\infty} \bigcup_{p=-mq}^{mq} \left( \frac{p}{q} - \frac{1}{q^{n}},\ \frac{p}{q} + \frac{1}{q^{n}} \right).$$
Since each interval in the union has length $2/q^{n}$ and, for each $q$, at most $2mq + 1$ values of $p$ contribute, then
$$\mu(L \cap (-m, m)) \le \sum_{q=2}^{\infty} (2mq + 1) \frac{2}{q^{n}} \le (4m + 2) \sum_{q=2}^{\infty} \frac{1}{q^{n-1}}.$$
Now
$$\sum_{q=2}^{\infty} \frac{1}{q^{n-1}} \le \int_{1}^{\infty} \frac{dq}{q^{n-1}} = \frac{1}{n-2},$$
and since $n$ can be arbitrarily large, it follows that for each positive integer $m$, $L \cap (-m, m)$ has Lebesgue measure zero. Consequently, so has $L$.
In contrast, the Lebesgue measure of the set of all real transcendental numbers is infinite (since the set of algebraic numbers is a null set).
One can show even more: the set of Liouville numbers has Hausdorff dimension 0 (a property strictly stronger than having Lebesgue measure 0).
Structure of the set of Liouville numbers
For each positive integer $n$, set
$$U_n = \bigcup_{q=2}^{\infty} \bigcup_{p=-\infty}^{\infty} \left( \frac{p}{q} - \frac{1}{q^{n}},\ \frac{p}{q} + \frac{1}{q^{n}} \right) \setminus \left\{ \frac{p}{q} \right\}.$$
The set of all Liouville numbers can thus be written as
$$L = \bigcap_{n=1}^{\infty} U_n.$$
Each $U_n$ is an open set; as its closure contains all rationals (the $p/q$ from each punctured interval), it is also a dense subset of the real line. Since it is the intersection of countably many such open dense sets, $L$ is comeagre, that is to say, it is a dense $G_\delta$ set.
Irrationality measure
The Liouville–Roth irrationality measure (irrationality exponent, approximation exponent, or Liouville–Roth constant) of a real number $x$ is a measure of how "closely" it can be approximated by rationals. It is defined by adapting the definition of Liouville numbers: instead of requiring the existence of a sequence of pairs that make the inequality hold for each $n$—a sequence which necessarily contains infinitely many distinct pairs—the irrationality exponent $\mu(x)$ is defined to be the supremum of the set of $\mu$ for which such an infinite sequence exists, that is, the set of $\mu$ such that $0 < \left| x - \frac{p}{q} \right| < \frac{1}{q^{\mu}}$ is satisfied by an infinite number of integer pairs $(p, q)$ with $q > 0$. For any value $\mu$ less than $\mu(x)$, the infinite set of all rationals $p/q$ satisfying the above inequality yields good approximations of $x$. Conversely, if $\mu$ is greater than $\mu(x)$, then there are at most finitely many $(p, q)$ with $q > 0$ that satisfy the inequality. If $x$ is a Liouville number then $\mu(x) = \infty$.
See also
Brjuno number
Markov constant
Diophantine approximation
References
External links
The Beginning of Transcendental Numbers
Diophantine approximation
Mathematical constants
Articles containing proofs
Real transcendental numbers
Irrational numbers | Liouville number | Mathematics | 2,253 |
244,593 | https://en.wikipedia.org/wiki/Lock%20%28computer%20science%29 | In computer science, a lock or mutex (from mutual exclusion) is a synchronization primitive that prevents state from being modified or accessed by multiple threads of execution at once. Locks enforce mutual exclusion concurrency control policies, and with a variety of possible methods there exist multiple unique implementations for different applications.
Types
Generally, locks are advisory locks, where each thread cooperates by acquiring the lock before accessing the corresponding data. Some systems also implement mandatory locks, where attempting unauthorized access to a locked resource will force an exception in the entity attempting to make the access.
The simplest type of lock is a binary semaphore. It provides exclusive access to the locked data. Other schemes also provide shared access for reading data. Other widely implemented access modes are exclusive, intend-to-exclude and intend-to-upgrade.
Another way to classify locks is by what happens when the lock strategy prevents the progress of a thread. Most locking designs block the execution of the thread requesting the lock until it is allowed to access the locked resource. With a spinlock, the thread simply waits ("spins") until the lock becomes available. This is efficient if threads are blocked for a short time, because it avoids the overhead of operating system process re-scheduling. It is inefficient if the lock is held for a long time, or if the progress of the thread that is holding the lock depends on preemption of the locked thread.
Locks typically require hardware support for efficient implementation. This support usually takes the form of one or more atomic instructions such as "test-and-set", "fetch-and-add" or "compare-and-swap". These instructions allow a single process to test if the lock is free, and if free, acquire the lock in a single atomic operation.
Uniprocessor architectures have the option of using uninterruptible sequences of instructions—using special instructions or instruction prefixes to disable interrupts temporarily—but this technique does not work for multiprocessor shared-memory machines. Proper support for locks in a multiprocessor environment can require quite complex hardware or software support, with substantial synchronization issues.
The reason an atomic operation is required is because of concurrency, where more than one task executes the same logic. For example, consider the following C code:
if (lock == 0) {
// lock free, set it
lock = myPID;
}
The above example does not guarantee that the task has the lock, since more than one task can be testing the lock at the same time. Since both tasks will detect that the lock is free, both tasks will attempt to set the lock, not knowing that the other task is also setting the lock. Dekker's or Peterson's algorithm are possible substitutes if atomic locking operations are not available.
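Where C11 atomics are available, the check and the set can be collapsed into one indivisible step. The following is a minimal sketch (function and variable names here are illustrative, not part of the original example) of a test-and-set spinlock using <stdatomic.h>:
#include <stdatomic.h>

// One flag per protected resource; ATOMIC_FLAG_INIT means "unlocked".
static atomic_flag lock_flag = ATOMIC_FLAG_INIT;

void acquire(void)
{
    // atomic_flag_test_and_set atomically sets the flag and returns its
    // previous value, so exactly one thread can observe "false" (free)
    // and thereby win the lock; every other thread keeps spinning.
    while (atomic_flag_test_and_set(&lock_flag)) {
        // spin until the current holder calls release()
    }
}

void release(void)
{
    atomic_flag_clear(&lock_flag); // mark the lock free again
}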
Careless use of locks can result in deadlock or livelock. A number of strategies can be used to avoid or recover from deadlocks or livelocks, both at design-time and at run-time. (The most common strategy is to standardize the lock acquisition sequences so that combinations of inter-dependent locks are always acquired in a specifically defined "cascade" order.)
Some languages do support locks syntactically. An example in C# follows:
public class Account // This is a monitor of an account
{
// Use `object` in versions earlier than C# 13
private readonly Lock _balanceLock = new();
private decimal _balance = 0;
public void Deposit(decimal amount)
{
// Only one thread at a time may execute this statement.
lock (_balanceLock)
{
_balance += amount;
}
}
public void Withdraw(decimal amount)
{
// Only one thread at a time may execute this statement.
lock (_balanceLock)
{
_balance -= amount;
}
}
}
The dedicated Lock type used above was introduced in C# 13 on .NET 9.
The code lock(this) can lead to problems if the instance can be accessed publicly.
Similar to Java, C# can also synchronize entire methods, by applying the MethodImpl attribute with the MethodImplOptions.Synchronized option.
[MethodImpl(MethodImplOptions.Synchronized)]
public void SomeMethod()
{
// do stuff
}
Granularity
Before being introduced to lock granularity, one needs to understand three concepts about locks:
lock overhead: the extra resources for using locks, like the memory space allocated for locks, the CPU time to initialize and destroy locks, and the time for acquiring or releasing locks. The more locks a program uses, the more overhead associated with the usage;
lock contention: this occurs whenever one process or thread attempts to acquire a lock held by another process or thread. The more fine-grained the available locks, the less likely one process/thread will request a lock held by the other. (For example, locking a row rather than the entire table, or locking a cell rather than the entire row);
deadlock: the situation when each of at least two tasks is waiting for a lock that the other task holds. Unless something is done, the two tasks will wait forever.
There is a tradeoff between decreasing lock overhead and decreasing lock contention when choosing the number of locks in synchronization.
An important property of a lock is its granularity. The granularity is a measure of the amount of data the lock is protecting. In general, choosing a coarse granularity (a small number of locks, each protecting a large segment of data) results in less lock overhead when a single process is accessing the protected data, but worse performance when multiple processes are running concurrently. This is because of increased lock contention. The more coarse the lock, the higher the likelihood that the lock will stop an unrelated process from proceeding. Conversely, using a fine granularity (a larger number of locks, each protecting a fairly small amount of data) increases the overhead of the locks themselves but reduces lock contention. Granular locking where each process must hold multiple locks from a common set of locks can create subtle lock dependencies. This subtlety can increase the chance that a programmer will unknowingly introduce a deadlock.
In a database management system, for example, a lock could protect, in order of decreasing granularity, part of a field, a field, a record, a data page, or an entire table. Coarse granularity, such as using table locks, tends to give the best performance for a single user, whereas fine granularity, such as record locks, tends to give the best performance for multiple users.
Database locks
Database locks can be used as a means of ensuring transaction synchronicity, i.e. when making transaction processing concurrent (interleaving transactions), using two-phase locking ensures that the concurrent execution of the transaction turns out equivalent to some serial ordering of the transaction. However, deadlocks become an unfortunate side-effect of locking in databases. Deadlocks are either prevented by pre-determining the locking order between transactions or are detected using waits-for graphs. An alternative to locking for database synchronicity while avoiding deadlocks involves the use of totally ordered global timestamps.
There are mechanisms employed to manage the actions of multiple concurrent users on a database—the purpose is to prevent lost updates and dirty reads. The two types of locking are pessimistic locking and optimistic locking:
Pessimistic locking: a user who reads a record with the intention of updating it places an exclusive lock on the record to prevent other users from manipulating it. This means no one else can manipulate that record until the user releases the lock. The downside is that users can be locked out for a very long time, thereby slowing the overall system response and causing frustration.
Where to use pessimistic locking: this is mainly used in environments where data-contention (the degree of users request to the database system at any one time) is heavy; where the cost of protecting data through locks is less than the cost of rolling back transactions, if concurrency conflicts occur. Pessimistic concurrency is best implemented when lock times will be short, as in programmatic processing of records. Pessimistic concurrency requires a persistent connection to the database and is not a scalable option when users are interacting with data, because records might be locked for relatively large periods of time. It is not appropriate for use in Web application development.
Optimistic locking: this allows multiple concurrent users access to the database whilst the system keeps a copy of the initial-read made by each user. When a user wants to update a record, the application determines whether another user has changed the record since it was last read. The application does this by comparing the initial-read held in memory to the database record to verify any changes made to the record. Any discrepancies between the initial-read and the database record violates concurrency rules and hence causes the system to disregard any update request. An error message is generated and the user is asked to start the update process again. It improves database performance by reducing the amount of locking required, thereby reducing the load on the database server. It works efficiently with tables that require limited updates since no users are locked out. However, some updates may fail. The downside is constant update failures due to high volumes of update requests from multiple concurrent users - it can be frustrating for users.
Where to use optimistic locking: this is appropriate in environments where there is low contention for data, or where read-only access to data is required. Optimistic concurrency is used extensively in .NET to address the needs of mobile and disconnected applications, where locking data rows for prolonged periods of time would be infeasible. Also, maintaining record locks requires a persistent connection to the database server, which is not possible in disconnected applications.
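The compare-and-validate cycle that optimistic locking describes also exists at the level of single shared words, where it is usually built on a compare-and-swap instruction. The following is a minimal C11 sketch of the pattern (a word-level analogy for illustration, not any particular database's mechanism):
#include <stdatomic.h>

// Optimistically add `amount` to a shared counter: take a snapshot,
// compute the new value, and commit only if nobody has changed the
// counter in the meantime; on a conflict, re-read and try again.
void optimistic_add(_Atomic int *value, int amount)
{
    int seen = atomic_load(value); // the initial read (snapshot)
    // On failure, atomic_compare_exchange_weak reloads `seen` with the
    // current value, so each retry recomputes from fresh data.
    while (!atomic_compare_exchange_weak(value, &seen, seen + amount)) {
        // another writer committed first; retry with the new snapshot
    }
}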
Lock compatibility table
Several variations and refinements of these major lock types exist, with respective variations of blocking behavior. If a first lock blocks another lock, the two locks are called incompatible; otherwise the locks are compatible. Often, lock types blocking interactions are presented in the technical literature by a Lock compatibility table. The following is an example with the common, major lock types (read locks and write locks):

Lock type	read-lock	write-lock
read-lock	✔	X
write-lock	X	X
✔ indicates compatibility
X indicates incompatibility, i.e, a case when a lock of the first type (in left column) on an object blocks a lock of the second type (in top row) from being acquired on the same object (by another transaction). An object typically has a queue of waiting requested (by transactions) operations with respective locks. The first blocked lock for operation in the queue is acquired as soon as the existing blocking lock is removed from the object, and then its respective operation is executed. If a lock for operation in the queue is not blocked by any existing lock (existence of multiple compatible locks on a same object is possible concurrently), it is acquired immediately.
Comment: In some publications, the table entries are simply marked "compatible" or "incompatible", or respectively "yes" or "no".
Disadvantages
Lock-based resource protection and thread/process synchronization have many disadvantages:
Contention: some threads/processes have to wait until a lock (or a whole set of locks) is released. If one of the threads holding a lock dies, stalls, blocks, or enters an infinite loop, other threads waiting for the lock may wait indefinitely until the computer is power cycled.
Overhead: the use of locks adds overhead for each access to a resource, even when the chances for collision are very rare. (However, any chance for such collisions is a race condition.)
Debugging: bugs associated with locks are time dependent and can be very subtle and extremely hard to replicate, such as deadlocks.
Instability: the optimal balance between lock overhead and lock contention can be unique to the problem domain (application) and sensitive to design, implementation, and even low-level system architectural changes. These balances may change over the life cycle of an application and may entail tremendous changes to update (re-balance).
Composability: locks are only composable (e.g., managing multiple concurrent locks in order to atomically delete item X from table A and insert X into table B) with relatively elaborate (overhead) software support and perfect adherence by applications programming to rigorous conventions.
Priority inversion: a low-priority thread/process holding a common lock can prevent high-priority threads/processes from proceeding. Priority inheritance can be used to reduce priority-inversion duration. The priority ceiling protocol can be used on uniprocessor systems to minimize the worst-case priority-inversion duration, as well as prevent deadlock.
Convoying: all other threads have to wait if a thread holding a lock is descheduled due to a time-slice interrupt or page fault.
Some concurrency control strategies avoid some or all of these problems. For example, a funnel or serializing tokens can avoid the biggest problem: deadlocks. Alternatives to locking include non-blocking synchronization methods, like lock-free programming techniques and transactional memory. However, such alternative methods often require that the actual lock mechanisms be implemented at a more fundamental level of the operating software. Therefore, they may only relieve the application level from the details of implementing locks, with the problems listed above still needing to be dealt with beneath the application.
In most cases, proper locking depends on the CPU providing a method of atomic instruction stream synchronization (for example, the addition or deletion of an item into a pipeline requires that all contemporaneous operations needing to add or delete other items in the pipe be suspended during the manipulation of the memory content required to add or delete the specific item). Therefore, an application can often be more robust when it recognizes the burdens it places upon an operating system and is capable of gracefully handling reports of impossible demands.
Lack of composability
One of lock-based programming's biggest problems is that "locks don't compose": it is hard to combine small, correct lock-based modules into equally correct larger programs without modifying the modules or at least knowing about their internals. Simon Peyton Jones (an advocate of software transactional memory) gives the following example of a banking application: design a class that allows multiple concurrent clients to deposit or withdraw money to an account, and give an algorithm to transfer money from one account to another.
The lock-based solution to the first part of the problem is:
class Account:
member balance: Integer
member mutex: Lock
method deposit(n: Integer)
mutex.lock()
balance ← balance + n
mutex.unlock()
method withdraw(n: Integer)
deposit(−n)
The second part of the problem is much more complicated. A routine that is correct for sequential programs would be
function transfer(from: Account, to: Account, amount: Integer)
from.withdraw(amount)
to.deposit(amount)
In a concurrent program, this algorithm is incorrect because when one thread is halfway through transfer, another might observe a state where amount has been withdrawn from the first account, but not yet deposited into the other account: money has gone missing from the system. This problem can only be fixed completely by putting locks on both accounts prior to changing either one, but then the locks have to be placed according to some arbitrary, global ordering to prevent deadlock:
function transfer(from: Account, to: Account, amount: Integer)
if from < to // arbitrary ordering on the locks
from.lock()
to.lock()
else
to.lock()
from.lock()
from.withdraw(amount)
to.deposit(amount)
from.unlock()
to.unlock()
This solution gets more complicated when more locks are involved, and the function needs to know about all of the locks, so they cannot be hidden.
Language support
Programming languages vary in their support for synchronization:
Ada provides protected objects that have visible protected subprograms or entries as well as rendezvous.
The ISO/IEC C standard provides a standard mutual exclusion (locks) application programming interface (API) since C11. The current ISO/IEC C++ standard supports threading facilities since C++11. The OpenMP standard is supported by some compilers, and allows critical sections to be specified using pragmas. The POSIX pthread API provides lock support. Visual C++ provides the synchronize attribute of methods to be synchronized, but this is specific to COM objects in the Windows architecture and Visual C++ compiler. C and C++ can easily access any native operating system locking features.
C# provides the lock keyword on a thread to ensure its exclusive access to a resource.
Visual Basic (.NET) provides a SyncLock keyword like C#'s lock keyword.
Java provides the keyword synchronized to lock code blocks, methods or objects and libraries featuring concurrency-safe data structures.
Objective-C provides the keyword @synchronized to put locks on blocks of code and also provides the classes NSLock, NSRecursiveLock, and NSConditionLock along with the NSLocking protocol for locking as well.
PHP provides file-based locking as well as a Mutex class in the pthreads extension.
Python provides a low-level mutex mechanism with a Lock class from the threading module.
The ISO/IEC Fortran standard (ISO/IEC 1539-1:2010) provides the lock_type derived type in the intrinsic module iso_fortran_env and the lock/unlock statements since Fortran 2008.
Ruby provides a low-level mutex object and no keyword.
Rust provides the Mutex<T> struct.
x86 assembly language provides the LOCK prefix on certain operations to guarantee their atomicity.
Haskell implements locking via a mutable data structure called an MVar, which can either be empty or contain a value, typically a reference to a resource. A thread that wants to use the resource ‘takes’ the value of the MVar, leaving it empty, and puts it back when it is finished. Attempting to take a resource from an empty MVar results in the thread blocking until the resource is available. As an alternative to locking, an implementation of software transactional memory also exists.
Go provides a low-level Mutex object in the standard library's sync package. It can be used for locking code blocks, methods or objects.
Mutexes vs. semaphores
See also
Critical section
Double-checked locking
File locking
Lock-free and wait-free algorithms
Monitor (synchronization)
Mutual exclusion
Read/write lock pattern
References
External links
Tutorial on Locks and Critical Sections
Concurrency control
Software design patterns
Programming language comparisons
Articles with example C code
Articles with example C Sharp code
Articles with example Java code
Articles with example pseudocode | Lock (computer science) | Technology | 3,861 |
3,144,059 | https://en.wikipedia.org/wiki/Carrion%20flower | Carrion flowers, also known as corpse flowers or stinking flowers, are mimetic flowers that emit an odor that smells like rotting flesh. Apart from the scent, carrion flowers often display additional characteristics that contribute to the mimesis of a decaying corpse. These include their specific coloration (red, purple, brown), the presence of setae, and orifice-like flower architecture. Carrion flowers attract mostly scavenging flies and beetles as pollinators. Some species may trap the insects temporarily to ensure the gathering and transfer of pollen.
Plants known as "carrion flower"
Amorphophallus
Many plants in the genus Amorphophallus (family Araceae) are known as carrion flowers. One such plant is the Titan arum (Amorphophallus titanum), which has the world's largest unbranched inflorescence. Rather than a single flower, the titan arum presents an inflorescence or compound flower composed of a spadix or stalk of small and anatomically reduced male and female flowers, surrounded by a spathe that resembles a single giant petal. This plant has a mechanism to heat up the spadix, enhancing the emission of the strong odor of decaying meat to attract its pollinators, carrion-eating beetles and "flesh flies" (family Sarcophagidae). It was first described scientifically in 1878 in Sumatra.
Rafflesia
Flowers of plants in the genus Rafflesia (family Rafflesiaceae) emit an odor similar to that of decaying meat. This odor attracts the flies that pollinate the plant. The world's largest single bloom is that of R. arnoldii. This rare flower is found in the rainforests of Borneo and Sumatra. It can grow to be across and can weigh up to . R. arnoldii is a parasitic plant on Tetrastigma vine, which grows only in primary rainforests. It has no visible leaves, roots, or stem. It does not photosynthesize, but rather uses the host plant to obtain water and nutrients.
Stapelia
Plants in the genus Stapelia are also called "carrion flowers". They are small, spineless, cactus-like succulent plants. Most species are native to South Africa, and are grown as potted plants elsewhere. The flowers of all species are hairy to varying degrees and generate the odor of rotten flesh. The color of the flowers also mimics rotting meat. This attracts scavenging flies, for pollination. The flowers in some species can be very large, notably Stapelia gigantea can reach in diameter.
Smilax or Nemexia
In North America, the herbaceous vines of the genus Smilax are known as carrion flowers. These plants have a cluster of small greenish flowers. The most familiar member of this group is Smilax herbacea. These plants are sometimes placed in the genus Nemexia.
Bulbophyllum (Orchid)
Orchids of the genus Bulbophyllum produce strongly scented flowers. The flowers produce various odors resembling sap, urine, blood, dung, carrion, and, in some species, fragrant fruity aromas. Most are fly-pollinated, and attract hordes of flies. Bulbophyllum beccarii, Bulbophyllum fletcherianum and Bulbophyllum phalaenopsis in bloom have been likened to smelling like a herd of dead elephants. Their overpowering floral odors are sometimes described as making it difficult to walk into a greenhouse in which they are in bloom.
Scent
The sources of the flowers' unique scent are not fully identified, partly due to the extremely low concentration of the compounds (5 to 10 parts per billion). Biochemical tests on Amorphophallus species revealed foul-smelling dimethyl sulfides such as dimethyl disulfide and dimethyl trisulfide, and in other species, trace amounts of amines such as putrescine and cadaverine have been found. Methyl thioacetate (which has a cheesy, garlic-like odor) and isovaleric acid (smells of sweat) also contribute to the smell of the flower. Trimethylamine is the cause of the "rotten fish smell" towards the end of the flower's life.
Pollination
Both visual interactions and odor are important attractants for pollinators. In order for pollination to occur, a relationship of attraction and reward must be present between the flower and the pollinator. The pollinator's body mechanically promotes pollen adherence, which is necessary for effective pollen dispersal. The recognizable scent of the carrion flowers is produced in the petals of both male and female flowers, and the pollen reward attracts beetles and flies. Popular pollinators of carrion flowers are blowflies (Calliphoridae), house flies (Muscidae), flesh flies (Sarcophagidae) and varying types of beetles, drawn by the scents produced by the plant. Fly pollinators are typically attracted to pale, dull plants or those with translucent patches. Additionally, these plants produce pollen, lack nectar guides, and have flowers that resemble a funnel or complex trap. The host plant can sometimes trap the pollinator during the pollination/feeding process.
Other plants with carrion-scented flowers
Annonaceae
Asimina, commonly referred to as "pawpaw"
Sapranthus palanga
Apocynaceae
subtribe Stapeliinae: Boucerosia frerei, Caralluma, Duvalia, Echidnopsis, Edithcolea grandis, Hoodia, Huernia, Orbea, Piaranthus, Pseudolithos
Araceae
Arum dioscoridis, A. maculatum
Dracunculus vulgaris
Helicodiceros muscivorus
Lysichiton americanum
Symplocarpus foetidus
Aristolochiaceae
Aristolochia californica, A. grandiflora, A. microstoma, A. salvadorensis, A. littoralis
Hydnora
Asparagaceae
Eucomis bicolor
Balanophoraceae
Sarcophyte sanguinea subsp. sanguinea
Bignoniaceae
Crescentia alata
Burmanniaceae
Tiputinia foetida
Cytinaceae
Bdallophytum
Iridaceae
Moraea lurida
Ferraria crispa
Malvaceae
Sterculia foetida
Melanthiaceae
Trillium erectum, T. foetidissimum, T. sessile, T. stamineum
Orchidaceae
Satyrium pumilum
Masdevallia elephanticeps, M. angulata, M. colossus, M. picea
See also
Stinkhorn — fungi that use the same basic principle for spore dispersal
Aseroe rubra — fungi that use the same basic principle for spore dispersal
References
External links
All about stinking flowers
Carrion and Dung Mimicry in Plants
Plant common names
Pollination | Carrion flower | Biology | 1,434 |
19,841,585 | https://en.wikipedia.org/wiki/Citatuzumab%20bogatox | Citatuzumab bogatox (VB6-845) is a monoclonal antibody Fab fragment fused with bouganin, a ribosome inactivating protein from the plant Bougainvillea spectabilis. It has undergone preclinical development for the treatment of ovarian cancer and other solid tumors.
References
Antibody-drug conjugates
Monoclonal antibodies for tumors | Citatuzumab bogatox | Biology | 88 |
21,404,975 | https://en.wikipedia.org/wiki/ServerNet | ServerNet is a switched fabric communications link primarily used in proprietary computers made by Tandem Computers, Compaq, and HP.
Its features include good scalability, clean fault containment, error detection and failover. The ServerNet architecture specification defines a connection between nodes, either processor or high performance I/O nodes such as storage devices.
History
Tandem Computers developed the original ServerNet architecture and protocols for use in its own proprietary computer systems starting in 1992, and released the first ServerNet systems in 1995. Early attempts to license the technology and interface chips to other companies failed, due in part to a disconnect between the culture of selling complete hardware / software / middleware computer systems and that needed for selling and supporting chips and licensing technology. A follow-on development effort ported the Virtual Interface Architecture to ServerNet with PCI interface boards connecting personal computers. InfiniBand directly inherited many ServerNet features. As of 2017, systems still ship based on the ServerNet architecture.
References
W. E. Baker, R. Horst, D. Sonnier, W. Watson, "A Flexible ServerNet-based Fault-Tolerant Architecture," in Proc. 25th Int. Symp. Fault-Tolerant Computing, Pasadena, CA, June 27–30 1995.
R. Horst, "ServerNet Deadlock Avoidance and Fractahedral Topologies," in Proc. 10th Int'l Parallel Processing Symposium, Honolulu, Hawaii, pp. 274–280, 1995.
D. Garcia, et al., "Servernet II", Parallel Computer Routing and Communication International Workshop, Jun. 26, 1997, pp. 119–135, XP002103164, Atlanta, GA.
R. Horst and D. Garcia, "ServerNet SAN I/O Architecture," Proc. Hot Interconnects V, August 1997.
D.R Avresky, V. Shurbanov, R. Horst, “The effect of router arbitration policy on scalability of ServerNet Topologies,” Microprocessors and Microsystems 21, pp. 545–561, 1998.
D.R Avresky, V. Shurbanov, R. Horst, W. Watson, L. Young, D. Jewett. “Performance Modeling of ServerNet SAN Topologies,” The Journal of Supercomputing, V. 14, pp. 19–37, 1999.
D.R Avresky, V. Shurbanov, R. Wilkinson, R. Horst, W. Watson, L. Young, “ Maximum delivery time and hot spots in ServerNet topologies, Computer Networks 31, pp. 1891–1910, 1999.
A. Hossain, S. Kang, R. Horst, “ ServerNet and ATM Interconnects: Comparison for Compressed Video Transmission,” Journal of Communications and Networks, V. 1, No. 2, June 1999.
Computer networks
Supercomputing | ServerNet | Technology | 598 |
38,832,942 | https://en.wikipedia.org/wiki/Candystorm | Candystorm is a loanword used in the German language and is the antonym of shitstorm. The Green German MP Volker Beck brought the term to prominence by using it on Twitter to describe a wave of party support for Claudia Roth's bid for party leadership in late 2012. Roth had shortly before failed in her bid to be nominated as the party's top candidate in the 2013 federal elections, and was rumored not to be running for re-election as party leader.
Volker Beck called in July 2013 for a "candystorm for Edward Snowden", urging the admission of Snowden under the hashtag #snowstorm22.
Cultural debate about the phenomenon
Axel Hoffmann, vice chairman of the liberal Friedrich Naumann Stiftung, saw this phenomenon as paradigmatic for the digital society: "The end of the liberal civil society is in sight. Shitstorm and candystorm rule." („Das Ende einer liberalen Bürgergesellschaft ist in Sicht. Der shit- oder candy-storm regiert.“)
Press references
Social Media Campaigning about candystorm in English 7 March 2013
Newspapers:
Süddeutsche Zeitung: Grünen-Parteichefin #Candystorm für Claudia, 12 November 2012
Westdeutsche Allgemeine Zeitung: Candystorm - Claudia Roth freut sich über "Candystorm", 12 November 2012
Frankfurter Allgemeine Zeitung: Candystorm, 13 November 2012
Augsburger Allgemeine: Der "Candy-Storm" und eine grüne Feuerwehrcouch, 16 November 2012
Hamburger Abendblatt: Nach der Urwahl - "Candystorm" für Claudia Roth im Internet, 12 November 2012
Die Welt: Claudia Roth bleibt – dem Candystorm sei Dank, 12 November 2012
Die Welt: Traumergebnis und realer Candystorm für Claudia Roth, 17 November 2012
Der Tagesspiegel: Candystorm statt Shitstorm auf Twitter, 12 November 2012
Frankfurter Rundschau: Netzgemeinde - Claudia Roth und der erste Candystorm, 13 November 2012
die tageszeitung (TAZ): Die kleine Wortkunde – „Candystorm“. Der neue #flausch., 12 November 2012
Weekly magazines and newspapers:
Focus, Tatjana Heid: Wiederwahl zur Grünen-Chefin. Grüne hätscheln Claudia Roth nach Urwahl-Desaster, 17 November 2012
Der Spiegel, Veit Medick: Parteiaufruhr nach Grünen-Urwahl : Befehl zum Liebhaben, 12 November 2012
Stern: Claudia Roth im #candystorm, 12 November 2012
Die Zeit: «Candystorm» für Claudia Roth im Internet, 12 November 2012
Die Zeit: Candy-Storm, 15 November 2012
TV:
tagesschau.de, Reaktionen auf Roths Bekenntnis zum Parteivorsitz - Von "Candystorms" und Wehners Zettel, 12 November 2012
ZDF Netzschau: Candystorm für Claudia Roth, 12 November 2012
n-tv: "Candy-Storm" ermutigt Partei-Chefin. Roth macht weiter. 12 November 2012
References
Digital technology
Mass media technology | Candystorm | Technology | 740 |
76,726,515 | https://en.wikipedia.org/wiki/Behavioral%20economics%20and%20public%20policy | Behavioral economics and public policy is a field that investigates how the discipline of behavioral economics can be used to enhance the formation, implementation and evaluation of public policy. Using behavioral insights, it explores how to make policies more effective, efficient and humane by considering real-world human behavior and decision-making.
Overview
Behavioral economics as a subfield of economics is a fairly recent development and the implications of it for public policy have yet to be systematically explored. Behavioral economists have accumulated extensive findings indicating, contrary to standard economic assumptions, that people do not act rationally, that they are not perfectly self-interested, and that they hold inconsistent preferences. These deviations from the standard assumptions about behavior have become increasingly more important for economic policy in recent years. In the book Policy and Choice: Public Finance through the Lens of Behavioral Economics economists William J. Congdon, Jeffrey R. Kling and Sendhil Mullainathan argue that though traditional public finance provides a comprehensive framework for policy analysis, insights from behavioral economics can be applied to questions of economic policy, such as tax policy, pension systems, health policy, and other government interventions.
Use of behavioral insights
Taxation
Traditional public finance has a well-developed framework for determining how to set taxes optimally. One broad insight is that efficient taxes are those that minimally distort consumer choices, because a change in choices resulting from taxation represents a welfare cost. Behavioral economics complicates this logic. The behavioral concept of "tax salience" refers to the visibility of the tax-inclusive price and how the way taxes are displayed can influence consumer behavior. It emphasizes that people are more likely to change their behavior in response to highly visible and highly salient taxes. Commodity taxes that are included in the posted prices that consumers see when shopping have larger effects on demand. On the other hand, individuals may fail to attend to sales taxes that are not included in the prices on store shelves but are computed at the register (such as in the United States) – they are not salient at the time of choice.
Tax non-salience should represent an opportunity for governments of raising revenues without distorting behavior. But the lack of response to a non-salient tax is not the same as lack of response to a salient tax. It means that consumers make choices as if an item costs $X, but in reality they spend $X + $Y. As a result, they have $Y less to spend in the future than they had planned. If the lost money is treated as a pure income effect (individuals see that they have $Y less to spend on all other goods and adjust), then that would turn the non-salient tax into a lump-sum tax and governments should use non-salient taxes heavily. But rather than thinking of their overall budget as depleted by $Y, individuals could also think of $Y as depleting their grocery budget specifically, and they may spend $Y less during their next shopping. Or they may never change consumption and instead end up saving less. In such cases, the low demand response to non-salient taxes is misleading: though it does not generate distortions in the demand for the good being taxed, it is creating possibly higher distortions elsewhere. So governments would need to take into account other potential demand responses before using non-salient taxes.
Retirement and pension systems
Old-age insurance and savings policies can help to ensure adequate levels of consumption in old age by assisting individuals with accumulating adequate wealth during their working years. A behavioral approach offers new perspectives on how to effectively accomplish that. Behavioral economics recognizes that individuals often have trouble saving and planning for their own retirement – they save too little, they invest in the wrong assets, etc. If low levels of saving are simply due to it being unattractive relative to consumption, then subsidization, such as through existing tax incentives, is sufficient. But if low levels reflect choice errors or a failure of self-control, subsidies may not be sufficient or necessary. The key implication is that if policy seeks to encourage more saving, it can do so by making it easier to save. In enrollment in retirement saving plans, behavioral economics has shown that default rules (actively enrolling or automatically enrolled, with the ability to opt out) can have substantial effects on participation and saving. Policies encouraging firms to automatically enroll their workers in 401(k) plans, rather than waiting for individuals to sign up on their own, seem to encourage participation and savings to an extent that is difficult to rationalize under standard assumptions about preference and choice. Alternatively, forcing individuals to choose or dramatically simplifying the enrollment process also increases participation and saving. Simplifying the process of opening and contributing to other types of retirement accounts, such as IRAs, could have similar effects.
To address that part of low saving that results from the difficulty that individuals can have in exerting self-control, policy might seek to aid individuals in committing to saving and make it harder for them to procrastinate or give in to short-term temptations. Policies can assist individuals with following through on their intentions through automatic enrollment and automatic escalation, which have been shown to be effective tools. Further, when individuals are tempted to use their retirement funds before they retire, policies can reduce the temptation through penalties and fees that are features of most existing tax-favored retirement savings plans. Policies can also seek to expand individuals' capacity to make good choices through education and efforts to promote financial literacy.
Health insurance
Public finance focuses on the possibility that unregulated markets for health insurance are susceptible to failure and inefficiency due to asymmetries of information that occur when some market participants have more complete information than others about relevant market features. One of them is adverse selection. Consumers have private information about their health status that insurers do not and the resulting selection effect in consumption can cause the market to unwind. Policymakers can encourage or even force risk pooling (insurance companies coming together to form one) through regulating and subsidizing private insurance markets, or they can provide fully public insurance. However, individuals may be imperfect optimizers or may hold non-standard preferences. Decision-making errors such as overconfidence, difficulty in evaluating risks and making judgments under uncertainty mean that the private information associated with health status may not necessarily translate into the sort of adverse selection predicted by the traditional model. Moreover, some behavioral tendencies that might mitigate adverse selection might also lead to decision-making errors that themselves lead to a loss of welfare. The net impact of behavioral tendencies on the market for health insurance is theoretically unclear. Therefore, the success or failure of policy responses to increase access to health insurance might depend on how the policies address issues related to individual choice.
Another asymmetry is moral hazard. While moral hazard suggests that people with insurance will overconsume drugs or doctor visits, self-control problems might deter individuals from doing so or even lead them to underconsume those services. In drawing conclusions about the social welfare implications of alternative policies for expanding health insurance, the costs associated with moral hazard must be considered in context rather than assumed to follow from behavior consistent with standard assumptions.
The policy response to adverse selection and moral hazard must consider the ways in which behavioral tendencies affect how those forces operate. One approach is to promote the function of private health insurance markets through a combination of subsidies that make health insurance more affordable and regulations that encourage pooling and discourage selection, both in group and nongroup health insurance markets. Another is to provide health insurance coverage directly through public programs, which can target vulnerable populations and can be explicitly designed to pool risks and avoid adverse selection. The psychology of targeted individuals plays an important but distinct role in the operation of each type of policy environment.
Other policy areas
Behavioral insights have also been applied in a number of other areas:
unemployment benefits (behavior under coverage that looks much like standard moral hazard may be a result of behavioral tendencies such as failure of self-control or reference-dependent preferences).
education (more immediate incentives, short-run, salient incentives)
environmental policy (short-run, salient incentives, clear messages)
provision of public goods (policy can foster conditions under which individuals will provide or contribute to public goods on a voluntary basis)
poverty and inequality (moral hazard, distinguishing properly who requires assistance)
Future directions
In the future, behavioral economics has great potential to aid in the field of public policy. Psychological factors reshape core concepts in public finance and can be integrated into policy making to improve the outcomes of policies. However, further research is needed. For instance, resolving the net welfare consequences of tax salience is an important future line of research. The net impact of behavioral tendencies on the market for health insurance is theoretically unclear and also requires further study.
References
Further reading
Bhargava, Saurabh; Loewenstein, George (2015). "Behavioral Economics and Public Policy 102: Beyond Nudging". The American Economic Review. 105 (5): 396–401.
Mullainathan, Sendhil; Schwartzstein, Joshua; Congdon, William J. (2012). "A Reduced-Form Approach to Behavioral Public Finance". Annual Review of Economics. 4: 511–540. .
Amir, On; Ariely, Dan; Cooke, Alan; Dunning, David; Epley, Nicholas; Gneezy, Uri; Koszegi, Botond; Lichtenstein, Donald; Mazar, Nina; Mullainathan, Sendhil; Prelec, Drazen; Shafir, Eldar; Silva, Jose (2005). "Psychology, Behavioral Economics, and Public Policy". Marketing Letters. 16 (3/4): 443–454.
Public finance
Behavioral economics
Public policy | Behavioral economics and public policy | Biology | 1,979 |
5,920,062 | https://en.wikipedia.org/wiki/Crooks%20fluctuation%20theorem | The Crooks fluctuation theorem (CFT), sometimes known as the Crooks equation, is an equation in statistical mechanics that relates the work done on a system during a non-equilibrium transformation to the free energy difference between the final and the initial state of the transformation. During the non-equilibrium transformation the system is at constant volume and in contact with a heat reservoir. The CFT is named after the chemist Gavin E. Crooks (then at University of California, Berkeley) who discovered it in 1998.
The most general statement of the CFT relates the probability of a space-time trajectory $x(t)$ to the time-reversal of the trajectory $\tilde{x}(t)$. The theorem says that if the dynamics of the system satisfies microscopic reversibility, then the forward time trajectory is exponentially more likely than the reverse, given that it produces entropy:
$$\frac{P[x(t)]}{P[\tilde{x}(t)]} = e^{\sigma[x(t)]},$$
where $\sigma[x(t)]$ is the entropy production along the trajectory.
If one defines a generic reaction coordinate of the system as a function of the Cartesian coordinates of the constituent particles (e.g., a distance between two particles), one can characterize every point along the reaction coordinate path by a parameter $\lambda$, such that $\lambda = 0$ and $\lambda = 1$ correspond to two ensembles of microstates for which the reaction coordinate is constrained to different values. A dynamical process where $\lambda$ is externally driven from zero to one, according to an arbitrary time scheduling, will be referred to as forward transformation, while the time reversal path will be indicated as backward transformation. Given these definitions, the CFT sets a relation between the following five quantities:
$P(A \to B)$, i.e. the joint probability of taking a microstate $A$ from the canonical ensemble corresponding to $\lambda = 0$ and of performing the forward transformation to the microstate $B$ corresponding to $\lambda = 1$;
$P(A \leftarrow B)$, i.e. the joint probability of taking the microstate $B$ from the canonical ensemble corresponding to $\lambda = 1$ and of performing the backward transformation to the microstate $A$ corresponding to $\lambda = 0$;
$\beta = (k_B T)^{-1}$, where $k_B$ is the Boltzmann constant and $T$ the temperature of the reservoir;
$W_{A \to B}$, i.e. the work done on the system during the forward transformation (from $A$ to $B$);
$\Delta F = F(B) - F(A)$, i.e. the Helmholtz free energy difference between the state $A$ and $B$, represented by the canonical distribution of microstates having $\lambda = 0$ and $\lambda = 1$, respectively.
The CFT equation reads as follows:
$$\frac{P(A \to B)}{P(A \leftarrow B)} = \exp\left[ \beta \left( W_{A \to B} - \Delta F \right) \right].$$
In the previous equation the difference $W_{A \to B} - \Delta F$ corresponds to the work dissipated in the forward transformation, $W_d$. The probabilities $P(A \to B)$ and $P(A \leftarrow B)$ become identical when the transformation is performed at infinitely slow speed, i.e. for equilibrium transformations. In such cases, $W_{A \to B} = \Delta F$ and $W_d = 0$.
Using the time reversal relation $W_{A \to B} = -W_{A \leftarrow B}$, and grouping together all the trajectories yielding the same work (in the forward and backward transformation), i.e. determining the probability distribution (or density) $P_{A \to B}(W)$ of an amount of work $W$ being exerted by a random system trajectory from $A$ to $B$, we can write the above equation in terms of the work distribution functions as follows:
$$P_{A \to B}(W) = P_{A \leftarrow B}(-W) \exp\left[ \beta \left( W - \Delta F \right) \right].$$
Note that for the backward transformation, the work distribution function must be evaluated by taking the work with the opposite sign. The two work distributions for the forward and backward processes cross at $W = \Delta F$. This phenomenon has been experimentally verified using optical tweezers for the
process of unfolding and refolding of a small RNA hairpin and an RNA three-helix junction.
The CFT implies the Jarzynski equality.
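This follows directly from the work-distribution form above: multiplying it by $e^{-\beta W}$ gives $P_{A \to B}(W)\, e^{-\beta W} = P_{A \leftarrow B}(-W)\, e^{-\beta \Delta F}$, and integrating over all $W$ yields
$$\left\langle e^{-\beta W} \right\rangle = \int P_{A \to B}(W)\, e^{-\beta W}\, dW = e^{-\beta \Delta F} \int P_{A \leftarrow B}(-W)\, dW = e^{-\beta \Delta F},$$
since the backward work distribution is normalized; this is the Jarzynski equality.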
Notes
Non-equilibrium thermodynamics
Statistical mechanics theorems | Crooks fluctuation theorem | Physics,Mathematics | 651 |
14,419,822 | https://en.wikipedia.org/wiki/Storage%40home | Storage@home was a distributed data store project designed to store massive amounts of scientific data across a large number of volunteer machines. The project was developed by some of the Folding@home team at Stanford University, from about 2007 through 2011.
Function
Scientists such as those running Folding@home deal with massive amounts of data, which must be stored and backed up, and this is very expensive. Traditionally, methods such as storing the data on RAID servers are used, but these become impractical for research budgets at this scale. Pande's research group already dealt with storing hundreds of terabytes of scientific data. Professor Vijay Pande and student Adam Beberg took experience from Folding@home and began work on Storage@home. The project's design is based on the distributed file system known as Cosm, and on the workload and analysis needed for Folding@home results. While Folding@home volunteers can easily participate in Storage@home, much more disk space is needed from the user than Folding@home, to create a robust network. Volunteers each donate 10 GB of storage space, which would hold encrypted files. These users gain points as a reward for reliable storage. Each file saved on the system is replicated four times, with the copies spread across 10 geographically distant hosts. Redundancy also occurs over different operating systems and across time zones. If the servers detect the disappearance of an individual contributor, the data blocks held by that user would then be automatically duplicated to other hosts. Ideally, users would participate for a minimum of six months, and would alert the Storage@home servers before certain changes on their end, such as a planned move of a machine or a bandwidth downgrade. Data stored on Storage@home was maintained through redundancy and monitoring, with repairs done as needed. Through careful application of redundancy, encryption, digital signatures, automated monitoring and correction, large quantities of data could be reliably and easily retrieved. This ensures a robust network that will lose the least possible data.
Storage Resource Broker is the closest storage project to Storage@home.
Status
Storage@home was first made available on September 15, 2009, in a testing phase. It first monitored availability data and other basic statistics on the user's machine, which would be used to create a robust and capable storage system for storing massive amounts of scientific data. However, in the same year it became inactive, despite initial plans for more to come. On April 11, 2011, Pande stated his group had no active plans with Storage@home.
See also
Folding@home
Cosm
SETI@home
Genome@home
RAID
OFF System
References
Distributed computing projects
Distributed file systems | Storage@home | Engineering | 535 |
757,745 | https://en.wikipedia.org/wiki/List%20of%20network%20theory%20topics | Network theory is an area of applied mathematics.
This page is a list of network theory topics.
Network theorems
Max flow min cut theorem
Menger's theorem
Metcalfe's law
Network properties
Centrality
Betweenness centrality
Closeness
Network theory applications
Bose-Einstein condensation: a network theory approach
Networks with certain properties
Complex network
Scale-free network
Small-world network
Small world phenomenon
Other terms
Bottleneck (network)
Blockmodeling
Network automaton
Network effect
Network flow
Pathfinder network
Scalability
Sorting network
Space syntax
Spanning tree protocol
Strategyproof
Structural cohesion
Vickrey–Clarke–Groves
Tree and hypertree networks
Examples of networks
Bayesian network
Bridges of Königsberg
Computer network
Ecological network
Electrical network
Gene regulatory network
Global shipping network
Neural network
Project network
Petri net
Semantic network
Social network
Spin network
Telecommunications network
Value network
Workflow
Metabolic network
Metabolic network modelling
Outlines of mathematics and logic
Outlines
Lists of topics | List of network theory topics | Mathematics | 184 |
60,215,472 | https://en.wikipedia.org/wiki/Aluminum%20internal%20combustion%20engine | An aluminum internal combustion engine is an internal combustion engine made mostly from aluminum metal alloys.
Many internal combustion engines use cast iron and steel extensively for their strength and low cost. Aluminum offers lighter weight at the expense of strength, hardness and often cost. However, with care it can be substituted for many of the components and is widely used. Aluminum crank cases, cylinder blocks, heads and pistons are commonplace. The first airplane engine to fly, in the Wright Flyer of 1903, had an aluminum cylinder block.
All-aluminum engines are rare, as the material is difficult to use in more highly stressed components such as connecting rods and crankshafts. The BSA A10 motorcycle engine had aluminum conrods, while the Škoda 935 Dynamic auto engine had an aluminum crankshaft.
Russian Aluminum ICE project
An aircraft engine made 90 percent from aluminum alloys was developed by scientists and engineers at Novosibirsk State Technical University (NSTU). Work on it was carried out over four years.
While working on this engine, the NSTU engineers applied developments from the Institute of Inorganic Chemistry SB RAS. The designers were assisted by the scientists Alexei Rogov and Olga Terleeva.
The crankshaft and main engine gearbox are made of aluminum. This reduces mass by 40–50 percent, while maintaining the same power, compared to conventional steel engines.
A prototype engine was tested on ordinary AI-95 gasoline. Tests ran throughout 2018 and were completed in early 2019. As a result, the high performance characteristics of the heavy-duty coating with which the aluminum parts are treated were confirmed.
According to the professor of the Aircraft and Helicopter Engineering Department of the Faculty of Aircraft of NSTU, Ilya Zverkov, this engine was developed for the aircraft Yak-52 by order of the Russian Aviation Revival Foundation, which is based at the Mochishche airfield near Novosibirsk.
References
External links
В Новосибирске собрали первый в мире двигатель из алюминия ("The world's first aluminum engine assembled in Novosibirsk", in Russian)
В Новосибирске ученые сделали и запустили первый в мире двигатель из алюминия ("In Novosibirsk, scientists built and started the world's first aluminum engine", in Russian)
Internal combustion engine | Aluminum internal combustion engine | Technology,Engineering | 499 |
14,285 | https://en.wikipedia.org/wiki/History%20of%20science%20and%20technology | The history of science and technology (HST) is a field of history that examines the development of the understanding of the natural world (science) and humans' ability to manipulate it (technology) at different points in time. This academic discipline also examines the cultural, economic, and political context and impacts of scientific practices; it likewise may study the consequences of new technologies on existing scientific fields.
Academic study of history of science
History of science is an academic discipline with an international community of specialists. Main professional organizations for this field include the History of Science Society, the British Society for the History of Science, and the European Society for the History of Science.
Much of the study of the history of science has been devoted to answering questions about what science is, how it functions, and whether it exhibits large-scale patterns and trends.
History of the academic study of history of science
Histories of science were originally written by practicing and retired scientists, starting primarily with William Whewell's History of the Inductive Sciences (1837), as a way to communicate the virtues of science to the public.
Auguste Comte proposed that there should be a specific discipline to deal with the history of science.
The development of the distinct academic discipline of the history of science and technology did not occur until the early 20th century. Historians have suggested that this was bound to the changing role of science during the same time period.
After World War I, extensive resources were put into teaching and researching the discipline, with the hope that it would help the public better understand both science and technology as they came to play an exceedingly prominent role in the world.
In the decades since the end of World War II, history of science became an academic discipline, with graduate schools, research institutes, public and private patronage, peer-reviewed journals, and professional societies.
Formation of academic departments
In the United States, a more formal study of the history of science as an independent discipline was initiated by George Sarton's publications, Introduction to the History of Science (1927) and the journal Isis (founded in 1912). Sarton exemplified the early 20th-century view of the history of science as the history of great men and great ideas. He shared with many of his contemporaries a Whiggish belief in history as a record of the advances and delays in the march of progress.
The study of the history of science continued to be a small effort until the rise of Big Science after World War II. With the work of I. Bernard Cohen at Harvard University, the history of science began to become an established subdiscipline of history in the United States.
In the United States, the influential bureaucrat Vannevar Bush, and the president of Harvard, James Conant, both encouraged the study of the history of science as a way of improving general knowledge about how science worked, and why it was essential to maintain a large scientific workforce.
Universities with history of science and technology programs
Argentina
Buenos Aires Institute of Technology, Argentina, offers courses on the history of technology and science.
National Technological University, Argentina, includes a complete history of science and technology program in its degree courses.
Australia
The University of Sydney offers both undergraduate and postgraduate programmes in the History and Philosophy of Science, run by the Unit for the History and Philosophy of Science, within the Science Faculty. Undergraduate coursework can be completed as part of either a Bachelor of Science or a Bachelor of Arts Degree. Undergraduate study can be furthered by completing an additional Honours year. For postgraduate study, the Unit offers both coursework and research-based degrees. The two course-work based postgraduate degrees are the Graduate Certificate in Science (HPS) and the Graduate Diploma in Science (HPS). The two research based postgraduate degrees are a Master of Science (MSc) and Doctor of Philosophy (PhD).
Belgium
University of Liège, has a Department called Centre d'histoire des Sciences et Techniques.
Canada
Carleton University Ottawa offers courses in Ancient Science and Technology in its Technology, Society and Environment program.
University of Toronto has a program in History and Philosophy of Science and Technology.
Huron University College offers a course in the History of Science which follows the development and philosophy of science from 10,000 BCE to the modern day.
University of King's College in Halifax, Nova Scotia has a History of Science and Technology Program.
France
Nantes University has a dedicated Department called Centre François Viète.
Paris Diderot University (Paris 7) has a Department of History and Philosophy of Science.
A CNRS research center in History and Philosophy of Science SPHERE, affiliated with Paris Diderot University, has a dedicated history of technology section.
Pantheon-Sorbonne University (Paris 1) has a dedicated Institute of History and Philosophy of Science and Technics.
The École Normale Supérieure de Paris has a history of science department.
Germany
Technische Universität Berlin, has a program in the History of Science and Technology.
The Deutsches Museum of Masterpieces of Science and Technology in Munich is one of the largest science and technology museums in the world in terms of exhibition space, with about 28,000 exhibited objects from 50 fields of science and technology.
Greece
The University of Athens has a Department of Philosophy and History of Science
India
History of science and technology is a well-developed field in India. At least three generations of scholars can be identified.
The first generation includes D. D. Kosambi, Dharmpal, Debiprasad Chattopadhyay and Rahman. The second generation mainly consists of Ashis Nandy, Deepak Kumar, Dhruv Raina, S. Irfan Habib, Shiv Visvanathan, Gyan Prakash, Stan Lourdswamy, V. V. Krishna, Itty Abraham, Richard Grove, Kavita Philip, Mira Nanda and Rob Anderson. There is an emergent third generation that includes scholars like Abha Sur and Jahnavi Phalkey.
Departments and Programmes
The National Institute of Science, Technology and Development Studies had a research group active in the 1990s which consolidated social history of science as a field of research in India.
Currently there are several institutes and university departments offering HST programmes.
Jawaharlal Nehru University has an MPhil–PhD program that offers specialisation in Social History of Science, housed in the History of Science and Education group of the Zakir Husain Centre for Educational Studies (ZHCES) in the School of Social Sciences. Renowned Indian science historians Deepak Kumar and Dhruv Raina teach here. The university's Centre for Studies in Science Policy also has an MPhil–PhD program that offers specialisation in Science, Technology, and Society along with various allied subdisciplines.
Central University of Gujarat has an MPhil-PhD programme in Studies in Science, Technology & Innovation Policy at the Centre for Studies in Science, Technology & Innovation Policy (CSSTIP), where Social History of Science and Technology in India is a major emphasis for research and teaching.
Banaras Hindu University has programs: one in History of Science and Technology at the Faculty of Science and one in Historical and Comparative Studies of the Sciences and the Humanities at the Faculty of Humanities.
Andhra University has made History of Science and Technology a compulsory subject for all first-year B.Tech students.
Israel
Tel Aviv University. The Cohn Institute for the History and Philosophy of Science and Ideas is a research and graduate teaching institute within the framework of the School of History of Tel Aviv University.
Bar-Ilan University has a graduate program in Science, Technology, and Society.
Japan
Kyoto University has a program in the Philosophy and History of Science.
Tokyo Institute of Technology has a program in the History, Philosophy, and Social Studies of Science and Technology.
The University of Tokyo has a program in the History and Philosophy of Science.
Netherlands
Utrecht University, has two co-operating programs: one in History and Philosophy of Science at the Faculty of Natural Sciences and one in Historical and Comparative Studies of the Sciences and the Humanities at the Faculty of Humanities.
Poland
Institute for the History of Science of the Polish Academy of Sciences offers PhD programmes and habilitation degrees in the fields of History of Science, Technology and Ideas.
Russia
Spain
University of the Basque Country offers a master's degree and PhD programme in History and Philosophy of Science and, since 1952, has run THEORIA. An International Journal for Theory, History and Foundations of Science. The university also sponsors the Basque Museum of the History of Medicine and Science, the only open museum of the history of science in Spain, which in the past also offered PhD courses.
Universitat Autònoma de Barcelona, offers a master's degree and PhD programme in HST together with the Universitat de Barcelona.
Universitat de València, offers a master's degree and PhD programme in HST together with the Consejo Superior de Investigaciones Científicas.
Sweden
Linköpings universitet, has a Science, Technology, and Society program which includes HST.
Switzerland
University of Bern, has an undergraduate and a graduate program in the History and Philosophy of Science.
Ukraine
State University of Infrastructure and Technologies has a Department of Philosophy and History of Science and Technology.
United Kingdom
University of Bristol has a masters and PhD program in the Philosophy and History of Science.
University of Cambridge has an undergraduate course and a large masters and PhD program in the History and Philosophy of Science (including the History of Medicine).
University of Durham has several undergraduate History of Science modules in the Philosophy department, as well as Masters and PhD programs in the discipline.
University of Kent has a Centre for the History of the Sciences, which offers Masters programmes and undergraduate modules.
University College London's Department of Science and Technology Studies offers undergraduate programme in History and Philosophy of Science, including two BSc single honour degrees (UCAS V550 and UCAS L391), plus both major and minor streams in history, philosophy and social studies of science in UCL's Natural Sciences programme. The department also offers MSc degrees in History and Philosophy of Science and in the study of contemporary Science, Technology, and Society. An MPhil/PhD research degree is offered, too. UCL also contains a Centre for the History of Medicine. This operates a small teaching programme in History of Medicine.
University of Leeds has both undergraduate and graduate programmes in History and Philosophy of Science in the Department of Philosophy.
University of Manchester offers undergraduate modules and postgraduate study in History of Science, Technology and Medicine and is sponsored by the Wellcome Trust.
University of Oxford has a one-year graduate course in 'History of Science: Instruments, Museums, Science, Technology' associated with the Museum of the History of Science.
The London Centre for the History of Science, Medicine, and Technology – this Centre closed in 2013. It was formed in 1987 and ran a taught MSc programme, jointly taught by University College London's Department of Science and Technology Studies and Imperial College London. The Masters programme transferred to UCL.
United States
Academic study of the history of science as an independent discipline was launched by George Sarton at Harvard with his book Introduction to the History of Science (1927) and the Isis journal (founded in 1912). Sarton exemplified the early 20th-century view of the history of science as the history of great men and great ideas. He shared with many of his contemporaries a Whiggish belief in history as a record of the advances and delays in the march of progress. The history of science was not a recognized subfield of American history in this period, and most of the work was carried out by interested scientists and physicians rather than professional historians. With the work of I. Bernard Cohen at Harvard, the history of science became an established subdiscipline of history after 1945.
Arizona State University's Center for Biology and Society offers several paths for MS or PhD students who are interested in issues surrounding the history and philosophy of science.
California Institute of Technology offers courses in the History and Philosophy of Science to fulfill its core humanities requirements.
Case Western Reserve University has an undergraduate interdisciplinary program in the History and Philosophy of Science and a graduate program in the History of Science, Technology, Environment, and Medicine (STEM).
Cornell University offers a variety of courses within its Science and Technology Studies program.
Georgia Institute of Technology has an undergraduate and graduate program in the History of Technology and Society.
Harvard University has an undergraduate and graduate program in History of Science
Indiana University offers undergraduate courses and a masters and PhD program in the History and Philosophy of Science.
Johns Hopkins University has an undergraduate and graduate program in the History of Science, Medicine, and Technology.
Lehigh University offers an undergraduate level STS concentration (founded in 1972) and a graduate program with emphasis on the History of Industrial America.
Massachusetts Institute of Technology has a Science, Technology, and Society program which includes HST.
Michigan State University offers an undergraduate major and minor in History, Philosophy, and Sociology of Science through its Lyman Briggs College.
New Jersey Institute of Technology has a Science, Technology, and Society program which includes the History of Science and Technology
Oregon State University offers a Masters and Ph.D. in History of Science through its Department of History.
Princeton University has a program in the History of Science.
Rensselaer Polytechnic Institute has a Science and Technology Studies department
Rutgers has a graduate Program in History of Science, Technology, Environment, and Health.
Stanford has a History and Philosophy of Science and Technology program.
Stevens Institute of Technology has an undergraduate and graduate program in the History of Science.
University of California, Berkeley offers a graduate degree in HST through its History program, and maintains a separate sub-department for the field.
University of California, Los Angeles has a relatively large group of History of Science and Medicine faculty and graduate students within its History department, and also offers an undergraduate minor in the History of Science.
University of California, Santa Barbara has an interdisciplinary graduate program emphasis in Technology & Society through the Center for Information Technology & Society.
University of Chicago offers a B.A. program in the History, Philosophy, and Social Studies of Science and Medicine as well as M.A. and Ph.D. degrees through its Committee on the Conceptual and Historical Studies of Science.
The Graduate Program in 'History of Science, Technology, and Medicine' at the University of Florida provides undergraduate and graduate degrees.
University of Minnesota has a Ph.D. program in History of Science, Technology, and Medicine as well as undergraduate courses in these fields.
University of Oklahoma has an undergraduate minor and a graduate degree program in History of Science.
University of Pennsylvania has a program in History and Sociology of Science.
University of Pittsburgh's Department of History and Philosophy of Science offers graduate and undergraduate courses.
University of Puget Sound has a Science, Technology, and Society program, which includes the history of Science and Technology.
University of Wisconsin–Madison has a program in History of Science, Medicine and Technology. It offers M.A. and Ph.D. degrees as well as an undergraduate major.
Wesleyan University has a Science in Society program.
Yale University has a program in the History of Science and Medicine.
Prominent historians of the field
Wiebe Bijker
Peter J. Bowler
Janet Browne
Stephen G. Brush
James Burke
Edwin Arthur Burtt (1892–1989)
Johann Beckmann (1739–1811)
Jim Bennett
Herbert Butterfield (1900–1979)
Martin Campbell-Kelly
Georges Canguilhem (1904–1995)
Allan Chapman
I. Bernard Cohen (1914–2003)
A. C. Crombie (1915–1996)
E. J. Dijksterhuis (1892–1965)
A. G. Drachmann (1891–1980)
Pierre Duhem (1861–1916)
A. Hunter Dupree (1921–2019)
George Dyson
Jacques Ellul (1912–1994)
Eugene S. Ferguson (1916–2004)
Peter Galison
Sigfried Giedion
Charles Coulston Gillispie
Robert Gunther (1869–1940)
Paul Forman (historian)
Donna Haraway
Peter Harrison
Ahmad Y Hassan
John L. Heilbron
Boris Hessen
Reijer Hooykaas
David A. Hounshell
Thomas P. Hughes
Evelyn Fox Keller
Daniel Kevles
Alexandre Koyré (1892–1964)
Melvin Kranzberg
Thomas Kuhn
Deepak Kumar
Gilbert LaFreniere
Bruno Latour
David C. Lindberg
G. E. R. Lloyd
Jane Maienschein
Anneliese Maier
Leo Marx
Lewis Mumford (1895–1990)
John E. Murdoch (1927–2010)
Otto Neugebauer (1899–1990)
William R. Newman
David Noble
Ronald Numbers
David E. Nye
Abraham Pais (1918–2000)
Trevor Pinch
Theodore Porter
Lawrence M. Principe
Raúl Rojas
Michael Ruse
A. I. Sabra
Jan Sapp
George Sarton (1884–1956)
Simon Schaffer
Howard Segal (1948–2020)
Steven Shapin
Wolfgang Schivelbusch
Charles Singer (1876–1960)
Merritt Roe Smith
Stephen Snobelen
M. Norton Wise
Frances A. Yates (1899–1981)
Journals and periodicals
Annals of Science
The British Journal for the History of Science
Centaurus
Dynamis
History and Technology (magazine)
History of Science and Technology (journal)
History of Technology (book series)
Historical Studies in the Physical and Biological Sciences (HSPS)
Historical Studies in the Natural Sciences (HSNS)
HoST - Journal of History of Science and Technology
ICON
IEEE Annals of the History of Computing
Isis
Journal of the History of Biology
Journal of the History of Medicine and Allied Sciences
Notes and Records of the Royal Society
Osiris
Science & Technology Studies
Science in Context
Science, Technology, & Human Values
Social History of Medicine
Social Studies of Science
Technology and Culture
Transactions of the Newcomen Society
Historia Mathematica
Bulletin of the Scientific Instrument Society
See also
History of science
History of technology
Ancient Egyptian technology
History of science and technology in China
History of science and technology in Japan
History of science and technology in France
History of science and technology in the Indian subcontinent
Mesopotamian science
Productivity improving technologies (historical)
Science and technology in Argentina
Science and technology in Canada
Science and technology in Iran
Science and technology in the United States
Science in the medieval Islamic world
Science tourism
Technological and industrial history of the United States
Timeline of science and engineering in the Islamic world
Professional societies
The British Society for the History of Science (BSHS)
History of Science Society (HSS)
Newcomen Society
Society for the History of Technology (SHOT)
Society for the Social Studies of Science (4S)
Scientific Instrument Society
References
Bibliography
Historiography of science
H. Floris Cohen, The Scientific Revolution: A Historiographical Inquiry, University of Chicago Press 1994 – Discussion on the origins of modern science has been going on for more than two hundred years. Cohen provides an excellent overview.
Ernst Mayr, The Growth of Biological Thought, Belknap Press 1985
Michel Serres,(ed.), A History of Scientific Thought, Blackwell Publishers 1995
Companion to Science in the Twentieth Century, John Krige (Editor), Dominique Pestre (Editor), Taylor & Francis 2003, 941pp
The Cambridge History of Science, Cambridge University Press
Volume 4, Eighteenth-Century Science, 2003
Volume 5, The Modern Physical and Mathematical Sciences, 2002
History of science as a discipline
J. A. Bennett, 'Museums and the Establishment of the History of Science at Oxford and Cambridge', British Journal for the History of Science 30, 1997, 29–46
Dietrich von Engelhardt, Historisches Bewußtsein in der Naturwissenschaft : von der Aufklärung bis zum Positivismus, Freiburg [u.a.] : Alber, 1979
A.-K. Mayer, 'Setting up a Discipline: Conflicting Agendas of the Cambridge History of Science Committee, 1936–1950.' Studies in History and Philosophy of Science, 31, 2000
Science and technology
Technological change
Technology
Technology systems
History by topic | History of science and technology | Technology,Engineering | 4,029 |
48,892,446 | https://en.wikipedia.org/wiki/Automatic%20bug%20fixing | Automatic bug-fixing is the automatic repair of software bugs without the intervention of a human programmer. It is also commonly referred to as automatic patch generation, automatic bug repair, or automatic program repair. The typical goal of such techniques is to automatically generate correct patches to eliminate bugs in software programs without causing software regression.
Specification
Automatic bug fixing is performed according to a specification of the expected behavior, which can be, for instance, a formal specification or a test suite.
A test suite – input/output pairs that specify the functionality of the program, possibly captured in assertions – can be used as a test oracle to drive the search. This oracle can in fact be divided between the bug oracle, which exposes the faulty behavior, and the regression oracle, which encapsulates the functionality any program repair method must preserve. Note that a test suite is typically incomplete and does not cover all possible cases. Therefore, it is often possible for a validated patch to produce expected outputs for all inputs in the test suite but incorrect outputs for other inputs. The existence of such validated but incorrect patches is a major challenge for generate-and-validate techniques. Recent successful automatic bug-fixing techniques often rely on additional information beyond the test suite, such as information learned from previous human patches, to further identify correct patches among validated patches.
Another way to specify the expected behavior is to use formal specifications. Verification against full specifications that capture the whole program behavior, including functionality, is less common because such specifications are typically not available in practice and the computational cost of such verification is prohibitive. For specific classes of errors, however, implicit partial specifications are often available. For example, there are targeted bug-fixing techniques validating that the patched program can no longer trigger overflow errors on the same execution path.
Techniques
Generate-and-validate
Generate-and-validate approaches compile and test each candidate patch to collect all validated patches that produce expected outputs for all inputs in the test suite. Such a technique typically starts with a test suite of the program, i.e., a set of test cases, at least one of which exposes the bug. An early generate-and-validate bug-fixing system is GenProg. The effectiveness of generate-and-validate techniques remains controversial, because they typically do not provide patch correctness guarantees. Nevertheless, the reported results of recent state-of-the-art techniques are generally promising. For example, on 69 systematically collected real-world bugs in eight large C software programs, the state-of-the-art bug-fixing system Prophet generates correct patches for 18 of the 69 bugs.
One way to generate candidate patches is to apply mutation operators on the original program. Mutation operators manipulate the original program, potentially via its abstract syntax tree representation, or a more coarse-grained representation such as operating at the statement-level or block-level. Earlier genetic improvement approaches operate at the statement level and carry out simple delete/replace operations such as deleting an existing statement or replacing an existing statement with another statement in the same source file. Recent approaches use more fine-grained operators at the abstract syntax tree level to generate more diverse set of candidate patches. Notably, the statement deletion mutation operator, and more generally removing code, is a reasonable repair strategy, or at least a good fault localization strategy.
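To make the generate-and-validate loop concrete, here is a minimal, hypothetical sketch in Python (the buggy function, test suite, and single-operator mutation are invented for illustration; this is not GenProg or any tool named here):

```python
import ast  # ast.unparse requires Python 3.9+

BUGGY = "def max2(a, b):\n    return a if a < b else b\n"  # bug: '<' should be '>'
TESTS = [((3, 1), 3), ((1, 3), 3), ((2, 2), 2)]  # at least one test exposes the bug

class FlipLt(ast.NodeTransformer):
    """Mutation operator: replace a '<' comparison with '>' (one AST-level edit)."""
    def visit_Compare(self, node):
        if isinstance(node.ops[0], ast.Lt):
            node.ops[0] = ast.Gt()
        return node

def validate(src):
    env = {}
    exec(src, env)  # compile the candidate patch...
    return all(env["max2"](*args) == want for args, want in TESTS)  # ...run the tests

candidate = ast.unparse(FlipLt().visit(ast.parse(BUGGY)))  # generate
print("plausible patch found" if validate(candidate) else "rejected")  # validate
```

A real system would enumerate many mutation sites and operators and iterate until some candidate passes the whole suite.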
Another way to generate candidate patches consists of using fix templates. Fix templates are typically predefined changes for fixing specific classes of bugs. Examples of fix templates include inserting a conditional statement to check whether the value of a variable is null to fix null pointer exception, or changing an integer constant by one to fix off-by-one errors.
Synthesis-based
Repair techniques exist that are based on symbolic execution. For example, SemFix uses symbolic execution to extract a repair constraint. Angelix introduced the concept of an angelic forest in order to deal with multiline patches.
Under certain assumptions, it is possible to state the repair problem as a synthesis problem; a minimal sketch follows the list below.
SemFix uses component-based synthesis.
Dynamoth uses dynamic synthesis.
S3 is based on syntax-guided synthesis.
SearchRepair converts potential patches into an SMT formula and queries candidate patches that allow the patched program to pass all supplied test cases.
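A minimal sketch of the synthesis formulation, assuming the z3-solver Python bindings (the off-by-one example and the symbolic "hole" are illustrative inventions; this is not SemFix, Dynamoth, S3, or SearchRepair):

```python
from z3 import Int, Solver, sat  # pip install z3-solver

hole = Int("hole")  # symbolic replacement for a suspicious constant

def patched_bound(n):
    # The original buggy expression was 'n - 1' (off by one); the repair
    # candidate is 'n + hole', with the constant left for the solver to find.
    return n + hole

solver = Solver()
for n, expected in [(3, 3), (5, 5), (10, 10)]:  # test oracle: bound must equal n
    solver.add(patched_bound(n) == expected)

if solver.check() == sat:
    print("synthesized constant:", solver.model()[hole])  # -> 0, i.e. use plain 'n'
```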
Data-driven
Machine learning techniques can improve the effectiveness of automatic bug-fixing systems. One example of such techniques learns from past successful patches from human developers collected from open-source repositories on GitHub and SourceForge. It then uses the learned information to recognize and prioritize potentially correct patches among all generated candidate patches. Alternatively, patches can be directly mined from existing sources. Example approaches include mining patches from donor applications or from QA web sites.
Getafix is a language-agnostic approach developed and used in production at Facebook. Given a sample of code commits in which engineers fixed a certain kind of bug, it learns human-like fix patterns that apply to future bugs of the same kind. Besides using Facebook's own code repositories as training data, Getafix learnt some fixes from open-source Java repositories. When new bugs are detected, Getafix applies its previously learnt patterns to produce candidate fixes and ranks them within seconds. It presents only the top-ranked fix for final validation by tools or an engineer, in order to save resources and, ideally, fix the bug before any human time is spent on it.
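A drastically simplified, hypothetical sketch of the data-driven idea (the mined history and the token-based pattern extraction are invented for illustration; neither Prophet nor Getafix works this way in detail):

```python
from collections import Counter

# Historical (buggy, fixed) expression pairs, e.g. mined from past commits.
history = [("a < b", "a <= b"), ("i < n", "i <= n"), ("x == None", "x is None")]

def op_change(old, new):
    # Extract the operator change as a crude "fix pattern".
    return (old.split()[1], new.split()[1])

patterns = Counter(op_change(old, new) for old, new in history)

def rank(candidates):
    # Prefer candidates whose operator change was seen most often before.
    return sorted(candidates, key=lambda c: -patterns[op_change(*c)])

best = rank([("j < m", "j != m"), ("j < m", "j <= m")])[0]
print(best)  # -> ('j < m', 'j <= m'), the statistically favored patch
```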
Template-based repair
For specific classes of errors, targeted automatic bug-fixing techniques use specialized templates:
null pointer exception repair with insertion of a conditional statement to check whether the value of a variable is null.
integer overflow repair
buffer overflow repair
memory leak repair, with automated insertion of missing memory deallocation statements.
Compared to generate-and-validate techniques, template-based techniques tend to have better bug-fixing accuracy but a much narrower scope; a toy instantiation of the null-check template is sketched below.
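The following applies the null-check fix template to a Python AST purely for illustration (real template-based tools such as PAR operate on Java code; the variable name and the flagged statement are assumptions):

```python
import ast

SRC = "name = user.name\nprint(name)\n"  # first statement may crash when user is None

tree = ast.parse(SRC)
flagged = tree.body[0]  # statement identified by fault localization (assumed)
guard = ast.If(
    test=ast.Compare(
        left=ast.Name(id="user", ctx=ast.Load()),
        ops=[ast.IsNot()],
        comparators=[ast.Constant(value=None)],
    ),
    body=[flagged],  # template: wrap the flagged statement in a null check
    orelse=[],
)
tree.body[0] = guard
print(ast.unparse(ast.fix_missing_locations(tree)))
# if user is not None:
#     name = user.name
# print(name)
```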
Use
There are multiple uses of automatic bug fixing:
In a development environment: When encountering a bug, the developer activates a feature to search for a patch (for instance by clicking on a button). This search can also happen in the background, when the IDE proactively searches for solutions to potential problems, without waiting for explicit action from the developer.
At runtime: When a failure happens at runtime, a binary patch can be searched for and applied online. An example of such a repair system is ClearView, which does repair on x86 code, with x86 binary patches.
Search space
In essence, automatic bug fixing is a search activity, whether deductive-based or heuristic-based. The search space of automatic bug fixing is composed of all edits that can possibly be made to a program. There have been studies to understand the structure of this search space. Qi et al. showed that the original fitness function of GenProg is no better than random search at driving the search. Long et al.'s study indicated that correct patches can be considered sparse in the search space and that incorrect overfitting patches are vastly more abundant (see also the discussion about overfitting below).
Overfitting
Sometimes, in test-suite-based program repair, tools generate patches that pass the test suite yet are actually incorrect; this is known as the "overfitting" problem. "Overfitting" in this context refers to the fact that the patch overfits to the test inputs. There are different kinds of overfitting: incomplete fixing means that only some buggy inputs are fixed, while regression introduction means that some previously working features break after the patch (because they were poorly tested). Early prototypes for automatic repair suffered heavily from overfitting: on the ManyBugs C benchmark, Qi et al. reported that 104 of 110 plausible GenProg patches were overfitting. In the context of synthesis-based repair, Le et al. obtained more than 80% overfitting patches.
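A hypothetical illustration of an overfitting patch (the function and the deliberately weak test suite are invented for the example):

```python
TESTS = [(-2, 2), (3, 3)]  # a weak test suite for an absolute-value function

def patched_abs(x):
    # An overfitting candidate a search procedure might produce:
    # it memorizes the failing test instead of fixing the logic.
    return 2 if x == -2 else x

assert all(patched_abs(x) == want for x, want in TESTS)  # validated by the suite...
print(patched_abs(-5))  # ...but wrong on unseen input: prints -5, not 5
```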
One way to avoid overfitting is to filter out the generated patches. This can be done based on dynamic analysis.
Alternatively, Tian et al. propose heuristic approaches to assess patch correctness.
Limitations of automatic bug-fixing
Automatic bug-fixing techniques that rely on a test suite do not provide patch correctness guarantees, because the test suite is incomplete and does not cover all cases. A weak test suite may cause generate-and-validate techniques to produce validated but incorrect patches that have negative effects such as eliminating desirable functionalities, causing memory leaks, and introducing security vulnerabilities. One possible approach is to amplify the failing test suite by automatically generating further test cases that are then labelled as passing or failing. To minimize the human labelling effort, an automatic test oracle can be trained that gradually learns to automatically classify test cases as passing or failing and only engages the bug-reporting user for uncertain cases.
A limitation of generate-and-validate repair systems is the search space explosion. For a program, there are a large number of statements to change and for each statement there are a large number of possible modifications. State-of-the-art systems address this problem by assuming that a small modification is enough for fixing a bug, resulting in a search space reduction.
The limitation of approaches based on symbolic analysis is that real world programs are often converted to intractably large formulas especially for modifying statements with side effects.
Benchmarks
Benchmarks of bugs typically focus on one specific programming language.
In C, the ManyBugs benchmark collected by the GenProg authors contains 69 real-world defects, and it is widely used to evaluate many other bug-fixing tools for C.
In Java, the main benchmark is Defects4J now extensively used in most research papers on program repair for Java. Alternative benchmarks exist, such as the Quixbugs benchmark, which contains original bugs for program repair. Other benchmarks of Java bugs include Bugs.jar, based on past commits.
Example tools
Automatic bug-fixing is an active research topic in computer science. There are many implementations of various bug-fixing techniques especially for C and Java programs. Note that most of these implementations are research prototypes for demonstrating their techniques, i.e., it is unclear whether their current implementations are ready for industrial usage or not.
C
ClearView: A generate-and-validate tool of generating binary patches for deployed systems. It is evaluated on 10 security vulnerability cases. A later study shows that it generates correct patches for at least 4 of the 10 cases.
GenProg: A seminal generate-and-validate bug-fixing tool. It has been extensively studied in the context of the ManyBugs benchmark.
SemFix: The first solver-based bug-fixing tool for C.
CodePhage: The first bug-fixing tool that directly transfers code across programs to generate patches for C programs. Note that although it generates C patches, it can extract code from binary programs without source code.
LeakFix: A tool that automatically fixes memory leaks in C programs.
Prophet: The first generate-and-validate tool that uses machine learning techniques to learn useful knowledge from past human patches in order to recognize correct patches. It is evaluated on the same benchmark as GenProg and generates correct patches (i.e., equivalent to human patches) for 18 out of 69 cases.
SearchRepair: A tool for replacing buggy code using snippets of code from elsewhere. It is evaluated on the IntroClass benchmark and generates much higher quality patches on that benchmark than GenProg, RSRepair, and AE.
Angelix: An improved solver-based bug-fixing tool. It is evaluated on the GenProg benchmark. For 10 of the 69 cases, it generates patches that are equivalent to human patches.
Learn2Fix: The first human-in-the-loop semi-automatic repair tool. Extends GenProg to learn the condition under which a semantic bug is observed by systematic queries to the user who is reporting the bug. Only works for programs that take and produce integers.
Java
PAR: A generate-and-validate tool that uses a set of manually defined fix templates.
QACrashFix: A tool that fixes Java crash bugs by mining fixes from Q&A web site.
ARJA: A repair tool for Java based on multi-objective genetic programming.
NpeFix: An automatic repair tool for NullPointerException in Java, available on Github.
Other languages
AutoFixE: A bug-fixing tool for the Eiffel language. It relies on the contracts (i.e., a form of formal specification) in Eiffel programs to validate generated patches.
Getafix: Operates purely on AST transformations and thus requires only a parser and formatter. At Facebook it has been applied to Hack, Java and Objective-C.
Proprietary
DeepCode integrates public and private GitHub, GitLab and Bitbucket repositories to identify code-fixes and improve software.
Kodezi utilizes open-source data from GitHub repositories and Stack Overflow, together with privately trained models, to analyze code and instantly provide solutions and descriptions for coding bugs.
References
External links
Datasets, tools, etc., related to automated program repair research.
Debugging
Software development | Automatic bug fixing | Technology,Engineering | 2,735 |
7,657,737 | https://en.wikipedia.org/wiki/Wet%20lab | A wet lab, or experimental lab, is a type of laboratory where it is necessary to handle various types of chemicals and potential "wet" hazards, so the room has to be carefully designed, constructed, and controlled to avoid spillage and contamination.
A dry lab might have large experimental equipment but minimal chemicals, or instruments for analyzing data produced elsewhere.
Overview
A wet lab is a type of laboratory in which a wide range of experiments are performed, for example, characterization of enzymes in biology, titration in chemistry, diffraction of light in physics, etc. – all of which may sometimes involve dealing with hazardous substances. Due to the nature of these experiments, the appropriate arrangement of safety equipment is of great importance.
The researchers (the occupants) are required to know basic laboratory techniques including safety procedures and techniques related to the experiments that they perform.
Laboratory design
At present, lab design tends to focus on increasing interaction between researchers through the use of open plans, giving researchers the space and opportunity to exchange ideas, share equipment, and share storage space, thereby increasing the productivity and efficiency of experiments. This style of design has been proposed to support team-based work, though more compartmentalised or individual spaces are still important for some types of processes which require separate or isolated space, such as electron microscopes, tissue cultures, or work and workers that may be disturbed by noise.
Flexibility of laboratory design should also be promoted: for example, walls and ceilings should be removable in case of expansion or contraction, and pipes, tubes and fume hoods should likewise be removable for future expansion, reallocation or change of use. A well-thought-through design will ensure that a lab can be adjusted for any future use. The sustainability of resources is also a concern, so the amount of resources and energy used in the lab should be reduced where possible to protect the environment, while still yielding the same products.
As a laboratory consists of many areas such as wet lab, dry lab and office areas, wet labs should be separated from other spaces using controlling devices or dividers to prevent cross-contamination or spillage.
Due to the nature of processes used in wet labs, the environmental conditions may need to be carefully considered and controlled using a cleanroom system.
See also
Wet chemistry
References
Laboratory types
Science experiments | Wet lab | Chemistry | 470 |
42,552,765 | https://en.wikipedia.org/wiki/LibreSSL | LibreSSL is an open-source implementation of the Transport Layer Security (TLS) protocol. The implementation is named after Secure Sockets Layer (SSL), the deprecated predecessor of TLS, for which support was removed in release 2.3.0. The OpenBSD project forked LibreSSL from OpenSSL 1.0.1g in April 2014 as a response to the Heartbleed security vulnerability, with the goals of modernizing the codebase, improving security, and applying development best practices.
History
After the Heartbleed security vulnerability was discovered in OpenSSL, the OpenBSD team audited the codebase and decided it was necessary to fork OpenSSL to remove dangerous code. The libressl.org domain was registered on 11 April 2014; the project announced the name on 22 April 2014. In the first week of development, more than 90,000 lines of C code were removed. Unused code was removed, and support for obsolete operating systems (Classic Mac OS, NetWare, OS/2, 16-bit Windows) and some older operating systems (OpenVMS) was removed.
LibreSSL was initially developed as an intended replacement for OpenSSL in OpenBSD 5.6, and was ported to other platforms once a stripped-down version of the library was stable. At the time, the project was seeking a "stable commitment" of external funding. On 17 May 2014, Bob Beck presented "LibreSSL: The First 30 Days, and What The Future Holds" during the 2014 BSDCan conference, in which he described the progress made in the first month. On 5 June 2014, several OpenSSL bugs became public. While several projects were notified in advance, LibreSSL was not; Theo de Raadt accused the OpenSSL developers of intentionally withholding this information from OpenBSD and LibreSSL.
On 20 June 2014, Google created another fork of OpenSSL called BoringSSL, and promised to exchange fixes with LibreSSL. Google has already relicensed some of its contributions under the ISC license, as it was requested by the LibreSSL developers. On 21 June 2014, Theo de Raadt welcomed BoringSSL and outlined the plans for LibreSSL-portable. Starting on 8 July, code porting for macOS and Solaris began, while the initial porting to Linux began on 20 June.
As of 2021, OpenBSD uses LibreSSL as its primary TLS library. Alpine Linux supported LibreSSL as its primary TLS library for three years, until release 3.9.0 in January 2019. Gentoo supported LibreSSL until February 2021. Python 3.10 dropped LibreSSL support, which had existed since Python 3.4.3 (2015).
Adoption
LibreSSL is the default provider of TLS for:
Dragonfly BSD
OpenBSD
Hyperbola GNU/Linux-libre
OpenSSH on Windows
LibreSSL is the default provider of TLS for these now-discontinued systems:
OpenELEC
TrueOS packages
LibreSSL is a selectable provider of TLS for:
FreeBSD packages
Gentoo packages (support dropped as of February 2021)
OPNsense packages (will be dropped after 22.7)
macOS
Changes
Memory-related
Changes include the replacement of custom memory calls with ones from the standard library (for example, strlcpy, calloc, asprintf, reallocarray, etc.). This process may later help to catch buffer overflow errors with more advanced memory analysis tools or by observing program crashes (via ASLR, use of the NX bit, stack canaries, etc.).
Fixes for potential double-free scenarios have also been cited in the VCS commit logs (including explicit assignments of null pointer values). The commit logs also cite extra sanity checks related to ensuring length arguments, unsigned-to-signed variable assignments, pointer values, and method returns.
Proactive measures
In order to maintain good programming practice, a number of compiler options and flags designed for safety have been enabled by default to help in spotting potential issues so they can be fixed earlier (-Wall, -Werror, -Wextra, -Wuninitialized). There have also been code readability updates which help future contributors in verifying program correctness (KNF, white-space, line-wrapping, etc.). Modification or removal of unneeded method wrappers and macros also help with code readability and auditing (Error and I/O abstraction library references).
Changes were made to ensure that LibreSSL will be year-2038 compatible, along with maintaining portability for other similar platforms. In addition, explicit_bzero and bn_clear calls were added to clear sensitive data from previously allocated memory in a way the compiler cannot optimize out, preventing attackers from reading it.
Cryptographic
Changes were made to ensure the proper seeding of random-number-generator-based methods, replacing insecure seeding practices with features offered natively by the kernel itself. In terms of notable additions, OpenBSD has added support for newer and more reputable algorithms (the ChaCha stream cipher and the Poly1305 message authentication code) along with a safer set of elliptic curves (the Brainpool curves from RFC 5639, up to 512 bits in strength).
Added features
The initial release of LibreSSL added a number of features: the ChaCha and Poly1305 algorithm, the Brainpool and ANSSI elliptic curves, and the AES-GCM and ChaCha20-Poly1305 AEAD modes.
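For illustration, the ChaCha20-Poly1305 AEAD construction can be exercised from Python using the third-party cryptography package (this demonstrates the cipher construction generically; it is not LibreSSL's own API):

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import ChaCha20Poly1305

key = ChaCha20Poly1305.generate_key()  # 256-bit key
aead = ChaCha20Poly1305(key)
nonce = os.urandom(12)                 # 96-bit nonce; must never be reused per key

# Encryption authenticates both the plaintext and the associated data.
ct = aead.encrypt(nonce, b"hello", b"header")  # ciphertext plus 16-byte Poly1305 tag
assert aead.decrypt(nonce, ct, b"header") == b"hello"  # tampering raises InvalidTag
```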
Later versions added the following:
2.1.0: Automatic ephemeral EC keys.
2.1.2: Built-in arc4random implementation on macOS and FreeBSD.
2.1.2: Reworked GOST cipher suite support.
2.1.3: ALPN support.
2.1.3: Support for SHA-256 and Camellia cipher suites.
2.1.4: TLS_FALLBACK_SCSV server-side support.
2.1.4: certhash as a replacement of the c_rehash script.
2.1.4: X509_STORE_load_mem API for loading certificates from memory (enhance chroot support).
2.1.4: Experimental Windows binaries.
2.1.5: Minor update mainly for improving Windows support, first working 32- and 64-bit binaries.
2.1.6: declared stable and enabled by default.
2.2.0: AIX and Cygwin support.
2.2.1: Addition of EC_curve_nid2nist and EC_curve_nist2nid from OpenSSL, initial Windows XP/2003 support.
2.2.2: Defines LIBRESSL_VERSION_NUMBER, added TLS_*methods as a replacement for the SSLv23_*method calls, cmake build support.
Old insecure features
The initial release of LibreSSL disabled a number of features by default. Some of the code for these features was later removed, including Kerberos, US-Export ciphers, TLS compression, DTLS heartbeat, SSL v2 and SSL v3.
Later versions disabled more features:
2.1.1: Following the discovery of the POODLE vulnerability in the legacy SSL 3.0 protocol, LibreSSL now disables the use of SSL 3.0 by default.
2.1.3: GOST R 34.10-94 signature authentication.
2.2.1: Removal of Dynamic Engine and MDC-2DES support
2.2.2: Removal of SSL 3.0 from the openssl binary, removal of Internet Explorer 6 workarounds, RSAX engine.
2.3.0: Complete removal of SSL 3.0, SHA-0 and DTLS1_BAD_VER.
Code removal
The initial release of LibreSSL has removed a number of features that were deemed insecure, unnecessary or deprecated as part of OpenBSD 5.6.
In response to Heartbleed, the heartbeat functionality was one of the first features to be removed.
Support for obsolete platforms (Classic Mac OS, NetWare, OS/2, 16-bit Windows) were removed.
Support for some older platforms (OpenVMS) was removed.
Support for platforms that do not exist, such as big-endian i386 and amd64.
Support for old compilers.
The IBM 4758, Broadcom ubsec, Sureware, Nuron, GOST, GMP, CSwift, CHIL, CAPI, Atalla and AEP engines were removed due to irrelevance of hardware or dependency on non-free libraries.
The OpenSSL PRNG was removed (and replaced with ChaCha20-based implementation of arc4random).
Preprocessor macros that have been deemed unnecessary or insecure or had already been deprecated in OpenSSL for a long time (e.g. des_old.h).
Older unneeded files for assembly language, C, and Perl (e.g. EGD).
MD2, SEED functionality.
SSL 3.0, SHA-0, DTLS1_BAD_VER
The Dual EC DRBG algorithm, which is suspected of having a back door, was cut, along with support for the FIPS 140-2 standard that required it. Unused protocols and insecure algorithms have also been removed, including support for FIPS 140-2, MD4/MD5, J-PAKE, and SRP.
Bug backlog
One of the complaints of OpenSSL was the number of open bugs reported in the bug tracker that had gone unfixed for years. Older bugs are now being fixed in LibreSSL.
See also
Comparison of TLS implementations
Comparison of cryptography libraries
OpenSSH
wolfSSH
References
External links
LibreSSL and source code (OpenGrok)
2014 software
C (programming language) libraries
Cryptographic software
Free security software
Free software programmed in C
OpenBSD
Software forks
Transport Layer Security implementation | LibreSSL | Mathematics | 2,095 |
18,877,021 | https://en.wikipedia.org/wiki/Vav%20%28protein%29 | Vav is a family of proteins involved in cell signalling. They act as guanine nucleotide exchange factors (GEFs) for small G proteins of the Rho family. GEF activity is mediated via module of tandem DH-PH domains. Vav proteins also appear to exhibit GEF-independent functions. Although it was originally thought that Vav proteins would only be present in multicellular organisms, Vav family proteins have been observed in Choanoflagellates.
Function
Some functions of the Vav proteins are important for the immune system. In particular, Vav proteins can change the cytoskeletal structure of lymphocytes, which is used to "aim" cytokine release towards bound pathogens or cells.
In humans there are three Vav proteins:
Vav1
Vav2
Vav3
References
Protein families | Vav (protein) | Chemistry,Biology | 174 |
23,597,540 | https://en.wikipedia.org/wiki/Tank%20on%20the%20Moon | Tank on the Moon is a French 2007 documentary film about the development, launch, and operation of the Soviet Moon exploration rovers, Lunokhod 1 and Lunokhod 2 in the period from 1970 to 1973. The film uses historical footage from American, Russian and French archives featuring Leonid Brezhnev, Yuri Gagarin, Lyndon Johnson, John F. Kennedy, Nikita Khrushchev, Sergei Korolev, Alexei Kosygin, Alexei Leonov, Sam Rayburn and many other contemporary figures. A special emphasis is placed on the Lunokhods' chief designer, Alexander Kemurdzhian.
Summary
During the 1960s, the United States and the Soviet Union were engaged in a feverish technological competition, popularly known as the space race, to be the first to land a human on the Moon. The United States won the race, but less is known about a "secret chapter" from the Cold War era. Overshadowed by the Apollo Moon landings and largely ignored in the West at the time, the Soviets were also exploring the Moon with a series of successful robotic Moon missions. The Soviets never sent humans to the Moon, but they successfully guided two freely roving robots by remote control from the Earth. For 16 months between 1970 and 1973, these Lunokhods traveled tens of kilometres over the Moon's surface. Although the results of the Lunokhod program were well publicized, details of the program were kept in utmost secrecy for two decades, until the secret Soviet space archives concerning this program were finally declassified.
With these archives, along with the recollections by surviving participants in the Lunokhod program, and archived news films, the full story of the Soviet lunar-roving robots is revealed; the innovative development, the difficult deployment, the spectacular technological achievements, and the legacy passed down to the new generation of planetary robotic rovers.
Production and video release
The following companies and television stations helped produce and broadcast this film: Zed (Paris), with Corona Films (Saint Petersburg), France 5, Channel 5 (Russia), CBC/RDI (Canada), SVT (Sweden), RTBF (Belgium), and NHK (Japan). Louis Friedman, executive director of the Planetary Society, appears in the film as a commentator. The first North American broadcast was on The Science Channel on February 12, 2008.
The documentary was produced in French, Brazilian Portuguese and English. An English-language DVD was released in 2008; however, the video is currently available only by digital download.
Awards
International Film Festival of Toulon (France) Archives Prize (Jacques Henri Blake Award)
MEDIMED (Spain) Official Selection
WORLDMEDIA Festival (Germany) Intermedia Globe Gold Award: Best Film (Politics Category)
See also
Lunokhod programme
Soviet space program
References
External links
ZED Television - Tank on the Moon HD
Science Channel - Tank on the Moon
2007 documentary films
Documentary films about outer space
Films about space programs
French documentary films
Lunokhod programme
Missions to the Moon
Lunar rovers
Works about the Soviet Union
Space program of the Soviet Union
Science and technology in the Soviet Union
2000s French films | Tank on the Moon | Astronomy | 633 |
2,957,626 | https://en.wikipedia.org/wiki/Pimaric%20acid | Pimaric acid is a carboxylic acid that is classified as a resin acid. It is a major component of the rosin obtained from pine trees.
When heated above 100 °C, pimaric acid converts to abietic acid, which it usually accompanies in mixtures like rosin.
It is soluble in alcohols, acetone, and ethers. The compound is colorless, but almost invariably samples are yellow or brown owing to air oxidation. As a mixture with abietic acid, it is often hydrogenated, esterified, or otherwise modified to produce materials of commerce.
See also
Isopimaric acid
References
Carboxylic acids
Diterpenes
Phenanthrenes
Vinyl compounds | Pimaric acid | Chemistry | 149 |
2,182,309 | https://en.wikipedia.org/wiki/Humphry%20Bowen |
Humphry John Moule Bowen (22 June 1929 – 9 August 2001) was a British botanist and chemist.
Early life and education
Bowen was born in Oxford, son of the chemist Edmund Bowen and Edith Bowen (née Moule). He attended the Dragon School, gaining a scholarship to Rugby School and then a demyship to Magdalen College, Oxford. He won the Gibbs Prize in 1949 and completed a DPhil in chemistry at Oxford University in 1953 before starting his professional career as a chemist. Bowen was also a proficient amateur actor in his early years, appearing with a young Ronnie Barker at Oxford.
Research career
His first post was with the Atomic Energy Research Establishment (AERE) near the village of Harwell where he lived, working at the Wantage Research Laboratory, then in Berkshire. His early work started an interest in radioisotopes and trace elements that he maintained throughout his working life. While at AERE, he spent several months in 1956 attending the British nuclear tests at Maralinga in Australia to study the environmental effects of radiation.
Bowen realized that the calibration of different instruments intended to measure trace elements was an important issue that needed addressing. His solution was to produce a good supply of a material which later became known as Bowen's Kale. This was a dried, crushed homogenate of the plant kale that was stable and consistent enough to be distributed as a research calibration standard – probably the first successful example of such a standard.
In 1964, he was appointed as a lecturer in the chemistry department at the University of Reading, and in 1974 he was promoted to Reader in analytical chemistry. At Reading, Bowen undertook consultancy for Dunlop, investigating potential uses for their products. When the Torrey Canyon oil disaster occurred in 1967, he realized that it might be possible to use foam booms to block the oil from spreading in the English Channel. His original experiments were conducted in a small bucket in his laboratory. Although not entirely successful at the time due to rough seas, this lateral thinking combined his interest in chemistry with his love of nature, and foam booms have since been effectively deployed to protect ports and harbours against encroaching oil slicks. Bowen wrote a number of professional books in the field of chemistry, including two editions of Trace Elements in Biochemistry (1966 and 1976).
In 1968, Bowen noted that the paint used for yellow line road markings can contain chromate pigment, which may cause urban pollution as it deteriorates. He pointed out that hexavalent chromium in dust can cause dermatitis and ulceration of the skin, inflammation of the nasal mucosa and larynx, and lung cancer.
From 1951 onwards, Bowen was a long-serving member of the Botanical Society of the British Isles (BSBI). He was meetings secretary for a period and the official recorder of plants for the counties of Berkshire and Dorset, producing Floras for both counties. He retired to Winterborne Kingston in Dorset at the end of his life. He was also one of the leading contributors of botanical data for the Flora of Oxfordshire. He acted as an expert botanical guide on tours around Europe, especially Greece and Turkey.
Humphry Bowen donated a large collection of lichens from Berkshire and Oxfordshire to the Museum of Reading in the 1970s. He established the Bowen Cup at the University of Reading in 1988, an annual prize for the student in the Department of Chemistry at the University who achieves the top marks in Part II Analytical Chemistry.
See also
Bowen's son, Jonathan Bowen, a computer scientist.
George Claridge Druce, the Victorian botanist who also wrote floras for more than one county.
Tottles.
Bibliography
H. J. M. Bowen, Trace Elements in Biochemistry. Academic Press, 1966.
H. J. M. Bowen, Properties of Solids and their Structures. McGraw-Hill, 1967.
H. J. M. Bowen, Environmental Chemistry of the Elements. Academic Press, 1979. .
References
External links
1929 births
2001 deaths
Scientists from Oxford
People educated at The Dragon School
People educated at Rugby School
Alumni of Magdalen College, Oxford
English nature writers
English botanists
English chemists
English science writers
Analytical chemists
Tour guides
Academics of the University of Reading | Humphry Bowen | Chemistry | 861 |
43,734,151 | https://en.wikipedia.org/wiki/Sony%20Xperia%20Z3%20Compact | The Sony Xperia Z3 Compact is an Android smartphone produced by Sony. As part of the Z Series, the Z3 Compact was unveiled during a press conference at IFA 2014 on 4 September 2014 and belongs to Sony's handset lineup for the second half of 2014, which includes the flagship Xperia Z3 and the entry-level Xperia C3. Like the preceding Z1 Compact (no Z2 Compact was produced), the Z3 Compact is waterproof, with IP ratings of IP65 and IP68. The phone features a new display, a Qualcomm Snapdragon 801 processor, and the ability to record 4K videos.
Design
The Z3 Compact is designed with what Sony describes as "omni-balance", a design language focused on balance and symmetry. Instead of a metal frame, the phone has curved edges of translucent plastic, with tempered glass on the front and back. Measuring 8.6 mm thick, the device is thicker than the Z3 but, due to its smaller size, lighter, weighing 129 g.
Specifications
Hardware
The Sony Xperia Z3 Compact features a 4.6-inch BRAVIA IPS Triluminos display with a resolution of 720 by 1280 pixels (HD) and a pixel density of 319 ppi. With Live LED technology, it combines red and green phosphors with blue LEDs to produce brighter and more uniform light without over-saturation, allowing the display to reproduce more vibrant and brighter colors.
It comes with a 20.7-megapixel camera, the same as the Z3's, with an Exmor RS sensor and an ISO rating of 12800, designed to improve image quality in low-light conditions. The camera features a Sony G Lens, aimed at giving a wider frame for taking photos, and is also capable of filming in 4K.
On the inside, the Xperia Z3 Compact features a quad-core Qualcomm Snapdragon 801 processor (a tweaked version of the Snapdragon 800) clocked at 2.5 GHz, a high-capacity 2600 mAh battery, 2 GB of RAM, and 16 GB of storage, about 11 GB of which is available to the user, with support for a microSD card.
Software
The Z3 Compact initially ran Android 4.4.4 KitKat with Sony's custom launcher and some additional applications, such as Sony's media applications (Walkman, Album and Movies). The Z3 Compact can play PlayStation 4 games via Remote Play.
Sony began an upgrade of both the Z3 Compact and the Z3 to Android 5.0 "Lollipop" and announced upgrades for other Xperia Z devices in March 2015. Starting in July 2015, Sony released an Android 5.1.1 update for the Z2, Z3 and Z3 Compact, with the other Xperia Z devices following shortly after. On 6 October 2015, Sony confirmed that the Xperia Z3 Compact would be updated to Android 6.0 Marshmallow, and it released the official Android 6.0.1 Marshmallow update in April 2016.
The bootloader can be unlocked on both the Z3 Compact and the Z3 in order to install a custom ROM, but doing so voids the warranty and deletes the DRM keys, disabling the camera noise reduction feature and access to the Sony Entertainment Network.
Reception
Critical reception
The Z3 Compact received generally positive reviews, with PC Advisor describing it as a great little smartphone offering everything available on the full-size Z3, with a design that is thinner and lighter with a larger display than the previous Z1 Compact. The Verge praised the Z3 Compact for its small size while retaining the excellent battery life and performance specs of the Z3 flagship, drawing an analogy to the iPhone 6 and iPhone 6 Plus, which shared most of their hardware despite the size difference; the only caveat was that the Z3 Compact had a plastic frame compared to the full-size Z3's metal frame. The Register praised the phone for its outstanding battery life.
Issues
The Z3 Compact has been widely reported to develop two serious issues:
The touchscreen starts to fail at the top and bottom edges; this becomes critical when the virtual back/home/menu navigation buttons stop working. A workaround from the Sony support forum uses Minimal ADB to reduce the usable screen height (adb shell wm overscan 0,30,0,30; see the sketch after this list); another workaround is to use third-party virtual button software. As an emergency measure, a mouse can be connected using a USB On-The-Go cable.
With Android 6.0.1 (Marshmallow), Bluetooth blocks deep sleep mode and the battery drains quickly. Turning off Bluetooth is a workaround.
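The touchscreen workaround above can be scripted. A minimal sketch, assuming the adb executable from Minimal ADB is on the PATH, USB debugging is enabled on the phone, and the 30-pixel insets quoted from the support forum suit the device (they may need tuning):

```python
import subprocess

def set_overscan(top: int = 30, bottom: int = 30) -> None:
    """Shrink the usable screen area so taps land on working digitizer rows.

    Wraps `adb shell wm overscan left,top,right,bottom`; assumes a single
    connected device with USB debugging enabled.
    """
    insets = f"0,{top},0,{bottom}"
    subprocess.run(["adb", "shell", "wm", "overscan", insets], check=True)

def reset_overscan() -> None:
    """Restore the full screen area once the panel is repaired."""
    subprocess.run(["adb", "shell", "wm", "overscan", "reset"], check=True)

if __name__ == "__main__":
    set_overscan()  # mirrors the forum command: adb shell wm overscan 0,30,0,30
```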
References
External links
Android (operating system) devices
Mobile phones introduced in 2014
Digital audio players
Mobile phones with 4K video recording
Discontinued flagship smartphones
Sony smartphones | Sony Xperia Z3 Compact | Technology | 1,007 |
41,262,001 | https://en.wikipedia.org/wiki/Luna%20Ring | Luna Ring is a speculative engineering project consisting of a series of solar generators arranged around the equator of the Moon, which would send the generated electric energy back to Earth via microwaves from the near side of the Moon. The project was proposed by the Japanese construction firm Shimizu Corporation after the 2011 Tōhoku earthquake and tsunami destroyed the Fukushima Daiichi Nuclear Power Plant, creating public opposition to nuclear electric energy. Until then, Japan had relied heavily on nuclear power.
Construction
The construction of a concrete ring on the Moon's equator to support the solar panels would be performed by robots teleoperated from Earth. The solar panels would then be placed on the concrete layer and connected to microwave and laser transmitting stations, so that the energy sent to Earth could be captured by receiving stations throughout the day. Because the ring would encircle the entire Moon, at least half of it would always be lit by the Sun, resulting in constant electricity production.
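To see why the geometry gives constant output, consider flat panels laid along the equator: at any instant the sunlit half of the ring sees the Sun at incidence angles between -90° and +90°, and averaging the cosine of that angle around the whole ring gives a constant factor of 1/π. The sketch below turns this into a rough power estimate; the belt width and efficiency are illustrative assumptions, not figures from the Shimizu proposal:

```python
import math

# Illustrative assumptions, not figures from the Shimizu proposal:
SOLAR_CONSTANT = 1361.0    # W/m^2 solar flux at 1 AU (no atmosphere on the Moon)
MOON_RADIUS = 1_737_400.0  # m, mean lunar radius
BELT_WIDTH = 11_000.0      # m, assumed width of the solar belt
EFFICIENCY = 0.15          # assumed end-to-end conversion efficiency

def ring_average_power() -> float:
    """Steady electrical power of an equatorial belt of flat solar panels.

    Averaging cos(theta) over the sunlit half of the ring, and zero over
    the dark half, gives (1/(2*pi)) * integral_{-pi/2}^{pi/2} cos = 1/pi,
    so the total output is constant as the Moon rotates.
    """
    area = 2.0 * math.pi * MOON_RADIUS * BELT_WIDTH
    return SOLAR_CONSTANT * area * EFFICIENCY / math.pi

if __name__ == "__main__":
    print(f"Average power: {ring_average_power() / 1e12:.1f} TW")
```

Under these assumptions the belt delivers on the order of 8 TW before transmission losses; the point of the calculation is that the figure does not vary with time of day.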
In 2013, Shimizu Corporation stated that the construction of the Luna Ring could start as early as 2035. However, the practical drawbacks of putting such a technically vast and complex project into place could hamper its construction, even if it could pave the way for simpler projects in clean energy production.
See also
Space-based solar power
References
External links
Official page of the Luna Ring concept on the Shimizu Corporation website.
Science and technology in Japan
Megastructures
Photovoltaics
Space technology
Electric power
Solar power
Solar power and space
Space-based economy | Luna Ring | Physics,Astronomy,Technology,Engineering | 319 |
70,784,361 | https://en.wikipedia.org/wiki/Broadcast%20lens | Broadcast lenses are lenses used in the television industry for broadcasting, either in the television studio or on location in the field. The main manufacturers of broadcast lenses are Canon and Fujifilm's Fujinon brand. Broadcast lenses can be box-shaped, which are heavier and intended for use within a limited range, or classically shaped, which are lighter and suited to portable use.
Types
Broadcast lenses are generally classified into three types:
Studio zoom lenses, used mainly in the television broadcasting studio.
Field zoom lenses, used for relay broadcasting of sports and other types of live events.
Electronic news-gathering/Electronic field production (ENG/EFP) lenses, used for production of news and on-location events.
Features
Typically broadcast lenses have:
Less focus breathing
Variable focal lengths (18–35 mm)
Zoom which maintains focus as the focal length changes (parfocal lens)
Aspherical lens with fast and large lens aperture
Servomotor control of zoom, focus and aperture via remote control handles
Built-in image stabilization
Multi-group zoom lens system
See also
Cine lens
Zoom lens
Telephoto lens
Wide-angle lens
Camera lens
References
Television terminology
Broadcast engineering | Broadcast lens | Engineering | 223 |
4,694,434 | https://en.wikipedia.org/wiki/Boolean%20model%20of%20information%20retrieval | The (standard) Boolean model of information retrieval (BIR) is a classical information retrieval (IR) model and, at the same time, the first and most-adopted one. The BIR is based on Boolean logic and classical set theory in that both the documents to be searched and the user's query are conceived as sets of terms (a bag-of-words model). Retrieval is based on whether or not the documents contain the query terms and whether they satisfy the Boolean conditions described by the query.
Definitions
An index term is a word or expression, which may be stemmed, describing or characterizing a document, such as a keyword given for a journal article. Let $T = \{t_1, t_2, \ldots, t_m\}$ be the set of all such index terms.
A document is any subset of $T$. Let $D = \{D_1, D_2, \ldots, D_n\}$ be the set of all documents.
$T$ is a series of words or small phrases (index terms). Each of those words or small phrases is named $t_n$, where $n$ is the number of the term in the series/list. You can think of $T$ as "Terms" and $t_n$ as "index term $n$".
The words or small phrases (index terms $t_n$) can exist in documents. These documents then form a series/list where each individual document is called $D_n$. These documents ($D_n$) can contain words or small phrases (index terms $t_n$); for example, $D_1$ could contain the terms $t_1$ and $t_2$ from $T$. There is an example of this in the following section.
Index terms should generally represent words that carry more meaning and correspond to what the content of an article or document could be about. Terms like "the" and "like" appear in nearly all documents, whereas a term like "Bayesian" appears in only a small fraction of documents. Rarer terms like "Bayesian" are therefore a better choice to be selected in the sets. This relates to Entropy (information theory). There are multiple types of operations that can be applied to index terms used in queries to make them more generic and more relevant. One such operation is Stemming.
A query is a Boolean expression $Q$ in normal form:
$$Q = (W_1 \lor W_2 \lor \cdots) \land \cdots \land (W_i \lor W_{i+1} \lor \cdots)$$
where $W_i$ is true for $D_j$ when $t_i \in D_j$. (Equivalently, $Q$ could be expressed in disjunctive normal form.)
Any query is a selection of index terms ($t_n$) picked from the set $T$ of terms, which are combined using Boolean operators to form a set of conditions.
These conditions are then applied to the set $D$ of documents, which contain the same index terms ($t_n$) from the set $T$.
We seek to find the set of documents that satisfy $Q$. This operation is called retrieval and consists of the following two steps:
1. For each $W_i$ in $Q$, find the set $S_i$ of documents that satisfy $W_i$:
$$S_i = \{D_j \mid W_i\}$$
2. Then the set of documents that satisfy $Q$ is given by:
$$(S_1 \cup S_2 \cup \cdots) \cap \cdots \cap (S_i \cup S_{i+1} \cup \cdots)$$
where $\cup$ means OR and $\cap$ means AND as Boolean operators.
Example
Let the set of original (real) documents be, for example,
$$D = \{D_1, D_2, D_3\}$$
where
= "Bayes' principle: The principle that, in estimating a parameter, one should initially assume that each possible value has equal probability (a uniform prior distribution)."
= "Bayesian decision theory: A mathematical theory of decision-making which presumes utility and probability functions, and according to which the act to be chosen is the Bayes act, i.e. the one with highest subjective expected utility. If one had unlimited time and calculating power with which to make every decision, this procedure would be the best way to make any decision."
= "Bayesian epistemology: A philosophical theory which holds that the epistemic status of a proposition (i.e. how well proven or well established it is) is best measured by a probability and that the proper way to revise this probability is given by Bayesian conditionalisation or similar procedures. A Bayesian epistemologist would use probability to define, and explore the relationship between, concepts such as epistemic status, support or explanatory power."
Let the set of terms be:
$$T = \{t_1 = \text{"Bayes' principle"},\ t_2 = \text{"probability"},\ t_3 = \text{"decision-making"},\ t_4 = \text{"Bayesian epistemology"}\}$$
Then, the set of documents is as follows:
$$D = \{D_1, D_2, D_3\}$$
where
$$D_1 = \{\text{"Bayes' principle"}, \text{"probability"}\},\quad D_2 = \{\text{"probability"}, \text{"decision-making"}\},\quad D_3 = \{\text{"probability"}, \text{"Bayesian epistemology"}\}$$
Let the query $Q$ be ("probability" AND "decision-making"):
$$Q = \text{"probability"} \land \text{"decision-making"}$$
Then to retrieve the relevant documents:
Firstly, the following sets $S_1$ and $S_2$ of documents are obtained (retrieved):
$$S_1 = \{D_1, D_2, D_3\}, \quad S_2 = \{D_2\}$$
where $S_1$ corresponds to the documents which contain the term "probability" and $S_2$ to those which contain the term "decision-making".
Finally, the following documents are retrieved in response to $Q$:
$$\{D_1, D_2, D_3\} \cap \{D_2\} = \{D_2\}$$
where the query looks for documents that are contained in both sets, using the intersection operator.
This means that the original document $D_2$ is the answer to $Q$.
If there is more than one document with the same representation (the same subset of index terms $t_n$), every such document is retrieved. Such documents are indistinguishable in the BIR (in other words, equivalent).
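A minimal sketch of the two-step retrieval operation in Python, reproducing the worked example above; the document and set names mirror the notation of this section:

```python
# Documents represented as sets of index terms, as in the example above.
documents = {
    "D1": {"Bayes' principle", "probability"},
    "D2": {"probability", "decision-making"},
    "D3": {"probability", "Bayesian epistemology"},
}

def retrieve_conjunction(terms):
    """Retrieve documents satisfying the conjunction (AND) of `terms`.

    Step 1: for each term t_i, build S_i, the set of documents containing it.
    Step 2: intersect the S_i, since set intersection models the AND operator.
    """
    result = set(documents)  # start from all documents
    for t in terms:
        s_i = {name for name, doc in documents.items() if t in doc}
        result &= s_i
    return result

print(retrieve_conjunction(["probability", "decision-making"]))  # {'D2'}
```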
Advantages
Clean formalism
Easy to implement
Intuitive concept
If the resulting document set is either too small or too big, it is directly clear which operators will produce respectively a bigger or smaller set.
It gives (expert) users a sense of control over the system. It is immediately clear why a document has been retrieved given a query.
Disadvantages
Exact matching may retrieve too few or too many documents
Hard to translate a query into a Boolean expression
Ineffective for search-resistant concepts
All terms are equally weighted
More like data retrieval than information retrieval
Retrieval based on binary decision criteria with no notion of partial matching
No ranking of the documents is provided (absence of a grading scale)
Information need has to be translated into a Boolean expression, which most users find awkward
The Boolean queries formulated by the users are most often too simplistic
The model frequently returns either too few or too many documents in response to a user query
Data structures and algorithms
From a pure formal mathematical point of view, the BIR is straightforward. From a practical point of view, however, several further problems should be solved that relate to algorithms and data structures, such as, for example, the choice of terms (manual or automatic selection or both), stemming, hash tables, inverted file structure, and so on.
Hash sets
Another possibility is to use hash sets. Each document is represented by a hash table which contains every single term of that document. Since hash table size increases and decreases in real time with the addition and removal of terms, each document occupies much less space in memory. However, there is a slowdown in performance because the operations are more complex than with bit vectors. In the worst case, performance can degrade from O(n) to O(n²). In the average case, the slowdown is not much worse than with bit vectors, and the space usage is much more efficient.
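A minimal sketch of the hash-set representation, using Python's built-in set (a hash set) to stand in for the per-document hash table:

```python
# Each document stores only its own terms, so memory grows with document
# length rather than with the size of the whole vocabulary, as it would
# for a bit vector indexed by every term in T.
doc = set()

# Terms are added and removed in O(1) on average; the table resizes itself.
doc.add("probability")
doc.add("decision-making")
doc.discard("probability")

# Membership tests, the core operation of Boolean retrieval, are also O(1)
# on average, though hashing overhead makes each test slower in practice
# than a single bit lookup in a bit vector.
print("decision-making" in doc)  # True
```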
Signature file
Each document can be summarized by a Bloom filter representing the set of words in that document, stored in a fixed-length bitstring, called a signature.
The signature file contains one such superimposed code bitstring for every document in the collection.
Each query can also be summarized by a Bloom filter representing the set of words in the query, stored in a bitstring of the same fixed length.
The query bitstring is tested against each signature.
The signature file approach is used in BitFunnel.
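A minimal sketch of signature matching with a toy Bloom filter; it uses Python's built-in hash() on salted tuples for brevity, where a real system would use independent hash functions over a much longer bitstring:

```python
SIG_BITS = 64    # fixed signature length; real signature files use more bits
NUM_HASHES = 3   # bits superimposed per word

def signature(words):
    """Superimpose NUM_HASHES bits per word into one SIG_BITS-wide integer."""
    sig = 0
    for w in words:
        for salt in range(NUM_HASHES):
            sig |= 1 << (hash((salt, w)) % SIG_BITS)
    return sig

def may_match(query_sig, doc_sig):
    """True if every bit of the query signature is set in the document's.

    Bloom filters allow false positives, so a hit must still be verified
    against the actual document; a miss, however, is definitive.
    """
    return query_sig & doc_sig == query_sig

doc_sig = signature({"probability", "decision-making"})
print(may_match(signature({"probability"}), doc_sig))  # True
print(may_match(signature({"astronomy"}), doc_sig))    # False (almost surely)
```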
Inverted file
An inverted index file contains two parts:
a vocabulary containing all the terms used in the collection,
and for each distinct term an inverted index that lists every document that mentions that term.
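A minimal sketch of building and querying an inverted index; the dictionary's keys form the vocabulary and each value is the posting list of documents mentioning that term:

```python
from collections import defaultdict

docs = {
    "D1": {"Bayes' principle", "probability"},
    "D2": {"probability", "decision-making"},
    "D3": {"probability", "Bayesian epistemology"},
}

# Build: term -> set of names of documents that mention it (posting list).
index = defaultdict(set)
for name, terms in docs.items():
    for t in terms:
        index[t].add(name)

# Query: an AND of terms becomes an intersection of posting lists; only the
# lists for the query terms are touched, not every document in the collection.
print(index["probability"] & index["decision-making"])  # {'D2'}
```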
References
Mathematical modeling
Information retrieval techniques | Boolean model of information retrieval | Mathematics | 1,472 |
27,122,321 | https://en.wikipedia.org/wiki/Synaptic%20tagging | Synaptic tagging, or the synaptic tagging hypothesis, has been proposed to explain how neural signaling at a particular synapse creates a target for subsequent plasticity-related product (PRP) trafficking essential for sustained long-term potentiation (LTP) and long-term depression (LTD). Although the molecular identity of the tags remains unknown, it has been established that they form as a result of high or low frequency stimulation, interact with incoming PRPs, and have a limited lifespan.
Further investigations have suggested that plasticity-related products include mRNA and proteins from both the soma and dendritic shaft that must be captured by molecules within the dendritic spine to achieve persistent LTP and LTD. This idea was articulated in the synaptic tag-and-capture hypothesis. Overall, synaptic tagging elaborates on the molecular underpinnings of how L-LTP is generated and leads to memory formation.
History
Frey, a researcher at the Leibniz Institute for Neurobiology (later at the Medical College of Georgia and Lund University), and Morris, a researcher at the University of Edinburgh, laid the groundwork for the synaptic tagging hypothesis, stating:
"We propose that LTP initiates the creation of a short-lasting protein-synthesis-independent 'synaptic tag' at the potentiated synapse which sequesters the relevant protein(s) to establish late LTP. In support of this idea, we now show that weak tetanic stimulation, which ordinarily leads only to early LTP, or repeated tetanization in the presence of protein-synthesis inhibitors, each results in protein-synthesis-dependent late LTP, provided repeated tetanization has already been applied at another input to the same population of neurons. The synaptic tag decays in less than three hours. These findings indicate that the persistence of LTP depends not only on local events during its induction, but also on the prior activity of the neuron."
An L-LTP-inducing stimulus initiates two independent processes: a dendritic biological tag that identifies the synapse as having been stimulated, and a genomic cascade that produces new mRNAs and proteins (plasticity products). While weak stimulation also tags synapses, it does not produce the cascade. Proteins produced in the cascade are characteristically promiscuous, in that they will attach to any recently tagged synapse. However, as Frey and Morris discovered, the tag is temporary and will disappear if no protein presents itself for capture. Therefore, the tag and protein production must overlap if L-LTP is to be induced by the high-frequency stimulation.
The experiment performed by Frey and Morris involved the stimulation of two different sets of Schaffer collateral fibers that synapsed on the same population of CA1 cells. They then recorded the field EPSPs associated with each stimulus on the S1 and S2 pathways to produce E-LTP and L-LTP at different synapses within the same neuron, depending on the intensity of the stimulus. The results showed 1) that E-LTP produced by weak stimulation could be converted into L-LTP if a strong S2 stimulus was delivered before or after, and 2) that the ability to convert E-LTP to L-LTP decreased as the interval between the two stimulations increased, creating temporal dependence. When they blocked protein synthesis prior to the delivery of the strong S2 stimulation, the conversion to L-LTP was prevented, showing the importance of translating the mRNAs produced by the genomic cascade.
Subsequent research has identified an additional property of synaptic tagging that involves associations between late LTP and LTD. This phenomenon was first identified by Sajikumar and Frey in 2004 and is now referred to as "cross-tagging". It involves late-associative interactions between LTP and LTD induced in sets of independent synaptic inputs: late-LTP induced in one set of synaptic inputs can transform early-LTD into late-LTD in another set of inputs. The opposite effect also occurs: early LTP induced in the first synapse can be transformed into late LTP if followed by a late LTD-inducing stimulus in an independent synapse. This phenomenon is seen because the synthesis of nonspecific plasticity related proteins (PRPs) by late-LTP or -LTD in the first synapse is sufficient to transform early-LTD/LTP to late-LTD/LTP in the second synapse after synaptic tags have been set.
Blitzer and his research team proposed a modification to the theory in 2005, stating that the proteins captured by the synaptic tag are actually local proteins that are translated from mRNAs located in the dendrites. This means that mRNAs are not a product of genomic cascade initiated by strong stimulus, but rather, is delivered as a result of continual basal transcription. They proposed that even weakly stimulated synapses that were tagged can accept proteins produced from a strong stimulation nearby despite lacking the genomic cascade.
mRNA trafficking to the dendritic spine and cytoskeleton
Synaptic tagging/tag-and-capture theory potentially addresses the significant problem of explaining how mRNA, proteins, and other molecules may be specifically trafficked to certain dendritic spines during late-phase LTP. It has long been known that the late phase of LTP depends on protein synthesis within the particular dendritic spine, as shown by injecting anisomycin into a dendritic spine and observing the resulting absence of late LTP. To achieve translation within the dendritic spine, neurons must synthesize the mRNA in the nucleus, package it within a ribonucleoprotein complex, initiate transport, prevent translation during transport, and ultimately deliver the RNP complex to the appropriate dendritic spine. These processes span a number of disciplines and synaptic tagging/tag-and-capture cannot explain them all; nevertheless, synaptic tagging likely plays an important role in directing mRNA trafficking to the appropriate dendritic spine and in signaling the mRNA-RNP complex to dissociate and enter the dendritic spine.
A cell's identity and the identities of subcellular structures are largely determined by RNA transcripts. Considering this premise, it follows that cellular transcription, trafficking, and translation of mRNA undergo modification at a number of different junctures. Beginning with transcription, mRNA molecules are potentially modified via alternate splicing of exons and introns. The alternate splicing mechanisms allow cells to produce a diverse set of proteins from a single gene within the genome. Recent developments in next-generation sequencing have allowed for greater understanding of the diversity eukaryotic cells achieve through splice variants.
Transcribed mRNA must reach the intended dendritic spine for the spine to express L-LTP. Neurons may transport mRNA to specific dendritic spines in a package along with a transport ribonucleoprotein (RNP) complex; the transport RNP complex is a subtype of an RNA granule. Granules containing two proteins of known importance to synaptic plasticity, CaMKII (Calmodulin-dependent Kinase II) and the immediate early gene Arc, have been identified to associate with a type of the motor protein kinesin, KIF5. Furthermore, there is evidence that polyadenylated mRNA associates with microtubules in mammalian neurons, at least in vitro. Since mRNA transcripts undergo polyadenylation prior to export from the nucleus, this suggests that the mRNA essential for late-phase LTP may travel along the microtubules within the dendritic shaft prior to reaching the dendritic spine.
Once the RNA/RNP complex arrives via motor protein to an area within the vicinity of the specific dendritic spine, it must somehow get “captured” by a process within the dendritic spine. This process likely involves the synaptic tag created by synaptic stimulation of sufficient strength. Synaptic tagging may result in capture of the RNA/RNP complex via any number of possible mechanisms such as:
The synaptic tag triggers transient microtubule entry to within the dendritic spine. Recent research has shown that microtubules can transiently enter dendritic spines in an activity-dependent manner.
The synaptic tag triggers the dissociation of the cargo from motor protein and somehow guides it to dynamically formed microfilaments
Local protein synthesis
Since the 1980s, it has become more and more clear that the dendrites contain the ribosomes, proteins, and RNA components to achieve local and autonomous protein translation. Many mRNAs shown to be localized in the dendrites encode proteins known to be involved in LTP, including AMPA receptor and CaMKII subunits, and cytoskeleton-related proteins MAP2 and Arc.
Researchers provided evidence of local synthesis, by examining the distribution of Arc mRNA after selective stimulation of certain synapses of a hippocampal cell. They found that Arc mRNA was localized at the activated synapses, and Arc protein appeared there simultaneously. This suggests that the mRNA was translated locally.
These mRNA transcripts are translated in a cap-dependent manner, meaning they use a "cap" anchoring point to facilitate ribosome attachment to the 5' untranslated region. Eukaryotic initiation factor 4 group (eIF4) members recruit ribosomal subunits to the mRNA terminus, and assembly of the eIF4F initiation complex is a target of translational control: phosphorylation of eIF4F exposes the cap for rapid reloading, quickening the rate-limiting step of translation. It is suggested that eIF4F complex formation is regulated during LTP to increase local translation. In addition, excessive eIF4F complex destabilizes LTP.
Researchers have identified sequences within the mRNA that determine its final destination - called localization elements (LEs), zipcodes, and targeting elements (TEs). These are recognized by RNA binding proteins, of which some potential candidates are MARTA and ZBP1. They recognize the TEs, and this interaction results in formation of ribonucleotide protein (RNP) complexes, which travel along cytoskeleton filaments to the spine with the help of motor proteins. Dendritic TEs have been identified in the untranslated region of several mRNAs, like MAP2 and alphaCaMKII.
Possible tag models
Synaptic tagging is likely to involve the acquisition of molecular maintenance mechanisms by a synapse that would then allow for the conservation of synaptic changes. There are several proposed processes through which synaptic tagging functions. One model suggests that the tag allows for local protein synthesis at the specified synapse that then leads to modifications in synaptic strength. One example of this suggested mechanism involves the anchoring of PKMzeta mRNA to the tagged synapse. This anchor would then restrict the activity of translated PKMzeta, an important plasticity related protein, to this location. A different model proposes that short-term synaptic changes induced by the stimulus are themselves the tag; subsequently delivered or translated protein products act to strengthen this change. For example, the removal of AMPA receptors due to low-frequency stimulation leading to LTD is stabilized by a new protein product that would be inactive at synapses where internalization had not occurred. The tag could also be a latent memory trace, as another model suggests. The activity of proteins would then be required for the memory trace to lead to sustained changes in synaptic strength. According to this model, changes induced by the latent memory trace, such as the growth of new filipodia, are themselves the tag. These tags require protein products for stabilization, synapse formation, and synapse stabilization. Finally, another model proposes that the required molecular products get directed into the appropriate dendritic branches and then find the specific synapses under efficacy modification, by following Ca++ microconcentration gradients through voltage-gated Ca++ channels.
Behavioral tagging
While the concept of the synaptic tagging hypothesis mainly resulted from experiments applying stimulation to synapses, a similar model can be established considering the process of learning in a broader, behavioral sense. Fabricio Ballarini and colleagues developed this behavioral tagging model by testing spatial object recognition, contextual conditioning, and conditioned taste aversion in rats with weak training. The applied training normally only results in alterations of short-term memory. However, they paired this weak training with a separate, arbitrary behavioral event that is assumed to induce protein synthesis. When the two behavioral events were coupled within a certain time frame, the weak training was sufficient to elicit task-related changes in long-term memory. The researchers believed that the weak training led to a "learning tag". During the subsequent task, the cleavage of proteins resulted in the formation of long-term memory for this tag. The behavioral tagging model corresponds to the synaptic tagging model: a weak stimulation establishes E-LTP that may serve as the tag used in converting the weak potentiation to the stronger, more persistent L-LTP once the high-intensity stimulation is applied.
References
Genetics
1997 neologisms
Neuroscience | Synaptic tagging | Biology | 2,738 |
1,276,911 | https://en.wikipedia.org/wiki/Mekong%20River%20Commission | The Mekong River Commission (MRC) is an "...inter-governmental organisation that works directly with the governments of Cambodia, Laos, Thailand, and Vietnam to jointly manage the shared water resources and the sustainable development of the Mekong River". Its mission is "To promote and coordinate sustainable management and development of water and related resources for the countries' mutual benefit and the people's well-being".
History
Mekong Committee (1957–1978)
The origins of the Mekong Committee are linked to the legacy of (de)colonialism in Indochina and subsequent geopolitical developments. The political, social, and economic conditions of the Mekong River basin countries have evolved dramatically since the 1950s, when the Mekong represented the "only large river left in the world, besides the Amazon, which remained virtually unexploited." The impetus for the creation of the Mekong cooperative regime progressed in tandem with the drive for the development of the lower Mekong, following the 1954 Geneva Conference which granted Cambodia, Laos, and Vietnam independence from France. A 1957 United Nations Economic Commission for Asia and the Far East (ECAFE) report, Development of Water Resources in the Lower Mekong Basin, recommended development to the tune of 90,000 km2 of irrigation and 13.7 gigawatts (GW) from five dams. Based largely on the recommendations of ECAFE, the "Committee for Coordination on the Lower Mekong Basin" (known as the Mekong Committee) was established in September 1957 with the adoption of the Statute for the Committee for Coordination of Investigations into the Lower Mekong Basin. ECAFE's Bureau of Flood Control had prioritized the Mekong, of the 18 international waterways within its jurisdiction, in the hopes of creating a precedent for cooperation elsewhere. The committee has been described as "one of the UN's earliest spin-offs", as the organization functioned under the aegis of the UN, with its Executive Agent (EA) chosen from the career staff of the United Nations Development Programme (UNDP).
The US government, which feared that poverty in the basin would contribute to the strength of communist movements, proved one of the most vocal international backers of the committee, with the U.S. Bureau of Reclamation conducting a seminal 1956 study on the basin's potential. Another 1962 study by U.S. geographer Gilbert F. White, Economic and Social Aspects of Lower Mekong Development, proved extremely influential, resulting in the postponement of (in White's own estimation) the construction of the (still unrealized) mainstream Pa Mong Dam, which would have displaced a quarter-million people. The influence of the United States in the committee's formation can also be seen in the development studies of General Raymond Wheeler, the former Chief of the Army Corps of Engineers, the role of C. Hart Schaaf as the Mekong Committee's Executive Agent from 1959 to 1969, and President Lyndon Johnson's promotion of the committee as having the potential to "dwarf even our own T.V.A." US financial support was terminated in 1975 and did not resume for decades due to embargoes against Cambodia (until 1992) and Vietnam (until 1994), followed by periods of trade restrictions. Even so, Makim argues that the committee was "largely unaffected by formal or informal U.S. preferences" given the ambivalence of some riparians about US technical support, in particular Cambodia's rejection of some specific types of assistance. The fact remains, however, that "international development agencies have always paid the bills for the Mekong regime," with European (especially Scandinavian) nations picking up the slack left by the United States, followed to a lesser extent by Japan.
The Mekong Committee was a forceful advocate for large-scale dams and other projects, primarily preoccupied with facilitating projects. For example, the 1970 Indicative Basin Plan called for 30,000 km2 of irrigation by the year 2000 (up from 2,130 km2) as well as 87 short-term tributary development projects and 17 long-term development projects on the mainstream. The Indicative Basin Plan was crafted largely in response to criticisms of the committee's "piecemeal" approach and declining political support of the organization; for example, the committee had received no funds from Thailand, normally the biggest contributor, during the 1970 fiscal year. The completion of all 17 projects was never intended; rather the list was meant to serve as a "menu" for international donors, who were to select 9 or 10 of the projects. While a few of the short-term projects were implemented, none of the long-term projects prevailed in the political climate of the ensuing decade, which included the end of the Vietnam War in 1975. Several tributary dams were constructed, but only one—the Nam Ngum Dam (completed 1971), in Laos—outside of Thailand, whose electricity was sold to Thailand. According to Makim, Nam Ngum was the "only truly intergovernmental project achieved" by the committee.
This period was also marked by efforts to expand the jurisdiction and mandate of the committee between 1958 and 1975, which did not receive the consent of all four riparians. However, these efforts culminated, in January 1975, in the adoption of a 35-article Joint Declaration of Principles for Utilization of the Waters of the Mekong Basin by the sixty-eighth session of the Mekong Committee, prohibiting the "unilateral appropriation" without "prior approval" and "extra-basin diversion" without unanimous consent. However, no committee sessions were held in 1976 or 1977, as no plenipotentiary members had been appointed by Cambodia, Laos, or Vietnam—all of which experienced regime change in 1975.
Interim Mekong Committee (1978–1995)
The rise of the xenophobic and paranoid Khmer Rouge government in Cambodia made Cambodia's continued participation unsustainable, so in April 1977 the other three riparians agreed to the Declaration Concerning the Interim Mekong Committee, which resulted in the establishment of the Interim Mekong Committee in January 1978. The weakened interim organization was only able to study large-scale projects and implement a few small-scale projects in Thailand and Laos, where the Dutch Government through the IMC funded fisheries and agricultural development projects along the Nam Ngum, as well as port facilities at Keng Kabao near Savannakhet; the institutional role of the organization shifted nonetheless largely to data collection. The 1987 Revised Indicative Basin Plan—the high-water mark of the Interim Committee's activity—scaled back the ambitions of the 1970 plan, envisioning a cascade of smaller dams along the Mekong's mainstream, divided into 29 projects, 26 of which were strictly national in scope. The Revised Indicative Basin Plan can also be seen as laying the groundwork for Cambodia's readmission. The Supreme National Council of Cambodia did request readmission in June 1991.
Cambodia's readmission was largely a side-show which masked the true issue facing the riparians: that the rapid economic growth experienced in Thailand relative to its neighbors had made even the modest sovereignty limitations imposed by Mekong agreements seem undesirable in Bangkok. Thailand and the other three riparians (led by Vietnam, the most powerful of the remaining three states) were locked in disagreement over whether Cambodia should be readmitted under the terms of the 1957 Statute (and more importantly, the 1975 Joint Declaration), with Thailand preferring to negotiate an entirely new framework to allow its planned Kong-Chi-Moon Project (and others) to proceed without a Vietnamese veto. Article 10 of the Joint Declaration, requiring unanimous consent for all mainstream development and inter-basin diversion proved to be the main sticking point of Cambodia's readmission, with Thailand perhaps prepared to walk away from the regime altogether. The conflict came to a head in April 1992 when Thailand forced the executive agent of the committee, Chuck Lankester, to resign and leave the country after barring the secretariat from the March 1992 meeting. This prompted a series of meetings organized by the UNDP (which was terrified that the regime in which it had invested so much might disappear), culminating in the April 1995 Agreement on the Cooperation for the Sustainable Development of the Mekong River Basin signed by Cambodia, Laos, Thailand, and Vietnam in Chiang Rai, Thailand, creating the Mekong River Commission (MRC).
Since the dramatic confrontation of 1992, several seemingly overlapping organizations have been created, including the Asian Development Bank's Greater Mekong Subregion (ADB-GMS, 1992), Japan's Forum of Comprehensive Development of Indochina (FCDI, 1993), the Quadripartite Economic Cooperation (QEC, 1993), the Association of South East Asian Nations and Japan's Ministry of International Trade and Industry's Working Group on Economic Cooperation in Indochina and Burma (AEM-MITI, 1994), and the almost-finalized ASEAN-Mekong Basin Development Cooperation involving Myanmar and Singapore (ASEAN-ME, 1996).
Mekong River Commission (1995–present)
The MRC has evolved since 1995. Some of the "thorny issues" set aside during the negotiation of the agreement were at least partially resolved by the implementation of subsequent programmes such as the Water Utilization Programme (WUP) agreed to in 1999 and committed to implementation by 2005. The commission's hierarchical structure has been repeatedly tweaked, as in July 2000 when the MRC Secretariat was restructured. The 2001 Work Programme has largely come to be viewed as a shift "from a project-oriented focus to an emphasis on better management and preservation of existing resources." On paper, the Work Programme represents a rejection of the ambitious development schemes embodied by the 1970 and 1987 Indicative Basin Plans (calling for no mainstream dams) and a shift to a holistic rather than programmatic approach. In part, these changes represent a response to criticism of the MRC's failure to undertake a "regional-scale project" or even a region-level focus.
2001 also saw a major shift in the MRC—at least on paper—when it committed to a role as a "learning organization" with an emphasis on "the livelihoods of the people in the Mekong region." In the same year its annual report emphasized the importance of "bottoms-up" solutions and the "voice of the people directly affected." Similarly, the 2001 MRC Hydropower Development Strategy explicitly disavowed the "promotion of specific projects" in favor of "basin-wide issues." In part, these shifts mark a retreat from past project failures and recognition that the MRC faces multiple, and often more lucrative, competitors in the project arena.
Governance
The MRC is governed by its four member countries through the Joint Committee and the council. Members of the Joint Committee are usually senior civil servants heading government departments. There is one member from each country. The Joint Committee meets two to three times a year to approve budgets and strategic plans. Members of the council are cabinet ministers. The Council meets once a year.
Technical and administrative support is provided by the MRC Secretariat. The secretariat is based in Vientiane, Laos, with over 120 staff including scientists, administrators, and technical staff. A chief executive officer manages the secretariat.
In April 2010, the Mekong River Commission convened a summit in Hua Hin, Thailand. All six riparian nations were in attendance: China, Burma (Myanmar), Laos, Thailand, Cambodia, and Vietnam.
Leadership
From its conception until 1995 the organization was under the leadership of an "executive agent". Since then it has had CEOs.
C. Hart Schaaf, Executive Agent, 1959 – November 1969
Willem van der Oord, Executive Agent, December 1969 – June 1980
Bernt Bernander, Executive Agent, July 1980 – 1983
Galal Magdi, Executive Agent, 1983 – 1987
Chuck Lankester, Executive Agent, 1988 – 1990
Jan Kamp, Executive Agent, 1990 – 1995
Yasunobu Matoba, CEO, 1995 – August 1999
Jörn Kristensen, CEO, October 1999 – 2004
Olivier Cogels, CEO, July 2004 – 2007
Jeremy Bird, CEO, 2008 – 2010
Hans Guttman, CEO, 2011
Pham Tuan Phan, CEO 2016–2018
An Pich Hatda, CEO, 2019 –
Relations with the People's Republic of China and Burma
The Mekong River Commission and its predecessors have never included PR China (which was not a member of the United Nations in 1957) or Burma (which does not significantly rely on or tap the Mekong), whose territories contain the upper basin of the Mekong. The SERVIR Mekong project, a joint initiative of the US Agency for International Development (USAID) and NASA involving five countries (Thailand, Cambodia, Laos, Vietnam, and Myanmar), aims to tap the latest technologies to help the Mekong River region protect its vital ecosystem. Although China contributes only 16–18 percent of the Mekong's overall water volume, the glacial melt waters of the Tibetan plateau take on increasing importance during the dry season. The ability of upstream nations to undermine downstream cooperation was perhaps best symbolized by an April 1995 ceremonial boat trip from Thailand to Vietnam, held to celebrate the signing of the 1995 Agreement, which ran aground mid-river as a result of China filling the reservoir of the Manwan Dam. Although China and Burma became "dialogue partners" of the MRC in 1996 and slowly but steadily escalated their (non-binding) participation in its various forums, it is at present unthinkable that either would join the MRC in the near future.
In April 2002, China began providing daily water level data to the MRC during the flood season. Critics noted that the emphasis on "flood control" rather than dry season flows represented an important omission given the concerns prioritized by the Mekong regime. In July 2003, MRC CEO Joern Kristensen reported that China had agreed to scale back its plans to blast rapids by implementing only phase one (of three) of its Upper Mekong Navigation Improvement Project; however, China's future intentions in this area are far from certain. One area in which China has been particularly reticent is in providing information about the operation of its dams, rather than just flow data, including refusing to join emergency meetings in 2004. Only in 2005 did China agree to hold technical discussions directly with the MRC. On 2 June 2005, at the invitation of the Chinese Ministry of Foreign Affairs and the Ministry of Water Resources, MRC CEO Dr. Olivier Cogels and a delegation of the secretariat's senior staff made the first official visit to Beijing to hold technical consultations under the framework of cooperation between China and the MRC, within the scope of the Mekong Programme. The delegation identified a number of potential areas of cooperation with the Ministry of Foreign Affairs, the Ministry of Water Resources, and the Ministry of Communication, Information and Transport. These discussions resulted in China supplying the MRC (beginning in 2007) with 24-hour water level and 12-hour rainfall data for flood forecasts in exchange for monthly flow data from the MRC Secretariat. The incentives for China to enter into cooperative regimes on the Mekong are substantially reduced by the alternative of the Salween River as a commercial outlet for China's Yunnan province, made considerably more attractive by requiring negotiation solely with Burma, rather than with four different riparians. News media and official sources often portray China's joining the commission as a panacea for resolving the overdevelopment of the Mekong. However, there is no indication that China's joining the MRC would provide downstream riparians with any real capacity to challenge China's development plans, given the dramatic power imbalances exhibited by these countries' relations with China.
The MRC has been hesitant to fully register concerns about Chinese upstream hydro-development. For example, in a letter to the Bangkok Post, MRC CEO Dr. Olivier Cogels argued that Chinese dams would in fact increase the river's dry season volume, as their purpose was electricity generation and not irrigation. While such dams certainly could increase dry season flows, the only certainty about future Chinese reservoir policies seems to be that they will be crafted outside of downstream cooperation regimes. Public statements from MRC leaders in the same vein as Cogels' comments have, to some, earned the MRC a reputation of being complicit in allowing "China's dam-building machine" to "float downstream."
See also
SERVIR Mekong Project
LMC (Lancang-Mekong Cooperation)
References
Bibliography
Backer, Ellen Bruzelius. 2007. "The Mekong River Commission: Does It Work, and How Does the Mekong Basin’s Geography Influence Its Effectiveness?" Südostasien aktuell: Journal of Current Southeast Asian Affairs, p. 31–55.
Baker, Chris. 2007, February 24. "What is Vientiane?" Bangkok Post.
Bakker, Karen. 1999. "The politics of hydropower: developing the Mekong." Political Geography, 18: 209–232.
Cogels, Olivier. 2007, January 9. "Mekong hydropower development is good." Bangkok Post.
Dore, John. 2003. "The governance of increasing Mekong regionalism." In Social Challenges for the Mekong Region. Eds. Mingsarn Koasa-ard and John Dore. Bangkok: Chiang Mai University.
Ghosh, Nirmal. 2007, November 15. "Mekong dams 'will displace 75,000 people'; Environmental groups urge international donors to review their support for project." Straits Times.
Hirsch, P. 2003. "The Politics of Fisheries Knowledge in the Mekong River Basin." NSW 2006 Australian Mekong Resource Center.
Jacobs, Jeffrey W. 1995. "Mekong Committee History and Lessons for River Basin Development." The Geographical Journal, 161(2): 135–148.
Jacobs, Jeffrey W. 1998. "The United States and the Mekong Project." Water Policy (1): 587–603.
Japan Times. 2007, March 15. "Dark Clouds over Shangri-La."
Kanwanich, Suprandit. 2002, October 6. "At the mercy of the Mekong." Bangkok Post.
Kristensen, Joern. 2002. "Food Security and Development in the Lower Mekong River Basin and the Need For Regional Cooperation: A Challenge for the Mekong River Commission." Defining an Agenda for Poverty Reduction: Proceedings of the First Asia and Pacific Forum on Poverty, Volume 1. Manila: Asian Development Bank.
Lebel, Louis, Garden, Po, and Imamura, Masao. 2005. "The Politics of Scale, Position, and Place I the Governance of Water Resources in the Mekong Region." Ecology and Society, 10(2): 18.
Makim, Abigail. 2002. "Resources for Security and Stability? The Politics of Regional Cooperation on the Mekong, 1957–2001." Journal of Environment & Development, 11(1): 5–52.
Mekong River Commission. 1975. Joint Declaration of Principles for Utilization of the Waters of the Mekong Basin. Bangkok: Mekong Committee.
Mekong River Commission. 1995. Agreement on the co-operation for the sustainable development of the Mekong River Basin.
Mekong River Commission. 2001a. Annual Report 2000. Phnom Penh: Mekong River Commission.
Mekong River Commission. 2001b. "Mekong News: The newsletter of the Mekong River Commission, October–December 2001."
Mekong River Commission. 2007, October 24. "Cross-border cooperation." Water Power & Dam Construction.
Nakayama, Mikiyasu. 2002. "International Collaboration on Water Systems in Asia and the Pacific: A Case in Transition." International Review for Environmental Studies, 3(2): 274–282.
Paul, Delia. 2003, November 17. "Rules on water use are well in place." Bangkok Post.
Pearce, Fred. 2004, April 3. "China drains life from Mekong river." New Scientist, 182: 14.
Pinyorat, Rungrawee C. 2003, June 13. "China vows to limit blasting of rapids." The Nation (Thailand).
Radosevich, George E., and Olson, Douglas C. 1999. "Existing and Emerging Basin Arrangements in Asia." World Bank: Third Workshop on River Basin and Institution Development.
Robertson, Benjamin. 2006, October 19. "Caught in the Ebb." South China Morning Post.
Samabuddhi, Kultida. 2002, November 11. "Commission’s Middleman Role attacked." Bangkok Post.
Sherman, Tom. 2004, May 12. "Mekong commission doesn't seem to care about people affected by its projects." The Nation (Thailand).
Sneddon, Chris. 2002. "Water Conflicts and River Basins: The Contradictions of Comanagement and Scale in Northeast Thailand." Society and Natural Resources, 15: 725–741.
Sneddon, Chris. 2003. "Reconfiguring scale and power: the Khong-Chi-Mun project in northeast Thailand." Environment and Planning, 35: 2229–2250.
Sneddon, Chris, and Fox, Coleen. 2005. "Flood Pulses, International Watercourse Law, and Common Pool Resources: A Case Study of the Mekong Lowlands." Expert Group on Development Issues Research Paper No. 2005/20.
Sneddon, Chris, and Fox, Coleen. 2006. "Rethinking transboundary waters: A critical hydropolitics of the Mekong basin." Political Geography, 25: 181–202.
Sneddon, Chris, and Fox, Coleen. 2007a. "Power, Development, and Institutional Change: Participatory Governance in the Lower Mekong Basin." World Development, 35(12): 2161–2181.
Sneddon, Chris, and Fox, Coleen. 2007b. "Transboundary river basin agreements in the Mekong and Zambezi basins: enhancing environmental security or securitizing the environment?" International Environmental Agreements, 7: 237–261.
Straits Times. 2006, July 11. "When global group therapy nets a result."
Thai News. 2007, November 19. "Southeast Asia: Activists urge MRC to halt dam projects on Mekong River."
The Economist. 1995, November 18. "The Mekong: Dammed if you don’t." 337(7941): 38.
The Nation (Thailand). 2004, May 10. "Senator: locals know best."
Theeravit, Khien. 2003. "Relationships within and between the Mekong Region in the context of globalisation." In Social Challenges for the Mekong Region. Eds. Mingsarn Koasa-ard and John Dore. Bangkok: Chiang Mai University.
Wain, Barry. 2004, August 26. "Mekong River: River at Risk—The Mekong’s Toothless Guardian." Far Eastern Economic Review.
Mekong River
Greater Mekong Subregion
Environment of Southeast Asia
International economic organizations
Intergovernmental environmental organizations
Environmental organizations based in Vietnam
Environmental organisations based in Cambodia
Organizations based in Laos
Environmental organisations based in Myanmar
Dam-related organizations
Environmental organizations established in 1995
1995 establishments in Southeast Asia
Phnom Penh
Vientiane | Mekong River Commission | Engineering | 4,797 |
326,285 | https://en.wikipedia.org/wiki/Paul%20Elvstr%C3%B8m | Paul Bert Elvstrøm (25 February 1928 – 7 December 2016) was a Danish yachtsman and the founder of Elvstrøm Sails. He won four Olympic gold medals and twenty world titles in a range of classes including Snipe, Soling, Star, Flying Dutchman, Finn, 505, and 5.5 Metre. For his achievements, Elvstrøm was chosen as "Danish Sportsman of the Century."
Early life
Paul Elvstrøm was born north of Copenhagen, in a house overlooking the sound between Denmark and Sweden. His father was a sea captain but died when Elvstrøm was young, and he was brought up by his mother along with a brother and sister. A second brother drowned at the age of 5 when he fell off a seawall near the family home.
Growing up along the Øresund, Elvstrøm quickly became consumed by sailing, which began with crewing in a club fleet of small clinker keelboats. He was soon given an Oslo dinghy by a neighbour who realised Elvstrøm's mother was too poor to be able to buy one.
In his book Elvstrøm Speaks on Yacht Racing he claimed to be ‘word blind’ and could not read or write when he was at school, which may have been due to dyslexia. It is clear that Elvstrøm considered schooling a distraction from sailing: "I was very bad in school," he said, "The only interest I had was in sailing fast…The teacher knew that if I was not at school, I was sailing."
After leaving school he became a member of the Hellerup Sailing Club, where he gained a reputation as an excellent sailor. He supported himself during this period as a bricklayer, but in 1954 he also started cutting sails for club members in his basement.
Innovation
Elvstrøm was noted as a developer of sails and sailing equipment, and later founded Elvstrøm Sails. One of his most successful innovations was a new type of self-bailer. Its new features were a wedge-shaped venturi that closes automatically if the boat grounds or hits an obstruction, and a flap that acts as a non-return valve to minimise water coming in if the boat is stationary or moving too slowly for the device to work. Previous automatic bailers would be damaged or destroyed if they met an obstruction, and would let considerable amounts of water in if the boat was moving too slowly.
The Elvstrøm self-bailer is still in production under the Andersen brand and has been widely copied; it is still found on Olympic boats and other grand prix boats at the leading edge of the sport. In 2016, Dan Ibsen, the executive director of the Royal Danish Yacht Club, said, “Today the Elvstrøm Bailer is still the only functional bailer on Olympic dinghies and boats around the world.”
Other innovations include the Elvstrøm Lifejacket, which was the first specifically designed and produced for active sailors.
He also popularised the kicking strap, or boom vang (US). This may take the form of a block and tackle linking a low point on the mast (or an equivalent point on the hull) and the boom close to the mast, which allows the boom to be let out when reaching or running without lifting. This controls the twist of the mainsail from its foot to its head, increasing the sail's power and the boat's speed and controllability. Elvstrøm did not advertise his new invention, leaving his competitors mystified at his superior boat-speed. Investigation of his dinghy revealed nothing, as he used to remove the kicking strap before coming ashore.
Among the innovative concepts he brought to sailboat racing was the concept of gates instead of a single windward or leeward mark in large regattas. The leeward gate on a windward-leeward course is commonly used. The windward gate is less often used due to the difficulties in managing right-of-way around the right gate, the subtleties of which are understood mostly by match racers. He has also been instrumental in developing several international yacht racing rules.
Training
Elvstrøm was a very early innovator in training techniques. For example, he used the technique of 'sitting out' or hiking using toe-straps to a greater degree than previously, getting all his body weight from the knees upwards outside the boat, thus providing extra leverage to enable the boat to remain level in stronger winds and hence go faster than his competitors. This technique required great strength and fitness, and so after the 1948 Olympics, in order to improve his physical conditioning in readiness for the 1952 games, Elvstrøm built a training bench with toe-straps in his garage to replicate the sitting-out position in his dinghy. He then proceeded to spend many training hours on dry land sitting out on the bench at home.
“He did take sailing to a level that you had to call it a sport,” said Jesper Bank, a principal at Elvstrøm Sails and a two-time Olympic gold medalist for Denmark. “Before Paul, you would see competitors with pipes in their mouths and wearing skippers’ caps. At that time, they certainly thought he was superhuman.”
According to an obituary by the International Finn Association, "He was a sportsman and the first real sailing athlete. He trained harder and longer than anyone else so that when the day of the race came he was better prepared than anyone else. He was famous for his physical strength and fitness, able to out-hike anyone on the race course.”
Business
In 1954, Elvstrøm established a manufacturing company, Elvstrøm Sails, whose products included masts, booms, and sails. Displaying a keen marketing mind to go along with his engineering nous, the business grew rapidly and by the 1970s Elvstrøm products were seen on boats all around the world.
Today, Elvstrøm Sails is among the world's leading sailmakers, employing around 300 worldwide. Elvstrøm founded his business in the family villa just north of Copenhagen. It grew out of its premises multiple times, and today, Elvstrøm Sails is based in Aabenraa in the south of Denmark.
Personal life
Elvstrøm was married to Anne, who pre-deceased him by three years; together they had four daughters: Pia, Stine, Gitte and Trine.
Elvstrøm continued to sail in his later years until Parkinson's disease began to afflict him. In 2009 he sailed his Dragonfly trimaran, single-handed, to visit his daughter Gitte and her family on the east coast of Sweden, 600 miles from his home.
Elvstrøm's success and celebrity brought personal stress. At the 1972 Games in Munich, under the combined pressures of competition and the challenges facing his sail-making business, he suffered a nervous breakdown.
He died on 7 December 2016 at the age of 88, after living with Alzheimer's disease for several years.
Legacy
As well as being remembered as arguably the greatest sailing racer ever, Elvstrøm was also known to be a model of sportsmanship. He is famous for his philosophy that, "If you, by winning, are losing your friends, you are not winning."
Achievements
Elvstrøm competed in eight Olympic Games from 1948 to 1988, one of only nine athletes ever (the others are sailor Ben Ainslie, swimmers Michael Phelps and Katie Ledecky, wrestlers Kaori Icho and Mijaín López, speed skater Ireen Wüst and athletes Carl Lewis in the long jump and Al Oerter in the discus) to win four consecutive individual gold medals (1948–60), the first in a Firefly and subsequently in Finns. In his last two Olympic Games he sailed the Tornado catamaran class, which in those days was normally sailed by two young men, with his daughter Trine Elvstrøm as forward hand.
He is one of only five athletes who have competed in the Olympics over a span of 40 years, along with fencer Ivan Joseph Martin Osiier, sailors Magnus Konow and Durward Knowles and showjumper Ian Millar.
Elvstrøm won medals at world championships in the Finn, 505, Snipe, Flying Dutchman, 5.5 Metre, Star, Soling, Tornado, and Half Ton classes.
In 1996, Elvstrøm was chosen as "Danish Sportsman of the Century."
In 2007, Elvstrøm was among the first six inductees into the ISAF Sailing Hall of Fame.
Bibliography
Elvstrom, Paul. Expert Dinghy and Keelboat Racing. Times Books, 1967.
Elvstrom, Paul. Elvström Speaks on Yacht Racing. One-Design & Offshore Yachtsman Magazine, 1970.
Elvstrom, Paul. Elvström Speaks: to His Sailing Friends on His Life and Racing Career. Nautical Publishing Company, 1970.
Paul Elvström Explains the Yacht Racing Rules. First edition 1969; title updated to Paul Elvstrom Explains the Racing Rules of Sailing: 2005–2008 Rules. Updated four-yearly in accordance with racing rules revisions; various authors and publishers.
See also
Elvstrøm 717
List of athletes with the most appearances at Olympic Games
List of multiple Olympic gold medalists in one event
Multiple World champion in sailing
References
External links
Paul Elvström, Sailing's Greatest at Sail-World.com
1928 births
2016 deaths
Marine engineers
Sailmakers
Danish male sailors (sport)
Hellerup Sejlklub sailors
Danish yacht designers
Olympic sailors for Denmark
Olympic gold medalists for Denmark
Olympic medalists in sailing
Medalists at the 1960 Summer Olympics
Medalists at the 1956 Summer Olympics
Medalists at the 1952 Summer Olympics
Medalists at the 1948 Summer Olympics
Sailors at the 1948 Summer Olympics – Firefly
Sailors at the 1952 Summer Olympics – Finn
Sailors at the 1956 Summer Olympics – Finn
Sailors at the 1960 Summer Olympics – Finn
Sailors at the 1968 Summer Olympics – Star
Sailors at the 1972 Summer Olympics – Soling
Sailors at the 1984 Summer Olympics – Tornado
Sailors at the 1988 Summer Olympics – Tornado
World champions in sailing for Denmark
Finn class world champions
Flying Dutchman class world champions
Open Snipe class world champions
Star class world champions
Soling class sailors
Sportspeople from Copenhagen
Soling class world champions
Sportspeople with dyslexia
20th-century Danish sportsmen | Paul Elvstrøm | Engineering | 2,161 |
31,559,837 | https://en.wikipedia.org/wiki/Kirchhoff%27s%20diffraction%20formula | Kirchhoff's diffraction formula (also called Fresnel–Kirchhoff diffraction formula) approximates light intensity and phase in optical diffraction: light fields in the boundary regions of shadows. The approximation
can be used to model light propagation in a wide range of configurations, either analytically or by numerical modelling. It gives an expression for the wave disturbance when a monochromatic spherical wave is the incoming wave of the situation under consideration. The formula is derived by applying the Kirchhoff integral theorem, which uses Green's second identity to obtain the solution of the homogeneous scalar wave equation, to a spherical wave with some approximations.
The Huygens–Fresnel principle can be derived from the Fresnel–Kirchhoff diffraction formula.
Derivation of Kirchhoff's diffraction formula
Kirchhoff's integral theorem, sometimes referred to as the Fresnel–Kirchhoff integral theorem, uses Green's second identity to derive the solution of the homogeneous scalar wave equation at an arbitrary spatial position P in terms of the solution of the wave equation and its first order derivative at all points on an arbitrary closed surface as the boundary of some volume including P.
The solution provided by the integral theorem for a monochromatic source is

$$U(P) = \frac{1}{4\pi} \int_{S} \left[ U \frac{\partial}{\partial \hat{\mathbf{n}}}\!\left(\frac{e^{iks}}{s}\right) - \frac{e^{iks}}{s}\,\frac{\partial U}{\partial \hat{\mathbf{n}}} \right] dS,$$

where $U$ is the spatial part of the solution of the homogeneous scalar wave equation (i.e., the full solution is $U(\mathbf{r})\,e^{-i\omega t}$), $k$ is the wavenumber, $s$ is the distance from P to an (infinitesimally small) integral surface element, and $\partial/\partial\hat{\mathbf{n}}$ denotes differentiation along the integral surface element normal unit vector $\hat{\mathbf{n}}$ (i.e., a normal derivative), i.e., $\partial U/\partial\hat{\mathbf{n}} = \nabla U \cdot \hat{\mathbf{n}}$. Note that the surface normal or the direction of $\hat{\mathbf{n}}$ is toward the inside of the enclosed volume in this integral; if the more usual outer-pointing normal is used, the integral will have the opposite sign. Also note that, in the integral theorem shown here, $\mathbf{s}$ and P are vector quantities while the other terms are scalar quantities.
For the below cases, the following basic assumptions are made.
The distance between a point source of waves and an integral area, the distance between the integral area and an observation point P, and the dimension of the opening S are all much greater than the wavelength $\lambda$ of the wave.
$U$ and $\partial U/\partial\hat{\mathbf{n}}$ are discontinuous at the boundaries of the aperture (Kirchhoff's boundary conditions). This may be related to another assumption: that the waves on the aperture (or an open area) are the same as the waves that would be present if there were no obstacle.
Point source
Consider a monochromatic point source at P0, which illuminates an aperture in a screen. The intensity of the wave emitted by a point source falls off as the inverse square of the distance traveled, so the amplitude falls off as the inverse of the distance. The complex amplitude of the disturbance at a distance $r$ is given by

$$U(r) = \frac{a\,e^{ikr}}{r},$$

where $a$ represents the magnitude of the disturbance at the point source.
The disturbance at a spatial position P can be found by applying Kirchhoff's integral theorem to the closed surface formed by the intersection of a sphere of radius R with the screen. The integration is performed over the areas A1, A2 and A3, giving

$$U(P) = \frac{1}{4\pi} \left[ \int_{A_1} + \int_{A_2} + \int_{A_3} \right] \left[ U \frac{\partial}{\partial \hat{\mathbf{n}}}\!\left(\frac{e^{iks}}{s}\right) - \frac{e^{iks}}{s}\,\frac{\partial U}{\partial \hat{\mathbf{n}}} \right] dS.$$
To solve the equation, it is assumed that the values of $U$ and $\partial U/\partial\hat{\mathbf{n}}$ in the aperture area A1 are the same as when the screen is not present, so at the position Q,

$$U(Q) = \frac{a\,e^{ikr}}{r}, \qquad \frac{\partial U(Q)}{\partial\hat{\mathbf{n}}} = \frac{a\,e^{ikr}}{r}\left[ik - \frac{1}{r}\right]\cos(\hat{\mathbf{n}}, \mathbf{r}),$$

where $r$ is the length of the straight line P0Q, and $(\hat{\mathbf{n}}, \mathbf{r})$ is the angle between a straightly extended version of P0Q and the (inward) normal to the aperture. Note that $(\hat{\mathbf{n}}, \mathbf{r}) < 90°$, so $\cos(\hat{\mathbf{n}}, \mathbf{r})$ is a positive real number on A1.
At Q, we also have

$$\frac{\partial}{\partial\hat{\mathbf{n}}}\!\left(\frac{e^{iks}}{s}\right) = \frac{e^{iks}}{s}\left[ik - \frac{1}{s}\right]\cos(\hat{\mathbf{n}}, \mathbf{s}),$$

where $s$ is the length of the straight line PQ, and $(\hat{\mathbf{n}}, \mathbf{s})$ is the angle between a straightly extended version of PQ and the (inward) normal to the aperture. Note that $(\hat{\mathbf{n}}, \mathbf{s}) > 90°$, so $\cos(\hat{\mathbf{n}}, \mathbf{s})$ is a negative real number on A1.
Two further assumptions are made.
In the above normal derivatives, the terms $1/r$ and $1/s$ in both square brackets are assumed to be negligible compared with the wavenumber $k$; this means $r$ and $s$ are much greater than the wavelength $\lambda$.
Kirchhoff assumes that the values of $U$ and $\partial U/\partial\hat{\mathbf{n}}$ on the opaque areas marked by A2 are zero. This implies that $U$ and $\partial U/\partial\hat{\mathbf{n}}$ are discontinuous at the edge of the aperture A1. This is not the case, and this is one of the approximations used in deriving Kirchhoff's diffraction formula. These assumptions are sometimes referred to as Kirchhoff's boundary conditions.
The contribution from the hemisphere A3 to the integral is expected to be zero, and it can be justified by one of the following reasons.
Make the assumption that the source starts to radiate at a particular time, and then make R large enough, so that when the disturbance at P is being considered, no contributions from A3 will have arrived there. Such a wave is no longer monochromatic, since a monochromatic wave must exist at all times, but that assumption is not necessary, and a more formal argument avoiding its use has been derived.
A wave emanating from the aperture A1 is expected to evolve toward a spherical wave as it propagates (water-wave examples of this can be found in many pictures showing a water wave passing through a relatively narrow opening). So, if R is large enough, the integral on A3 (rewritten in terms of $s$, the distance from the center of the aperture A1 to an integral surface element, and $d\Omega$, the differential solid angle in the spherical coordinate system) tends to zero.
As a result, finally, the integral above, which represents the complex amplitude at P, becomes

$$U(P) = -\frac{ia}{2\lambda} \int_{A_1} \frac{e^{ik(r+s)}}{rs} \left[ \cos(\hat{\mathbf{n}}, \mathbf{r}) - \cos(\hat{\mathbf{n}}, \mathbf{s}) \right] dS.$$

This is the Kirchhoff or Fresnel–Kirchhoff diffraction formula.
Equivalence to Huygens–Fresnel principle
The Huygens–Fresnel principle can be derived by integrating over a different closed surface (the boundary of some volume having an observation point P). The area A1 above is replaced by a part of a wavefront (emitted from a P0) at r0, which is the closest to the aperture, and a portion of a cone with a vertex at P0, which is labeled A4 in the right diagram. If the wavefront is positioned such that the wavefront is very close to the edges of the aperture, then the contribution from A4 can be neglected (assumed here). On this new A1, the inward (toward the volume enclosed by the closed integral surface, so toward the right side in the diagram) normal to A1 is along the radial direction from P0, i.e., the direction perpendicular to the wavefront. As a result, the angle $(\hat{\mathbf{n}}, \mathbf{r}) = 0$, and the angle $(\hat{\mathbf{n}}, \mathbf{s})$ is related to the angle $\chi$ (the angle as defined in the Huygens–Fresnel principle) as

$$\cos(\hat{\mathbf{n}}, \mathbf{r}) = 1, \qquad \cos(\hat{\mathbf{n}}, \mathbf{s}) = -\cos\chi.$$
The complex amplitude of the wavefront at r0 is given by

$$U(r_0) = \frac{a\,e^{ikr_0}}{r_0}.$$

So, the diffraction formula becomes

$$U(P) = -\frac{i}{\lambda}\,\frac{a\,e^{ikr_0}}{r_0} \int_{S} \frac{e^{iks}}{s}\,\frac{1+\cos\chi}{2}\, dS,$$

where the integral is done over the part of the wavefront at r0 which is the closest to the aperture in the diagram. This integral leads to the Huygens–Fresnel principle (with the obliquity factor $K(\chi) = \tfrac{1}{2}(1+\cos\chi)$).
In the derivation of this integral, instead of the geometry depicted in the right diagram, double spheres centered at P0 with the inner sphere radius r0 and an infinitely large outer sphere radius can be used. In this geometry, the observation point P is located in the volume enclosed by the two spheres, so the Fresnel–Kirchhoff diffraction formula is applied on the two spheres. (The surface normals on these integral surfaces again point toward the enclosed volume in the diffraction formula above.) In the formula application, the integral on the outer sphere is zero for a reason similar to the one given above for the vanishing of the integral over the hemisphere.
Extended source
Assume that the aperture is illuminated by an extended source wave. The complex amplitude at the aperture is given by U0(r).
It is assumed, as before, that the values of $U$ and $\partial U/\partial\hat{\mathbf{n}}$ in the area A1 are the same as when the screen is not present, that the values of $U$ and $\partial U/\partial\hat{\mathbf{n}}$ in A2 are zero (Kirchhoff's boundary conditions) and that the contribution from A3 to the integral is also zero. It is also assumed that $1/s$ is negligible compared with $k$. We then have

$$U(P) = \frac{1}{4\pi} \int_{A_1} \left[ U_0\,\frac{e^{iks}}{s}\,ik\cos(\hat{\mathbf{n}}, \mathbf{s}) - \frac{e^{iks}}{s}\,\frac{\partial U_0}{\partial\hat{\mathbf{n}}} \right] dS.$$
This is the most general form of the Kirchhoff diffraction formula. To solve this equation for an extended source, an additional integration would be required to sum the contributions made by the individual points in the source. If, however, we assume that the light from the source at each point in the aperture has a well-defined direction, which is the case if the distance between the source and the aperture is significantly greater than the wavelength, then we can write
where a(r) is the magnitude of the disturbance at the point r in the aperture. We then have
and thus
Fraunhofer and Fresnel diffraction equations
In spite of the various approximations that were made in arriving at the formula, it is adequate to describe the majority of problems in instrumental optics. This is mainly because the wavelength of light is much smaller than the dimensions of any obstacles encountered. Analytical solutions are not possible for most configurations, but the Fresnel diffraction equation and Fraunhofer diffraction equation, which are approximations of Kirchhoff's formula for the near field and far field, can be applied to a very wide range of optical systems.
One of the important assumptions made in arriving at the Kirchhoff diffraction formula is that r and s are significantly greater than λ. Another approximation can be made, which significantly simplifies the equation further: this is that the distances P0Q and QP are much greater than the dimensions of the aperture. This allows one to make two further approximations:
cos(n, r) − cos(n, s) is replaced with 2cos β, where β is the angle between P0P and the normal to the aperture. The factor 1/rs is replaced with 1/r′s′, where r′ and s′ are the distances from P0 and P to the origin, which is located in the aperture. The complex amplitude then becomes:

$$U(P) = -\frac{i a \cos\beta}{\lambda\, r's'} \int_{A_1} e^{ik(r+s)}\, dS.$$
Assume that the aperture lies in the x′y′ plane, and the coordinates of P0, P and Q (a general point in the aperture) are (x0, y0, z0), (x, y, z) and (x′, y′, 0) respectively. We then have:

$$r'^2 = x_0^2 + y_0^2 + z_0^2, \qquad s'^2 = x^2 + y^2 + z^2.$$

We can express $r$ and $s$ as follows:

$$r = \sqrt{(x_0 - x')^2 + (y_0 - y')^2 + z_0^2} = r'\sqrt{1 + \frac{x'^2 + y'^2 - 2x_0x' - 2y_0y'}{r'^2}},$$

$$s = \sqrt{(x - x')^2 + (y - y')^2 + z^2} = s'\sqrt{1 + \frac{x'^2 + y'^2 - 2xx' - 2yy'}{s'^2}}.$$

These can be expanded as power series (using $\sqrt{1+u} = 1 + \tfrac{1}{2}u - \tfrac{1}{8}u^2 + \cdots$):

$$r = r'\left[1 + \frac{x'^2 + y'^2 - 2x_0x' - 2y_0y'}{2r'^2} - \frac{\left(x'^2 + y'^2 - 2x_0x' - 2y_0y'\right)^2}{8r'^4} + \cdots\right],$$

and similarly for $s$.
The complex amplitude at P can now be expressed as

$$U(P) = -\frac{i a \cos\beta}{\lambda\, r's'}\, e^{ik(r'+s')} \int_{A_1} e^{ik f(x', y')}\, dx'\,dy',$$

where $f(x', y')$ includes all the terms in the expressions above for $s$ and $r$ apart from the first term in each expression and can be written in the form

$$f(x', y') = c_1 x' + c_2 y' + c_3 x'^2 + c_4 y'^2 + c_5 x'y' + \cdots,$$
where the ci are constants.
Fraunhofer diffraction
If all the terms in $f(x', y')$ can be neglected except for the terms linear in $x'$ and $y'$, we have the Fraunhofer diffraction equation. If the direction cosines of P0Q and PQ are

$$l_0 = -\frac{x_0}{r'}, \quad m_0 = -\frac{y_0}{r'}; \qquad l = \frac{x}{s'}, \quad m = \frac{y}{s'},$$
The Fraunhofer diffraction equation is then

$$U(P) = C \int_{A_1} e^{\,ik\left[(l_0 - l)x' + (m_0 - m)y'\right]}\, dx'\,dy',$$
where C is a constant. This can also be written in the form

$$U(P) = C \int_{A_1} e^{\,i(\mathbf{k}_0 - \mathbf{k})\cdot\mathbf{r}'}\, dS,$$

where $\mathbf{k}_0$ and $\mathbf{k}$ are the wave vectors of the waves traveling from P0 to the aperture and from the aperture to P respectively, and $\mathbf{r}'$ is a point in the aperture.
If the point source is replaced by an extended source whose complex amplitude at the aperture is given by U0(r′), then the Fraunhofer diffraction equation is:

$$U(P) = C \int_{A_1} a_0(\mathbf{r}')\, e^{\,i(\mathbf{k}_0 - \mathbf{k})\cdot\mathbf{r}'}\, dS,$$
where a0(r') is, as before, the magnitude of the disturbance at the aperture.
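Since the Fraunhofer integral above is, up to constant factors, a Fourier transform of the aperture function, the far-field intensity can be approximated numerically with an FFT. The following Python sketch is illustrative only and not part of the original treatment; the wavelength, grid spacing, and aperture size are arbitrary assumptions.

```python
import numpy as np

# Illustrative Fraunhofer diffraction of a rectangular aperture via FFT.
# Assumed parameters (not from the article): 500 nm light, 1 um sampling,
# a 50 um x 50 um square opening with unit-amplitude illumination.
wavelength = 500e-9
n = 1024
dx = 1e-6

x = (np.arange(n) - n // 2) * dx
X, Y = np.meshgrid(x, x)

# a0(r') = 1 inside the opening, 0 on the opaque screen (Kirchhoff conditions).
aperture = ((np.abs(X) < 25e-6) & (np.abs(Y) < 25e-6)).astype(float)

# The far field is proportional to the Fourier transform of the aperture
# function; spatial frequency f_x maps to direction cosine l = lambda * f_x
# (valid for small angles, |l| << 1).
far_field = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(aperture)))
intensity = np.abs(far_field) ** 2

fx = np.fft.fftshift(np.fft.fftfreq(n, d=dx))
direction_cosines = wavelength * fx

# The central row shows the familiar sinc^2-type pattern of a square opening.
print(intensity[n // 2, n // 2 - 3 : n // 2 + 4] / intensity.max())
```

The constant C is dropped here, since only relative intensities are of interest in such a sketch.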
In addition to the approximations made in deriving the Kirchhoff equation, it is assumed that
r and s are significantly greater than the size of the aperture,
second- and higher-order terms in the expression f(x, y) can be neglected.
Fresnel diffraction
When the quadratic terms cannot be neglected but all higher-order terms can, the equation becomes the Fresnel diffraction equation. The approximations for the Kirchhoff equation are used, and additional assumptions are (a rule of thumb for distinguishing the Fresnel and Fraunhofer regimes follows the list):
r and s are significantly greater than the size of the aperture,
third- and higher-order terms in the expression f(x, y) can be neglected.
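A common rule of thumb, not stated in this article but consistent with the assumptions above, compares the two regimes through the Fresnel number F = a²/(λL), where a is a characteristic aperture half-width and L the relevant propagation distance: F ≪ 1 suggests the Fraunhofer equation is adequate, while F of order unity or larger calls for the Fresnel treatment. A minimal sketch with illustrative values:

```python
# Rule-of-thumb regime check via the Fresnel number F = a^2 / (lambda * L).
# Values and the 0.1 threshold are conventional illustrations, not sharp limits.
def fresnel_number(half_width_m: float, wavelength_m: float, distance_m: float) -> float:
    return half_width_m ** 2 / (wavelength_m * distance_m)

F = fresnel_number(0.5e-3, 500e-9, 1.0)  # 0.5 mm half-width, 500 nm, 1 m screen
print(F, "Fraunhofer regime" if F < 0.1 else "Fresnel regime")
```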
References
Further reading
Baker, B.B.; Copson, E.T. (1939, 1950). The Mathematical Theory of Huygens' Principle. Oxford.
Waves
Physical optics
Diffraction
Gustav Kirchhoff | Kirchhoff's diffraction formula | Physics,Chemistry,Materials_science | 2,556 |
16,811,582 | https://en.wikipedia.org/wiki/Roll%20program | A roll program or tilt maneuver is an aerodynamic maneuver that alters the attitude of a vertically launched space launch vehicle. It consists of a partial rotation around the vehicle's vertical axis, allowing the vehicle to then pitch to follow the proper azimuth toward orbit.
A roll program is usually completed after the vehicle clears the tower. In the case of many NASA crewed launches, the commander reports the roll to the mission control center which is then acknowledged by the capsule communicator.
Saturn V
The Saturn V's roll program was initiated shortly after launch and was handled by the first stage. It was open-loop: the commands were pre-programmed to occur at a specific time after lift-off, and no closed loop control was used. This made the program simpler to design at the expense of not being able to correct for unforeseen conditions such as high winds. The rocket simply initiated its roll program at the appropriate time after launch, and rolled until an adequate amount of time had passed to ensure that the desired roll angle was achieved.
Roll on the Saturn V was initiated by tilting the engines simultaneously using the roll and pitch servomechanisms, which served to initiate a rolling torque on the vehicle.
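To make the open-loop scheme concrete, the sketch below issues a commanded roll angle purely as a function of time since lift-off, with no feedback from measured attitude. All names and numbers are hypothetical illustrations, not actual Saturn V guidance values.

```python
import math

# Hypothetical open-loop roll program: the command depends only on elapsed
# time, so unforeseen conditions (e.g. high winds) are never corrected for.
PAD_AZIMUTH_DEG = 90.0     # launcher alignment on the pad (assumed)
FLIGHT_AZIMUTH_DEG = 72.0  # desired launch azimuth (assumed)
ROLL_RATE_DEG_S = 1.0      # commanded roll rate (assumed)
ROLL_START_S = 10.0        # begin after clearing the tower (assumed)

ROLL_ANGLE_DEG = FLIGHT_AZIMUTH_DEG - PAD_AZIMUTH_DEG
ROLL_END_S = ROLL_START_S + abs(ROLL_ANGLE_DEG) / ROLL_RATE_DEG_S

def commanded_roll(t_s: float) -> float:
    """Commanded roll angle in degrees at time t_s: a pure function of time."""
    if t_s <= ROLL_START_S:
        return 0.0
    if t_s >= ROLL_END_S:
        return ROLL_ANGLE_DEG  # hold attitude once the programmed roll is done
    elapsed = t_s - ROLL_START_S
    return math.copysign(elapsed * ROLL_RATE_DEG_S, ROLL_ANGLE_DEG)

for t in (5.0, 15.0, 30.0):
    print(f"t+{t:>4.0f} s: commanded roll {commanded_roll(t):+.1f} deg")
```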
Space Shuttle
During the launch of a Space Shuttle, the roll program was simultaneously accompanied by a pitch maneuver and yaw maneuver.
The roll program occurred during a Shuttle launch for the following reasons:
To place the shuttle in a heads down position
Increasing the mass that can be carried into orbit (this was in fact the initial reason: a roughly 20% payload increase due to more efficient aerodynamics and moment balancing between the boosters and main engines)
Increasing the orbital altitude
Simplifying the trajectory of a possible Return to Launch site abort maneuver
Improving radio line-of-sight propagation
Orienting the shuttle more parallel toward the ground with the nose to the east
In 1971–72, the RAGMOP computer program (Northrop) revealed a roughly 20% payload increase from flying heads-down: payload to a 150 NM equatorial orbit rose from about 40,000 lb to about 48,000 lb without violating any constraints (max Q, 3 g limit, etc.). The initial incentive to roll was thus the payload increase, gained by minimizing drag losses and moment-balancing losses through keeping the main engine thrust vectors more nearly parallel to the SRBs.
References
Spaceflight
Rocketry
Aerodynamics | Roll program | Chemistry,Astronomy,Engineering | 480 |
50,694,498 | https://en.wikipedia.org/wiki/Dimepregnen | Dimepregnen (INN, BAN) (developmental code name ST-1411), or 6α,16α-dimethylpregn-4-en-3β-ol-20-one, is a pregnene steroid described as an antiestrogen that was synthesized in 1968 and was never marketed. It is similar in structure to the progestins and progesterone derivatives melengestrol and anagestone.
See also
Anagestone acetate
Medroxyprogesterone acetate
Megestrol acetate
Melengestrol acetate
References
Sterols
Antiestrogens
Antigonadotropins
Ketones
Pregnanes
Progestogens | Dimepregnen | Chemistry | 145 |
34,338,776 | https://en.wikipedia.org/wiki/Sony%20SmartWatch | The Sony SmartWatch is a line of wearable devices developed and marketed by Sony Mobile from 2012 to 2016 through three generations. They connect to Android smartphones and can display information such as Twitter feeds and SMS messages, among other things.
Original
The original Sony SmartWatch, model MN2SW, came with a flexible silicone wristband with multiple colors available. It was introduced at CES 2012 and launched later in March 2012.
Sony SmartWatch 2
The Sony SmartWatch 2, model SW2, was launched in late September 2013.
The SW2 worked with any Android 4.0 (and higher) smartphone, unlike Samsung's competing Galaxy Gear smartwatch, which only worked with some of Samsung's own Galaxy handsets. The watch featured an aluminum body and came with the option of a silicone or metal wristband, but could be used with any 24 mm wristband. It was 1.65 inches tall by 1.61 inches wide by 0.35 inch thick, weighed 0.8 ounces, and sported a transflective LCD screen with a 220×176 resolution. The SW2 connected to the smartphone using Bluetooth and supported NFC for easy pairing. It was rated IP57, so it could be submerged in water up to a meter deep for 30 minutes, and was dust resistant.
Sony SmartWatch 3
At IFA 2014 the company announced the Sony Smartwatch 3. Its processor switched from previous generations' ARM Cortex-M MCU to an ARM Cortex-A CPU.
As noted by ABI Research, "The SmartWatch 3 has many new features such as waterproof (IP68 rated, not just resistant), improved styling, transition to Android Wear, and introduction of a new wearable platform from Broadcom. ... [It's] based on the Broadcom system-on-chip (SoC) platform which includes a 1.2GHz Quad-core ARM Cortex A7 processor (BCM23550), an improved GPS and ambient light sensor processing SoC (BCM47531) capable of simultaneously tracking five satellite systems (GPS, GLONASS, QZSS, SBAS, and BeiDou), the now popular Wi-Fi 802.11n/BT/NFC/FM quad-combo connectivity chip (BCM43341), and a highly integrated power management IC (BCM59054)."
Several apps are capable of using the Smartwatch 3's GPS, including:
Google MyTracks (since 2014, December)
RunKeeper (since 2014, December)
Endomondo (since 2015, March)
iFit
Ghostracer (can upload to Strava)
Strava (Beta)
Rambler
The watch is also capable of tracking swimming with swim.com and golf swings with vimoGolf.
The Sony SmartWatch 3 will not be upgraded to version 2.0 of Android Wear.
Model comparison
See also
Smartwatch
Pebble (watch)
Moto 360 (2nd generation), a smartwatch by Motorola
Sony Ericsson LiveView
Samsung Galaxy Gear
References
External links
Coverage: The Verge, Gizmodo, Computerworld, Engadget
Smartwatch 3
Sony hardware
Smartwatches | Sony SmartWatch | Technology | 651 |
38,513,558 | https://en.wikipedia.org/wiki/Dualizing%20module | In abstract algebra, a dualizing module, also called a canonical module, is a module over a commutative ring that is analogous to the canonical bundle of a smooth variety. It is used in Grothendieck local duality.
Definition
A dualizing module for a Noetherian ring R is a finitely generated module M such that, for any maximal ideal m, the R/m vector space $\operatorname{Ext}^n_R(R/m, M)$ vanishes if n ≠ height(m) and is 1-dimensional if n = height(m).
A dualizing module need not be unique because the tensor product of any dualizing module with a rank 1 projective module is also a dualizing module. However this is the only way in which the dualizing module fails to be unique: given any two dualizing modules, one is isomorphic to the tensor product of the other with a rank 1 projective module.
In particular if the ring is local the dualizing module is unique up to isomorphism.
A Noetherian ring does not necessarily have a dualizing module. Any ring with a dualizing module must be Cohen–Macaulay. Conversely if a Cohen–Macaulay ring is a quotient of a Gorenstein ring then it has a dualizing module. In particular any complete local Cohen–Macaulay ring has a dualizing module. For rings without a dualizing module it is sometimes possible to use the dualizing complex as a substitute.
Examples
If R is a Gorenstein ring, then R considered as a module over itself is a dualizing module.
If R is an Artinian local ring then the Matlis module of R (the injective hull of the residue field) is the dualizing module.
The Artinian local ring R = k[x,y]/(x², y², xy) has a unique dualizing module, but it is not isomorphic to R; a short verification follows this list.
The ring Z[√−5] has two non-isomorphic dualizing modules, corresponding to the two classes of invertible ideals.
The local ring k[x,y]/(y², xy) is not Cohen–Macaulay, so it does not have a dualizing module.
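A short verification of the k[x,y]/(x², y², xy) example above, using the Matlis dual description given earlier (an elementary computation of our own, not taken from the sources):

```latex
Let $R = k[x,y]/(x^2,\,y^2,\,xy)$, with $k$-basis $\{1, x, y\}$ and maximal
ideal $\mathfrak{m} = (x,y)$, so $\mathfrak{m}^2 = 0$. The dualizing module is
the Matlis dual $M = \operatorname{Hom}_k(R,k)$, with dual basis
$\{1^*, x^*, y^*\}$ and $R$-action $(r \cdot f)(s) = f(rs)$. One checks that
\[
  x \cdot x^* = 1^*, \qquad y \cdot y^* = 1^*, \qquad
  x \cdot y^* = y \cdot x^* = 0, \qquad \mathfrak{m} \cdot 1^* = 0,
\]
so $\mathfrak{m}M = k\,1^*$ and $\dim_k M/\mathfrak{m}M = 2$. Thus $M$ needs
two generators while $R$ is cyclic, and therefore $M \not\cong R$;
equivalently, $R$ is not Gorenstein, its socle $kx \oplus ky$ being
$2$-dimensional.
```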
See also
dualizing sheaf
References
Commutative algebra | Dualizing module | Mathematics | 451 |
743,441 | https://en.wikipedia.org/wiki/Bouguer%20anomaly | In geodesy and geophysics, the Bouguer anomaly (named after Pierre Bouguer) is a gravity anomaly, corrected for the height at which it is measured and the attraction of terrain. The height correction alone gives a free-air gravity anomaly.
Definition
The Bouguer anomaly $g_B$ is defined as:

$$g_B = g_{fa} - \delta g_B + \delta g_T$$

Here,

$g_{fa}$ is the free-air gravity anomaly.
$\delta g_B$ is the Bouguer correction, which allows for the gravitational attraction of rocks between the measurement point and sea level;
$\delta g_T$ is a terrain correction, which allows for deviations of the surface from an infinite horizontal plane
The free-air anomaly $g_{fa}$, in its turn, is related to the observed gravity $g_{obs}$ as follows:

$$g_{fa} = g_{obs} - g_{\lambda} + \delta g_{fa}$$

where:

$g_{\lambda}$ is the correction for latitude (because the Earth is not a perfect sphere; see normal gravity);
$\delta g_{fa}$ is the free-air correction.
Reduction
A Bouguer reduction is called simple (or incomplete) if the terrain is approximated by an infinite flat plate called the Bouguer plate. A refined (or complete) Bouguer reduction removes the effects of terrain more precisely. The difference between the two is called the (residual) terrain effect (or (residual) terrain correction) and is due to the differential gravitational effect of the unevenness of the terrain; it is always negative.
Simple reduction
The gravitational acceleration outside a Bouguer plate is perpendicular to the plate and towards it, with magnitude 2πG times the mass per unit area, where G is the gravitational constant. It is independent of the distance to the plate (as can be proven most simply with Gauss's law for gravity, but can also be proven directly with Newton's law of gravity). The value of 2πG is about 4.19×10⁻¹⁰ m³·kg⁻¹·s⁻², so the attraction is 4.19×10⁻¹⁰ m·s⁻² per kg·m⁻² of mass per unit area. Using 1 mGal = 10⁻⁵ m·s⁻², this is 4.19×10⁻⁵ mGal per kg·m⁻². For mean rock density (2,670 kg·m⁻³) this gives 0.1119 mGal per metre of plate thickness.
The Bouguer reduction for a Bouguer plate of thickness H is

$$\delta g_B = 2\pi G \rho H,$$

where ρ is the density of the material and G is the constant of gravitation. On Earth the effect of elevation on gravity is a decrease of 0.3086 mGal m⁻¹ when going up; subtracting the gravity of the Bouguer plate gives the Bouguer gradient of 0.1967 mGal m⁻¹.
More generally, for a mass distribution with the density depending on one Cartesian coordinate z only, gravity for any z is 2πG times the difference in mass per unit area on either side of this z value. In particular, a combination of two parallel infinite plates of equal mass per unit area produces no gravity between them.
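As a numerical cross-check of the figures quoted above, the following minimal sketch recomputes the plate attraction and the combined Bouguer gradient from plain SI arithmetic (the density is the mean rock value assumed above):

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
RHO_ROCK = 2670.0  # mean rock density, kg/m^3 (assumed above)
MS2_TO_MGAL = 1e5  # 1 mGal = 1e-5 m/s^2

def bouguer_plate_mgal(thickness_m: float, density: float = RHO_ROCK) -> float:
    """Attraction of an infinite flat plate, 2*pi*G*rho*H, in mGal."""
    return 2 * math.pi * G * density * thickness_m * MS2_TO_MGAL

free_air_gradient = 0.3086               # mGal per metre, as quoted above
plate_per_metre = bouguer_plate_mgal(1.0)
bouguer_gradient = free_air_gradient - plate_per_metre

print(f"plate attraction per metre: {plate_per_metre:.4f} mGal")    # ~0.1119
print(f"combined Bouguer gradient:  {bouguer_gradient:.4f} mGal/m")  # ~0.1967
```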
See also
Gravity map
Notes
References
External links
Bouguer anomalies of Belgium. The blue regions are related to deficit masses in the subsurface
Bouguer gravity anomaly grid for the conterminous US by the United States Geological Survey.
Bouguer anomaly map of Grahamland F.J. Davey (et al.), British Antarctic Survey, BAS Bulletins 1963-1988
Bouguer anomaly map depicting south-eastern Uruguay's Merín Lagoon anomaly (amplitude greater than +100 mGal), and detail of site.
List of Magnetic and Gravity Maps by State by the United States Geological Survey.
Geophysics
Gravimetry | Bouguer anomaly | Physics | 640 |
2,920,661 | https://en.wikipedia.org/wiki/Tetraacetylethylenediamine | Tetraacetylethylenediamine, commonly abbreviated as TAED, is an organic compound with the formula (CH3C(O))2NCH2CH2N(C(O)CH3)2. It is a white solid commonly used as a bleach activator in laundry detergents and in the production of paper pulp. TAED is synthesized through the acetylation of ethylenediamine.
Use and mechanism of action
TAED is an important component of laundry detergents that use "active oxygen" bleaching agents. Active oxygen bleaching agents include sodium perborate, sodium percarbonate, sodium perphosphate, sodium persulfate, and urea peroxide. These compounds release hydrogen peroxide during the wash cycle, but the release of hydrogen peroxide is low when these compounds are used at temperatures below 60 °C. TAED and hydrogen peroxide react to form peroxyacetic acid, a more efficient bleach, allowing lower temperature wash cycles, around 40 °C. TAED was first used in a commercial laundry detergent in 1978 (Skip by Unilever). Currently, TAED is the main bleach activator used in European laundry detergents and has an estimated annual consumption of 75 kt.
Perhydrolysis
TAED reacts with alkaline peroxide via the process called perhydrolysis, releasing peracetic acid. The first perhydrolysis gives triacetylethylenediamine (TriAED) and the second gives diacetylethylenediamine (DAED):

TAED + H2O2 → TriAED + CH3CO3H
TriAED + H2O2 → DAED + CH3CO3H
TAED typically provides only two equivalents of peracetic acid, although four are theoretically possible.
Competing with perhydrolysis, TAED also undergoes some hydrolysis, which is an unproductive pathway.
Preparation
TAED is prepared in a two-stage process from ethylenediamine and acetic anhydride. The process is nearly quantitative.
Properties
Powdered TAED is stabilized by granulation with the aid of the sodium salt of carboxymethylcellulose (Na-CMC); the granules are sometimes additionally coated blue or green. Despite the relatively low solubility of TAED in cool water (1 g/L at 20 °C), the granulate dissolves rapidly in the washing liquor.
The peroxyacetic acid formed has bactericidal, virucidal and fungicidal properties, thereby enabling TAED with percarbonate to disinfect and deodorize.
Ecology
TAED is largely non-toxic and readily biodegradable. TAED and its byproduct DAED have low aquatic ecotoxicity. TAED shows very low toxicity by all exposure routes, is practically non-irritating to skin and eyes, and gives no indication of skin sensitization. It is not mutagenic and not teratogenic. TAED, TriAED and DAED are all completely biodegradable and substantially removed during wastewater treatment.
References
External links
US Patent 6528470 - Bleaching activator
HERA RiskAssessment
Cleaning product components
Acetamides | Tetraacetylethylenediamine | Technology | 663 |
41,851,667 | https://en.wikipedia.org/wiki/Vent%20DNA%20polymerase | Vent polymerase is an archaeal thermostable DNA polymerase used in the polymerase chain reaction. It was isolated from the thermophile Thermococcus litoralis.
References
DNA replication
EC 2.7.7
Polymerase chain reaction | Vent DNA polymerase | Chemistry,Biology | 55 |
38,791,208 | https://en.wikipedia.org/wiki/History%20of%20catecholamine%20research | The catecholamines are a group of neurotransmitters composed of the endogenous substances dopamine, noradrenaline (norepinephrine), and adrenaline (epinephrine), as well as numerous artificially synthesized compounds such as isoprenaline, an anti-bradycardiac medication. Their investigation constitutes a major chapter in the history of physiology, biochemistry, and pharmacology. Adrenaline was the first hormone extracted from an endocrine gland and obtained in pure form, before the word hormone was coined. Adrenaline was also the first hormone whose structure and biosynthesis were discovered. After acetylcholine, adrenaline and noradrenaline were among the first neurotransmitters discovered, and the first intercellular biochemical signals to be found in intracellular vesicles. The β-adrenoceptor gene was the first G protein-coupled receptor gene to be cloned.
Adrenaline in the adrenal medulla
Forerunners
British physician and physiologist Henry Hyde Salter (1823–1871) included a chapter on treatment by "stimulants" in a book on asthma which was first published in 1860. He noted the benefits of strong coffee, presumably because it dispelled sleep, which favored asthma. Even more impressive to him, however, was the response to "strong mental emotion": "The cure of asthma by violent emotion is more sudden and complete than by any other remedy whatever; indeed, I know few things more striking and curious in the whole history of therapeutics. The cure takes no time; it is instantaneous, the intensest paroxysm ceases on the instant." The retrospective interpretation is that the "cure" was due to the release of adrenaline from the adrenal glands.
At the same time, the French physician Alfred Vulpian also made discoveries about the adrenal medulla. Material scraped from the adrenal medulla turned green when ferric chloride was added. This did not occur with the adrenal cortex nor with any other tissue. Vulpian even came to the insight that the substance entered "le torrent circulatoire" ("the circulatory torrent"), as blood from the adrenal veins did give the ferric chloride reaction.
In the early 1890s, in the laboratory of Oswald Schmiedeberg in Strasbourg, the German pharmacologist Carl Jacob (1857–1944) studied the relationship between the adrenal glands and the intestine. Electrical stimulation of the vagus nerve or injection of muscarine elicited peristalsis. This peristalsis was promptly abolished by electrical stimulation of the adrenal glands. The experiment has been called "the first indirect demonstration of the role of the adrenal medulla as an endocrine organ and actually a more sophisticated demonstration of the adrenal medullary function than the classic study of Oliver and Schafer". While this may be true, Jacob did not envisage a chemical signal secreted into the blood to influence distant organs, the actual function of a hormone, but nerves running from the adrenals to the gut, "Hemmungsbahnen für die Darmbewegung" ("inhibitory pathways for intestinal movement").
Oliver and Schäfer 1893–1894
George Oliver was a physician practicing in the spa town of Harrogate in North Yorkshire while Edward Albert Schäfer was Professor of Physiology at University College London. In 1918, he prefixed the surname of his physiology teacher, William Sharpey, to his own to become Edward Albert Sharpey Schafer. The canonical story, told by Henry Hallett Dale, who worked at University College London from 1902 to 1904, runs as follows:
Dr. Oliver, I was told, had a liking and a ′flair′ for the invention of simple appliances, with which observations and experiments could be made on the human subject. Dr Oliver had invented a small instrument with which he claimed to be able to measure, through the unbroken skin, the diameter of a living artery, such as the radial artery at the wrist. He appears to have used his family in his experiments, and a young son was the subject of a series, in which Dr Oliver measured the diameter of the radial artery, and observed the effect upon it of injecting extracts of various animal glands under the skin. … We may picture, then, Professor Schafer, in the old physiological laboratory at University College, … finishing an experiment of some kind, in which he was recording the arterial blood pressure of an anaesthetized dog. … To him enters Dr Oliver, with the story of the experiments on his boy, and, in particular, with the statement that injection under the skin of a glycerin extract from calf's suprarenal gland was followed by a definite narrowing of the radial artery. Professor Schafer is said to have been entirely skeptical, and to have attributed the observation to self-delusion. … He can hardly be blamed, I think; knowing even what we now know about the action of this extract, which of us would be prepared to believe that injecting it under a boy's skin would cause his radial artery to become measurably more slender? Dr Oliver, however, is persistent; he … suggests that, at least, it will do no harm to inject into the circulation, through a vein, a little of the suprarenal extract, which he produces from his pocket. So Professor Schafer makes the injection, expecting a triumphant demonstration of nothing, and finds himself standing ′like some watcher of the skies, when a new planet swims into his ken,′ watching the mercury rise in the manometer with amazing rapidity and to an astounding height.
Despite this tale being reiterated many times, it is not beyond doubt. Dale himself said that it was handed down at University College and showed some surprise that the constriction of the radial artery was measurable. Of Oliver's descendants, none recalled experiments on his son. Dale's report of subcutaneous injections contradicts the concerned parties. Oliver: "During the winter of 1893–4, while prosecuting an inquiry as to … agents that vary the caliber of … arteries … I found that the administration by the mouth of a glycerin extract of the adrenals of the sheep and calf produced a marked constrictive action on the arteries." Schafer: "In the autumn of 1893 there called upon me in my laboratory at University College a gentleman who was personally unknown to me. … I found that my visitor was Dr. George Oliver, [who] was desirous of discussing with me the results which he had been obtaining from the exhibition by the mouth of extracts from certain animal tissues, and the effects which these had in his hands produced upon the blood vessels of man." Systemic effects of orally given adrenaline are highly unlikely, so details of the canonical text may be legend.
On March 10, 1894, Oliver and Schafer presented their findings to the Physiological Society in London. A 47-page account followed a year later, in the style of the time without statistics, but with precise description of many individual experiments and 25 recordings on kymograph smoked drums, showing, besides the blood pressure increase, reflex bradycardia and contraction of the spleen. "It appears to be established as the result of these investigations that the suprarenal capsules are to be regarded although ductless, as strictly secreting glands. The material which they form and which is found, at least in its fully active condition, only in the medulla of the gland, produces striking physiological effects upon the muscular tissue generally, and especially upon that of the heart and arteries. Its action is produced mainly if not entirely by direct action."
The reports created a sensation. Oliver was fast to try adrenal extracts in patients, orally again and rather indiscriminately, from Addison's disease, hypotension ("loss of vasomotor tone"), Diabetes mellitus and Diabetes insipidus to Graves' disease ("exophthalmic goiter"). It seems he adhered to contemporary ideas of organotherapy, believing that powerful substances existed in tissues and ought to be discovered for medicinal use. He immediately went on to extract the pituitary gland and, again with Schafer, discovered vasopressin. In 1903 adrenaline, meanwhile purified, was first used in asthma. The use was based, not on the bronchodilator effect, which was discovered later, but on the vasoconstrictor effect, which was hoped to alleviate the "turgidity of the bronchial mucosa"—presumably vascular congestion and edema. Also as of 1903, adrenaline was added to local anesthetic solutions. The surgeon Heinrich Braun in Leipzig showed that it prolonged the anesthesia at the injection site and simultaneously reduced "systemic" effects elsewhere in the body.
Independent discoverers
A year after Oliver and Schafer, Władysław Szymonowicz (1869–1939) and Napoleon Cybulski of the Jagiellonian University in Kraków reported similar findings and conclusions. They found that blood from the adrenal veins caused hypertension when injected intravenously in a recipient dog, whereas blood from other veins did not, demonstrating that the adrenal pressor substance was in fact secreted into the blood and confirming Vulpian. The Polish authors freely acknowledged the priority of Oliver and Schäfer, and the British authors acknowledged the independence of Szymonowicz and Cybulski. The main difference was in the location of the action: to the periphery by Oliver and Schäfer but, erroneously, to the central nervous system by Szymonowicz and Cybulski.
Another year later, the US-American ophthalmologist William Bates, perhaps motivated like Oliver, instilled adrenal extracts into the eye and found that "the conjunctiva of the globe and lids whitened in a few minutes", correctly explained the effect by vasoconstriction, and administered the extracts in various eye diseases.
Chemistry
In 1897, John Jacob Abel in Baltimore partially purified adrenal extracts to what he called "epinephrin", and Otto von Fürth in Strasbourg to what he called "Suprarenin". The Japanese chemist Jōkichi Takamine, who had set up his own laboratory in New York, invented an isolation procedure and obtained it in pure crystal form in 1901, and arranged for Parke-Davis to market it as "Adrenalin", spelt without the terminal "e". In 1903, natural adrenaline was found to be optically active and levorotatory. In 1905 synthesis of the racemate was achieved by Friedrich Stolz at Hoechst AG in Höchst (Frankfurt am Main) and by Henry Drysdale Dakin at the University of Leeds. In 1906 the chemical structure was elucidated by Ernst Joseph Friedmann (1877–1956) in Strasbourg, and in 1908 the dextrorotatory enantiomer was shown to be almost inactive by Arthur Robertson Cushny (1866–1926) at the University of Michigan, leading him to conclude that "the 'receptive substance' affected by adrenalin" is able to discriminate between the optical isomers and, hence, is itself optically active. Overall, 32 designations have been coined, of which "adrenaline", preferred in the United Kingdom, and "epinephrine", preferred in the United States, persist as generic names in the scientific literature.
Adrenaline as a transmitter
A new chapter was opened when Max Lewandowsky in 1899 in Berlin observed that adrenal extracts acted on the smooth muscle of the eye and orbit of cats—such as the iris dilator muscle and nictitating membrane—in the same way as sympathetic nerve stimulation. The correspondence was extended by John Newport Langley and, under his supervision, Thomas Renton Elliott in Cambridge. In four papers in volume 31, 1904, of the Journal of Physiology Elliott described the similarities organ by organ. His hypothesis stands in the abstract of a presentation to the Physiological Society of May 21, 1904, a little over ten years after Oliver and Schafer's presentation: "Adrenalin does not excite sympathetic ganglia when applied to them directly, as does nicotine. Its effective action is localized at the periphery. I find that even after complete denervation, whether of three days' or ten months' duration, the plain muscle of the dilatator pupillae will respond to adrenalin, and that with greater rapidity and longer persistence than does the iris whose nervous relations are uninjured. Therefore, it cannot be that adrenalin excites any structure derived from, and dependent for its persistence on, the peripheral neurone. ... The point at which the stimulus of the chemical excitant is received, and transformed into what may cause the change of tension of the muscle fiber, is perhaps a mechanism developed out of the muscle cell in response to its union with the synapsing sympathetic fiber, the function of which is to receive and transform the nervous impulse. "Adrenalin" might then be the chemical stimulant liberated on each occasion when the impulse arrives at the periphery." The abstract is the "birth certificate" of chemical neurotransmission. Elliott was never so explicit again. It seems he was discouraged by the lack of a favorable response from his seniors, Langley in particular, and a few years later he left physiological research.
The breakthrough for chemical neurotransmission came when, in 1921, Otto Loewi in Graz demonstrated the "humorale Übertragbarkeit der Herznervenwirkung" ("humoral transmissibility of the cardiac nerve action") in amphibians. Vagusstoff transmitted inhibition from the vagus nerves, and Acceleransstoff transmitted stimulation from the sympathetic nerves to the heart. Loewi took some years to commit himself with respect to the nature of the Stoffe, but in 1926 he was sure that Vagusstoff was acetylcholine, writing in 1936: "I no longer hesitate to identify the Sympathicusstoff with adrenaline."
He was correct in the latter statement. In most amphibian organs including the heart, the concentration of adrenaline far exceeds that of noradrenaline, and adrenaline is indeed the main transmitter. In mammals, however, difficulties arose. In a comprehensive structure-activity study of adrenaline-like compounds, Dale and the chemist George Barger in 1910 found that Elliott's hypothesis assumed a stricter parallelism between the effects of sympathetic nerve impulses and adrenaline than actually existed. For example, sympathetic impulses shared with adrenaline contractile effects in the trigone but not relaxant effects in the fundus of the cat's urinary bladder. In this respect, "amino-ethanol-catechol"—noradrenaline—mimicked sympathetic nerves more closely than adrenaline did. The Harvard Medical School physiologist Walter Bradford Cannon, who had popularized the idea of a sympatho-adrenal system preparing the body for fight and flight, and his colleague Arturo Rosenblueth developed an elaborate but "queer" theory of two sympathins, sympathin E (excitatory) and sympathin I (inhibitory). The Belgian pharmacologist Zénon Bacq as well as Canadian and US-American pharmacologists between 1934 and 1938 suggested that noradrenaline might be the—or at least one—postganglionic sympathetic transmitter. However, nothing definite was brought to light till after the war. In the meantime, Dale created a terminology that has since imprinted the thinking of neuroscientists: nerve cells should be named after their transmitter, i.e. cholinergic if the transmitter was "a substance like acetylcholine", and adrenergic if it was "some substance like adrenaline".
In 1936, the year when Loewi accepted adrenaline as the (amphibian) sympathetic transmitter, Dale and Loewi received the Nobel Prize in Physiology or Medicine "for their discoveries relating to chemical transmission of nerve impulses".
Formation and destruction
In a review of earlier work on catecholamine biosynthesis, German-British biochemist Hermann Blaschko (1900–1993) wrote: "Our modern knowledge of the biosynthetic pathway for the catecholamines begins in 1939, with the publication of a paper by Peter Holtz and his colleagues: they described the presence in the guinea-pig kidneys of an enzyme that they called dopa decarboxylase, because it catalyzed the formation of dopamine and carbon dioxide from the amino acid L-dopa." The paper by Peter Holtz (1902–1970) and his coworkers referred to in that quote originated from the Institute of Pharmacology in Rostock. Already in that same year, both Blaschko at Cambridge and Holtz in Rostock predicted the entire sequence tyrosine → l-DOPA → oxytyramine = dopamine → noradrenaline → adrenaline. Edith Bülbring, who also had fled National Socialist racism in 1933, demonstrated methylation of noradrenaline to adrenaline in adrenal tissue in Oxford in 1949, and Julius Axelrod detected phenylethanolamine N-methyltransferase in Bethesda, Maryland in 1962. The two remaining enzymes, tyrosine hydroxylase and dopamine β-hydroxylase, were also characterized around 1960.
Even before contributing to the formation pathway, Blaschko had discovered a destruction mechanism. An enzyme tyramine oxidase described in 1928 also oxidized dopamine, noradrenaline and adrenaline. It was later named monoamine oxidase. This seemed to clarify the fate of the catecholamines in the body, but in 1956 Blaschko suggested that, because the oxidation was slow, "other mechanisms of inactivation … will be found to play an important part. Here is a gap in our knowledge which remains to be filled." Within a year, Axelrod narrowed the gap by showing that dopamine, noradrenaline and adrenaline were O-methylated by catechol-O-methyl transferase. To fill the gap completely, however, the role of membranes had to be appreciated (see below).
Noradrenaline
Thanks to Holtz and Blaschko it was clear that animals synthesized noradrenaline. What was needed to attribute a transmitter role to it was proof of its presence in tissues at effective concentrations and not only as a short-lived intermediate. On April 16, 1945, Ulf von Euler of Karolinska Institute in Stockholm, who had already discovered or co-discovered substance P and prostaglandins, submitted to Nature the first of a series of papers that gave this proof. After many bioassays and chemical assays on organ extracts he concluded that mammalian sympathetically innervated tissues as well as, in small amounts, the brain, but not the nerve-free placenta, contained noradrenaline and that noradrenaline was the sympathin of Cannon and Rosenblueth, the "physiological transmitter of adrenergic nerve action in mammals". Overflow of noradrenaline into the venous blood of the cat's spleen upon sympathetic nerve stimulation two years later bore out the conclusion. In amphibian hearts, on the other hand, the transmitter role of adrenaline was confirmed.
The war prevented Peter Holtz and his group in Rostock from being recognized side by side with von Euler as discoverers of the second catecholamine transmitter noradrenaline. Their approach was different. They sought for catecholamines in human urine and found a blood pressure-increasing material Urosympathin that they identified as a mixture of dopamine, noradrenaline and adrenaline. "As to the origin of Urosympathin we would like to suggest the following. Dopamine in urine is the fraction that was not consumed for the synthesis of sympathin E and I. … Sympathin E and I, i.e. noradrenaline and adrenaline, are liberated in the region of the sympathetic nerve endings when these are excited." The manuscript was received by Springer-Verlag in Leipzig on October 8, 1944. On October 15, the printing office in Braunschweig was destroyed by an airstrike. Publication was delayed to volume 204, 1947, of Naunyn-Schmiedebergs Archiv für Pharmakologie und Experimentelle Pathologie. Peter Holtz later used to cite the paper as "Holtz et al. 1944/47" or "Holtz, Credner and Kroneberg 1944/47".
Remembering his and Barger's structure-activity analysis of 1910, Dale wrote in 1953: "Doubtless I ought to have seen that nor-adrenaline might be the main transmitter—that Elliott's theory might be right in principle and faulty only in this detail. ... It is easy, of course, to be wise in the light of facts recently discovered; lacking them I failed to jump to the truth, and I can hardly claim credit for having crawled so near and then stopped short of it."
The next step led to the central nervous system. It was taken by Marthe Vogt, a refugee from Germany who at that time worked with John Henry Gaddum in the Institute of Pharmacology of the University of Edinburgh. "The presence of noradrenaline and adrenaline in the brain has been demonstrated by von Euler (1946) and Holtz (1950). These substances were supposed, undoubtedly correctly, to occur in the cerebral vasomotor nerves. The present work is concerned with the question whether these sympathomimetic amines, besides their role as transmitters at vasomotor endings, play a part in the function of the central nervous tissue itself. In this paper, these amines will be referred to as sympathin, since they were found invariably to occur together, with noradrenaline representing the major component, as is characteristic for the transmitter of the peripheral sympathetic system." Vogt created a detailed map of noradrenaline in the dog brain. Its uneven distribution, not reflecting the distribution of vasomotor nerves, and its persistence after removal of the superior cervical ganglia made it "tempting to assign to the cerebral sympathin a transmitter role like that which we assign to the sympathin found in the sympathetic ganglia and their postganglionic fibers." Her assignment was confirmed, the finishing touch being the visualization of the noradrenaline as well as adrenaline and dopamine pathways in the central nervous system by Annica Dahlström and Kjell Fuxe with the formaldehyde fluorescence method developed by Nils-Åke Hillarp (1916–1965) and Bengt Falck (born 1927) in Sweden, and by immunochemistry techniques.
Dopamine
As noradrenaline is an intermediate on the path to adrenaline, dopamine is on the path to noradrenaline (and hence adrenaline). In 1957 dopamine was identified in the human brain by researcher Katharine Montagu. In 1958/59 Arvid Carlsson and his group in the Pharmacology Department of the University of Lund, including the medical students Åke Bertler and Evald Rosengren, not only found dopamine in the brain, but also—like noradrenaline in Marthe Vogt's exemplary study—in uneven distribution, quite different from the distribution of noradrenaline. This argued for a function beyond that of an intermediate. The concentration was highest in the corpus striatum, which contained only traces of noradrenaline. Carlsson's group had previously found that reserpine, which was known to cause a Parkinsonism syndrome, depleted dopamine (as well as noradrenaline and serotonin) from the brain. They concluded that "dopamine is concerned with the function of the corpus striatum and thus with the control of motor function". Thus for the first time the reserpine-induced Parkinsonism in laboratory animals and, by implication, Parkinson's disease in humans was related to depletion of striatal dopamine. A year later Oleh Hornykiewicz, who had been introduced to dopamine by Blaschko and was carrying out a color reaction on extracts of human corpus striatum in the Pharmacological Institute of the University of Vienna, saw the brain dopamine deficiency in Parkinson's disease "with his own naked eye: Instead of the pink color given by the comparatively high concentrations of dopamine in the control samples, the reaction vials containing the extracts of the Parkinson's disease striatum showed hardly a tinge of pink discoloration".
In 1970, von Euler and Axelrod were two of three winners of the Nobel Prize in Physiology or Medicine, "for their discoveries concerning the humoral transmitters in the nerve terminals and the mechanism for their storage, release and inactivation", and in 2000 Carlsson was one of three winners who got the prize "for their discoveries concerning signal transduction in the nervous system".
Membrane passage
Membranes play a twofold role for catecholamines: catecholamines must pass through membranes and deliver their chemical message at membrane receptors.
Catecholamines are synthesized inside cells and sequestered in intracellular vesicles. This was first shown by Blaschko and Arnold Welch (1908–2003) in Oxford and by Hillarp and his group in Lund for the adrenal medulla and later for sympathetic nerves and the brain. In addition the vesicles contained adenosine triphosphate (ATP), with a molar noradrenaline:ATP ratio in sympathetic nerve vesicles of 5.2:1 as determined by Hans-Joachim Schümann (1919–1998) and Horst Grobecker (born 1934) in Peter Holtz′ group at the Goethe University Frankfurt. Blaschko and Welch wondered how the catecholamines got out when nervous impulses reached the cells. Exocytosis was not among the possibilities they considered. Establishing it required the analogy of the "quantal" release of acetylcholine at the neuromuscular junction shown by Bernard Katz, third winner of the 1970 Nobel Prize in Physiology or Medicine; the demonstration of the co-release with catecholamines of other vesicle constituents such as ATP and dopamine β-hydroxylase; and the unquestionable electron microscopic images of vesicles fusing with the cell membrane.
Acetylcholine, once released, is degraded in the extracellular space by acetylcholinesterase, which faces that space. In the case of the catecholamines, however, the enzymes of degradation, monoamine oxidase and catechol-O-methyl transferase, like the enzymes of synthesis, are intracellular. Not metabolism but uptake through cell membranes therefore is the primary means of their clearance from the extracellular space. The mechanisms were deciphered beginning in 1959. Axelrod's group in Bethesda wished to clarify the in vivo fate of catecholamines using radioactively labelled catecholamines of high specific activity, which had just become available. 3H-adrenaline and 3H-noradrenaline given intravenously to cats were partly O-methylated, but another part was taken up in the tissues and stored unchanged. Erich Muscholl (born 1926) in Mainz, who had worked with Marthe Vogt in Edinburgh, wished to know how cocaine sensitized tissues to catecholamines—a fundamental mechanism of action of cocaine discovered by Otto Loewi and Alfred Fröhlich in 1910 in Vienna. Intravenous noradrenaline was taken up into the heart and spleen of rats, and cocaine prevented the uptake, "thus increasing the amount of noradrenaline available for combination with the adrenergic receptors". The uptake of 3H-noradrenaline was severely impaired after sympathectomy, indicating that it occurred mainly into sympathetic nerve terminals. In support of this, Axelrod and Georg Hertting (born 1925) showed that freshly incorporated 3H-noradrenaline was re-released from the cat spleen when the sympathetic nerves were stimulated. A few years later, Leslie Iversen (born 1937) in Cambridge found that other cells also took up catecholamines. He called uptake into noradrenergic neurons, which were cocaine-sensitive, uptake1 and uptake into other cells, which were cocaine-resistant, uptake2. With the reserpine-sensitive uptake from the cytoplasm into the storage vesicles there were thus three catecholamine membrane passage mechanisms. Iversen's book of 1967 "The Uptake and Storage of Noradrenaline in Sympathetic Nerves" was successful, showing the fascination of the field and its rich pharmacology.
With the advent of molecular genetics, the three transport mechanisms have been traced to the proteins and their genes since 1990. They now consist of the plasma membrane noradrenaline transporter (NAT or NET), the classical uptake1, and the analogous dopamine transporter (DAT); the plasma membrane extraneuronal monoamine transporter or organic cation transporter 3 (EMT or SLC22A3), Iversen's uptake2; and the vesicular monoamine transporter (VMAT) with two isoforms. Transporters and intracellular enzymes such as monoamine oxidase operating in series constitute what the pharmacologist Ullrich Trendelenburg at the University of Würzburg called metabolizing systems.
Receptors
Research on the catecholamines was interwoven with research on their receptors. In 1904, Dale became head of the Wellcome Physiological Research Laboratory in London and started research on ergot extracts. The relevance of his communication in 1906 "On some physiological actions of ergot" lies less in the effects of the extracts given alone than in their interaction with adrenaline: they reversed the normal pressor effect of adrenaline to a depressor effect and the normal contraction effect on the early-pregnant cat's uterus to relaxation: adrenaline reversal. The pressor and uterine contraction effects of pituitary extracts, in contrast, remained unchanged, as did the effects of adrenaline on the heart and effects of parasympathetic nerve stimulation. Dale clearly saw the specificity of the "paralytic" (antagonist) effect of ergot for "the so-called myoneural junctions connected with the true sympathetic or thoracic-lumbar division of the autonomic nervous system"—the adrenoceptors. He also saw its specificity for the "myoneural junctions" mediating smooth muscle contraction as opposed to those mediating smooth muscle relaxation. But there he stopped. He did not conceive any close relationship between the smooth muscle-inhibitory and the cardiac sites of action of catecholamines.
Catecholamine receptors persisted in this wavering state for more than forty years. Additional blocking agents were found, such as tolazoline in Switzerland and phenoxybenzamine in the United States, but like the ergot alkaloids they blocked only the smooth muscle excitatory receptors. Additional agonists were also synthesized. Outstanding among them was isoprenaline, N-isopropyl-noradrenaline, of Boehringer Ingelheim, studied pharmacologically along with adrenaline and other N-substituted noradrenaline derivatives by Richard Rössler (1897–1945) and Heribert Konzett (1912–2004) in Vienna. The Viennese pharmacologists used their own Konzett–Rössler test to examine bronchodilation: intravenous injection of pilocarpine to induce bronchospasm was followed by intravenous injection of the agonists. "Arrangement of all amines according to their bronchodilator effect yields a series from the most potent, isopropyl-adrenaline, via the approximately equipotent bodies adrenaline, propyl-adrenaline and butyl-adrenaline, to the weakly active isobutyl-adrenaline." Isoprenaline also exerted marked positive chronotropic and inotropic effects. Boehringer introduced it for use in asthma in 1940. After the war it became available to Germany's former enemies and over the years was traded under about 50 names. "By virtue of this property the reputation of the substance spread all over the world and it became a tool for many investigations on different aspects of pharmacology and therapeutics." In addition to this therapeutic success, it was one of the agonists with which Raymond P. Ahlquist solved the "myoneural junction" riddle. The story also had a dark side: overdosage caused numerous deaths due to cardiac side effects, an estimated three thousand in the United Kingdom alone.
Ahlquist was head of the Department of Pharmacology of the University of Georgia School of Medicine, now Georgia Regents University. In 1948 he saw what had escaped Dale in 1906. "The adrenotropic receptors have been considered to be of two classes, those whose action results in excitation and those whose action results in inhibition of the effector cells. Experiments described in this paper indicate that although there are two kinds of adrenotropic receptors they cannot be classified simply as excitatory or inhibitory since each kind of receptor may have either action depending on where it is found." Ahlquist chose six agonists, including adrenaline, noradrenaline, α-methylnoradrenaline and isoprenaline, and examined their effects on several organs. He found that the six substances possessed two—and only two—rank orders of potency in these organs. For example, the rank order of potency was "adrenaline > noradrenaline > α-methylnoradrenaline > isoprenaline" in promoting contraction of blood vessels, but "isoprenaline > adrenaline > α-methylnoradrenaline > noradrenaline" in stimulating the heart. The receptor with the first rank order (for example for blood vessel contraction) he called alpha adrenotropic receptor (now α-adrenoceptor or α-adrenergic receptor), while the receptor with the second rank order (for instance for stimulation of the heart, but also for bronchodilation) he called beta adrenotropic receptor (now β-adrenoceptor or β-adrenergic receptor). "This concept of two fundamental types of receptors is directly opposed to the concept of two mediator substances (sympathin E and sympathin I) as propounded by Cannon and Rosenblueth and now widely quoted as 'law' of physiology. ... There is only one adrenergic neuro-hormone, or sympathin, and that sympathin is identical with epinephrine."
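The logic of Ahlquist's argument can be restated computationally (a minimal sketch; the potency numbers below are invented placeholders chosen only to reproduce the two rank orders quoted above, and the clustering rule, tissues sharing a rank order share a receptor, paraphrases his reasoning rather than his published method):

```python
# Hypothetical relative potencies; only their rank order matters.
potency = {
    "blood vessel contraction": {"adrenaline": 100, "noradrenaline": 80,
                                 "alpha-methylnoradrenaline": 30,
                                 "isoprenaline": 1},
    "cardiac stimulation":      {"isoprenaline": 100, "adrenaline": 60,
                                 "alpha-methylnoradrenaline": 20,
                                 "noradrenaline": 10},
}

def rank_order(table: dict) -> tuple:
    """Agonists sorted from most to least potent."""
    return tuple(sorted(table, key=table.get, reverse=True))

receptors: dict = {}
for tissue, table in potency.items():
    receptors.setdefault(rank_order(table), []).append(tissue)

for order, tissues in receptors.items():
    print(" > ".join(order), "->", tissues)  # two rank orders -> two receptors
```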
The haze surrounding the receptors was thus blown away. Yet, perhaps because Ahlquist dismissed Cannon and Rosenblueth rather harshly, his manuscript was rejected by the Journal of Pharmacology and Experimental Therapeutics and only in a second submission accepted by the American Journal of Physiology.
In retrospect, although Ahlquist was right in his "one transmitter—two receptors" postulate, he erred in identifying the transmitter with adrenaline. There is an additional qualification: for many responses to sympathetic nerve stimulation, the ATP co-stored with noradrenaline is a cotransmitter, acting through purinoceptors. Lastly, Ahlquist failed to adduce, as an additional argument, the selectivity of all antagonists known at the time for the α-adrenoceptor.
The α,β-terminology initially was slow to spread. This changed with two publications in 1958. In the first, from Lilly Research Laboratories, dichloroisoprenaline selectively blocked some smooth muscle inhibitory effects of adrenaline and isoprenaline; in the second, it blocked cardiac excitatory effects of adrenaline and isoprenaline as well. In the first, which does not mention Ahlquist, dichloroisoprenaline blocked "certain adrenergic inhibitory receptor sites"; but in the second the results "support the postulate of Ahlquist (1948) that the adrenotropic inhibitory receptors and the cardiac chronotropic and inotropic adrenergic receptors are functionally identical, i.e., that both are beta type receptors. … It is suggested that this terminology be extended to the realm of adrenergic blocking drugs, e.g., that blocking drugs be designated according to the receptor for which they have the greatest affinity, as either alpha or beta adrenergic blocking drugs."
Dichloroisoprenaline was the first beta blocker; it retains some intrinsic activity. Pronethalol followed in 1962 and propranolol in 1964, both invented by James Black and his colleagues at Imperial Chemical Industries Pharmaceuticals in England. In 1967, β-adrenoceptors were subdivided into β1 and β2, and a third β type began to be suspected in the late 1970s, above all in adipocytes.
After premonitions, for example in the work of the Portuguese pharmacologist Serafim Guimarães, α-adrenoceptor subclassification came in 1971 with the discovery of the self-regulation of noradrenaline release through α-adrenoceptors on noradrenergic synaptic terminals, presynaptic α-autoreceptors. Their existence was initially contested but is now established, for example by the demonstration of their messenger RNA in noradrenergic neurons. They differed from α-receptors on effector cells and in 1974 became the prototype α2-receptors, the long-known smooth muscle contraction-mediating receptors becoming α1.
Even before dopamine was identified as the third catecholamine transmitter, Blaschko suspected it might possess receptors of its own, since Peter Holtz and his group in 1942 had found that small doses of dopamine lowered the blood pressure of rabbits and guinea pigs, whereas adrenaline always increased the blood pressure. Holtz erred in his interpretation, but Blaschko had "no doubt that his observations are of the greatest historical importance, as the first indication of an action of dopamine that characteristically and specifically differs from those of the two other catecholamines". A re-investigation of the blood pressure-lowering effect in dogs in 1964 proposed "specific dopamine receptors for dilation", and at the same time evidence for dopamine receptors distinct from α- and β-adrenoceptors accrued from other experimental approaches.
In 1986, the first gene coding for a catecholamine receptor, the β2-adrenoceptor from hamster lung, was cloned by a group of sixteen scientists, among them Robert Lefkowitz and Brian Kobilka of Duke University in Durham, North Carolina. Genes for all mammalian catecholamine receptors have now been cloned: for the nine adrenoceptors α1A, α1B, α1D, α2A, α2B, α2C, β1, β2 and β3 and the five dopamine receptors D1, D2, D3, D4 and D5. Their fine structure, both without agonist and in the agonist-activated state, is being studied at high resolution.
Earl Wilbur Sutherland won the 1971 Nobel Prize in Physiology or Medicine "for his discoveries concerning the mechanisms of the action of hormones", in particular the discovery of cyclic adenosine monophosphate as second messenger in the action of catecholamines at β-adrenoceptors and of glucagon at glucagon receptors, which led on to the discovery of heterotrimeric G proteins. In 1988 James Black was one of three winners of the Nobel Prize in Physiology or Medicine "for their discoveries of important principles for drug treatment", Black's "important principles" being the blockade of β-adrenoceptors and of histamine H2 receptors. In 2012, Robert Lefkowitz and Brian Kobilka shared the Nobel Prize in Chemistry "for studies of G-protein-coupled receptors".
References
Further reading
Zénon M. Bacq: Chemical transmission of nerve impulses. In: M. J. Parnham, J. Bruinvels (Eds.): Discoveries in Pharmacology. Volume 1: Psycho- and Neuropharmacology. Amsterdam: Elsevier, 1983, pp. 49–103.
Josef Donnerer, Fred Lembeck: "Adrenaline, noradrenaline and dopamine: the catecholamines". In: The Chemical Languages of the Nervous System. Basel: Karger, 2006, pp. 150–160.
Paul Trendelenburg: "Adrenalin und adrenalinverwandte Substanzen". In: Arthur Heffter, ed.: Handbuch der experimentellen Pharmakologie volume 2 part 2. Berlin: Julius Springer, 1924, pp. 1130–1293.
Catecholamines
Catecholamine research
Hormones
Neurotransmitters | History of catecholamine research | Chemistry | 8,716 |
2,962,342 | https://en.wikipedia.org/wiki/Security%20operations%20center | A security operations center (SOC) is responsible for protecting an organization against cyber threats. SOC analysts perform round-the-clock monitoring of an organization's network and investigate any potential security incidents. If a cyberattack is detected, the SOC analysts are responsible for taking any steps necessary to remediate it. A SOC comprises the three building blocks for managing and enhancing an organization's security posture: people, processes, and technology. Governance and compliance provide a framework that ties these building blocks together. A SOC within a building or facility is a central location from which staff supervises the site using data processing technology. Typically, a SOC is equipped for access monitoring and control of lighting, alarms, and vehicle barriers.
A SOC can be either internal or external. In the latter case, the organization outsources security services, such as monitoring, detection, and analysis, to a Managed Security Service Provider (MSSP). This is typical of small organizations that lack the resources to hire, train, and technically equip cybersecurity analysts.
IT
An information security operations center (ISOC) is a dedicated site where enterprise information systems (web sites, applications, databases, data centers and servers, networks, desktops and other endpoints) are monitored, assessed, and defended.
The United States government
The Transportation Security Administration in the United States has implemented security operations centers for most airports that have federalized security. The primary function of TSA security operations centers is to act as a communication hub for security personnel, law enforcement, airport personnel and various other agencies involved in the daily operations of airports. SOCs are staffed 24 hours a day by SOC watch officers. Security operations center watch officers are trained in all aspects of airport and aviation security and are often required to work abnormal shifts. SOC watch officers also ensure that TSA personnel follow proper protocol in dealing with airport security operations. The SOC is usually the first to be notified of incidents at airports such as the discovery of prohibited items/contraband, weapons, explosives, hazardous materials, as well as incidents regarding flight delays, unruly passengers, injuries, damaged equipment and various other types of potential security threats. The SOC in turn relays all information pertaining to these incidents to TSA federal security directors, law enforcement and TSA headquarters.
See also
National SIGINT Operations Centre
References
Security
Surveillance
Security engineering | Security operations center | Engineering | 477 |
634,316 | https://en.wikipedia.org/wiki/Prednisolone | Prednisolone is a corticosteroid, a steroid hormone used to treat certain types of allergies, inflammatory conditions, autoimmune disorders, cancers, electrolyte imbalances, and skin conditions. Some of these conditions include adrenocortical insufficiency, high blood calcium, rheumatoid arthritis, dermatitis, eye inflammation, asthma, multiple sclerosis, and phimosis. It can be taken by mouth, injected into a vein, used topically as a skin cream, or as eye drops. It differs from the similarly named prednisone in having a hydroxyl at the 11th carbon instead of a ketone.
Common side effects with short-term use include nausea, difficulty concentrating, insomnia, increased appetite, and fatigue. More severe side effects include psychiatric problems, which may occur in about 5% of people. Common side effects with long-term use include bone loss, weakness, yeast infections, and easy bruising. While short-term use in the later part of pregnancy is safe, long-term use or use in early pregnancy is occasionally associated with harm to the baby. It is a glucocorticoid made from hydrocortisone (cortisol).
Prednisolone was discovered and approved for medical use in 1955. It is on the World Health Organization's List of Essential Medicines. It is available as a generic drug. In 2022, it was the 136th most commonly prescribed medication in the United States, with more than 4 million prescriptions.
Medical uses
When used in low doses, corticosteroids serve as an anti-inflammatory agent. At higher doses, they are considered as immunosuppressants. Corticosteroids inhibit the inflammatory response to a variety of inciting agents and, it is presumed, delay or slow healing. They inhibit edema, fibrin deposition, capillary dilation, leukocyte migration, capillary proliferation, fibroblast proliferation, deposition of collagen, and scar formation associated with inflammation.
Systemic use
Prednisolone is a corticosteroid drug with predominant glucocorticoid and low mineralocorticoid activity, making it useful for the treatment of a wide range of inflammatory and autoimmune conditions such as asthma, uveitis, pyoderma gangrenosum, rheumatoid arthritis, urticaria, angioedema, ulcerative colitis, pericarditis, temporal arteritis, Crohn's disease, Bell's palsy, multiple sclerosis, cluster headaches, vasculitis, acute lymphoblastic leukemia, autoimmune hepatitis, lupus, Kawasaki disease, dermatomyositis, post-myocardial infarction syndrome, and sarcoidosis.
Prednisolone can also be used for allergic reactions ranging from seasonal allergies to drug allergic reactions.
Prednisolone can also be used as an immunosuppressant for organ transplants.
Prednisolone in lower doses can be used in cases of adrenal insufficiency due to Addison's disease.
Topical use
Ophthalmology
Topical prednisolone is mainly used in ophthalmology as eye drops in numerous eye conditions, including corneal injuries caused by chemicals, burns, and foreign objects, inflammation of the eyes, mild to moderate non-infectious allergies, disorders of the eyelid, conjunctiva or sclera, ocular inflammation caused by surgery, and optic neuritis. Some side effects include glaucoma, blurred vision, eye discomfort, impaired healing of the injured site, scarring of the optic nerve, cataracts, and urticaria. However, their prevalence is not known.
Prednisolone eye drops are contraindicated in individuals who develop hypersensitivity reactions against prednisolone, or individuals with the current conditions, such as tuberculosis of the eye, shingles affecting the eye, raised intraocular pressure, and eye infection caused by fungus.
Prednisolone acetate ophthalmic suspension (eye drops) is prepared as a sterile ophthalmic suspension and used to reduce swelling, redness, itching, and allergic reactions affecting the eye. It has been explored as a treatment option for bacterial keratitis.
Prednisolone eye drops are used in conjunctivitis caused by allergies and bacteria, marginal keratitis, uveitis, endophthalmitis (an infection of the eye involving the aqueous humor), Graves' ophthalmopathy, herpes zoster ocular infection, inflammation of the eye after surgery, and corneal injuries caused by chemicals, radiation, thermal burns, or penetration of foreign objects. It is also used in the prevention of myringosclerosis and in herpes simplex stromal keratitis. Topical prednisolone can also be used after procedures such as laser peripheral iridotomy for primary angle-closure suspects (PACS) to control inflammation.
Ear drops
In addition, topical prednisolone can also be administered as ear drops.
Adverse effects
Adverse reactions from the use of prednisolone include:
Increased appetite, weight gain, nausea, and malaise
Increased risk of infection
Cardiovascular events
Dermatological effects including reddening of the face, bruising/skin discoloration, impaired wound healing, skin atrophy, skin rash, edema, and abnormal hair growth
Hyperglycemia; patients with diabetes may need increased insulin or diabetic therapies
Menstrual abnormalities
Lower response to hormones, especially during stressful instances such as surgery or illness
Change in electrolytes: rise in blood pressure, increased sodium and low potassium, leading to alkalosis
Gastrointestinal system effects: swelling of the stomach lining, reversible increase in liver enzymes, and risk of stomach ulcers
Muscular and skeletal abnormalities, such as muscle weakness/muscle loss, osteoporosis (see steroid-induced osteoporosis), long bone fractures, tendon rupture, and back fractures
Neurological effects, including involuntary movements (convulsions), headaches, and vertigo
Psychosocial, behavioral, and emotional disturbances, with aggression being one of the most common cognitive symptoms, especially with oral use.
Nasal septum perforation and bowel perforation (in some pathologic conditions).
Discontinuing prednisolone after long-term or high-dose use can lead to adrenal insufficiency.
Pregnancy and breastfeeding
Although there are no major human studies of prednisolone use in pregnant women, studies in several animals show that it may cause birth defects including increased likelihood of cleft palate.
Prednisolone is found in the breast milk of mothers taking prednisolone.
Local adverse effects in the eye
When used topically on the eye, the following are potential side effects:
Cataracts: Extended usage of corticosteroids may cause clouding at the back of the lens, also known as posterior subcapsular cataract. This type of cataract obstructs the passage of light through the lens, which interferes with a person's reading vision. Consumption of prednisolone eye drops post-surgery may also retard the healing process.
Corneal thinning: When corticosteroids are used in the long term, corneal and scleral thinning is also one of its consequences. When not ceased, thinning may ultimately lead to perforation of the cornea.
Glaucoma: Prolonged use of corticosteroids can raise intraocular pressure (IOP), injuring the optic nerve and impairing vision. Corticosteroids should be used cautiously in patients with concomitant glaucoma. Doctors track patients' IOP if they are using corticosteroid eye drops for more than 10 days.
Pharmacology
Pharmacodynamics
As a glucocorticoid, the lipophilic structure of prednisolone allows for easy passage through the cell membrane, where it binds to its respective glucocorticoid receptor (GCR) located in the cytoplasm. Upon binding, the formation of the GC/GCR complex causes dissociation of chaperone proteins from the glucocorticoid receptor, enabling the GC/GCR complex to translocate into the nucleus. This process occurs within 20 minutes of binding. Once inside the nucleus, the homodimer GC/GCR complex binds to specific DNA binding sites known as glucocorticoid response elements (GREs), resulting in gene expression or inhibition. Complex binding to positive GREs leads to the synthesis of anti-inflammatory proteins, while binding to negative GREs blocks the transcription of inflammatory genes. Glucocorticoids inhibit pro-inflammatory signals such as nuclear factor kappa B (NF-κB), activator protein 1 (AP-1), and nuclear factor of activated T-cells (NFAT), and stimulate anti-inflammatory signals such as the interleukin-10 gene. Collectively, these events inhibit prostaglandin synthesis and additional inflammatory mediators. Glucocorticoids also inhibit neutrophil cell death and demargination, as well as phospholipase A2, which in turn lessens the formation of arachidonic acid derivatives.
Pharmacokinetics
Prednisolone has a relatively short half-life, ranging from 2 to 4 hours. It also has a large therapeutic window, since the dosage required to produce a therapeutic effect is a few times higher than what the body naturally produces.
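As a worked illustration of what a 2–4 hour half-life implies under simple first-order elimination (a minimal sketch using the midpoint of the quoted range; the numbers are illustrative only, not dosing guidance):

```python
def fraction_remaining(t_hours: float, half_life_hours: float = 3.0) -> float:
    """Fraction of drug left after t hours of first-order elimination."""
    return 0.5 ** (t_hours / half_life_hours)

for t in (3, 6, 12, 24):
    print(f"after {t:2d} h: {fraction_remaining(t):.1%} of the dose remains")
# after 3 h: 50.0%; after 24 h: about 0.4%
```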
Prednisolone is 70–90% plasma protein bound; it binds to proteins such as albumin.
Both prednisolone phosphate and prednisolone acetate go through ester hydrolysis in the body to form prednisolone. It subsequently undergoes the usual metabolism of prednisolone. Concomitant use of prednisolone and strong CYP3A4 inhibitors such as ketoconazole is shown to cause a rise in plasma prednisolone concentrations by about 50% owing to a diminished clearance.
Prednisolone predominantly undergoes renal elimination and is excreted in the urine as sulphate and glucuronide conjugate metabolites.
Prednisone
Prednisone is a prodrug that is activated in the liver. When it enters the body, prednisone is triggered by the liver and body chemicals to turn into its active form, prednisolone.
Chemistry
Prednisolone is a synthetic pregnane corticosteroid closely related to its cognate prednisone, having an identical structure except for two additional hydrogens at C11, where prednisolone carries a hydroxyl group in place of prednisone's ketone. It is also known as δ1-cortisol, δ1-hydrocortisone, 1,2-dehydrocortisol, or 1,2-dehydrohydrocortisone, as well as 11β,17α,21-trihydroxypregna-1,4-diene-3,20-dione.
Interactions
Co-administration of prednisolone eye drops with ophthalmic nonsteroidal anti-inflammatory drugs (NSAIDs) may perhaps exacerbate its effects, causing unwanted side effects such as toxicity. The wound healing process may also be hindered.
Drug interactions of prednisolone include other immunosuppressants like azathioprine or ciclosporin, antiplatelet drugs like clopidogrel, anticoagulants like dabigatran or warfarin, or NSAIDs such as aspirin, celecoxib, or ibuprofen.
Contraindications
Special populations
Children
Prolonged use of prednisolone eye drops in children may lead to raised intraocular pressure. While this phenomenon is dose-dependent, the effect is greater in children under 6 years of age.
Pregnancy and breastfeeding
Research on animal reproduction has indicated a degree of teratogenicity at doses around 10 times the recommended human dose. There is insufficient information on human pregnancy at this moment. Use is only recommended when the potential benefits outweigh the potential risks for the pregnant mother and the fetus.
Prednisolone delivered systemically can be found in the mother's breast milk; however, there are no data on the extent of prednisolone found in the system after administering eye drops. The presence of corticosteroids in milk is recorded when they are administered systemically, and it could affect the infant's growth. Therefore, the use of prednisolone during breastfeeding is not advocated.
Society and culture
Dosage forms
Prednisolone is supplied as oral liquid, oral suspension, oral syrup, oral tablet, and oral disintegrating tablet. It may be a generic medication or supplied as brands Flo-Pred (prednisolone acetate oral suspension), Millipred (oral tablets), Orapred (prednisolone sodium phosphate oral dissolving tablets), Pediapred (prednisolone sodium phosphate oral solution), Veripred 20, Prelone, Hydeltra-T.B.A., Hydeltrasol, Key-Pred, Cotolone, Predicort, Medicort, Predcor, Bubbli-Pred, Omnipred (prednisolone acetate ophthalmic suspension), Pred Mild, Pred Forte, and others.
Athletics
As a glucocorticosteroid, unauthorized or ad hoc use of prednisolone during competition via oral, intravenous, intramuscular, or rectal routes is banned under World Anti-Doping Agency (WADA) anti-doping rules.
Veterinary uses
Prednisolone is used in the treatment of inflammatory and allergic conditions in cats, dogs, horses, small mammals such as ferrets, birds, and reptiles. Its usage in treating inflammation, immune-mediated disease, Addison's disease, and neoplasia is often considered off-label use; many drugs are commonly prescribed for off-label use in veterinary medicine. Studies in ruminating species, such as alpacas, have shown that oral administration of the drug is associated with a reduced bioavailability compared to intravenous administration; however, levels that are therapeutic in other species can be achieved with oral administration in alpacas.
It is used in a broad spectrum of diseases, for example, inflammation of scleral tissues, cornea, and conjunctiva in dogs. In horses, prednisolone acetate suspensions are preferentially used to treat inflammation of the middle layer of the eye, known as anterior uveitis, and equine recurrent uveitis (ERU), which is the leading cause of visual impairment in horses. Prednisolone acetate eye drops are not to be used in other animals such as birds.
Prednisolone acetate eye drops are also prescribed to dogs and cats to lessen swelling, redness, burning, and pain sensations after surgeries of the eye.
Ophthalmic preparations of corticosteroids and their derivatives should usually be avoided in cats with conjunctivitis, as the most typical infections are caused by herpesvirus.
References
External links
Drugs developed by AbbVie
CYP3A4 inducers
Glucocorticoids
Human drug metabolites
Mineralocorticoids
Otologicals
World Health Organization essential medicines
Wikipedia medicine articles ready to translate
hy:Պրեդնիզոլոն | Prednisolone | Chemistry | 3,366 |
448,990 | https://en.wikipedia.org/wiki/List%20of%20Pinus%20species | Pinus, the pines, is a genus of approximately 111 extant tree and shrub species. The genus is currently split into two subgenera: subgenus Pinus (hard pines), and subgenus Strobus (soft pines). Each of the subgenera have been further divided into sections based on chloroplast DNA sequencing and whole plastid genomic analysis. Older classifications split the genus into three subgenera – subgenus Pinus, subgenus Strobus, and subgenus Ducampopinus (pinyon, bristlecone and lacebark pines) – based on cone, seed and leaf characteristics. DNA phylogeny has shown that species formerly in subgenus Ducampopinus are members of subgenus Strobus, so Ducampopinus is no longer used.
The species of subgenus Ducampopinus were regarded as intermediate between the other two subgenera. In the modern classification, they are placed into subgenus Strobus, yet they did not fit entirely well in either, so they were classified in a third subgenus. In 1888 the Californian botanist John Gill Lemmon placed them in subgenus Pinus. In general, this classification emphasized cone, cone scale, seed, and leaf fascicle and sheath morphology, and species in each subsection were usually recognizable by their general appearance. Pines with one fibrovascular bundle per leaf (the former subgenera Strobus and Ducampopinus) were known as haploxylon pines, while pines with two fibrovascular bundles per leaf (subgenus Pinus) were called diploxylon pines. Diploxylon pines tend to have harder timber and a larger amount of resin than the haploxylon pines. The current division into two subgenera (Pinus and Strobus) is supported by rigorous genetic evidence.
Several features are used to distinguish the subgenera, sections, and subsections of pines: the number of leaves (needles) per fascicle, whether the fascicle sheaths are deciduous or persistent, the number of fibrovascular bundles per needle (2 in Pinus or 1 in Strobus), the position of the resin ducts in the needles (internal or external), the presence or shape of the seed wings (absent, rudimentary, articulate, and adnate), and the position of the umbo (dorsal or terminal) and presence of a prickle on the scales of the seed cones.
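As a minimal sketch of how the most clear-cut of these characters separates the subgenera (illustrative only; a real identification key would combine all of the features listed above and would account for exceptions such as the persistent sheaths of P. nelsonii):

```python
def subgenus_from_needle(fibrovascular_bundles: int) -> str:
    """Classify a pine by the number of fibrovascular bundles per needle."""
    if fibrovascular_bundles == 2:
        return "subgenus Pinus (diploxylon, 'hard' pines)"
    if fibrovascular_bundles == 1:
        return "subgenus Strobus (haploxylon, 'soft' pines)"
    raise ValueError("pine needles carry one or two fibrovascular bundles")

print(subgenus_from_needle(2))  # e.g. P. sylvestris
print(subgenus_from_needle(1))  # e.g. P. strobus
```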
Both subgenera are thought to have a very ancient divergence from one another, having diverged during the late Jurassic.
Subgenus Pinus
Subgenus Pinus includes the yellow and hard pines. Pines in this subgenus have one to five needles per fascicle and two fibrovascular bundles per needle, and the fascicle sheaths are persistent, except in P. leiophylla and P. lumholtzii. Cone scales are thicker and more rigid than those of subgenus Strobus, and cones either open soon after they mature or are serotinous.
Section Pinus
Section Pinus has two or three needles per fascicle. Cones of all species have thick scales, and all except those of P. pinea open at maturity. Species in this section are native to Europe, Asia, and the Mediterranean, except for P. resinosa in northeastern North America and P. tropicalis in western Cuba.
Subsection Incertae sedis
†P. driftwoodensis – Early Eocene, British Columbia, Canada
Subsection Pinus
All but two species (P. resinosa and P. tropicalis) in Subsection Pinus are native to Eurasia.
P. densata – Sikang pine
P. densiflora – Korean red pine
P. henryi – Henry's pine
P. hwangshanensis – Huangshan pine
P. kesiya – Khasi pine
P. latteri? – Tenasserim pine
P. luchuensis – Luchu pine
P. massoniana – Masson's pine
P. merkusii – Sumatran pine
P. mugo – mountain pine
P. nigra – Austrian pine
†P. prehwangshanensis
P. resinosa – red pine
P. sylvestris – Scots pine
P. tabuliformis – Chinese red pine
P. taiwanensis – Taiwan red pine
P. thunbergii – Japanese black pine
P. tropicalis – tropical pine
P. uncinata
P. yunnanensis – Yunnan pine
Subsection Pinaster
Subsection Pinaster contains species native to the Mediterranean, as well as P. roxburghii from the Himalayas. The scales of its cones lack spines. It is named after P. pinaster.
P. brutia – Turkish pine
P. canariensis – Canary Island pine
P. halepensis – Aleppo pine
P. heldreichii – Bosnian pine
P. pinaster – maritime pine
P. pinea – stone pine
P. roxburghii – chir pine
Section Trifoliae
Section Trifoliae (American hard pines), despite its name (which means "three-leaved"), has two to five needles per fascicle, or rarely eight. The cones of most species open at maturity, but a few are serotinous. All but two American hard pines belong to this section.
Phylogenetic analysis supports ancient divergences within this section, with subsections Australes and Ponderosae having diverged during the mid-Cretaceous.
Subsection Australes
Subsection Australes is native to North and Central America and islands in the Caribbean.
The closed-cone (serotinous) species of California and Baja California, P. attenuata, P. muricata, and P. radiata, are sometimes placed in a separate subsection, Attenuatae.
P. attenuata – knobcone pine
P. caribaea – Caribbean pine
P. cubensis – Cuban pine
P. echinata – shortleaf pine
P. elliottii – slash pine
†P. foisyi – extinct
P. glabra – spruce pine
P. georginae
P. greggii – Gregg's pine
P. herrerae – Herrera's pine
P. jaliscana – Jalisco pine
P. lawsonii – Lawson's pine
P. leiophylla – Chihuahua pine
P. lumholtzii – Lumholtz's pine
P. luzmariae
†P. matthewsii – Pliocene, Yukon Territory, Canada
P. muricata – bishop pine
P. occidentalis – Hispaniolan pine
P. oocarpa – egg-cone pine
P. palustris – longleaf pine
P. patula – patula pine
P. praetermissa – McVaugh's pine
P. pringlei – Pringle's pine
P. pungens – Table Mountain pine
P. radiata – Monterey pine
P. rigida – pitch pine
P. serotina – pond pine
P. taeda – loblolly pine
P. tecunumanii – Tecun Uman pine
P. teocote – ocote pine
P. vallartensis
Subsection Contortae
Subsection Contortae is native to North America and Mexico.
P. banksiana – jack pine
P. clausa – sand pine
P. contorta
P. c. bolanderi – Bolander pine
P. c. contorta – shore pine
P. c. latifolia – lodgepole pine
P. c. murrayana – tamarack pine
P. virginiana – Virginia pine
Subsection Ponderosae
Subsection Ponderosae is native to Central America, Mexico, the western United States, and southwestern Canada, although its former range was possibly much wider, as evidenced by upper Miocene fossils belonging to this subsection found in Japan.
P. arizonica – Arizona pine
P. cooperi – Cooper's pine
P. coulteri – Coulter pine
P. devoniana – Michoacan pine
P. gordoniana
P. durangensis – Durango pine
P. engelmannii – Apache pine
†P. fujiii
P. hartwegii – Hartweg's pine
P. jeffreyi – Jeffrey pine
†P. johndayensis – Oligocene
P. maximinoi – thinleaf pine
P. montezumae – Montezuma pine
P. ponderosa – ponderosa pine
P. ponderosa var. willamettensis – Willamette Valley ponderosa pine
P. pseudostrobus – smooth-bark Mexican pine
P. sabiniana – gray pine
P. torreyana – Torrey pine
Subgenus Strobus
Subgenus Strobus includes the white and soft pines. Pines in this subgenus have one to five needles per fascicle and one fibrovascular bundle per needle, and the fascicle sheaths are deciduous, except in P. nelsonii, where they are persistent. Cone scales are thinner and more flexible than those of subgenus Pinus, except in some species like P. maximartinezii, and cones usually open soon after they mature.
Section Parrya
Section Parrya has one to five needles per fascicle. The seeds either have articulate (jointed) wings or no wings at all. In all species except for P. nelsonii, the fascicle sheaths curl back to form a rosette before falling away. The cones have thick scales and release the seeds at maturity. This section is native to the southwestern United States and Mexico.
Subsection Balfourianae
Subsection Balfourianae (bristlecone pines) is native to southwest United States.
P. aristata – Rocky Mountains bristlecone pine
P. balfouriana – foxtail pine
†P. crossii – Chattian; Creede Formation, Colorado
P. longaeva – Great Basin bristlecone pine
Subsection Cembroides
Subsection Cembroides (pinyons or piñons) is native to Mexico and the southwestern United States.
P. cembroides – Mexican pinyon
P. culminicola – Potosi pinyon
P. discolor – border pinyon
P. edulis – Colorado pinyon
P. johannis – Johann's pinyon
P. maximartinezii – big-cone pinyon
P. monophylla – single-leaf pinyon
P. orizabensis – Orizaba pinyon
P. pinceana – weeping pinyon
P. quadrifolia – Parry pinyon
P. remota – Texas pinyon or papershell pinyon
P. rzedowskii – Rzedowski's pinyon
Subsection Nelsonianae
Subsection Nelsonianae is native to northeastern Mexico. It consists of the single species with persistent fascicle sheaths.
P. nelsonii – Nelson's pinyon
Section Quinquefoliae
Section Quinquefoliae (white pines), as its name (which means "five-leaved") suggests, has five needles per fascicle except for P. krempfii, which has two, and P. gerardiana and P. bungeana, which have three. All species have cones with thin or thick scales that open at maturity or do not open at all; none are serotinous. Species in this section are found in Eurasia and North America, and one species, P. chiapensis, reaches Guatemala.
Subsection Gerardianae
Subsection Gerardianae is native to East Asia. It has three or five needles per fascicle.
P. bungeana – lacebark pine
P. gerardiana – chilgoza pine
P. squamata – Qiaojia pine
Subsection Krempfianae
Subsection Krempfianae is currently native to Vietnam, with a fossil record extending into the Oligocene. It has two needles per fascicle, and they are atypically flattened. The cone scales are thick and have no prickles. Until 2021, the subsection was considered monotypic, when an Oligocene fossil species was described from Yunnan Province, China.
P. krempfii – Krempf's pine
†P. leptokrempfii – Oligocene
Subsection Strobus
Subsection Strobus has five needles per fascicle and thin cone scales with no prickles. Needles tend to be flexible and soft, with a slightly lighter underside. It is native to North and Central America, Europe, and Asia.
P. albicaulis – whitebark pine
P. amamiana – Yakushima white pine
P. armandii – Chinese white pine
P. arunachalensis
P. ayacahuite – Mexican white pine
P. bhutanica – Bhutan white pine
P. cembra – Swiss pine
P. chiapensis – Chiapas pine
P. dabeshanensis – Dabieshan pine
P. dalatensis – Vietnamese white pine
P. fenzeliana – Hainan white pine
P. flexilis – limber pine
P. koraiensis – Korean pine
P. lambertiana – sugar pine
†P. longlingensis – Late Pliocene, Mangbang Formation – Yunnan, China
P. monticola – western white pine
P. morrisonicola – Taiwan white pine
P. parviflora – Japanese white pine
P. hakkodensis – Hakkoda pine
P. peuce – Macedonian pine
P. pumila – Siberian dwarf pine
P. ravii
P. sibirica – Siberian pine
P. strobus – eastern white pine
P. strobiformis – Southwestern white pine (also Chihuahuan)
P. stylesii
P. wallichiana – blue pine
P. wangii – Guangdong white pine
Incertae sedis
Species which are not placed in a subgenus at this time.
†Pinus latahensis – Early Eocene, Klondike Mountain Formation, Allenby Formation – Okanagan Highlands Floras
†Pinus macrophylla – Early Eocene, Klondike Mountain Formation, Allenby Formation – Okanagan Highlands Floras
†Pinus peregrinus – Middle Eocene, Golden Valley Formation, North Dakota, US
†Pinus tetrafolia – Early Eocene, Klondike Mountain Formation – Okanagan Highlands Floras
See also
Hybridization in pines (list of pine hybrids)
References
Bibliography
External links
Tree of Life Web – favors classification of Ducampopinus species in Strobus.
NCBI Taxonomy server – files Ducampopinus species above as Strobus.
Pinus
Pinus
Pinus | List of Pinus species | Biology | 3,014 |
36,443,541 | https://en.wikipedia.org/wiki/Chlorociboria%20spiralis | Chlorociboria spiralis is a species of fungus in the family Chlorociboriaceae. It is found in New Zealand.
References
External links
Helotiaceae
Fungi described in 2005
Fungi of New Zealand
Fungus species | Chlorociboria spiralis | Biology | 47 |
593,949 | https://en.wikipedia.org/wiki/NUTS%20statistical%20regions%20of%20Denmark | The Nomenclature of Territorial Units for Statistics (NUTS) is a geocode standard for referencing the administrative division of Denmark for statistical purposes. The standard is developed and regulated by the European Union. The NUTS standard is instrumental in delivering the European Union's Structural Funds. The NUTS code for Denmark is DK, and a hierarchy of three levels is established by Eurostat. Below these is a further level of geographic organisation, the local administrative unit (LAU). In Denmark, the LAU 1 are municipalities and the LAU 2 are parishes.
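As a rough illustration of how the hierarchy is encoded (a minimal sketch; the level-from-length rule reflects the general NUTS convention of one extra character per level, and the sample codes are commonly published Danish NUTS codes, so verify against the current Eurostat list before relying on them):

```python
def nuts_level(code: str) -> int:
    """NUTS level from code length: 'DK' is the country code, and each
    additional character descends one level (DK0 -> level 1,
    DK01 -> level 2, DK011 -> level 3)."""
    return len(code) - 2

for code in ["DK", "DK0", "DK01", "DK011"]:
    print(code, "-> level", nuts_level(code))
```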
Overall
NUTS codes
Local administrative units
Below the NUTS levels, the LAU (Local Administrative Units) levels are:
NUTS codes
Before 2003
In the 2003 version, before the counties were abolished, the codes were as follows:
See also
Administrative divisions of Denmark
FIPS region codes of Denmark
ISO 3166-2 codes of Denmark
References
Sources
Hierarchical list of the Nomenclature of territorial units for statistics - NUTS and the Statistical regions of Europe
Overview map of EU Countries - NUTS level 1
Overview map of EU Countries - Country level
Correspondence between the NUTS levels and the national administrative units
List of current NUTS codes
Download current NUTS codes (ODS format)
Regions of Denmark, Statoids.com
Denmark
Nuts | NUTS statistical regions of Denmark | Mathematics | 259 |
74,173,318 | https://en.wikipedia.org/wiki/Ambident%20%28chemistry%29 | In chemistry, ambident is a molecule or group that has two alternative and interacting reaction sites, to either of which a bond may be made during a reaction.
Ambident dienophile
Ambident dienophile 57 reacts with DAPC 54 at the cyclobutene π-bond to produce ligand 58; in contrast, the related ambident dienophile 59 reacts with DAPC 54 at the naphthoquinone π-center to produce adduct 60 (lack of shielding of the methylene protons supports the stereochemical assignment).
Ambident Nucleophile
An ambident nucleophile is an anionic nucleophile whose negative charge is delocalized by resonance over two unlike atoms or over two like but non-equivalent atoms. Enolate ions are a classic example of ambident nucleophiles.
References
Physical organic chemistry | Ambident (chemistry) | Chemistry | 187 |
1,718,100 | https://en.wikipedia.org/wiki/GPS%20meteorology | GPS meteorology refers to the use of the effect of the atmosphere on the propagation of the Global Positioning System's (GPS) radio signals to derive information on the state of the (lower, neutral) atmosphere.
There are currently two main operational techniques in use in GPS meteorology: GPS limb sounding from orbit, and GPS water vapour monitoring.
Ground-based
As a result, if it is possible to determine the total atmospheric delay by GPS, one can subtract out the contribution of the well-mixed "dry" gases, calculated from the measured air pressure at the surface, and obtain a measure of the absolute water vapour content of the atmosphere, integrated from surface to space. This is also referred to as "total precipitable water vapour".
What makes it possible to determine the total atmospheric delay is its known dependence on the zenith or elevation angle of the satellite. If $z$ is the zenith angle, the propagation path delay is proportional to $1/\cos z$. This unique signature makes it possible to solve separately for the zenith delay in GPS computations that also solve for station coordinates and receiver clock delays.
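A minimal numerical sketch of the two steps just described, the $1/\cos z$ mapping and the conversion of zenith wet delay to precipitable water vapour (the conversion factor of about 0.15 is a typical textbook value and in practice depends on the weighted mean temperature of the atmosphere; the delay numbers are invented for illustration):

```python
import math

def slant_delay(zenith_delay_m: float, zenith_angle_deg: float) -> float:
    """Map a zenith delay to a slant delay with the 1/cos(z) model."""
    return zenith_delay_m / math.cos(math.radians(zenith_angle_deg))

def pwv_from_zwd(zwd_m: float, pi_factor: float = 0.15) -> float:
    """Convert zenith wet delay (m) to precipitable water vapour (m)."""
    return pi_factor * zwd_m

# A total zenith delay of 2.40 m with a hydrostatic ("dry") part of
# 2.28 m computed from surface pressure leaves a zenith wet delay:
zwd = 2.40 - 2.28
print(f"slant delay at z = 60 deg: {slant_delay(2.40, 60):.2f} m")  # 4.80 m
print(f"PWV: {pwv_from_zwd(zwd) * 1000:.0f} mm")                    # ~18 mm
```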
Nowadays water vapour estimates are generated routinely in real time (latency measured in hours) by permanent geodetic GPS networks existing in many parts of the world.
Water vapour is a very important gas for meteorological and climatological studies because of the latent heat it carries in transport. Additionally, it is a powerful greenhouse gas. The GPS technique is especially valuable because it measures absolute water vapour content (or partial pressure) rather than relative humidity; the water vapour content corresponding to a given relative humidity depends strongly on temperature, which is often not precisely known.
Space-based
One can receive on a low-flying satellite the signals from the much higher orbiting (20,000 km) GPS satellite constellation. As the low-flying satellite orbits the Earth in 1.5 hours, many of the GPS satellites will "rise" and "set" during the time of the orbit. When they do, their signal will traverse the atmosphere, and a signal delay is produced which grows or decays exponentially with time, just as the atmospheric density is an exponential function of height above the Earth's surface. In fact, this so-called limb sounding technique allows us to determine the scale height, the constant describing the steepness of this atmospheric density decay. This makes the technique extremely valuable for climatological studies, as the scale height is directly related to the temperature in the upper atmosphere, where the limb sounding signals do their sensing. The technique works best in the lower stratosphere and upper troposphere; it breaks down close to the Earth's surface, especially in the tropics, due to water vapour extinction.
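A minimal sketch of the scale-height inference just described, assuming an exponentially decaying excess-delay profile sampled at two tangent heights and the isothermal relation H = kT/(mg); the molecular mass and gravity values are rough mid-atmosphere figures, and the delay samples are invented:

```python
import math

K_BOLTZMANN = 1.380649e-23   # J/K
M_AIR = 4.81e-26             # kg, mean mass of an air molecule (~29 g/mol)
G = 9.6                      # m/s^2, gravity at stratospheric heights

def scale_height(h1_m, d1, h2_m, d2):
    """Scale height H from two samples of delay(h) = d0 * exp(-h / H)."""
    return (h2_m - h1_m) / math.log(d1 / d2)

def temperature_from_scale_height(H_m):
    """Isothermal estimate T = m * g * H / k."""
    return M_AIR * G * H_m / K_BOLTZMANN

H = scale_height(20_000, 1.00, 30_000, 0.26)  # delays in arbitrary units
print(f"H = {H / 1000:.1f} km, T = {temperature_from_scale_height(H):.0f} K")
# H = 7.4 km, T = 248 K -- plausible lower-stratosphere values
```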
Satellites involved in GPS limb sounding have been: METSAT, OERSTED (Danish), and several others.
See also
Error analysis for the Global Positioning System
Links and references
GPS meteorology
Meteorological instrumentation and equipment
Global Positioning System | GPS meteorology | Technology,Engineering | 589 |
3,571,350 | https://en.wikipedia.org/wiki/Socle%20%28mathematics%29 | In mathematics, the term socle has several related meanings.
Socle of a group
In the context of group theory, the socle of a group G, denoted soc(G), is the subgroup generated by the minimal normal subgroups of G. It can happen that a group has no minimal non-trivial normal subgroup (that is, every non-trivial normal subgroup properly contains another such subgroup) and in that case the socle is defined to be the subgroup generated by the identity. The socle is a direct product of minimal normal subgroups.
As an example, consider the cyclic group Z12 with generator u, which has two minimal normal subgroups, one generated by u4 (which gives a normal subgroup with 3 elements) and the other by u6 (which gives a normal subgroup with 2 elements). Thus the socle of Z12 is the group generated by u4 and u6, which is just the group generated by u2.
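A small computational check of this example (a minimal sketch; it uses the fact that the minimal subgroups of a cyclic group of order n are those of prime order p for each prime p dividing n, and is not drawn from the article itself):

```python
from math import gcd
from functools import reduce

def prime_factors(n: int) -> set:
    """Distinct prime factors of n by trial division."""
    p, out = 2, set()
    while p * p <= n:
        while n % p == 0:
            out.add(p)
            n //= p
        p += 1
    if n > 1:
        out.add(n)
    return out

def socle_generator_exponent(n: int) -> int:
    """Exponent k with soc(Z_n) = <u^k>: the join of the subgroups
    <u^(n/p)> over the primes p dividing n."""
    return reduce(gcd, [n // p for p in prime_factors(n)])

print(socle_generator_exponent(12))  # 2, i.e. soc(Z_12) = <u^2> as above
```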
The socle is a characteristic subgroup, and hence a normal subgroup. It is not necessarily transitively normal, however.
If a group G is a finite solvable group, then the socle can be expressed as a product of elementary abelian p-groups. Thus, in this case, it is just a product of copies of Z/pZ for various p, where the same p may occur multiple times in the product.
Socle of a module
In the context of module theory and ring theory, the socle of a module M over a ring R is defined to be the sum of the minimal nonzero submodules of M. It can be considered as a dual notion to that of the radical of a module. In set notation,
$$\mathrm{soc}(M) = \sum \{ N \mid N \text{ is a minimal submodule of } M \}.$$
Equivalently,
$$\mathrm{soc}(M) = \bigcap \{ E \mid E \text{ is an essential submodule of } M \}.$$
The socle of a ring R can refer to one of two sets in the ring. Considering R as a right R-module, $\mathrm{soc}(R_R)$ is defined, and considering R as a left R-module, $\mathrm{soc}({}_R R)$ is defined. Both of these socles are ring ideals, and it is known they are not necessarily equal.
If M is an Artinian module, soc(M) is itself an essential submodule of M.
In fact, if M is a semiartinian module, then soc(M) is itself an essential submodule of M. Additionally, if M is a non-zero module over a left semi-Artinian ring, then soc(M) is itself an essential submodule of M. This is because any non-zero module over a left semi-Artinian ring is a semiartinian module.
A module is semisimple if and only if soc(M) = M. Rings for which soc(M) = M for all M are precisely semisimple rings.
soc(soc(M)) = soc(M).
M is a finitely cogenerated module if and only if soc(M) is finitely generated and soc(M) is an essential submodule of M.
Since the sum of semisimple modules is semisimple, the socle of a module could also be defined as the unique maximal semisimple submodule.
From the definition of rad(R), it is easy to see that rad(R) annihilates soc(R). If R is a finite-dimensional unital algebra and M a finitely generated R-module then the socle consists precisely of the elements annihilated by the Jacobson radical of R.
Socle of a Lie algebra
In the context of Lie algebras, a socle of a symmetric Lie algebra is the eigenspace of its structural automorphism that corresponds to the eigenvalue −1. (A symmetric Lie algebra decomposes into the direct sum of its socle and cosocle.)
See also
Injective hull
Radical of a module
Cosocle
References
Module theory
Group theory
Functional subgroups | Socle (mathematics) | Mathematics | 805 |
12,170,265 | https://en.wikipedia.org/wiki/Spark%20ionization | Spark ionization (also known as spark source ionization) is a method used to produce gas phase ions from a solid sample. The prepared solid sample is vaporized and partially ionized by an intermittent discharge or spark. This technique is primarily used in the field of mass spectrometry. When incorporated with a mass spectrometer the complete instrument is referred to as a spark ionization mass spectrometer or as a spark source mass spectrometer (SSMS).
History
The use of spark ionization for analysis of impurities in solids was indicated by Dempster's work in 1935. Metals were a class of material that could not be previously ionized by thermal ionization (the method formerly used for ionizing solid sample). Spark ion sources were not commercially produced until after 1954 when Hannay demonstrated its capability for analysis of trace impurities (sub-part per million detection sensitivity) in semiconducting materials. The prototype spark source instrument was the MS7 mass spectrometer produced by Metropolitan-Vickers Electrical Company, Ltd. in 1959. Commercial production of spark source instruments continued throughout the 50s, 60s, and 70s, but they were phased out when other trace element detection techniques with improved resolution and accuracy were invented (circa 1960s). Successors of the spark ion source for trace element analysis are the laser ion source, glow discharge ion source, and inductively coupled plasma ion source. Today, very few laboratories use spark ionization worldwide.
How it works
The spark ion source consists of a vacuum chamber containing the electrodes, which is called the spark housing. The tips of the electrodes are composed of or containing the sample and are electrically connected to the power supply. Extraction electrodes create an electric field that accelerate the generated ions through the exit slit.
Ion sources
For spark ionization, there exist two ion sources: the low-voltage direct-current (DC) arc source and the high-voltage radio-frequency (rf) spark source. The arc source has better reproducibility and the ions produced have a narrower energy spread compared to the spark source; however, the spark source has the ability to ionize both conducting and non-conducting samples while the arc source can only ionize conducting samples.
In the low-voltage DC arc source, a high voltage is applied to the two conducting electrodes to initiate the spark, followed by application of a low-voltage direct current to maintain an arc between the spark gap. The duration of the arc is usually only a few hundred microseconds to prevent overheating of the electrodes, and the arc is repeated 50–100 times per second. This method can only be used to ionize conducting samples, e.g. metals.
The high-voltage rf spark source is the one that was used in commercial SSMS instruments due to its ability to ionize both conducting and non-conducting materials. Typically, samples are physically incorporated into two conductive electrodes between which an intermittent (1 MHz) high-voltage (50-100 kV using a Tesla transformer) electric spark is produced, ionizing the material at the tips of the pin-shaped electrodes. When the pulsed current is applied to the electrodes under ultra-high vacuum, a spark discharge plasma occurs in the spark gap in which ions are generated via electron impact. Within the discharge plasma, the sample evaporates, atomizes, and ionizes via electron impact. The total ion current may be optimized by adjusting the distance between the electrodes. This mode of ionization can be used to ionize conducting, semi-conducting, and non-conducting samples.
Sample preparation
Conducting and semi-conducting samples may be directly analyzed after being formed into electrodes. Non-conductive samples are first powdered, mixed with a conducting powder (usually high purity graphite or silver), homogenized, and then formed into electrodes. Even liquids can be analyzed if they are frozen or after impregnating a conducting powder. Sample homogeneity is important for reproducibility.
Spark source mass spectrometry (SSMS)
The rf spark source creates ions with a wide energy spread (2–3 kV), which necessitates a double-focusing mass analyzer. Mass analyzers are typically of Mattauch–Herzog geometry, which achieve velocity and directional focusing onto a plane, with either photosensitive plates for ion detection or linear channeltron detector arrays. SSMS has several unique features that make it a useful technique for various applications. Merits of SSMS include high sensitivity with detection limits in the ppb range, simultaneous detection of all elements in a sample, and simple sample preparation. However, the rf spark ion current is discontinuous and erratic, which results in only fair resolution and accuracy unless standards are used. Other drawbacks include expensive equipment, long analysis time, and the need for highly trained personnel to analyze the spectrum.
Applications of SSMS
Spark source mass spectrometry has been used for trace analysis and multielement analysis applications for highly conducting, semiconducting, and nonconducting materials. Some examples of SSMS applications are the trace element analysis of high-purity materials, multielement analysis of elements in technical alloys, geochemical and cosmochemical samples, biological samples, industrial stream samples, and radioactive material.
References
Ion source | Spark ionization | Physics | 1,084 |
5,480,302 | https://en.wikipedia.org/wiki/Differentiation%20in%20Fr%C3%A9chet%20spaces | In mathematics, in particular in functional analysis and nonlinear analysis, it is possible to define the derivative of a function between two Fréchet spaces. This notion of differentiation, being the Gateaux derivative between Fréchet spaces, is significantly weaker than the derivative in a Banach space, even between general topological vector spaces. Nevertheless, it is the weakest notion of differentiation for which many of the familiar theorems from calculus hold. In particular, the chain rule is true. With some additional constraints on the Fréchet spaces and functions involved, there is an analog of the inverse function theorem called the Nash–Moser inverse function theorem, having wide applications in nonlinear analysis and differential geometry.
Mathematical details
Formally, the definition of differentiation is identical to the Gateaux derivative. Specifically, let $X$ and $Y$ be Fréchet spaces, $U \subseteq X$ be an open set, and $P : U \to Y$ be a function. The directional derivative of $P$ at $u \in U$ in the direction $h \in X$ is defined by
$$D P(u)h = \lim_{t \to 0} \frac{P(u + th) - P(u)}{t}$$
if the limit exists. One says that $P$ is continuously differentiable, or $C^1$, if the limit exists for all $u \in U$ and $h \in X$ and the mapping
$$DP : U \times X \to Y$$
is a continuous map.
Higher order derivatives are defined inductively via
$$D^{k+1} P(u)\{h_1, \ldots, h_{k+1}\} = \lim_{t \to 0} \frac{D^k P(u + t h_{k+1})\{h_1, \ldots, h_k\} - D^k P(u)\{h_1, \ldots, h_k\}}{t}.$$
A function is said to be $C^k$ if $D^k P : U \times X \times \cdots \times X \to Y$ is continuous. It is $C^\infty$, or smooth, if it is $C^k$ for every $k$.
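As a sketch of how the definition operates (a standard textbook example, assumed here rather than taken from this article): let $X = Y = C^\infty(S^1)$ with its usual Fréchet topology and $P(f) = f^2$. Then
$$DP(f)h = \lim_{t \to 0} \frac{(f + th)^2 - f^2}{t} = \lim_{t \to 0} \left( 2fh + t h^2 \right) = 2fh,$$
which is jointly continuous in $(f, h)$, so $P$ is $C^1$. Iterating gives $D^2 P(f)\{h_1, h_2\} = 2 h_1 h_2$ and $D^k P = 0$ for $k \ge 3$, so $P$ is smooth.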
Properties
Let $X$, $Y$, and $Z$ be Fréchet spaces. Suppose that $U$ is an open subset of $X$, $V$ is an open subset of $Y$, and $P : U \to V$, $Q : V \to Z$ are a pair of $C^1$ functions. Then the following properties hold:
Fundamental theorem of calculus. If the line segment from $a$ to $b$ lies entirely within $U$, then
$$P(b) - P(a) = \int_0^1 DP(a + (b - a)t)\,(b - a)\, dt.$$
The chain rule. For all $u \in U$ and $h \in X$,
$$D(Q \circ P)(u)h = DQ(P(u))\, DP(u)h.$$
Linearity. $DP(u)h$ is linear in $h$. More generally, if $P$ is $C^k$, then $D^k P(u)\{h_1, \ldots, h_k\}$ is multilinear in the $h_i$'s.
Taylor's theorem with remainder. Suppose that the line segment between $u$ and $u + h$ lies entirely within $U$. If $P$ is $C^k$ then
$$P(u + h) = P(u) + DP(u)h + \frac{1}{2!} D^2 P(u)\{h, h\} + \cdots + \frac{1}{(k-1)!} D^{k-1} P(u)\{h, \ldots, h\} + R_k,$$
where the remainder term is given by
$$R_k = \int_0^1 \frac{(1 - t)^{k-1}}{(k-1)!} D^k P(u + th)\{h, \ldots, h\}\, dt.$$
Commutativity of directional derivatives. If $P$ is $C^k$, then for every permutation $\sigma$ of $\{1, 2, \ldots, k\}$,
$$D^k P(u)\{h_1, \ldots, h_k\} = D^k P(u)\{h_{\sigma(1)}, \ldots, h_{\sigma(k)}\}.$$
The proofs of many of these properties rely fundamentally on the fact that it is possible to define the Riemann integral of continuous curves in a Fréchet space.
Smooth mappings
Surprisingly, a mapping between open subsets of Fréchet spaces is smooth (infinitely often differentiable) if it maps smooth curves to smooth curves; see Convenient analysis.
Moreover, smooth curves in spaces of smooth functions are just smooth functions of one variable more.
Consequences in differential geometry
The existence of a chain rule allows for the definition of a manifold modeled on a Fréchet space: a Fréchet manifold. Furthermore, the linearity of the derivative implies that there is an analog of the tangent bundle for Fréchet manifolds.
Tame Fréchet spaces
Frequently the Fréchet spaces that arise in practical applications of the derivative enjoy an additional property: they are tame. Roughly speaking, a tame Fréchet space is one which is almost a Banach space. On tame spaces, it is possible to define a preferred class of mappings, known as tame maps. On the category of tame spaces under tame maps, the underlying topology is strong enough to support a fully fledged theory of differential topology. Within this context, many more techniques from calculus hold. In particular, there are versions of the inverse and implicit function theorems.
See also
References
Banach spaces
Differential calculus
Euclidean geometry
Functions and mappings
Generalizations of the derivative
Topological vector spaces | Differentiation in Fréchet spaces | Mathematics | 650 |
2,727,218 | https://en.wikipedia.org/wiki/Drug%20recall | A drug recall removes a prescription or over-the-counter drug from the market. Drug recalls in the United States are made by the FDA or the creators of the drug when certain criteria are met. When a drug recall is made, the drug is removed from the market and potential legal action can be taken depending on the severity of the drug recall.
Drug recalls are classified in the US by the FDA in three different categories. Class I recalls are the most severe and indicate that exposure and/or consumption of the drug will lead to adverse health effects or death. Class II recalls refer to drugs that induce temporary and/or medically reversible health effects. Class III recalls occur when adverse health effects are not likely to occur when consuming the drug or being exposed to it.
There are also market withdrawals and medical device safety alerts. Market withdrawals occur when a product has a minor violation that does not require FDA legal action. Medical device safety alerts occur when there are unreasonable safety risks associated with using a product.
Examples in the United States
A more comprehensive list of drug recalls worldwide can be found here: List of withdrawn drugs.
Mrs. Winslow’s Soothing Syrup
Mrs. Winslow's Soothing Syrup was introduced as a soothing agent for both humans and animals, but was primarily advertised to help soothe teething babies. Though the FDA was not directly involved, Mrs. Winslow's Soothing Syrup was denounced by the American Medical Association in 1911 in an article titled "Baby Killers." The syrup was sold until as late as 1930 in the United Kingdom.
Diethylstilbestrol (DES)
In 1971, Diethylstilbestrol (DES) was recalled from the market. It was intended to be used to prevent prenatal problems during pregnancy. Women who took DES were shown to have a greater chance of having breast cancer. It is estimated that 5 to 10 million persons were exposed to DES until its recall in 1971. Both mothers and second generation daughters are confirmed to have adverse side effects from DES.
Daughters of DES mothers are more than twice as likely to develop breast cancer and are 2.4 times as likely to be infertile.
Sons of DES mothers have displayed side effects such as genital abnormalities, non-cancerous epididymal cysts, and infertility.
The third generation of people exposed to DES is only now entering an age at which reproductive problems and abnormalities can be studied. No viable results currently exist.
Reasons for drug recall
The FDA will issue different levels of recall depending on the severity of the effects. From most to least severe, these are Class I, Class II, and Class III (defined above). There is also market withdrawal, which occurs when a drug does not violate FDA regulation but has a known, minor defect. The producer must either fix the defect or take the drug off the market.
Drugs and medical devices are typically recalled if the product is defective, contaminated, contains a foreign object, fails to meet specifications, or is mislabeled or misbranded. Misbranding was the most common reason for pharmaceutical recalls in 2015, accounting for 42% of them.
Drug Recalls by Class in the United States
This graph charts the rise in drug recalls by class in the United States from 2004 to 2014 with data from Regulatory Affairs Professionals Society
Recall process
The recall process in the United States follows three approximate phases. Distinct difficulties arise depending on the type of drug being recalled.
Drug recalls can be initiated by the producing firm or the FDA, and those launched by the FDA can be either mandatory or voluntary. This is applicable not just to drugs but all products covered under the FDA.
Notification and response
A firm submitting a recall to the FDA must provide all relevant information about the specific drug, including but not limited to: product name, use, description, and at least two samples of product (including packaging, instructions, inserts, etc.).
The firm must explain the problem it found with the product, how it found the problem, and the reason the problem occurred. For example, if the firm finds a leaking pipe near a product assembly line and tests for batches of the drug produced on that line are positive for contamination, it would submit that as the explanation of how it believes its products came to be affected. After submitting a field report, the potential risks will be assessed.
Processing and tracking
In processing the recall, a Health Hazard Assessment will be conducted by the FDA to determine the recall class (defined above). Level of recall, notification, instructions, mechanics, and impacts on the economy and the individual consumer must all be considered in determining recall strategy. Level of recall refers to the part of the distribution chain to which the recall is extended (wholesale, retail, pharmacy, medical user, etc.). Notification is the way consumers are alerted to the recall. In cases of a severe health hazard, a press release must be promptly issued. The FDA recommends a written notification, so consumers will have lasting documentation. There are guidelines for notification depending on type; these types include mail, phone, facsimile, e-mail, and media. Instructions and mechanics are information provided to the consumer regarding appropriate action for the recall. The instructions include whether the product is to be returned, and if so, where and how it should be returned. It is important to consider the recalled drug's place in the market, should the recall lead to market shortages.
Compliance and reporting
The FDA will conduct an Effectiveness Check to determine the success of the recall. The drug will either undergo controlled destruction or reconditioning (i.e. relabeling with the correct label). Status reports are conducted throughout the recall to determine effectiveness.
The root cause of the recall must be addressed and corrected to prevent future occurrences. After all corrective action is acknowledged and carried out, the FDA can terminate the recall.
Drug type
OTC, prescription, and compounded drugs (drugs tailored to a specific patient) each pose unique challenges to the recall process.
Over-the-counter drugs are widely distributed, and there is no direct link between company and consumer. Recalls are typically only advertised online and in the media, so consumers are subject to their own awareness. Lot numbers indicated on the packaging allow only those affected to participate in the recall.
Prescription drug recalls are made simpler because they follow supply chain: the manufacturer notifies the pharmacy who notifies the patient. However, since there is not a lot/batch number on packaging, recalls must rely on date ranges (date the prescription was filled) whose inaccuracy may lead to higher costs.
Compounded drugs are simple to recall because there is a direct link to the patient. Despite the seeming simplicity, the offending component is typically identified across multiple drug classifications, expanding the recall.
Changes in United States government policy
Although incomplete, this list highlights some changes in United States government policy that have had an effect on drug recalls.
National Childhood Vaccine Injury Act
The National Childhood Vaccine Injury Act of 1986 recognized the threat of injury and death that vaccines can pose. It allowed for financial compensation of the family should such threats come to light, and it increased vaccination safety precautions. If the federal compensation is not sufficient or not granted, this act allowed patients to take legal action for vaccine injuries. This is relevant to drug recalls because a vaccine producer is responsible for reparative damages if their vaccine causes injury and was not recalled.
FDA Modernization Act of 1997
The Food and Drug Administration Modernization Act of 1997 was passed in order to streamline the FDA to meet the standards of efficiency expected in the 21st century. With regard to drugs, the act lowered the regulatory obligations of pharmaceutical companies, allowing them to rely on one clinical trial for approval. It is still the assumption, however, that two trials are necessary to determine safety and effectiveness.
In addition to lowering regulatory hurdles, the act allowed for the advertisement of "off label" uses. The effects of this could be unnecessary overuse of the product by consumers and larger profits for the firm. As for medical devices, private for-profit firms were allowed to review the products instead of the FDA.
21st Century Cures Act
The 21st Century Cures Act would allow for faster approval of certain drugs, which could result in additional recalls. It passed both houses of Congress and was signed into law by US President Barack Obama on December 13, 2016.
In 2015, 45 new drugs were approved by the FDA, more than double the approval rate of ten years earlier. The 21st Century Cures Act could make this number a trend rather than an aberration by expediting approval through lower standards, much like the FDA Modernization Act of 1997. The rationale behind the act is that urgency trumps risk for "breakthrough" medical devices. The act would allow producers to submit data other than official clinical trials for consideration, such as case histories. It would also allow reviews to be done by third parties instead of the FDA. Debates stem from the fact that approval could be based on anecdotal rather than scientific evidence.
This act is debated due to the FDA's seemingly close relations with medical device producers. The agency and the industry collaborated to write proposals for lobbying for the legislation of this act. The FDA is supposed to be neutral in its actions, but representatives from Johnson & Johnson, St Jude Medical, and CVRx Inc. (large medical device suppliers) were all in attendance at the collaborative meetings.
See also
List of withdrawn drugs
Contamination control
References
External links
U.S. Food & Drug administration (FDA) — Enforcement Report Index
FDA — Recalls, Market Withdrawals and Safety Alerts
FDA — Recalls, Market Withdrawals and Safety Alerts Archive
FDA Center For Drug Evaluation & Research (CDER)
National Patient Safety Agency (UK)
Australian Therapeutic Goods Administration (TGA) — Product recalls
Product recalls | Drug recall | Chemistry | 1,969 |
15,193,662 | https://en.wikipedia.org/wiki/Rac%20%28GTPase%29 | Rac is a subfamily of the Rho family of GTPases, small (~21 kDa) signaling G proteins. Like other G proteins, Rac acts as a molecular switch, remaining inactive while bound to guanosine diphosphate (GDP) and becoming activated once guanine nucleotide exchange factors (GEFs) remove GDP, permitting guanosine triphosphate (GTP) to bind. When bound to GTP, Rac is activated. In its activated state, Rac participates in the regulation of cell movement through its involvement in structural changes to the actin cytoskeleton. By changing the cytoskeletal dynamics within the cell, Rac GTPases are able to facilitate the recruitment of neutrophils to infected tissues, and to regulate the degranulation of azurophilic granules and integrin-dependent phagocytosis.
Activated Rac also regulates the effector functions of the target proteins involved in downstream signaling. As an essential subunit of NOX2 (the NADPH oxidase enzyme complex), Rac is required for the production of ROS (reactive oxygen species) involved in the formation of NETs (neutrophil extracellular traps), thus facilitating the clearance of pathogens and debris by neutrophils and the reduction of inflammation.
Abnormal activities of Rac, including hyperactivation, resistance to degradation, and abnormal localization of its signaling protein components, have been found to facilitate the development of cancerous cells and resistance to anticancer treatment.
Recent experiments on Drosophila suggest that Rac could be involved in mediating the process of forgetting. Hyperactivation of Rac increases memory decay whereas its inhibition prevents interference-induced forgetting and slows down passive memory decay.
Classification
The Rho family of GTPases includes Rac, Rho, and Cdc42 small G-protein groups. Rac comprises Rac1, Rac2, Rac3, and RhoG subgroups.
The extensive cross-talk among these groups of GTPases has a significant impact on the biological responses of the cell, influencing the activity of the cell cycle machinery. Ras cooperates with Cdc42 to regulate Elk1 phosphorylation and the transcriptional activity of SRF. Ras also cooperates with Rho and Rac to activate other downstream signaling pathways.
References
G proteins
EC 3.6.5 | Rac (GTPase) | Chemistry | 500 |
48,080,791 | https://en.wikipedia.org/wiki/Consortium%20for%20Computing%20Sciences%20in%20Colleges | The Consortium for Computing Sciences in Colleges (CCSC) is a nonprofit organization divided into ten regions that roughly match geographical areas in the United States. The purpose of the consortium is to: "promote, support and improve computing curricula in colleges and universities; encompass regional constituencies devoted to this purpose; and promote a national liaison among local, regional and national organizations devoted to this purpose." Predominantly these colleges and universities are oriented toward teaching, rather than research.
Regions
CCSC regions include:
Central Plains
Eastern
Midsouth
Midwest
Northeastern
Northwestern
Rocky Mountain
South Central
Southeastern
Southwestern
Conferences
Conferences are typically held annually by region, and include presentation of peer-reviewed papers, as well as student papers, posters, and programming contests, workshops, and special sessions for innovative assignments and approaches in the area of computer science technology and education.
Journal
CCSC publishes the Journal of Computing Sciences in Colleges, containing the proceedings of each annual regional conference. The journal is distributed to approximately 600 faculty members from 350 colleges and universities.
See also
Association for Computing Machinery (ACM)
ACM Special Interest Group on Computer Science Education (SIGCSE)
Computer science
Google for Education
National Center for Women & Information Technology
Upsilon Pi Epsilon
References
External links
Computer science education
Information technology organizations based in North America
Educational organizations established in 1986
1986 establishments in the United States | Consortium for Computing Sciences in Colleges | Technology | 266 |
2,977,884 | https://en.wikipedia.org/wiki/Conjugation%20of%20isometries%20in%20Euclidean%20space | In a group, the conjugate by g of h is ghg⁻¹.
Translation
If h is a translation, then its conjugation by an isometry can be described as applying the isometry to the translation:
the conjugation of a translation by a translation is the first translation
the conjugation of a translation by a rotation is a translation by a rotated translation vector
the conjugation of a translation by a reflection is a translation by a reflected translation vector
Thus the conjugacy class within the Euclidean group E(n) of a translation is the set of all translations by the same distance.
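The second rule above can be verified with a small numerical check (illustrative, not part of the article): representing plane isometries as 3 × 3 homogeneous matrices, conjugating a translation by a rotation yields the translation by the rotated vector.

import numpy as np

def translation(b):
    m = np.eye(3)
    m[:2, 2] = b
    return m

def rotation(theta):
    c, s = np.cos(theta), np.sin(theta)
    m = np.eye(3)
    m[:2, :2] = [[c, -s], [s, c]]
    return m

theta, b = np.pi / 3, np.array([2.0, 1.0])
R, T = rotation(theta), translation(b)
conj = R @ T @ np.linalg.inv(R)          # conjugate of the translation by the rotation
rotated_b = rotation(theta)[:2, :2] @ b  # the rotated translation vector
print(np.allclose(conj, translation(rotated_b)))  # True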
The smallest subgroup of the Euclidean group containing all translations by a given distance is the set of all translations. So, this is the conjugate closure of a singleton containing a translation.
Thus E(n) is a semidirect product of the orthogonal group O(n) and the subgroup of translations T, and O(n) is isomorphic with the quotient group of E(n) by T:
O(n) ≅ E(n) / T
Thus there is a partition of the Euclidean group, with each part consisting of one isometry that keeps the origin fixed, combined with all translations.
Each isometry is given by an orthogonal matrix A in O(n) and a vector b:
x ↦ Ax + b
and each subset in the quotient group is given by the matrix A only.
Similarly, for the special orthogonal group SO(n) we have
SO(n) ≅ E+(n) / T
Inversion
The conjugate of the inversion in a point by a translation is the inversion in the translated point, etc.
Thus the conjugacy class within the Euclidean group E(n) of inversion in a point is the set of inversions in all points.
Since a combination of two inversions is a translation, the conjugate closure of a singleton containing inversion in a point is the set of all translations and the inversions in all points. This is the generalized dihedral group dih(R^n).
Similarly { I, −I } is a normal subgroup of O(n), and we have:
E(n) / dih(R^n) ≅ O(n) / { I, −I }
For odd n we also have:
O(n) ≅ SO(n) × { I, −I }
and hence not only
O(n) / SO(n) ≅ { I, −I }
but also:
O(n) / { I, −I } ≅ SO(n)
For even n we have:
E+(n) / dih(R^n) ≅ SO(n) / { I, −I }
Rotation
In 3D, the conjugate by a translation of a rotation about an axis is the corresponding rotation about the translated axis. Such a conjugation produces the screw displacement known to express an arbitrary Euclidean motion according to Chasles' theorem.
The conjugacy class within the Euclidean group E(3) of a rotation about an axis is the set of rotations by the same angle about any axis.
The conjugate closure of a singleton containing a rotation in 3D is E+(3).
In 2D it is different in the case of a k-fold rotation: the conjugate closure contains k rotations (including the identity) combined with all translations.
E(2) has quotient group O(2) / C_k and E+(2) has quotient group SO(2) / C_k. For k = 2 this was already covered above.
Reflection
The conjugates of a reflection are reflections with a translated, rotated, and reflected mirror plane. The conjugate closure of a singleton containing a reflection is the whole E(n).
Rotoreflection
The left coset, and also the right coset, of a reflection in a plane combined with a rotation by a given angle about a perpendicular axis is the set of all combinations of a reflection in the same or a parallel plane with a rotation by the same angle about the same or a parallel axis, preserving orientation.
Isometry groups
Two isometry groups are said to be equal up to conjugacy with respect to affine transformations if there is an affine transformation such that all elements of one group are obtained by taking the conjugates by that affine transformation of all elements of the other group. This applies for example for the symmetry groups of two patterns which are both of a particular wallpaper group type. If we would just consider conjugacy with respect to isometries, we would not allow for scaling, and in the case of a parallelogrammatic lattice, change of shape of the parallelogram. Note however that the conjugate with respect to an affine transformation of an isometry is in general not an isometry, although volume (in 2D: area) and orientation are preserved.
Cyclic groups
Cyclic groups are abelian, so the conjugate of every element by any element is the element itself.
Z_mn / Z_m ≅ Z_n.
Z_mn is the direct product of Z_m and Z_n if and only if m and n are coprime. Thus, e.g., Z_12 is the direct product of Z_3 and Z_4, but not of Z_6 and Z_2.
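A brief computational illustration of this coprimality criterion (the helper functions are our own): the direct product Z_m × Z_n is cyclic of order mn exactly when some element has order mn, which happens precisely when gcd(m, n) = 1.

from math import gcd

def lcm(a, b):
    return a * b // gcd(a, b)

def max_element_order(m, n):
    # order of (a, b) in Z_m x Z_n is lcm(order of a, order of b),
    # where the order of a in Z_m is m // gcd(a, m)
    return max(lcm(m // gcd(a, m), n // gcd(b, n))
               for a in range(m) for b in range(n))

print(max_element_order(3, 4))  # 12: Z_3 x Z_4 is cyclic, isomorphic to Z_12
print(max_element_order(6, 2))  # 6:  Z_6 x Z_2 is not cyclic of order 12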
Dihedral groups
Consider the 2D isometry point group D_n. The conjugates of a rotation are the same rotation and its inverse. The conjugates of a reflection are the reflections rotated by any multiple of the full rotation unit. For odd n these are all reflections; for even n, half of them.
This group, and more generally, the abstract group Dih_n, has the normal subgroup Z_m for all divisors m of n, including n itself.
Additionally, Dih_2n has two normal subgroups isomorphic with Dih_n. They both contain the same group elements forming the group Z_n, but each additionally has one of the two conjugacy classes of Dih_2n \ Z_2n.
In fact:
Dih_mn / Z_m ≅ Dih_n
Dih_2n / Dih_n ≅ Z_2
Dih_{4n+2} ≅ Dih_{2n+1} × Z_2
References
Euclidean symmetries
Group theory | Conjugation of isometries in Euclidean space | Physics,Mathematics | 1,268 |
12,671,952 | https://en.wikipedia.org/wiki/Clofoctol | Clofoctol is a bacteriostatic antibiotic. It is used in the treatment of respiratory tract and ear, nose and throat infections caused by Gram-positive bacteria.
It was marketed in France until 2005 under the trade name Octofene, and in Italy as Gramplus.
It is only functional against Gram-positive bacteria.
It penetrates into human lung tissue.
A French company, Apteeus, had been developing clofoctol as a potential therapy against SARS-CoV-2 in 2020–2021, but the repurposing of the drug was eventually abandoned due to a lack of volunteers. A mouse study showed that the repurposed drug clofoctol blocks SARS-CoV-2 replication.
References
Antibiotics
Phenols
Chloroarenes | Clofoctol | Biology | 166 |
66,590,304 | https://en.wikipedia.org/wiki/Su%E2%80%93Schrieffer%E2%80%93Heeger%20model | In condensed matter physics, the Su–Schrieffer–Heeger (SSH) model or SSH chain is a one-dimensional lattice model that presents topological features. It was devised by Wu-Pei Su, John Robert Schrieffer, and Alan J. Heeger in 1979, to describe the increase of electrical conductivity of the polyacetylene polymer chain when doped, based on the existence of solitonic defects. It is a quantum mechanical tight binding approach that describes the hopping of spinless electrons in a chain with two alternating types of bonds. Electrons in a given site can only hop to adjacent sites.
Depending on the ratio between the hopping energies of the two possible bonds, the system can be either in metallic phase (conductive) or in an insulating phase. The finite SSH chain can behave as a topological insulator, depending on the boundary conditions at the edges of the chain. For the finite chain, there exists an insulating phase, that is topologically non-trivial and allows for the existence of edge states that are localized at the boundaries.
Description
The model describes a half-filled one-dimensional lattice, with two sites per unit cell, A and B, which correspond to a single electron per unit cell. In this configuration each electron can either hop inside the unit cell or hop to an adjacent cell through nearest neighbor sites. As with any 1D model, with two sites per cell, there will be two bands in the dispersion relation (usually called optical and acoustic bands). If the bands do not touch, there is a band gap. If the gap lies at the Fermi level, then the system is considered to be an insulator.
The tight binding Hamiltonian in a chain with N sites can be written as

$$H = v \sum_{n} \left( |n, B\rangle\langle n, A| + \text{h.c.} \right) + w \sum_{n} \left( |n{+}1, A\rangle\langle n, B| + \text{h.c.} \right)$$

where h.c. denotes the Hermitian conjugate, v is the energy required to hop from a site A to B inside the unit cell, and w is the energy required to hop between unit cells. Here the Fermi energy is fixed to zero.
Bulk solution
The dispersion relation for the bulk can be obtained through a Fourier transform. Taking periodic boundary conditions $|n + N, \alpha\rangle = |n, \alpha\rangle$, where $\alpha = A, B$, we pass to k-space by doing

$$|k, \alpha\rangle = \frac{1}{\sqrt{N}} \sum_{n} e^{ikn} |n, \alpha\rangle,$$

which results in the following Hamiltonian

$$H(k) = \begin{pmatrix} 0 & v + w e^{-ik} \\ v + w e^{ik} & 0 \end{pmatrix},$$

where the eigenenergies are easily calculated as

$$E_\pm(k) = \pm\left|v + w e^{ik}\right| = \pm\sqrt{v^2 + w^2 + 2 v w \cos k},$$

and the corresponding eigenstates are

$$|\psi_\pm(k)\rangle = \frac{1}{\sqrt{2}} \begin{pmatrix} \pm e^{-i\phi(k)} \\ 1 \end{pmatrix},$$

where

$$\phi(k) = \arg\left(v + w e^{ik}\right).$$

The eigenenergies are symmetrical under swap of $v \leftrightarrow w$, and the dispersion relation is mostly gapped (insulator) except when $v = w$ (metal). By analyzing the energies, the problem is apparently symmetric about $v \leftrightarrow w$: the spectrum for $(v, w)$ has the same dispersion as for $(w, v)$. Nevertheless, not all properties of the system are symmetrical; for example, the eigenvectors are very different under swap of $v$ and $w$. It can be shown for example that the Berry connection

$$A(k) = i \langle \psi(k) | \partial_k \psi(k) \rangle,$$

integrated over the Brillouin zone $k \in [-\pi, \pi]$, produces different winding numbers:

$$\nu = \frac{1}{\pi} \int_{-\pi}^{\pi} A(k)\, dk = \begin{cases} 0, & v > w \\ 1, & v < w \end{cases}$$

showing that the two insulating phases, $v > w$ and $v < w$, are topologically different (small changes in v and w change $A(k)$ but not $\nu$ over the Brillouin zone). The winding number remains undefined for the metallic case $v = w$. This difference in topology means that one cannot pass from an insulating phase to another without closing the gap (passing through the metallic phase). This phenomenon is called a topological phase transition.
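The winding number can be evaluated numerically by tracking the phase of the off-diagonal element $h(k) = v + w e^{ik}$ around the Brillouin zone. The sketch below (parameter values chosen only for illustration) recovers $\nu = 0$ in the trivial phase and $\nu = 1$ in the topological phase.

import numpy as np

def winding(v, w, n=4001):
    # winding number of h(k) = v + w*exp(ik) around the origin
    k = np.linspace(-np.pi, np.pi, n)
    phase = np.unwrap(np.angle(v + w * np.exp(1j * k)))
    return (phase[-1] - phase[0]) / (2.0 * np.pi)

print(winding(1.0, 0.5))  # ~0.0: trivial phase (v > w)
print(winding(0.5, 1.0))  # ~1.0: topological phase (v < w)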
Finite chain solution and edge states
The physical consequences of having different winding number become more apparent for a finite chain with an even number of lattice sites. It is much harder to diagonalize the Hamiltonian analytically in the finite case due to the lack of translational symmetry.
Dimerized cases
There exist two limiting cases for the finite chain, either $v = 0$ or $w = 0$. In both of these cases, the chain is clearly an insulator, as the chain is broken into dimers (dimerized). However, one of the two cases ($w = 0$) would consist only of complete dimers, while the other case ($v = 0$) would consist of dimers and two unpaired sites at the edges of the chain. In the latter case, as there is no on-site energy, if an electron finds itself on any of the two edge sites, its energy would be zero. So the case $v = 0$ would necessarily have two eigenstates with zero energy, while the case $w = 0$ would not have zero-energy eigenstates. Contrary to the bulk case, the two limiting cases are not symmetrical in their spectrum.
Intermediate values
By plotting the eigenstates of the finite chain as a function of position, one can show that there are two distinct kinds of states. For non-zero eigenenergies, the corresponding wavefunctions are delocalized all along the chain, while the zero-energy eigenstates portray localized amplitudes at the edge sites. The latter are called edge states. Even though their eigenenergies lie in the gap, the edge states are localized and correspond to an insulating phase.
By plotting the spectrum as a function of $v$ for a fixed value of $w$, one finds two insulating regions separated by the metallic intersection at $v = w$. The spectrum is gapped in both insulating regions, but one of the regions shows zero-energy eigenstates and the other does not, corresponding to the dimerized cases. The existence of edge states in one region and not in the other demonstrates the difference between the insulating phases, and it is this sharp transition at $v = w$ that corresponds to a topological phase transition.
Correspondence between finite and bulk solutions
The bulk case allows one to predict which insulating region will present edge states, depending on the value of the winding number in the bulk. For the region where the bulk winding number is 1 ($v < w$), the corresponding finite chain with an even number of sites presents edge states, while for the region where the bulk winding number is 0 ($v > w$), the corresponding finite chain does not. This relation between winding numbers in the bulk and edge states in the finite chain is called the bulk-edge correspondence.
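The bulk-edge correspondence can be illustrated by direct numerical diagonalization of the finite chain (a sketch; the function name and parameter values are our own, not from the original model papers):

import numpy as np

def ssh_chain(n_cells, v, w):
    # open SSH chain with 2*n_cells sites and alternating hoppings v, w
    n = 2 * n_cells
    h = np.zeros((n, n))
    for i in range(n - 1):
        h[i, i + 1] = h[i + 1, i] = v if i % 2 == 0 else w
    return h

energies, states = np.linalg.eigh(ssh_chain(50, v=0.5, w=1.0))  # v < w
mid_gap = np.argsort(np.abs(energies))[:2]  # the two states closest to E = 0
print(energies[mid_gap])                    # exponentially close to zero
# probability weight of each mid-gap state on the outermost four sites per end:
edge_weight = (np.abs(states[:4, mid_gap]) ** 2
               + np.abs(states[-4:, mid_gap]) ** 2).sum(axis=0)
print(edge_weight)  # large (~0.94 each here): the states live at the chain ends

Running the same diagnostic with v > w instead yields no mid-gap pair and small edge weights, matching the bulk winding numbers computed above.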
See also
Kitaev chain
Peierls transition
References
Condensed matter physics | Su–Schrieffer–Heeger model | Physics,Chemistry,Materials_science,Engineering | 1,196 |
76,602,254 | https://en.wikipedia.org/wiki/Oxford%20Forest%20Conservation%20Area | The Oxford Forest Conservation Area is a protected forest area located in the foothills near the township of Oxford in North Canterbury, New Zealand. The area is also an accredited International Dark Sky Park.
The forest is a remnant of extensive beech and podocarp forests that previously covered inland parts of North Canterbury. Species present in the forest include mountain beech and examples of the podocarps rimu, mataī, kahikatea, and tōtara. The forest is mainly black beech (Nothofagus solandri) at lower altitudes, with mountain beech (Nothofagus cliffortioides) higher up. From around 1851 to 1909, logging took place in the Oxford Forest and the nearby Woodside Forest property. Several fires in the late 19th century destroyed much of the forest, and logging ceased in 1915. Some areas of beech forest regenerated following a major fire in 1898. Sheep were grazed in some places from 1914, but grazing reduced after the 1930s, allowing more land to revert to beech. By 1973, the area was being managed as a forest park, with increasing areas of regenerating beech and plantations of exotic species.
The Oxford Forest Conservation Area is classified as stewardship land, under section 25 of the Conservation Act 1987. It includes walking and mountain biking tracks and is a recreational hunting area. The conservation area includes Mount Oxford.
International Dark Sky Park
In 2024, the conservation area was designated by DarkSky International as New Zealand's second International Dark Sky Park. Readings of night sky luminance in the park have a median value of 21.45 mag/arcsec² (corresponding to Bortle scale 3), and in places are as dark as 21.76 and 21.80 mag/arcsec² (Bortle scale 1).
The application for designation was prepared by the Oxford Dark Sky Group, with member organisations including the Department of Conservation, the Waimakariri District Council, local schools, the Oxford Promotions Action Committee, community groups and sports clubs. The accreditation of the Oxford Forest Conservation Area is an initial step towards a larger dark-sky preserve. There are plans to reduce light pollution from the township of Oxford and extend the area of the dark-sky preserve by ten times, with the conservation area as the central dark core.
References
External links
Mt Oxford Conservation Area Short Walks at Visit Waimakariri
Oxford Dark Sky Incorporated at NZBN
Dark-sky preserves in New Zealand
Forests of New Zealand
Protected areas of the Canterbury Region
Waimakariri District | Oxford Forest Conservation Area | Astronomy | 513 |
374,372 | https://en.wikipedia.org/wiki/Noria | A noria (nā‘ūra, plural nawāʿīr; from nā‘orā, lit. "growler") is a hydropowered scoop wheel used to lift water into a small aqueduct, either for the purpose of irrigation or to supply water to cities and villages.
Name and meaning
Etymology
The English word noria is derived via Spanish noria from Arabic nā‘ūra (ناعورة), which comes from the Arabic verb meaning to "groan" or "grunt", in reference to the sound it made when turning.
Noria versus saqiyah
The term noria is commonly used for devices which use the power of moving water to turn the wheel. For devices powered by animals, the usual term is saqiyah or saqiya. Other types of similar devices are grouped under the name of chain pumps. However, the names of traditional water-raising devices used in the Middle East, India, Spain and other areas are often used loosely and overlappingly, or vary depending on region. Al-Jazari's book on mechanical devices, for example, groups the water-driven wheel and several other types of water-lifting devices under the general term saqiya. In Spain, by contrast, the term noria is used for both types of wheels, whether powered by animals or water current.
Function
The noria performs the function of moving water from a lower elevation to a higher elevation, using the energy derived from the flow of a river. It consists of a large, narrow undershot water wheel whose rim is made up of a series of containers or compartments which lift water from the river to an aqueduct at the top of the wheel. Its concept is similar to the modern hydraulic ram, which also uses the power of flowing water to pump some of the water out of the river.
Traditional norias may have pots, buckets or tubes attached directly to the periphery of the wheel, in effect sakias powered by flowing water rather than by animals or motors. For some, the buckets themselves form the driving surfaces; for most, the buckets are separate from the water wheel and attached on one side. More modern types can be built up from compartments. All types are configured to discharge the lifted water sideways into a channel. For a modern noria in Steffisburg, Switzerland, the designers have uniquely connected the two functional wheels not directly but via a pair of cog wheels. This allows individual variation of speeds, diameters, and water levels.
Unlike the water wheels found in watermills, a noria does not provide mechanical power to any other process. A few historical norias were hybrids, consisting of waterwheels assisted secondarily by animal power. There is at least one known instance where a noria feeds seawater into a saltern.
History
Paddle-driven water-lifting wheels had appeared in ancient Egypt by the 4th century BC. According to John Peter Oleson, both the compartmented wheel and the hydraulic noria appeared in Egypt by the 4th century BC, with the saqiyah being invented there a century later. This is supported by archeological finds in the Faiyum, where the oldest archeological evidence of a water wheel has been found, in the form of a saqiyah dating back to the 3rd century BC. A papyrus dating to the 2nd century BC also found in the Faiyum mentions a water wheel used for irrigation, a 2nd-century BC fresco found at Alexandria depicts a compartmented saqiyah, and the writings of Callixenus of Rhodes mention the use of a saqiyah in the Ptolemaic Kingdom during the reign of Pharaoh Ptolemy IV Philopator in the late 3rd century BC.
The undershot water wheel and overshot water wheel, both animal- and water-driven, and with either a compartmented body (Latin tympanum) or a compartmented rim, were used by Hellenistic engineers between the 3rd and 2nd century BC. In the 1st century BC, the Roman architect Vitruvius described the function of the noria. Around AD 300, the Romans replaced the wooden compartments with separate, attached ceramic pots that were tied to the outside of an open-framed wheel, thereby creating the noria.
During the Islamic Golden Age, norias were adopted from classical antiquity by Muslim engineers, who made improvements to them. For example, the flywheel mechanism, used to smooth out the delivery of power from a driving device to a driven machine, was invented by ibn Bassal (fl. 1038–1075) of al-Andalus, who pioneered the use of the flywheel in the noria and saqiyah. In 1206, Ismail al-Jazari introduced the use of the crank in the noria and saqiyah, and implied the concept of minimizing intermittency for the purpose of maximising their efficiency.
Muslim engineers used norias to discharge water into aqueducts which carried the water to towns and fields. The norias of Hama, for example, are still used in modern times (although currently serving only aesthetic purposes). The largest wheel has 120 water collection compartments and could raise more than 95 litres of water per minute. In the 10th century, Muhammad ibn Zakariya al-Razi's Al-Hawi describes a noria in Iraq that could lift as much as 153,000 litres per hour, or 2,550 litres per minute. This is comparable to the output of modern norias in East Asia, which can lift up to 288,000 litres per hour, or 4,800 litres per minute.
In the late 13th century the Marinid sultan Abu Yaqub Yusuf built an enormous noria, sometimes referred to as the "Grand Noria", in order to provide water for the vast Mosara Garden he created in Fez, Morocco. Its construction began in 1286 and was finished the next year. The noria, designed by an Andalusian engineer named Ibn al-Hajj, measured 26 metres in diameter and 2 metres wide. The wheel was made of wood but covered in copper, fitted into a stone structure adjoined to a nearby city gate. After the decline of the Marinids both the gardens and the noria fell into neglect; the wheel of the noria reportedly disappeared in 1888, leaving only remains of the stone base.
Numerous norias were also built in Al-Andalus, during the Islamic period of the Iberian Peninsula (8th–15th centuries), and continued to be built by Christian Spanish engineers afterwards. The most famous are the Albolafia in Cordoba (of uncertain date, partly reconstructed today), along the Guadalquivir River, and a former noria in Toledo, along the Tagus River. According to al-Idrisi, the Toledo noria was especially large and could raise water from the river to an aqueduct over 40 meters above it, which then supplied water to the city. Norias and similar devices were also used on a vast scale in some parts of Spain for agricultural purposes. The rice plantations of Valencia were said to have 8000 norias, while Mallorca had over 4000 animal-driven saqiyas which were in use up until the beginning of the 20th century. The Alcantarilla Noria near Murcia, a noria built in the 15th century under Spanish Christian rule, is one of the better-known examples to have survived to the present day.
References
Sources
External links
Spanish norias in the Region of Murcia
Photos of the norias of Hama in Syria
Aqueducts
Pumps
Irrigation
Articles containing video clips
Ancient Egyptian technology
Egyptian inventions
Watermills
Water wheels | Noria | Physics,Chemistry | 1,560 |
2,009,410 | https://en.wikipedia.org/wiki/Lithium%20acetate | Lithium acetate (CH3COOLi) is a salt of lithium and acetic acid. It is often abbreviated as LiOAc.
Uses
Lithium acetate is used in the laboratory as a buffer for gel electrophoresis of DNA and RNA. It has a lower electrical conductivity and can be run at higher speeds than gels made from TAE buffer (5–30 V/cm, as compared to 5–10 V/cm). At a given voltage, the heat generation, and thus the gel temperature, is much lower than with TAE buffers; the voltage can therefore be increased to speed up electrophoresis so that a gel run takes only a fraction of the usual time. Downstream applications, such as isolation of DNA from a gel slice or Southern blot analysis, work as expected when using lithium acetate gels.
Lithium borate or sodium borate buffers are usually preferable to lithium acetate or TAE when analyzing smaller fragments of DNA (less than 500 bp) due to the higher resolution of borate-based buffers in this size range as compared to acetate buffers.
Lithium acetate is also used to permeabilize the cell wall of yeast for use in DNA transformation. It is believed that the beneficial effect of LiOAc is caused by its chaotropic effect, denaturing DNA, RNA, and proteins.
References
Acetates
Lithium salts
Organolithium compounds | Lithium acetate | Chemistry | 288 |
76,556,946 | https://en.wikipedia.org/wiki/Neodymium%28II%29%20fluoride | Neodymium(II) fluoride is an inorganic compound with the chemical formula NdF2. It can be obtained by shock compression of neodymium(III) fluoride and neodymium at 1000 °C and above 200 kbar. It can also be obtained by reacting neodymium in a eutectic lithium fluoride–neodymium(III) fluoride system at 1100 °C.
References
External reading
Neodymium compounds
Fluorides
Lanthanide halides | Neodymium(II) fluoride | Chemistry | 104 |
35,233,688 | https://en.wikipedia.org/wiki/Potential%20cultural%20impact%20of%20extraterrestrial%20contact | The cultural impact of extraterrestrial contact is the corpus of changes to terrestrial science, technology, religion, politics, and ecosystems resulting from contact with an extraterrestrial civilization. This concept is closely related to the search for extraterrestrial intelligence (SETI), which attempts to locate intelligent life as opposed to analyzing the implications of contact with that life.
The potential changes from extraterrestrial contact could vary greatly in magnitude and type, based on the extraterrestrial civilization's level of technological advancement, degree of benevolence or malevolence, and level of mutual comprehension between itself and humanity. The medium through which humanity is contacted, be it electromagnetic radiation, direct physical interaction, extraterrestrial artifact, or otherwise, may also influence the results of contact. Incorporating these factors, various systems have been created to assess the implications of extraterrestrial contact.
The implications of extraterrestrial contact, particularly with a technologically superior civilization, have often been likened to the meeting of two vastly different human cultures on Earth, a historical precedent being the Columbian Exchange. Such meetings have generally led to the destruction of the civilization receiving contact (as opposed to the "contactor", which initiates contact), and therefore destruction of human civilization is a possible outcome. Extraterrestrial contact is also analogous to the numerous encounters between non-human native and invasive species occupying the same ecological niche. However, the absence of verified public contact to date means tragic consequences are still largely speculative.
Background
Search for extraterrestrial intelligence
To detect extraterrestrial civilizations with radio telescopes, one must identify an artificial, coherent signal against a background of various natural phenomena that also produce radio waves. Telescopes capable of this include the Allen Telescope Array in Hat Creek, California, and the new Five-hundred-meter Aperture Spherical Telescope in China, and formerly included the now-demolished Arecibo Observatory in Puerto Rico. Various programs to detect extraterrestrial intelligence have had government funding in the past. Project Cyclops was commissioned by NASA in the 1970s to investigate the most effective way to search for signals from intelligent extraterrestrial sources, but the report's recommendations were set aside in favor of the much more modest approach of Messaging to Extra-Terrestrial Intelligence (METI), the sending of messages that intelligent extraterrestrial beings might intercept. NASA then drastically reduced funding for SETI programs, which have since turned to private donations to continue their search.
With the discovery in the late 20th and early 21st centuries of numerous extrasolar planets, some of which may be habitable, governments have once more become interested in funding new programs. In 2006 the European Space Agency launched COROT, the first spacecraft dedicated to the search for exoplanets, and in 2009 NASA launched the Kepler space observatory for the same purpose. By February 2013, Kepler had detected 105 confirmed exoplanets, one of which, Kepler-22b, is potentially habitable. After it was discovered, the SETI Institute resumed the search for an intelligent extraterrestrial civilization, focusing on Kepler's candidate planets, with funding from the United States Air Force.
Newly discovered planets, particularly ones that are potentially habitable, have enabled SETI and METI programs to refocus projects for communication with extraterrestrial intelligence. In 2009 A Message From Earth (AMFE) was sent toward the Gliese 581 planetary system, which contains two potentially habitable planets, the confirmed Gliese 581d and the more habitable but unconfirmed Gliese 581g. In the SETILive project, which began in 2012, human volunteers analyze data from the Allen Telescope Array to search for possible alien signals that computers might miss because of terrestrial radio interference. The data for the study is obtained by observing Kepler target stars with the radio telescope.
In addition to radio-based methods, some projects, such as SEVENDIP (Search for Extraterrestrial Visible Emissions from Nearby Developed Intelligent Populations) at the University of California, Berkeley, are using other regions of the electromagnetic spectrum to search for extraterrestrial signals. Various other projects are not searching for coherent signals, but rather aim to use electromagnetic radiation to find other evidence of extraterrestrial intelligence, such as megascale astroengineering projects.
Several signals, such as the Wow! signal, have been detected in the history of the search for extraterrestrial intelligence, but none have yet been confirmed as being of intelligent origin.
Impact assessment
The implications of extraterrestrial contact depend on the method of discovery, the nature of the extraterrestrial beings, and their location relative to the Earth. Considering these factors, the Rio scale has been devised in order to provide a more quantitative picture of the results of extraterrestrial contact. More specifically, the scale gauges whether communication was conducted through radio, the information content of any messages, and whether discovery arose from a deliberately beamed message (and if so, whether the detection was the result of a specialized SETI effort or through general astronomical observations) or by the detection of occurrences such as radiation leakage from astroengineering installations. The question of whether or not a purported extraterrestrial signal has been confirmed as authentic, and with what degree of confidence, will also influence the impact of the contact. The Rio scale was modified in 2011 to include a consideration of whether contact was achieved through an interstellar message or through a physical extraterrestrial artifact, with a suggestion that the definition of artifact be expanded to include "technosignatures", including all indications of intelligent extraterrestrial life other than the interstellar radio messages sought by traditional SETI programs.
A study by astronomer Steven J. Dick at the United States Naval Observatory considered the cultural impact of extraterrestrial contact by analyzing events of similar significance in the history of science. The study argues that the impact would be most strongly influenced by the information content of the message received, if any. It distinguishes short-term and long-term impact. Seeing radio-based contact as a more plausible scenario than a visit from extraterrestrial spacecraft, the study rejects the commonly stated analogy of European colonization of the Americas as an accurate model for information-only contact, preferring events of profound scientific significance, such as the Copernican and Darwinian revolutions, as more predictive of how humanity might be impacted by extraterrestrial contact.
The physical distance between the two civilizations has also been used to assess the cultural impact of extraterrestrial contact. Historical examples show that the greater the distance, the less the contacted civilization perceives a threat to itself and its culture. Therefore, contact occurring within the Solar System, and especially in the immediate vicinity of Earth, is likely to be the most disruptive and negative for humanity. On a smaller scale, people close to the epicenter of contact would experience a greater effect than would those living farther away, and a contact having multiple epicenters would cause a greater shock than one with a single epicenter. Space scientists Martin Dominik and John Zarnecki state that in the absence of any data on the nature of extraterrestrial intelligence, one must predict the cultural impact of extraterrestrial contact on the basis of generalizations encompassing all life and of analogies with history.
The beliefs of the general public about the effect of extraterrestrial contact have also been studied. A poll of United States and Chinese university students in 2000 provides factor analysis of responses to questions about, inter alia, the participants' belief that extraterrestrial life exists in the Universe, that such life may be intelligent, and that humans will eventually make contact with it. The study shows significant weighted correlations between participants' belief that extraterrestrial contact may either conflict with or enrich their personal religious beliefs and how conservative such religious beliefs are. The more conservative the respondents, the more harmful they considered extraterrestrial contact to be. Other significant correlation patterns indicate that students took the view that the search for extraterrestrial intelligence may be futile or even harmful.
Psychologists Douglas Vakoch and Yuh-shiow Lee conducted a survey to assess people's reactions to receiving a message from extraterrestrials, including their judgments about the likelihood that extraterrestrials would be malevolent. "People who view the world as a hostile place are more likely to think extraterrestrials will be hostile," Vakoch told USA Today.
Post-detection protocols
Various protocols have been drawn up detailing a course of action for scientists and governments after extraterrestrial contact. Post-detection protocols must address three issues: what to do in the first weeks after receiving a message from an extraterrestrial source; whether or not to send a reply; and analyzing the long-term consequences of the message received. No post-detection protocol, however, is binding under national or international law, and Dominik and Zarnecki consider the protocols likely to be ignored if contact occurs.
One of the first post-detection protocols, the "Declaration of Principles for Activities Following the Detection of Extraterrestrial Intelligence", was created by the SETI Permanent Committee of the International Academy of Astronautics (IAA). It was later approved by the Board of Trustees of the IAA and by the International Institute of Space Law, and still later by the International Astronomical Union (IAU), the Committee on Space Research, the International Union of Radio Science, and others. It was subsequently endorsed by most researchers involved in the search for extraterrestrial intelligence, including the SETI Institute.
The Declaration of Principles contains the following broad provisions:
Any person or organization detecting a signal should try to verify that it is likely to be of intelligent origin before announcing it.
The discoverer of a signal should, for the purposes of independent verification, communicate with other signatories of the Declaration before making a public announcement, and should also inform their national authorities.
Once a given astronomical observation has been determined to be a credible extraterrestrial signal, the astronomical community should be informed through the Central Bureau for Astronomical Telegrams of the IAU. The Secretary-General of the United Nations and various other global scientific unions should also be informed.
Following confirmation of an observation's extraterrestrial origin, news of the discovery should be made public. The discoverer has the right to make the first public announcement.
All data confirming the discovery should be published to the international scientific community and stored in an accessible form as permanently as possible.
Should evidence for extraterrestrial intelligence take the form of electromagnetic signals, the Secretary-General of the International Telecommunication Union (ITU) should be contacted, and may request, in the next ITU Weekly Circular, that terrestrial use of the electromagnetic frequency bands in which the signal was detected be minimized.
Neither the discoverer nor anyone else should respond to an observed extraterrestrial intelligence; doing so requires international agreement under separate procedures.
The SETI Permanent Committee of the IAA and Commission 51 of the IAU should continually review procedures regarding detection of extraterrestrial intelligence and management of data related to such discoveries. A committee comprising members from various international scientific unions, and other bodies designated by the committee, should regulate continued SETI research.
A separate "Proposed Agreement on the Sending of Communications to Extraterrestrial Intelligence" was subsequently created. It proposes an international commission, membership of which would be open to all interested nations, to be constituted on detection of extraterrestrial intelligence. This commission would decide whether to send a message to the extraterrestrial intelligence, and if so, would determine the contents of the message on the basis of principles such as justice, respect for cultural diversity, honesty, and respect for property and territory. The draft proposes to forbid the sending of any message by an individual nation or organization without the permission of the commission, and suggests that, if the detected intelligence poses a danger to human civilization, the United Nations Security Council should authorize any message to extraterrestrial intelligence. However, this proposal, like all others, has not been incorporated into national or international law.
Paul Davies, a member of the SETI Post-Detection Taskgroup, has stated that post-detection protocols, calling for international consultation before taking any major steps regarding the detection, are unlikely to be followed by astronomers, who would put the advancement of their careers over the word of a protocol that is not part of national or international law.
Contact scenarios and considerations
Scientific literature and science fiction have put forward various models of the ways in which extraterrestrial and human civilizations might interact. Their predictions range widely, from sophisticated civilizations that could advance human civilization in many areas to imperial powers that might draw upon the forces necessary to subjugate humanity. Some theories suggest that an extraterrestrial civilization could be advanced enough to dispense with biology, living instead inside advanced computers.
The implications of discovery depend heavily on the level of aggressiveness of the civilization interacting with humanity, its ethics, and how much human and extraterrestrial biologies have in common. These factors may govern the quantity and type of dialogue that can take place.
The question of whether contact is via signals from distant places or via probes or extraterrestrials in Earth's vicinity (or both) will also govern the magnitude of the long-term implications of contact.
In the case of communication using electromagnetic signals, the long silence between the reception of one message and another would mean that the content of any message would particularly affect the consequences of contact, as would the extent of mutual comprehension.
Concerning probes, a study suggested that the first interstellar probe to arrive from a given civilization is not likely to be its earliest probe (that is, the ones sent first) but a more advanced later one, since departure speeds are thought likely to improve for at least some period of each civilization's development; this may have implications for the type of probes to expect and for the impact of any probes sent earlier.
Friendly civilizations
Many writers have speculated on the ways in which a friendly civilization might interact with humankind. Albert Harrison, a professor emeritus of psychology at the University of California, Davis, thought that a highly advanced civilization might teach humanity such things as a physical theory of everything, how to use zero-point energy, or how to travel faster than light. He suggested that collaboration with such a civilization could initially be in the arts and humanities before moving to the hard sciences, and even that artists might spearhead collaboration. Seth D. Baum, of the Global Catastrophic Risk Institute, and others consider that the greater longevity of cooperative civilizations in comparison to uncooperative and aggressive ones might render extraterrestrial civilizations in general more likely to aid humanity. In contrast to these views, Paolo Musso, a member of the SETI Permanent Study Group of the International Academy of Astronautics (IAA) and the Pontifical Academy of Sciences, took the view that extraterrestrial civilizations possess, like humans, a morality driven not entirely by altruism but also by individual benefit, thus leaving open the possibility that at least some extraterrestrial civilizations are hostile.
Futurist Allen Tough suggests that an extremely advanced extraterrestrial civilization, recalling its own past of war and plunder and knowing that it possesses superweapons that could destroy it, would be likely to try to help humans rather than to destroy them. He identifies three approaches that a friendly civilization might take to help humanity:
Intervention only to avert catastrophe: this would involve occasional limited intervention to stop events that could destroy human civilization completely, such as nuclear war or asteroid impact.
Advice and action with consent: under this approach, the extraterrestrials would be more closely involved in terrestrial affairs, advising world leaders and acting with their consent to protect against danger.
Forcible corrective action: the extraterrestrials could require humanity to reduce major risks against its will, intending to help humans advance to the next stage of civilization.
Tough considers advising and acting only with consent to be a more likely choice than the forceful option. While coercive aid may be possible, and advanced extraterrestrials would recognize their own practices as superior to those of humanity, it may be unlikely that this method would be used in cultural cooperation. Lemarchand suggests that instruction of a civilization in its "technological adolescence", such as humanity, would probably focus on morality and ethics rather than on science and technology, to ensure that the civilization did not destroy itself with technology it was not yet ready to use.
According to Tough, it is unlikely that the avoidance of immediate dangers and prevention of future catastrophes would be conducted through radio, as these tasks would demand constant surveillance and quick action. However, cultural cooperation might take place through radio or a space probe in the Solar System, as radio waves could be used to communicate information about advanced technologies and cultures to humanity.
Even if an ancient and advanced extraterrestrial civilization wished to help humanity, humans could suffer from a loss of identity and confidence due to the technological and cultural prowess of the extraterrestrial civilization. However, a friendly civilization may calibrate its contact with humanity in such a way as to minimize unintended consequences. Michael A. G. Michaud suggests that a friendly and advanced extraterrestrial civilization may even avoid all contact with an emerging intelligent species like humanity, to ensure that the less advanced civilization can develop naturally at its own pace; this is known as the zoo hypothesis.
Hostile civilizations
Science fiction often depicts humans successfully repelling alien invasions, but scientists more often take the view that an extraterrestrial civilization with sufficient power to reach the Earth would be able to destroy human civilization or humanity with minimal effort. Operations that are enormous on a human scale, such as destroying all major population centers on a planet, bombarding a planet with deadly neutron radiation, or even traveling to another planetary system in order to lay waste to it, may be important tools for a hostile civilization.
Deardorff speculates that a small proportion of the intelligent life forms in the galaxy may be aggressive, but the actual aggressiveness or benevolence of the civilizations would cover a wide spectrum, with some civilizations "policing" others. Civilizations may not be homogeneous and may contain different factions or subgroups. According to Harrison and Dick, hostile extraterrestrial life may indeed be rare in the Universe, just as belligerent and autocratic nations on Earth have been the ones that lasted for the shortest periods of time, and humanity is seeing a shift away from these characteristics in its own sociopolitical systems. In addition, the causes of war may be diminished greatly for a civilization with access to the galaxy, as there are prodigious quantities of natural resources in space accessible without resort to violence.
SETI researcher Carl Sagan believed that a civilization with the technological prowess needed to reach the stars and come to Earth must have transcended war to be able to avoid self-destruction. Representatives of such a civilization would treat humanity with dignity and respect, and humanity, with its relatively backward technology, would have no choice but to reciprocate. Seth Shostak, an astronomer at the SETI Institute, disagrees, stating that the finite quantity of resources in the galaxy would cultivate aggression in any intelligent species, and that an explorer civilization that would want to contact humanity would be aggressive. Similarly, Ragbir Bhathal claimed that since the laws of evolution would be the same on another habitable planet as they are on Earth, an extremely advanced extraterrestrial civilization may have the motivation to colonize Earth in a manner similar to the European colonization of much of the rest of the world.
Disputing these analyses, David Brin states that while an extraterrestrial civilization may have an imperative to act for no benefit to itself, it would be naïve to suggest that such a trait would be prevalent throughout the galaxy. Brin points to the fact that in many moral systems on Earth, such as the Aztec or Carthaginian one, non-military killing has been accepted and even "exalted" by society, and further mentions that such acts are not confined to humans but can be found throughout the animal kingdom.
Baum et al. speculate that highly advanced civilizations are unlikely to come to Earth to enslave humans, as the achievement of their level of advancement would have required them to solve the problems of labor and resources by other means, such as creating a sustainable environment and using mechanized labor. Moreover, humans may be an unsuitable food source for extraterrestrials because of marked differences in biochemistry. For example, the chirality of molecules used by terrestrial biota may differ from those used by extraterrestrial beings. Douglas Vakoch argues that transmitting intentional signals does not increase the risk of an alien invasion, contrary to concerns raised by British cosmologist Stephen Hawking, because "any civilization that has the ability to travel between the stars can already pick up our accidental radio and TV leakage" at a distance of several hundred light-years. The artificial signals from Earth that would be easiest, or most likely, to detect are the brief pulses transmitted by anti-ballistic missile (ABM) early-warning and space-surveillance radars during the Cold War, and by later astronomical and military radars. Unlike early, conventional radio and television broadcasts, which have been claimed to be undetectable beyond relatively short distances, such signals could be detected from comparatively distant receiver stations in certain regions.
Politicians have also commented on the likely human reaction to contact with hostile species. In his 1987 speech to the United Nations General Assembly, Ronald Reagan said, "I occasionally think how quickly our differences worldwide would vanish if we were facing an alien threat from outside this world."
Equally advanced and more advanced civilizations
Robert Freitas speculated in 1978 that the technological advancement and energy usage of a civilization, measured either relative to another civilization or in absolute terms by its rating on the Kardashev scale, may play an important role in the result of extraterrestrial contact. Given the infeasibility of interstellar space flight for civilizations at a technological level similar to that of humanity, interactions between such civilizations would have to take place by radio. Because of the long transit times of radio waves between stars, such interactions would not lead to the establishment of diplomatic relations, nor any significant future interaction at all, between the two civilizations.
According to Freitas, direct contact with civilizations significantly more advanced than humanity would have to take place within the Solar System, as only the more advanced society would have the resources and technology to cross interstellar space. Consequently, such contact could only be with civilizations rated as Type II or higher on the Kardashev scale, as Type I civilizations would be incapable of regular interstellar travel. Freitas expected that such interactions would be carefully planned by the more advanced civilization to avoid mass societal shock for humanity.
However much planning an extraterrestrial civilization may do before contacting humanity, humans may still experience great shock and terror upon its arrival, especially as they would lack any understanding of the contacting civilization. Ben Finney compares the situation to that of the tribespeople of New Guinea, an island that was settled fifty thousand years ago during the last glacial period but saw little contact with the outside world until the arrival of European colonial powers in the late 19th and early 20th centuries. The huge difference between the indigenous stone-age society and the Europeans' technical civilization caused unexpected behaviors among the native populations known as cargo cults: to coax the gods into bringing them the technology that the Europeans possessed, the natives created wooden "radio stations" and "airstrips" as a form of sympathetic magic. Finney argues that humanity may misunderstand the true meaning of an extraterrestrial transmission to Earth, much as the people of New Guinea could not understand the source of modern goods and technologies. He concludes that the results of extraterrestrial contact will become known over the long term with rigorous study, rather than as fast, sharp events briefly making newspaper headlines.
Billingham has suggested that a civilization which is far more technologically advanced than humanity is also likely to be culturally and ethically advanced, and would therefore be unlikely to conduct astroengineering projects that would harm human civilization. Such projects could include Dyson spheres, which completely enclose stars and capture all energy coming from them. Even if such a project were well within the capability of an advanced civilization and would provide an enormous amount of energy, it would not be undertaken. For similar reasons, such civilizations would not readily give humanity the knowledge required to build such devices. Nevertheless, the existence of such capabilities would at least show that civilizations have survived "technological adolescence". Despite the caution that such an advanced civilization would exercise in dealing with the less mature human civilization, Sagan imagined that an advanced civilization might send those on Earth an Encyclopædia Galactica describing the sciences and cultures of many extraterrestrial societies.
Whether an advanced extraterrestrial civilization would send humanity a decipherable message is a matter of debate in itself. Sagan argued that a highly advanced extraterrestrial civilization would bear in mind that they were communicating with a relatively primitive one and therefore would try to ensure that the receiving civilization would be able to understand the message. Marvin Minsky believed that aliens might think similarly to humans because of shared constraints, permitting communication. Arguing against this view, astronomer Guillermo Lemarchand stated that an advanced civilization would probably encrypt a message with high information content, such as an Encyclopædia Galactica, in order to ensure that only other ethically advanced civilizations would be able to understand it. Douglas Vakoch assumes it may take some time to decode any message, telling ABC News that "I don't think we're going to understand immediately what they have to say." "There’s going to be a lot of guesswork in trying to interpret another civilization," he told Science Friday, adding that "in some ways, any message we get from an extraterrestrial will be like a cosmic Rorschach ink blot test."
Interstellar groups of civilizations
Given the age of the galaxy, Harrison surmises that "galactic clubs" might exist, groupings of civilizations from across the galaxy. Such clubs could begin as loose confederations or alliances, eventually developing into powerful unions of many civilizations. If humanity could enter into a dialogue with one extraterrestrial civilization, it might be able to join such a galactic club. As more extraterrestrial civilizations, or unions thereof, are found, these could also become assimilated into such a club. Sebastian von Hoerner has suggested that entry into a galactic club may be a way for humanity to handle the culture shock arising from contact with an advanced extraterrestrial civilization.
Whether a broad spectrum of civilizations from many places in the galaxy would even be able to cooperate is disputed by Michaud, who states that civilizations with huge differences in the technologies and resources at their command "may not consider themselves even remotely equal". It is unlikely that humanity would meet the basic requirements for membership at its current low level of technological advancement. A galactic club may, William Hamilton speculates, set extremely high entrance requirements that are unlikely to be met by less advanced civilizations.
When two Canadian astronomers argued that they potentially discovered 234 extraterrestrial civilizations through analysis of the Sloan Digital Sky Survey database, Douglas Vakoch doubted their explanation for their findings, noting that it would be unusual for all of these stars to pulse at exactly the same frequency unless they were part of a coordinated network: "If you take a step back," he said, "that would mean you have 234 independent stars that all decided to transmit the exact same way."
Michaud suggests that an interstellar grouping of civilizations might take the form of an empire, which need not necessarily be a force for evil, but may provide for peace and security throughout its jurisdiction. Owing to the distances between the stars, such an empire would not necessarily maintain control solely by military force, but may rather tolerate local cultures and institutions to the extent that these would not pose a threat to the central imperial authority. Such tolerance may, as has happened historically on Earth, extend to allowing nominal self-rule of specific regions by existing institutions, while maintaining that area as a puppet or client state to accomplish the aims of the imperial power. However, particularly advanced powers may use methods, including faster-than-light travel, to make centralized administration more effective.
In contrast to the belief that an extraterrestrial civilization would want to establish an empire, Ćirković proposes that an extraterrestrial civilization would maintain equilibrium rather than expand outward. In such an equilibrium, a civilization would only colonize a small number of stars, aiming to maximize efficiency rather than to expand massive and unsustainable imperial structures. This contrasts with the classic Kardashev Type III civilization, which has access to the energy output of an entire galaxy and is not subject to any limits on its future expansion. According to this view, advanced civilizations may not resemble the classic examples in science fiction, but might more closely reflect the small, independent Greek city-states, with an emphasis on cultural rather than territorial growth.
Extraterrestrial artifacts
An extraterrestrial civilization may choose to communicate with humanity by means of artifacts or probes rather than by radio, for various reasons. While probes may take a long time to reach the Solar System, once there they would be able to hold a sustained dialogue that would be impossible using radio from hundreds or thousands of light-years away. Radio would be completely unsuitable for surveillance and continued monitoring of a civilization, and should an extraterrestrial civilization wish to perform these activities on humanity, artifacts may be the only option other than to send large, crewed spacecraft to the Solar System.
Although faster-than-light travel has been seriously considered by physicists such as Miguel Alcubierre, Tough speculates that the enormous amount of energy required to achieve such speeds under currently proposed mechanisms means that robotic probes traveling at conventional speeds will still have an advantage for various applications. Research conducted in 2013 at NASA's Johnson Space Center, however, suggested that faster-than-light travel with the Alcubierre drive might require dramatically less energy than previously thought, needing only about 1 tonne of exotic mass-energy to move a spacecraft at 10 times the speed of light, in contrast to earlier estimates, which held that only an object with the mass-energy of Jupiter would contain sufficient energy to power such a spacecraft.
According to Tough, an extraterrestrial civilization might want to send various types of information to humanity by means of artifacts, such as an Encyclopædia Galactica, containing the wisdom of countless extraterrestrial cultures, or perhaps an invitation to engage in diplomacy with them. A civilization that sees itself on the brink of decline might use the abilities it still possesses to send probes throughout the galaxy, with its cultures, values, religions, sciences, technologies, and laws, so that these may not die along with the civilization itself.
Freitas finds numerous reasons why interstellar probes may be a preferred method of communication among extraterrestrial civilizations wishing to make contact with Earth. A civilization aiming to learn more about the distribution of life within the galaxy might, he speculates, send probes to a large number of star systems, rather than using radio, as one cannot ensure a response by radio but can (he says) ensure that probes will return to their sender with data on the star systems they survey. Furthermore, probes would enable the surveying of non-intelligent populations, or those not yet capable of space navigation (like humans before the 20th century), as well as intelligent populations that might not wish to provide information about themselves and their planets to extraterrestrial civilizations. In addition, according to Michaud, the greater energy required to send living beings rather than a robotic probe means that crewed interstellar missions would be undertaken only for purposes such as one-way migration.
Freitas points out that probes, unlike the interstellar radio waves commonly targeted by SETI searches, could store information for long, perhaps geological, timescales, and could emit strong radio signals unambiguously recognizable as being of intelligent origin, rather than being dismissed as a UFO or a natural phenomenon. Probes could also modify any signal they send to suit the system they were in, which would be impossible for a radio transmission originating from outside the target star system. Moreover, the use of small robotic probes with widely distributed beacons in individual systems, rather than a small number of powerful, centralized beacons, would provide a security advantage to the civilization using them. Rather than revealing the location of a radio beacon powerful enough to signal the whole galaxy and risk such a powerful device being compromised, decentralized beacons installed on robotic probes need not reveal any information that an extraterrestrial civilization prefers others not to have.
Given the age of the Milky Way galaxy, an ancient extraterrestrial civilization may have existed and sent probes to the Solar System millions or even billions of years before the evolution of Homo sapiens. Thus, a probe may have arrived and been nonfunctional for millions of years before humans learned of its existence. Such a "dead" probe would not pose an imminent threat to humanity, but it would prove that interstellar flight is possible. However, if an active probe were discovered, humans would react much more strongly than they would to the discovery of a probe that had long since ceased to function.
Further implications of contact
Theological
The confirmation of extraterrestrial intelligence could have a profound impact on religious doctrines, potentially causing theologians to reinterpret scriptures to accommodate the new discoveries. However, a survey of people with many different religious beliefs indicated that their faith would not be affected by the discovery of extraterrestrial intelligence, and another study, conducted by Ted Peters of the Pacific Lutheran Theological Seminary, shows that most people would not consider their religious beliefs superseded by it. Surveys of religious leaders indicate that only a small percentage are concerned that the existence of extraterrestrial intelligence might fundamentally contradict the views of the adherents of their religion. Gabriel Funes, the chief astronomer of the Vatican Observatory and a papal adviser on science, has stated that the Catholic Church would be likely to welcome extraterrestrial visitors warmly. There are many UFO religions such as Raëlism. Astronomer David Weintraub suggests unambiguous contact would result in more of these kinds of beliefs and communities, saying "There undoubtedly would be people who would find this as an opportunity or an excuse to call attention to themselves for whatever reason and there would be new religions".
Contact with extraterrestrial intelligence would not be completely inconsequential for religion. The Peters study showed that most non-religious people, and a significant minority of religious people, believe that the world could face a religious crisis, even if their own beliefs were unaffected. Contact with extraterrestrial intelligence would be most likely to cause a problem for western religions, in particular traditionalist Christianity, because of the geocentric nature of western faiths. The discovery of extraterrestrial life would not contradict basic conceptions of God, however, and seeing that science has challenged established dogma in the past, for example with the theory of evolution, it is likely that existing religions will adapt similarly to the new circumstances. Douglas Vakoch argues that it is not likely that the discovery of extraterrestrial life will impact religious beliefs. In the view of Musso, a global religious crisis would be unlikely even for Abrahamic faiths, as the studies of himself and others on Christianity, the most "anthropocentric" religion, see no conflict between that religion and the existence of extraterrestrial intelligence. In addition, the cultural and religious values of extraterrestrial species would likely be shared over centuries if contact is to occur by radio, meaning that rather than causing a huge shock to humanity, such information would be viewed much as archaeologists and historians view ancient artifacts and texts.
Funes speculates that a decipherable message from extraterrestrial intelligence could initiate an interstellar exchange of knowledge in various disciplines, including whatever religions an extraterrestrial civilization may host. Billingham further suggests that an extremely advanced and friendly extraterrestrial civilization might put an end to present-day religious conflicts and lead to greater religious toleration worldwide. On the other hand, Jill Tarter puts forward the view that contact with extraterrestrial intelligence might eliminate religion as we know it and introduce humanity to an all-encompassing faith. Vakoch doubts that humans would be inclined to adopt extraterrestrial religions, telling ABC News "I think religion meets very human needs, and unless extraterrestrials can provide a replacement for it, I don't think religion is going to go away," and adding, "if there are incredibly advanced civilizations with a belief in God, I don't think Richard Dawkins will start believing."
Political
According to experts such as Niklas Hedman, executive director of the UN Office for Outer Space Affairs, there are "no international agreements or mechanisms in place for how humanity would handle an encounter with extraterrestrial intelligence".
Tim Folger speculates that news of radio contact with an extraterrestrial civilization would prove impossible to suppress and would travel rapidly, though Cold War scientific literature on the subject contradicts this. Media coverage of the discovery would probably die down quickly, though, as scientists began to decipher the message and learn its true impact. Different branches of government (for example legislative, executive, and judiciary) may pursue their own policies, potentially giving rise to power struggles. Even in the event of a single contact with no follow-up, radio contact may prompt fierce disagreements as to which bodies have the authority to represent humanity as a whole. Michaud hypothesizes that the fear arising from direct contact may cause nation-states to put aside their conflicts and work together for the common defense of humanity.
Apart from the question of who would represent the Earth as a whole, contact could create other international problems, such as the degree of involvement of governments foreign to the one whose radio astronomers received the signal. The United Nations discussed various issues of foreign relations immediately before the launch of the Voyager probes, each of which carries a golden record in case it is found by extraterrestrial intelligence (Voyager 1 crossed into interstellar space in 2012). Among the issues discussed were what messages would best represent humanity, what format they should take, how to convey the cultural history of the Earth, and what international groups should be formed to study extraterrestrial intelligence in greater detail.
According to Luca Codignola of the University of Genoa, contact with a powerful extraterrestrial civilization is comparable to occasions where one powerful civilization destroyed another, such as the arrival of Christopher Columbus and Hernán Cortés into the Americas and the subsequent destruction of the indigenous civilizations and their ways of life. However, the applicability of such a model to contact with extraterrestrial civilizations, and that specific interpretation of the arrival of the European colonists to the Americas, have been disputed. Even so, any large difference between the power of an extraterrestrial civilization and our own could be demoralizing and potentially cause or accelerate the collapse of human society. Being discovered by a "superior" extraterrestrial civilization, and continued contact with it, might have psychological effects that could destroy a civilization, as is claimed to have happened in the past on Earth.
Even in the absence of close contact between humanity and extraterrestrials, high-information messages from an extraterrestrial civilization to humanity have the potential to cause a great cultural shock. Sociologist Donald Tarter has conjectured that knowledge of extraterrestrial culture and theology has the potential to compromise human allegiance to existing organizational structures and institutions. The cultural shock of meeting an extraterrestrial civilization may be spread over decades or even centuries if an extraterrestrial message to humanity is extremely difficult to decipher.
A study suggests there may be a threat from the perception by state actors (or from their subsequent actions based on this perception) that other state-level actors could seek to gain an information monopoly on communications with an extraterrestrial intelligence. It recommends transparency and data sharing, further development of post-detection protocols, and better education of policymakers in this space.
Legal
Contact with extraterrestrial civilizations would raise legal questions, such as the rights of the extraterrestrial beings. An extraterrestrial arriving on Earth might only have the protection of animal cruelty statutes. Much as various classes of human beings, such as women, children, and indigenous people, were initially denied human rights, so might extraterrestrial beings, who could therefore be legally owned and killed. If such a species were not to be treated as a legal animal, there would arise the challenge of defining the boundary between a legal person and a legal animal, considering the numerous factors that constitute intelligence. Some ethicists are considering "how the rights of a completely unfamiliar alien species would fit into our legal and ethical frameworks", and there is a case for "human rights" to evolve into "sentient rights".
Freitas considers that even if an extraterrestrial being were to be afforded legal personhood, problems of nationality and immigration would arise. An extraterrestrial being would not have a legally recognized earthly citizenship, and drastic legal measures might be required in order to account for the technically illegal immigration of extraterrestrial individuals.
If contact were to take place through electromagnetic signals, these issues would not arise. Rather, issues relating to patent and copyright law regarding who, if anyone, has rights to the information from the extraterrestrial civilization would be the primary legal problem.
Scientific and technological
The scientific and technological impact of extraterrestrial contact through electromagnetic waves would probably be quite small, especially at first. However, if the message contains a large amount of information, deciphering it could give humans access to a galactic heritage perhaps predating the formation of the Solar System, which may greatly advance our technology and science. A possible negative effect could be to demoralize research scientists as they come to know that what they are researching may already be known to another civilization.
On the other hand, extraterrestrial civilizations with malicious intent could send (unfiltered) information that could enable or facilitate the self-destruction of human civilization, such as powerful computer viruses, knowledge for building an advanced artificial intelligence, or information on how to make extremely potent weapons that humans would not yet be able to use responsibly. While the motives for such an action are unknown, it might require minimal energy expenditure on the part of the extraterrestrials. It is also possible that such information could be sent without malicious intent. According to Musso, however, computer viruses in particular would be nearly impossible unless the extraterrestrials possessed detailed knowledge of human computer architectures, which could only happen if a human message sent to the stars were protected with little thought to security. Even a virtual machine on which extraterrestrials could run computer programs could be designed specifically for the purpose, bearing little relation to computer systems commonly used on Earth. In addition, humans could send messages to extraterrestrials stating that they do not want access to the Encyclopædia Galactica until they have reached a suitable level of advancement, thus possibly mitigating the harmful impacts of technology received from extraterrestrials.
Extraterrestrial technology could have profound impacts on the nature of human culture and civilization. Just as television provided a new outlet for a wide variety of political, religious, and social groups, and as the printing press made the Bible available to the common people of Europe, allowing them to interpret it for themselves, so an extraterrestrial technology might change humanity in ways not immediately apparent. Harrison speculates that a knowledge of extraterrestrial technologies could increase the gap between scientific and cultural progress, leading to societal shock and an inability to compensate for negative effects of technology. He gives the example of improvements in agricultural technology during the Industrial Revolution, which displaced thousands of farm laborers until society could retrain them for jobs suited to the new social order. Contact with an extraterrestrial civilization far more advanced than humanity could cause a much greater shock than the Industrial Revolution, or anything previously experienced by humanity.
Michaud suggests that humanity could be impacted by an influx of extraterrestrial science and technology in the same way that medieval European scholars were impacted by the knowledge of Arab scientists. Humanity might at first revere the knowledge as having the potential to advance the human species, and might even feel inferior to the extraterrestrial species, but would gradually grow in arrogance as it gained more and more intimate knowledge of the science, technology, and other cultural developments of an advanced extraterrestrial civilization.
The discovery of extraterrestrial intelligence would have various impacts on biology and astrobiology. The discovery of extraterrestrial life in any form, intelligent or non-intelligent, would give humanity greater insight into the nature of life on Earth and would improve the conception of how the tree of life is organized. Human biologists could possibly learn about extraterrestrial biochemistry and observe how it differs from that found on Earth. This knowledge could help human civilization to learn which aspects of life are common throughout the universe and which are possibly specific to Earth.
Worldviews
Some have argued that a confirmed, reliable detection of extraterrestrial intelligence, or actual contact, may be one of the biggest moments in human history, with major implications for humanity's prevalent contemporary worldviews that extend beyond the fields of theology and science, similar to the paradigm shift away from geocentrism as a dominant element of human worldviews.
Harvard astronomer and lead scientist of The Galileo Project, Avi Loeb, has argued that humanity is not ready to adopt a sense of what he calls "cosmic modesty" and that this could change if the project detects "relics" of more advanced civilizations. Loeb postulates that if we find that we "are not the smartest kid on the cosmic block, it will give us a different perspective" – such as the way we think about our place in the universe, for example with relevance to prevalent religious worldviews, in which humans may often be considered unique or exceptional.
According to Major John R. King, potential sociological consequences of alien contact may include (1) initial shock and consternation, (2) loss or reduction of ego, (3) modification of human values, (4) decreased status of [certain] scientists, and (5) reevaluation of religions. The "mediocrity principle", which holds that "there is nothing special about Earth's status or position in the Universe", could present a great challenge to Abrahamic religions, which "teach that human beings are purposefully created by God and occupy a privileged position in relation to other creatures". However, some have argued that "discovery of life elsewhere in the Universe would not compromise God's love for Earth life", despite there being no "positive affirmation of alien life" in popular religious texts such as the Bible, and that other civilizations may be "completely unaware of Jesus' story" and may have no such popular story from their own past. There is widespread belief that religions would adapt to contact.
Ethics
Astroethics refers to the contemplation and development of ethical standards for a variety of outer space issues, including questions of how to interact remotely or in close encounters. It concerns not only humans' ethics but also the ethics of non-human intelligences, including whether they would afford humans rights, and which rights in particular.
Ecological and biological-warfare impacts
An extraterrestrial civilization might bring to Earth pathogens or invasive life forms that do not harm its own biosphere. Alien pathogens could decimate the human population, which would have no immunity to them, or they might use terrestrial livestock or plants as hosts, causing indirect harm to humans. Invasive organisms brought by extraterrestrial civilizations could cause great ecological harm because of the terrestrial biosphere's lack of defenses against them.
On the other hand, pathogens and invasive species of extraterrestrial origin might differ enough from terrestrial organisms in their biology to have no adverse effects. Furthermore, pathogens and parasites on Earth are generally suited to only a small and exclusive set of environments, to which extraterrestrial pathogens would have had no opportunity to adapt.
If an extraterrestrial civilization bearing malice towards humanity gained sufficient knowledge of terrestrial biology and weaknesses in the immune systems of terrestrial biota, it might be able to create extremely potent biological weapons. Even a civilization without malicious intent could inadvertently cause harm to humanity by not taking account of all the risks of their actions.
According to Baum, even if an extraterrestrial civilization were to communicate using electromagnetic signals alone, it could send humanity information with which humans themselves could create lethal biological weapons.
See also
Archaeology, Anthropology, and Interstellar Communication
Relative species abundance
References
Notes
Further reading
External links
SETI Institute
Cultural Aspects of SETI
Introduction to ExtraTerrestrial Intelligence
Search for extraterrestrial intelligence
Extraterrestrial life
Cultural anthropology
Religion and science
Global culture
Extraterrestrial Contact | Potential cultural impact of extraterrestrial contact | Astronomy,Biology | 10,076 |
15,215,530 | https://en.wikipedia.org/wiki/KCNS3 | Potassium voltage-gated channel subfamily S member 3 (Kv9.3) is a protein that in humans is encoded by the KCNS3 gene. KCNS3 gene belongs to the S subfamily of the potassium channel family. It is highly expressed in pulmonary artery myocytes, placenta, and parvalbumin-containing GABA neurons in brain cortex. In humans, single-nucleotide polymorphisms of the KCNS3 gene are associated with airway hyperresponsiveness, whereas decreased KCNS3 mRNA expression is found in the prefrontal cortex of patients with schizophrenia.
Function
Voltage-gated potassium channels form the largest and most diversified class of ion channels and are present in both excitable and nonexcitable cells. Their main functions are associated with the regulation of the resting membrane potential and the control of the shape and frequency of action potentials. The alpha subunits are of two types: those that are functional by themselves and those that are electrically silent but capable of modulating the activity of specific functional alpha subunits. The Kv9.3 protein (encoded by the KCNS3 gene) is not functional by itself but can form functional heteromultimers with Kv2.1 (encoded by KCNB1) and Kv2.2 (encoded by KCNB2), and possibly other members of the Shab-related subfamily of potassium voltage-gated channel proteins. Heteromeric Kv2.1/Kv9.3 channels form with a fixed stoichiometry consisting of three Kv2.1 subunits and one Kv9.3 subunit.
See also
Voltage-gated potassium channel
References
Ion channels | KCNS3 | Chemistry | 340 |
53,654,633 | https://en.wikipedia.org/wiki/Nectar%20spur | A nectar spur is a hollow extension of a part of a flower. The spur may arise from various parts of the flower: the sepals, petals, or hypanthium, and often contain tissues that secrete nectar (nectaries). Nectar spurs are present in many clades across the angiosperms, and are often cited as an example of convergent evolution.
Taxonomic significance
Spur length can be an important diagnostic character for taxonomy, useful in species identification. For example, Yadon's piperia can be distinguished from Platanthera elegans, an extremely similar species in section Piperia (Orchidaceae), by the unusually short length of its spur.
Ecology and evolution
The presence of nectar spurs in a clade of plants is associated with evolutionary processes such as coevolution (two-sided evolution) and pollinator shifts (one-sided evolution). Like variation in floral tube length, variation in nectar spur length has been associated with variation in the lengths of feeding organs on the primary pollinators of the plants, whether the tongues of moths, the proboscises of flies, or the beaks of hummingbirds. This variation in floral shape can restrict access of pollinators to nectar, limiting the range of potential pollinators.
In a famous historical episode, Darwin predicted that Angraecum sesquipedale, an orchid with an extremely long spur, must be pollinated by a pollinator with an equally long proboscis. The pollinator, the sphinx moth Xanthopan morganii praedicta, was found and described about 40 years after Darwin made his prediction.
Nectar spurs have been cited as prime examples of "key innovations" that may promote diversification and play a part in the adaptive radiation of clades. Columbines (Aquilegia) have been studied in depth for the link between their floral nectar spurs and their rapid evolutionary radiations. However, this idea has recently been challenged by work suggesting that the adaptive radiation of Aquilegia may have been due more to climate and habitat than to the varying lengths of the nectar spurs.
Underlying development and genetics
In terms of development, the varying lengths of nectar spurs have been found to be based solely on the anisotropic elongation of cells. However, it remains to be understood which genes underlie the elongation of cells to form a spur: are the same genes being co-opted over and over again across the angiosperms to form spurs, or are there several developmental pathways by which a spur can be made?
The genetic basis underlying the development of nectar spurs has been explored in several plant clades, such as Linaria and Aquilegia. Studies in the model plants Antirrhinum and Arabidopsis identified that type I KNOX SHOOTMERISTEMLESS (STM) genes play a role in the development of spur-like structures. These type I KNOX STM genes also play important roles in the development of the growing tip of the plant, the shoot apical meristem, by controlling cell division and prolonging indeterminate growth. Subsequent gene expression studies confirmed that orthologues of the type I KNOX genes are expressed in the petals of Linaria, a genus of plants with a spur arising from the ventral petal. However, the type I KNOX homologues were not differentially expressed during spur development on the petals of Aquilegia, where certain TCP genes were instead suggested to play a role. These results suggest that nectar spurs may represent a case of convergent evolution on the genetic level, with the nectar spur having arisen through different developmental pathways.
List of plants with nectar spurs
The following is an incomplete list of plant clades with nectar spurs.
Orchids: Satyrium, Disa, Angraecum, Aerangis, Neofinetia, Piperia
On petals: Aquilegia, Delphinium, Lentibulariaceae, Viola, Fumarioideae
On sepals: Impatiens
From hypanthium: Tropaeolum
Notes
References
Plant morphology | Nectar spur | Biology | 835 |
1,166,782 | https://en.wikipedia.org/wiki/Embryonic%20diapause | Embryonic diapause (delayed implantation in mammals) is a reproductive strategy used by a number of animal species across different biological classes. In more than 130 types of mammals where this takes place, the process occurs at the blastocyst stage of embryonic development, and is characterized by a dramatic reduction or complete cessation of mitotic activity, arresting most often in the G0 or G1 phase of division.
In placental embryonic diapause, the blastocyst does not immediately implant in the uterus after sexual reproduction has resulted in the zygote, but rather remains in this non-dividing state of dormancy until conditions allow for attachment to the uterine wall to proceed as normal. As a result, the normal gestation period is extended for a species-specific time.
Diapause provides a survival advantage to offspring, because birth or emergence of young can be timed to coincide with the most hospitable conditions, regardless of when mating occurs or length of gestation; any such gain in survival rates of progeny confers an evolutionary advantage.
Evolutionary significance
Organisms which undergo embryonic diapause are able to synchronize the birth of offspring to the most favorable conditions for reproductive success, irrespective of when mating took place. Many different factors can induce embryonic diapause, such as the time of year, temperature, lactation and supply of food.
Embryonic diapause is a relatively widespread phenomenon outside of mammals, with known occurrence in the reproductive cycles of many insects, nematodes, fish, and other non-mammalian vertebrates. It has been observed in approximately 130 mammalian species, which is less than two percent of all species of mammals. These include certain pinnipeds, rodents, bears, armadillos, mustelids (e.g. weasels and badgers), and marsupials (e.g. kangaroos). Some groups only have one species that undergoes embryonic diapause, such as the roe deer in the order Artiodactyla.
Experimental induction of discontinuous embryonic development in species which do not spontaneously undergo embryonic diapause in nature has been achieved; reversible developmental arrest was successfully demonstrated. This may be evidence for the evolutionary significance of this phenomenon, with a latent capacity for diapause potentially present in a much wider range of species than those in which it is known to occur naturally.
General mechanism
All multicellular organisms, from their conception, begin as a small number of cells and only grow and develop as those cells divide. In organisms which are capable of embryonic diapause, non-ideal reproductive conditions trigger a cessation of cellular division that prevents the embryo from growing and maturing, delaying its development until conditions are favourable enough to promote the survival of the offspring and, in some cases, the mother.
Regulation of the cell cycle as it relates to embryonic diapause has been linked to the dacapo gene in the fruit fly, responsible for inhibiting the formation of cyclin E-cdk2 complexes necessary for DNA synthesis. There is also evidence pointing to the upregulation of B cell translocation gene 1 (Btg1) in the mouse embryo during diapause, another known regulator of the cell cycle, responsible for inhibiting transition from G0/G1. Other studies have demonstrated, inversely, the lack of involvement of more common regulators of the cell cycle such as p53 within the placental model of embryonic diapause. While much of the molecular regulation involved in activating dormant blastocysts has been characterized, little widely applicable characterization is available regarding entry into diapause, and the conditions which enable a blastocyst to remain dormant. Once the embryo exits diapause arrest and resumes regular development, no adverse effects are observed.
Specifically within placental embryonic diapause, this cessation is led by the intentional failure of the blastocyst to implant in the uterine wall, which is an essential component in developmental progression in these species. Hormones relating to the failed implantation also contribute to the embryonic arrest.
Types
There are two distinct forms of embryonic diapause, characterized by different conditions of onset. Facultative diapause occurs in response to certain environmental or metabolic stressors, such as drastic changes in temperature, feeding, or lactation. Obligate diapause occurs regularly in the reproductive cycle of the affected species, and is often associated with seasonal changes and photo-period.
Facultative diapause
Facultative diapause is regulated by several factors, including the maternal environment and ovarian competency, the pituitary gland, and metabolic stress and lactation.
Among these factors, in placental mammals facultative diapause is most often the result of fertilization shortly following the birth of a previous litter. The suckling of the previous litter's pups during lactation promotes the release of prolactin. This in turn reduces progesterone secretion from the corpus luteum in a pregnant female. The corpus luteum is a temporary endocrine organ formed from the leftover cells of the ovarian follicle in the ovary once it has released a mature ovum. The main function of the corpus luteum is to secrete progesterone during pregnancy in order to maintain the required uterine environment. Prolactin acting on the corpus luteum causes the progesterone level to fall below its optimal concentration and thereby induces facultative embryonic diapause.
Each species that undergoes facultative diapause tends to have a specific developmental stage, that is genetically determined, in which this process is initiated. This form of diapause is most well studied in rodents and marsupials but has been identified in many other species, including non-mammals. It is not clear how well the mechanisms studied for the onset, maintenance and release from facultative diapause in the rodent model apply to these other species.
Obligate diapause
Obligate (meaning "by necessity") diapause, also known as seasonal delayed implantation, is a mechanism that times the birth of offspring to coincide with optimal environmental conditions, maximizing offspring survival. The proposed purpose of the mechanism is to separate conception and parturition (birth) so that each can occur at the most favourable time of year.
Obligate diapause is activated and deactivated by changes to the number of daylight hours within a day (photoperiod) and hence, occurs within specific seasons. While obligate diapause occurs in a variety of species in different groups, there are significant variations in diapause length. Western spotted skunks (Spilogale gracilis) have a diapause of around 200 days while American minks (Neogale vison) only have a diapause of around fourteen days.
Similarly to facultative diapause, a series of hormonal changes arrests blastocyst development prior to implantation, preventing continued growth of the embryo. In obligate diapause, however, the blastocyst enters the dormant state in every reproductive season, meaning that every blastocyst a mother produces passes through a period of diapause.
Close regulation of obligate diapause is essential for survival of the mother and offspring. Premature diapause can result in forgone growth and breeding opportunities and late diapause can result in death due to adverse conditions.
Prior to the vernal equinox, the photoperiod is less than 12 hours. This increases the production of melatonin in the pineal gland. Due to the inhibitory relationship between melatonin and prolactin, this increase in melatonin decreases prolactin secretion from the pituitary gland. The decrease in prolactin consequently decreases progesterone production in the corpus luteum, preventing development of the blastocyst. This induces embryonic diapause.
After the vernal equinox, the photoperiod is greater than 12 hours. This decreases the production of melatonin in the pineal gland and, therefore, increases the prolactin and progesterone production in the pituitary gland and corpus luteum respectively.
The increase in prolactin induces expression of the gene Odc (ornithine decarboxylase). The Odc gene produces the ODC protein, a rate-limiting enzyme in the production of the polyamine, putrescine, within the uterine environment. The presence of putrescine may indicate a role in inducing the escape of the embryo from obligate diapause.
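The inverse hormonal relationships described above can be summarized in a toy model. The following Python sketch encodes only the qualitative logic of the text; the function name, the hard 12-hour cutoff, and the binary high/low levels are illustrative simplifications rather than a quantitative model.

```python
def diapause_state(photoperiod_hours: float) -> dict:
    """Toy model of the photoperiod-driven hormone cascade in obligate diapause.

    Qualitative logic only, following the inverse relationships in the text:
    melatonin inhibits prolactin, prolactin drives progesterone, and low
    progesterone arrests the blastocyst.
    """
    short_days = photoperiod_hours < 12           # before the vernal equinox
    melatonin = "high" if short_days else "low"   # pineal gland output
    prolactin = "low" if melatonin == "high" else "high"  # pituitary, inhibited by melatonin
    progesterone = prolactin                      # corpus luteum follows prolactin
    in_diapause = progesterone == "low"           # low progesterone arrests the blastocyst
    return {
        "melatonin": melatonin,
        "prolactin": prolactin,
        "progesterone": progesterone,
        "embryo": "diapause" if in_diapause else "development resumes",
    }

print(diapause_state(10))  # short days -> diapause
print(diapause_state(14))  # long days  -> development resumes
```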
Embryonic stem cells
Embryonic stem cells (ESCs) have the potential to allow for further understanding of the mechanisms controlling embryonic diapause, because ESCs and diapausing blastocysts have very similar transcriptome profiles. ESCs are derived from the undifferentiated inner cell mass of a blastocyst and are capable of continual proliferation in vitro. ESCs are mostly derived from mouse models, at the stage where derivation efficiency is optimal and the embryos are able to enter diapause.
Both diapausing blastocysts and ESCs have transcriptome profile similarities, including downregulation of metabolism, biosynthesis and gene expression pathways. These similarities allow for the potential to use ESCs as a cellular model to identify the molecular factors which regulate embryonic diapause.
See also
Weddell seal#Breeding
Notes
References
Further reading
Developmental biology | Embryonic diapause | Biology | 2,048
44,158 | https://en.wikipedia.org/wiki/Conservative%20force | In physics, a conservative force is a force with the property that the total work done by the force in moving a particle between two points is independent of the path taken. Equivalently, if a particle travels in a closed loop, the total work done (the sum of the force acting along the path multiplied by the displacement) by a conservative force is zero.
A conservative force depends only on the position of the object. If a force is conservative, it is possible to assign a numerical value for the potential at any point; conversely, when an object moves from one location to another, the force changes the potential energy of the object by an amount that does not depend on the path taken, contributing to the mechanical energy and the overall conservation of energy. If the force is not conservative, then defining a scalar potential is not possible, because taking different paths would lead to conflicting potential differences between the start and end points.
Gravitational force is an example of a conservative force, while frictional force is an example of a non-conservative force.
Other examples of conservative forces are: force in elastic spring, electrostatic force between two electric charges, and magnetic force between two magnetic poles. The last two forces are called central forces as they act along the line joining the centres of two charged/magnetized bodies. A central force is conservative if and only if it is spherically symmetric.
For conservative forces,
$$\vec{F} = -\nabla U(\vec{r}),$$
where $\vec{F}$ is the conservative force, $U$ is the potential energy, and $\vec{r}$ is the position.
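As a worked instance of this relation, take uniform gravity near the Earth's surface (a standard textbook example; the symbols m, g, and z denote the mass, the gravitational acceleration, and the height):

```latex
% Potential energy of a mass m at height z in uniform gravity:
U(z) = mgz
% Applying F = -grad U recovers the familiar downward force:
\vec{F} = -\nabla U
        = -\frac{\partial}{\partial z}(mgz)\,\hat{z}
        = -mg\,\hat{z}
```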
Informal definition
Informally, a conservative force can be thought of as a force that conserves mechanical energy. Suppose a particle starts at point A, and there is a force F acting on it. Then the particle is moved around by other forces, and eventually ends up at A again. Though the particle may still be moving, at that instant when it passes point A again, it has traveled a closed path. If the net work done by F at this point is 0, then F passes the closed path test. Any force that passes the closed path test for all possible closed paths is classified as a conservative force.
The gravitational force, spring force, magnetic force (according to some definitions, see below) and electric force (at least in a time-independent magnetic field, see Faraday's law of induction for details) are examples of conservative forces, while friction and air drag are classical examples of non-conservative forces.
For non-conservative forces, the mechanical energy that is lost (not conserved) has to go somewhere else, by conservation of energy. Usually the energy is turned into heat, for example the heat generated by friction. In addition to heat, friction also often produces some sound energy. The water drag on a moving boat converts the boat's mechanical energy into not only heat and sound energy, but also wave energy at the edges of its wake. These and other energy losses are irreversible because of the second law of thermodynamics.
Path independence
A direct consequence of the closed path test is that the work done by a conservative force on a particle moving between any two points does not depend on the path taken by the particle.
This path independence can be illustrated with gravity: the work done by the gravitational force on an object depends only on its change in height, because the gravitational force is conservative. The work done by a conservative force is equal to the negative of the change in potential energy during that process. For a proof, imagine two paths 1 and 2, both going from point A to point B. The net work done on the particle in taking path 1 from A to B and then path 2 backwards from B to A is 0, by the closed path test. Since the work done along a reversed path is the negative of the work done along the forward path, this implies that the work is the same along paths 1 and 2; i.e., the work is independent of the path taken, as long as it goes from A to B.
For example, if a child slides down a frictionless slide, the work done by the gravitational force on the child from the start of the slide to the end is independent of the shape of the slide; it only depends on the vertical displacement of the child.
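As an illustrative check of this path independence, the sketch below numerically integrates the work done by uniform gravity along two different routes between the same endpoints; the particular paths, the 1 kg mass, and the step count are arbitrary choices for illustration.

```python
import numpy as np

def work_done(force, path, n=10_000):
    """Numerically integrate W = sum of F(r) . dr along a sampled path.

    force: maps a position (x, y) to a force vector (Fx, Fy)
    path:  maps a parameter t in [0, 1] to a position (x, y)
    """
    t = np.linspace(0.0, 1.0, n)
    points = np.array([path(ti) for ti in t])      # sampled positions
    midpoints = 0.5 * (points[1:] + points[:-1])   # midpoint rule
    dr = np.diff(points, axis=0)                   # displacement steps
    f = np.array([force(p) for p in midpoints])
    return float(np.sum(f * dr))                   # sum of F . dr

# Uniform gravity near the Earth's surface, m = 1 kg: F = (0, -mg).
g = 9.81
gravity = lambda p: np.array([0.0, -g])

# Two different paths from A = (0, 0) to B = (1, 1):
straight = lambda t: np.array([t, t])                         # straight line
curved = lambda t: np.array([np.sin(np.pi * t / 2), t ** 3])  # curved route

# Both integrals give ~ -9.81 J: the work depends only on the change
# in height, not on the shape of the route taken.
print(work_done(gravity, straight))
print(work_done(gravity, curved))
```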
Mathematical description
A force field F, defined everywhere in space (or within a simply-connected volume of space), is called a conservative force or conservative vector field if it meets any of these three equivalent conditions:
The curl of F is the zero vector:
$$\nabla \times \vec{F} = \vec{0},$$
where in two dimensions this reduces to:
$$\frac{\partial F_y}{\partial x} - \frac{\partial F_x}{\partial y} = 0$$
There is zero net work (W) done by the force when moving a particle through a trajectory that starts and ends in the same place:
$$W = \oint_C \vec{F} \cdot d\vec{r} = 0$$
The force can be written as the negative gradient of a potential, $\Phi$:
$$\vec{F} = -\nabla \Phi$$
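As a concrete check of the equivalence of these conditions, consider the two-dimensional field generated by the potential Φ(x, y) = −x²y (an arbitrary polynomial chosen for illustration):

```latex
% Condition 3: the force derived from the (illustrative) potential
%   \Phi(x, y) = -x^2 y
\vec{F} = -\nabla\Phi = (2xy,\; x^{2})
% Condition 1: the two-dimensional curl vanishes,
\frac{\partial F_y}{\partial x} - \frac{\partial F_x}{\partial y}
    = \frac{\partial}{\partial x}(x^{2}) - \frac{\partial}{\partial y}(2xy)
    = 2x - 2x = 0
% Condition 2 then follows from Green's theorem: for any closed curve C
% bounding a region S,
W = \oint_C \vec{F}\cdot d\vec{r}
  = \iint_S \left(\frac{\partial F_y}{\partial x}
                  - \frac{\partial F_x}{\partial y}\right) dA = 0
```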
The term conservative force comes from the fact that when a conservative force exists, it conserves mechanical energy. The most familiar conservative forces are gravity, the electric force (in a time-independent magnetic field, see Faraday's law), and spring force.
Many forces (particularly those that depend on velocity) are not force fields. In these cases, the above three conditions are not mathematically equivalent. For example, the magnetic force satisfies condition 2 (since the work done by a magnetic field on a charged particle is always zero), but does not satisfy condition 3, and condition 1 is not even defined (the force is not a vector field, so one cannot evaluate its curl). Accordingly, some authors classify the magnetic force as conservative, while others do not. The magnetic force is an unusual case; most velocity-dependent forces, such as friction, do not satisfy any of the three conditions, and therefore are unambiguously nonconservative.
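The statement that the magnetic force always satisfies condition 2 can be made explicit with a short standard derivation: the magnetic part of the Lorentz force is perpendicular to the particle's velocity, so it delivers no power.

```latex
% Magnetic part of the Lorentz force on a charge q moving with velocity v:
\vec{F} = q\,\vec{v} \times \vec{B}
% Instantaneous power delivered to the particle:
P = \vec{F} \cdot \vec{v} = q\,(\vec{v} \times \vec{B}) \cdot \vec{v} = 0
% since a cross product is perpendicular to both of its factors;
% hence the work done over any trajectory, W = \int P\,dt, is zero.
```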
Non-conservative force
Despite conservation of total energy, non-conservative forces can arise in classical physics due to neglected degrees of freedom or from time-dependent potentials. Many non-conservative forces may be perceived as macroscopic effects of small-scale conservative forces. For instance, friction may be treated without violating conservation of energy by considering the motion of individual molecules; however, that means every molecule's motion must be considered rather than handling it through statistical methods. For macroscopic systems the non-conservative approximation is far easier to deal with than millions of degrees of freedom.
Examples of non-conservative forces are friction and non-elastic material stress. Friction has the effect of transferring some of the energy from the large-scale motion of the bodies to small-scale movements in their interior, and therefore appears non-conservative on a large scale. General relativity is non-conservative, as seen in the anomalous precession of Mercury's orbit. However, general relativity does conserve a stress–energy–momentum pseudotensor.
See also
Conservative vector field
Conservative system
References
Force | Conservative force | Physics,Mathematics | 1,339 |
4,753,359 | https://en.wikipedia.org/wiki/Epidemiology%20of%20autism | The epidemiology of autism is the study of the incidence and distribution of autism spectrum disorders (ASD). A 2022 systematic review of global prevalence of autism spectrum disorders found a median prevalence of 1% in children in studies published from 2012 to 2021, with a trend of increasing prevalence over time. However, the study's 1% figure may reflect an underestimate of prevalence in low- and middle-income countries.
ASD averages a 4.3:1 male-to-female ratio in diagnosis, not accounting for ASD in gender diverse populations, which overlap disproportionately with ASD populations. The number of children known to have autism has increased dramatically since the 1980s, at least partly due to changes in diagnostic practice; it is unclear whether prevalence has actually increased, and as-yet-unidentified environmental risk factors cannot be ruled out. In 2020, the Centers for Disease Control's Autism and Developmental Disabilities Monitoring (ADDM) Network reported that approximately 1 in 54 children in the United States (1 in 34 boys, and 1 in 144 girls) are diagnosed with an autism spectrum disorder, based on data collected in 2016. This estimate is a 10% increase from the 1 in 59 rate in 2014, a 105% increase from the 1 in 110 rate in 2006, and a 176% increase from the 1 in 150 rate in 2000. Diagnostic criteria for ASD have changed significantly since the 1980s; for example, the U.S. special-education autism classification was introduced in 1994.
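As a quick arithmetic check (an added illustration using only the "1 in N" rates quoted above), the percentage increases can be reproduced as follows:

```python
# Arithmetic sketch converting "1 in N" diagnosis rates into percentage
# increases relative to the 2016 figure.
rates = {"2000": 150, "2006": 110, "2014": 59, "2016": 54}  # 1 child in N

current = 1 / rates["2016"]
for year in ("2014", "2006", "2000"):
    increase = 100 * (current / (1 / rates[year]) - 1)
    print(f"{year} -> 2016: {increase:.0f}% increase")
# Prints roughly 9%, 104% and 178%, matching the quoted ~10%, 105% and 176%.
```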
ASD is a complex neurodevelopmental disorder, and although what causes it is still not entirely known, efforts have been made to outline causative mechanisms and how they give rise to the disorder. The risk of developing autism is increased in the presence of various prenatal factors, including advanced paternal age and diabetes in the mother during pregnancy. In rare cases, autism is strongly associated with agents that cause birth defects. It has been shown to be related to genetic disorders and with epilepsy. ASD is believed to be largely inherited, although the genetics of ASD are complex and it is unclear which genes are responsible. ASD is also associated with several intellectual or emotional gifts, which has led to a variety of hypotheses from within evolutionary psychiatry that autistic traits have played a beneficial role over human evolutionary history.
Other proposed causes of autism have been controversial. The vaccine hypothesis has been extensively investigated and shown to be false, lacking any scientific evidence. Andrew Wakefield published a small study in 1998 in the United Kingdom suggesting a causal link between autism and the trivalent MMR vaccine. After data included in the report was shown to be deliberately falsified, the paper was retracted, and Wakefield was struck off the medical register in the United Kingdom.
It is problematic to compare autism rates over the last three decades, as the diagnostic criteria for autism have changed with each revision of the Diagnostic and Statistical Manual (DSM), which outlines which symptoms meet the criteria for an ASD diagnosis. In 1983, the DSM did not recognize PDD-NOS or Asperger's syndrome, and the criteria for autistic disorder (AD) were more restrictive. The previous edition of the DSM, DSM-IV, included autistic disorder, childhood disintegrative disorder, PDD-NOS, and Asperger's syndrome. Due to inconsistencies in diagnosis and how much is still being learnt about autism, the most recent DSM (DSM-5) only has one diagnosis, autism spectrum disorder, which encompasses each of the previous four disorders. According to the new diagnostic criteria for ASD, one must have both struggles in social communication and interaction and restricted repetitive behaviors, interests and activities.
ASD diagnoses continue to be over four times more common among boys (1 in 34) than among girls (1 in 154), and they are reported in all racial, ethnic and socioeconomic groups. Studies have been conducted in several continents (Asia, Europe and North America) that report a prevalence rate of approximately 1 to 2 percent. A 2011 study reported a 2.6 percent prevalence of autism in South Korea.
Frequency
Although incidence rates measure the occurrence of autism most directly, most epidemiological studies report other frequency measures, typically point or period prevalence, or sometimes cumulative incidence. Attention is focused mostly on whether prevalence is increasing with time.
Incidence and prevalence
Epidemiology defines several measures of the frequency of occurrence of a disease or condition:
The incidence rate of a condition is the rate at which new cases occurred per person-year, for example, "2 new cases per 1,000 person-years".
The cumulative incidence is the proportion of a population that became new cases within a specified time period, for example, "1.5 per 1,000 people became new cases during 2006".
The point prevalence of a condition is the proportion of a population that had the condition at a single point in time, for example, "10 cases per 1,000 people at the start of 2006".
The period prevalence is the proportion that had the condition at any time within a stated period, for example, "15 per 1,000 people had cases during 2006".
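To make the distinctions concrete, the following toy sketch (invented counts, not data from any study) computes all four measures for a hypothetical population of 1,000 people:

```python
# Toy sketch reproducing the four frequency measures defined above.
population = 1_000        # people in the hypothetical community
person_years = 1_000      # total follow-up time observed
new_cases = 2             # cases with onset during the period
cases_at_start = 10       # prevalent cases at a single point in time
cases_in_period = 15      # cases present at any time during the period

measures = {
    "incidence rate (per person-year)": new_cases / person_years,
    "cumulative incidence": new_cases / population,
    "point prevalence": cases_at_start / population,
    "period prevalence": cases_in_period / population,
}
for name, value in measures.items():
    print(f"{name}: {value * 1000:.1f} per 1,000")
```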
When studying how conditions are caused, incidence rates are the most appropriate measure of condition frequency, as they assess probability directly. However, incidence can be difficult to measure with rarer conditions such as autism. In autism epidemiology, point or period prevalence is more useful than incidence: the condition starts long before it is diagnosed (given its genetic elements, it is arguably present from conception), and the gap between onset and diagnosis is influenced by many factors unrelated to chance. Research focuses mostly on whether point or period prevalence is increasing with time; cumulative incidence is sometimes used in studies of birth cohorts.
Estimation methods
The three basic approaches used to estimate prevalence differ in cost and in quality of results. The simplest and cheapest method is to count known autism cases from sources such as schools and clinics, and divide by the population. This approach is likely to underestimate prevalence because it does not count children who have not been diagnosed yet, and it is likely to generate skewed statistics because some children have better access to treatment.
The second method improves on the first by having investigators examine student or patient records looking for probable cases, to catch cases that have not been identified yet. The third method, which is arguably the best, screens a large sample of an entire community to identify possible cases, and then evaluates each possible case in more detail with standard diagnostic procedures. This last method typically produces the most reliable, and the highest, prevalence estimates.
Frequency estimates
Estimates of the prevalence of autism vary widely depending on diagnostic criteria, age of children screened, and geographical location. Most recent reviews tend to estimate a prevalence of 1–2 per 1,000 for autism and close to 27.6 per 1,000 for ASD;
PDD-NOS accounts for the vast majority of ASD cases; Asperger syndrome accounts for about 0.3 per 1,000; and the atypical forms, childhood disintegrative disorder and Rett syndrome, are much rarer.
A 2006 study of nearly 57,000 British nine- and ten-year-olds reported a prevalence of 3.89 per 1,000 for autism and 11.61 per 1,000 for ASD; these higher figures could be associated with broadening diagnostic criteria. Studies based on more detailed information, such as direct observation rather than examination of medical records, identify higher prevalence; this suggests that published figures may underestimate ASD's true prevalence. A 2009 study of children in Cambridgeshire, England, used different methods to measure prevalence, and estimated that 40% of ASD cases go undiagnosed, with the two least-biased estimates of true prevalence being 11.3 and 15.7 per 1,000.
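The arithmetic of such an adjustment can be sketched as follows: if roughly 40% of true cases are undiagnosed, an administrative count is scaled up by dividing by the diagnosed fraction. The observed figure below is an assumed input chosen only to illustrate the calculation, not a number taken from the study:

```python
# Back-of-envelope sketch: scaling an administrative count up for
# undiagnosed cases. The 40% figure is the study's estimate; the observed
# prevalence is an assumed input for illustration.
observed_per_1000 = 9.4      # assumed administratively known prevalence
undiagnosed_fraction = 0.40  # fraction of true cases not yet diagnosed

true_estimate = observed_per_1000 / (1 - undiagnosed_fraction)
print(f"adjusted prevalence: {true_estimate:.1f} per 1,000")  # about 15.7
```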
A 2009 U.S. study based on 2006 data estimated the prevalence of ASD in eight-year-old children to be 9.0 per 1,000 (approximate range 8.6–9.3). A 2009 report based on the 2007 Adult Psychiatric Morbidity Survey by the National Health Service determined that the prevalence of ASD in adults was approximately 1% of the population, with a higher prevalence in males and no significant variation between age groups; these results suggest that prevalence of ASD among adults is similar to that in children and rates of autism are not increasing.
Changes with time
Attention has been focused on whether the prevalence of autism is increasing with time. Earlier prevalence estimates were lower, centering at about 0.5 per 1,000 for autism during the 1960s and 1970s and about 1 per 1,000 in the 1980s, as opposed to today's 23 per 1,000.
The number of reported cases of autism increased dramatically in the 1990s and 2000s, prompting ongoing investigations into several potential reasons:
More children may have autism; that is, the true frequency of autism may have increased.
There may be more complete pickup of autism (case finding), as a result of increased awareness and funding. For example, attempts to sue vaccine companies may have increased case-reporting.
The diagnosis may be applied more broadly than before, as a result of the changing definition of the disorder, particularly changes in DSM-III-R and DSM-IV.
An editorial error in the description of the PDD-NOS category of Autism Spectrum Disorders in the DSM-IV, in 1994, inappropriately broadened the PDD-NOS construct. The error was corrected in the DSM-IV-TR, in 2000, reversing the PDD-NOS construct back to the more restrictive diagnostic criteria requirements from the DSM-III-R.
Successively earlier diagnosis in each succeeding cohort of children, including recognition in nursery (preschool), may have affected apparent prevalence but not incidence.
A review of the "rising autism" figures compared to other disabilities in schools shows a corresponding drop in findings of intellectual disability.
The reported increase is largely attributable to changes in diagnostic practices, referral patterns, availability of services, age at diagnosis, and public awareness. A widely cited 2002 pilot study concluded that the observed increase in autism in California cannot be explained by changes in diagnostic criteria, but a 2006 analysis found that special education data poorly measured prevalence because so many cases were undiagnosed, and that the 1994–2003 U.S. increase was associated with declines in other diagnostic categories, indicating that diagnostic substitution had occurred.
A 2007 study that modeled autism incidence found that broadened diagnostic criteria, diagnosis at a younger age, and improved efficiency of case ascertainment can produce an increase in the frequency of autism ranging up to 29-fold depending on the frequency measure, suggesting that methodological factors may explain the observed increases in autism over time. A small 2008 study found that a significant number (40%) of people diagnosed with pragmatic language impairment as children in previous decades would now be given a diagnosis of autism. A study of all Danish children born in 1994–99 found that children born later were more likely to be diagnosed at a younger age, supporting the argument that apparent increases in autism prevalence were at least partly due to decreases in the age of diagnosis.
A 2009 study of California data found that the reported incidence of autism rose 7- to 8-fold from the early 1990s to 2007, and that changes in diagnostic criteria, inclusion of milder cases, and earlier age of diagnosis probably explain only a 4.25-fold increase; the study did not quantify the effects of wider awareness of autism, increased funding, and expanding support options resulting in parents' greater motivation to seek services. Another 2009 California study found that the reported increases are unlikely to be explained by changes in how qualifying condition codes for autism were recorded.
Several environmental factors have been proposed to support the hypothesis that the actual frequency of autism has increased, including certain foods, infectious disease, and pesticides. There is overwhelming scientific evidence against the MMR hypothesis and no convincing evidence for the thiomersal (or Thimerosal) hypothesis, so these types of risk factors can be ruled out. Although it is unknown whether autism's frequency has increased, any such increase would suggest directing more attention and funding toward addressing environmental factors instead of continuing to focus on genetics.
COVID-19
The COVID-19 pandemic may have affected the current number of diagnoses. Before the pandemic, more ASD assessments were occurring among 4-year-olds than had occurred for the current cohort of 8-year-olds when they were 4 years of age. After the pandemic began, the rate of assessments dropped, possibly delaying the identification of ASD.
Geographical frequency
Africa
The prevalence of autism in Africa is unknown.
The Americas
The prevalence of autism in the Americas overall is unknown.
Canada
The Canadian government reported in 2019 that 1 in 50 children were diagnosed with autism spectrum disorder. However, preliminary results of an epidemiological study conducted at Montreal Children's Hospital in the 2003–2004 school year found a prevalence rate of 0.68% (or 1 per 147).
A 2001 review of the medical research conducted by the Public Health Agency of Canada concluded that there was no link between MMR vaccine and either inflammatory bowel disease or autism. The review noted, "An increase in cases of autism was noted by year of birth from 1979 to 1992; however, no incremental increase in cases was observed after the introduction of MMR vaccination." After the introduction of MMR, "A time trend analysis found no correlation between prevalence of MMR vaccination and the incidence of autism in each birth cohort from 1988 to 1993."
United States
According to a report by the Centers for Disease Control and Prevention in 2020, 1 in 36 children have ASD (27.6 in every 1,000).
The number of diagnosed cases of autism grew dramatically in the U.S. in the 1990s and continued to grow in the 2000s. For the 2006 surveillance year, identified ASD cases were an estimated 9.0 per 1,000 children aged 8 years (95% confidence interval [CI] = 8.6–9.3). These numbers measure what is sometimes called "administrative prevalence", that is, the number of known cases per unit of population, as opposed to the true number of cases. This prevalence estimate rose 57% (95% CI 27%–95%) from 2002 to 2006.
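For readers unfamiliar with such intervals, a normal-approximation (Wald) confidence interval for a prevalence can be sketched as follows; the case count and denominator are assumptions chosen to land near the reported 9.0 per 1,000, not the actual surveillance data:

```python
# Hedged sketch of a Wald (normal-approximation) 95% confidence interval
# for a prevalence. The counts below are illustrative assumptions.
import math

cases, n = 2_700, 300_000          # assumed cases and surveilled children
p = cases / n
half_width = 1.96 * math.sqrt(p * (1 - p) / n)
lo, hi = p - half_width, p + half_width
print(f"{1000*p:.1f} per 1,000 (95% CI {1000*lo:.1f}-{1000*hi:.1f})")
# about 9.0 per 1,000 (95% CI 8.7-9.3), close to the reported 8.6-9.3
```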
The National Health Interview Survey for 2014–2016 studied 30,502 US children and adolescents and found the weighted prevalence of ASD was 2.47% (24.7 per 1,000); 3.63% in boys and 1.25% in girls. Across the 3-year reporting period, the prevalence was 2.24% in 2014, 2.41% in 2015, and 2.76% in 2016.
In the United States, the rate of new autism spectrum disorder cases among Caucasian boys is roughly 50% higher than among Hispanic children and approximately 30% higher than among non-Hispanic black children.
A further study in 2006 concluded that the apparent rise in administrative prevalence was the result of diagnostic substitution, mostly for findings of intellectual disability and learning disabilities. "Many of the children now being counted in the autism category would probably have been counted in the mental retardation or learning disabilities categories if they were being labeled 10 years ago instead of today", said researcher Paul Shattuck of the Waisman Center at the University of Wisconsin–Madison, in a statement.
A population-based study in Olmsted County, Minnesota, found that the cumulative incidence of autism grew eightfold from the 1980–83 period to the 1995–97 period. The increase occurred after the introduction of broader, more-precise diagnostic criteria, increased service availability, and increased awareness of autism. During the same period, the reported number of autism cases grew 22-fold in the same location, suggesting that counts reported by clinics or schools provide misleading estimates of the true incidence of autism.
Venezuela
A 2008 study in Venezuela reported a prevalence of 1.1 per 1,000 for autism and 1.7 per 1,000 for ASD.
Asia
A review reported that the median prevalence of ASD among 2–6-year-old children in studies published in China from 2000 onwards was 10.3 per 10,000.
Hong Kong
A 2008 Hong Kong study reported an ASD incidence rate similar to those reported in Australia and North America, and lower than those reported in Europe. It also reported a prevalence of 1.68 per 1,000 for children under 15 years.
Japan
A 2005 study of a part of Yokohama with a stable population of about 300,000 reported a cumulative incidence to age 7 years of 48 cases of ASD per 10,000 children in 1989, and 86 in 1990. After the vaccination rate of the triple MMR vaccine dropped to near zero and it was replaced with separate MR and M vaccines, the incidence rate grew to 97 and 161 cases per 10,000 children born in 1993 and 1994, respectively, indicating that the combined MMR vaccine did not cause autism. In 2004, a Japanese autism association reported that about 360,000 people have typical Kanner-type autism.
Australia
Across all ages, 1 in 70 Australians identify as being autistic (or a person with autism). 1 in 23 children (or 4.36%) aged 7 to 14 years have an autism diagnosis.
Middle East
Israel
A 2009 study reported that the annual incidence rate of Israeli children with a diagnosis of ASD receiving disability benefits rose from zero in 1982–1984 to 190 per million in 2004. It was not known whether these figures reflected true increases or other factors such as changes in diagnostic measures.
Saudi Arabia
Studies of autism frequency have been particularly rare in the Middle East. One rough estimate is that the prevalence of autism in Saudi Arabia is 18 per 10,000, slightly higher than the 13 per 10,000 reported in developed countries (compared with 168 per 10,000 in the USA).
Europe
Denmark
In 1992, thiomersal-containing vaccines were removed in Denmark. A study at Aarhus University indicated that during the chemical's usage period (up through 1990), there was no trend toward an increase in the incidence of autism. Between 1991 and 2000 the incidence increased, including among children born after the discontinuation of thimerosal.
France
France made autism the national focus for the year 2012 and the Health Ministry estimated the rate of autism in 2012 to have been 0.67%, i.e. 1 in 150.
Eric Fombonne conducted studies in 1992 and 1997, finding a prevalence of 16 per 10,000 for pervasive developmental disorder (PDD) overall.
The INSERM found a prevalence of 27 per 10,000 for ASD and a prevalence of 9 per 10,000 for early infantile autism in 2003. Those figures are considered underestimates, as the WHO gives figures between 30 and 60 per 10,000. The French Ministry of Health gives a prevalence of 4.9 per 10,000 on its website, but it counts only early infantile autism.
Germany
A 2008 study in Germany found that inpatient admission rates for children with ASD increased 30% from 2000 to 2005, with the largest rise between 2000 and 2001 and a decline between 2001 and 2003. Inpatient rates for all mental disorders also rose for ages up to 15 years, so that the ratio of ASD to all admissions rose from 1.3% to 1.4%.
Norway
A 2009 study in Norway reported prevalence rates for ASD ranging from 0.21% to 0.87%, depending on assessment method and assumptions about non-response, suggesting that methodological factors explain large variances in prevalence rates in different studies.
United Kingdom
The incidence and changes in incidence with time are unclear in the United Kingdom. The reported autism incidence in the UK rose starting before the first introduction of the MMR vaccine in 1989. However, a perceived link between the two arising from the results of a fraudulent scientific study has caused considerable controversy, despite being subsequently disproved. A 2004 study found that the reported incidence of pervasive developmental disorders in a general practice research database in England and Wales grew steadily during 1988–2001 from 0.11 to 2.98 per 10,000 person-years, and concluded that much of this increase may be due to changes in diagnostic practice.
Genetics
As late as the mid-1970s there was little evidence of a genetic role in autism; evidence from genetic epidemiology studies now suggests that it is one of the most heritable of all psychiatric conditions. The first studies of twins estimated heritability to be more than 90%; in other words, that genetics explains more than 90% of autism cases. When only one identical twin is autistic, the other often has learning or social disabilities. For adult siblings, the risk of having one or more features of the broader autism phenotype might be as high as 30%, much higher than the risk in controls. About 10–15% of autism cases have an identifiable Mendelian (single-gene) condition, chromosome abnormality, or other genetic syndrome, and ASD is associated with several genetic disorders.
Since heritability is less than 100% and symptoms vary markedly among identical twins with autism, environmental factors are most likely a significant cause as well. If some of the risk is due to gene–environment interaction, the 90% heritability estimate may be too high. However, in 2017, the largest study to date, including over three million participants, estimated the heritability at 83%.
Genetic linkage analysis has been inconclusive; many association analyses have had inadequate power. Studies have examined more than 100 candidate genes; many genes must be examined because more than a third of genes are expressed in the brain and there are few clues on which are relevant to autism.
Causative factors
A few studies have found an association between autism and frequent use of acetaminophen (e.g. Tylenol, Paracetamol) by the mother during pregnancy. Autism is also associated with several other prenatal factors, including advanced age in either parent, and diabetes, bleeding, or use of psychiatric drugs in the mother during pregnancy. Autism has also been indirectly linked to mothers who were obese or underweight before pregnancy. It is not known whether mutations that arise spontaneously in autism and other neuropsychiatric disorders come mainly from the mother or the father, or whether the mutations are associated with parental age. However, recent studies have identified advancing paternal age as a significant indicator for ASD. An increased chance of autism has also been linked to rapid "catch-up" growth in children born to mothers who had an unhealthy weight at conception.
A large 2008 population study of Swedish parents of children with autism found that the parents were more likely to have been hospitalized for a mental disorder, that schizophrenia was more common among the mothers and fathers, and that depression and personality disorders were more common among the mothers.
It is not known how many siblings of autistic individuals are themselves autistic. Several studies based on clinical samples have given quite different estimates, and these clinical samples differ in important ways from samples taken from the general community.
Autism has also been shown to cluster in urban neighborhoods of high socioeconomic status. One study from California found a three to fourfold increased risk of autism in a small 30 by 40 km region centered on West Hollywood, Los Angeles.
Sex and gender differences
Boys have a higher chance of being diagnosed with autism than girls. The ASD sex ratio averages 4.3:1 and is greatly modified by cognitive impairment: it may be close to 2:1 with intellectual disability and more than 5.5:1 without. Recent studies have found no association with socioeconomic status, and have reported inconsistent results about associations with race or ethnicity.
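As a brief arithmetic aside (an added illustration using only the ratios quoted above), these male-to-female ratios translate into male shares of diagnoses as follows:

```python
# Small sketch converting the quoted male-to-female ratios into the
# implied percentage of diagnosed cases that are male.
for label, ratio in [("overall", 4.3),
                     ("with intellectual disability", 2.0),
                     ("without intellectual disability", 5.5)]:
    male_share = ratio / (ratio + 1)
    print(f"{label}: {100 * male_share:.0f}% male")
# overall ~81%, with intellectual disability ~67%, without ~85%
```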
RORA deficiency may explain some of the difference in frequency between males and females. RORA protein levels are higher in the brains of typically developing females compared to typically developing males, providing females with a buffer against RORA deficiency. This is known as the female protective effect. RORA deficiency has previously been proposed as one factor that may make males more vulnerable to autism.
There is a statistically notable overlap between ASD populations and gender diverse populations.
Other findings
There are behavioral differences and differences in brain structure between males and females with autism. Females often either mask their symptoms more (called camouflaging) or need to display more prominent symptoms to receive a diagnosis. Males tend to demonstrate common symptoms of autism, such as repetitive and restricted behaviors, more than females. This difference is hypothesized to be part of why females are more likely to be underdiagnosed.
Vaccines
A common misconception is that vaccinations cause children to develop ASD. This is partly due to concern over a former ingredient called thimerosal, a substance that contains mercury. Scientific literature demonstrates that there is no causal link between thimerosal and ASD. Though the ingredient is no longer prevalent in vaccines, there is still concern about a link between autism and vaccinations, but there is no evidence to support this notion.
Environmental chemical exposure
One theory behind autism is exposure to environmental chemicals before the age of two months. Human studies have mainly focused on particulate matter or mercury. Other studies investigated the effects of air pollutants or lead. Additionally, studies involving rodent animal models have investigated the effects of chlorpyrifos. Research suggests that development can be affected by such pollutants. Some systematic reviews have indicated that while significant correlations between mercury exposure and autism have been found, more research is needed.
Related brain structures
Key symptoms of autism spectrum disorder are impaired social and communication abilities and a narrow scope of interests with repeated behaviors. An impairment of language used to be a key diagnostic factor, but research has led to categorizing this symptom as a specifier. One scoping review identified multiple brain structures that appear to play a role in language-related symptoms in autism spectrum disorder. For example, a larger right inferior frontal gyrus is correlated with autism in those categorized in the language-impairment subgroup (but not in those without language impairment). Some research, however, yields conflicting results relating different structures to total language scores; age could be a possible factor behind this.
As for temporal regions, increased rightward radial diffusivity might have an association with receptive language scores. Research concerning the planum temporale and its role in language has been inconclusive. The cerebellum may also be a factor in whether a person has language impairment or not.
Symptom management
Much research on people with ASD who have impaired communication, and on improving their social interaction, investigates the use of psychosocial interventions. Other research looks to pharmacology for treatment, and behavioral management therapy is another potential option. Overall, however, treatment options are still in development.
Comorbid conditions
Autism is associated with several other conditions:
Genetic disorders. About 10–15% of autism cases have an identifiable Mendelian (single-gene) condition, chromosome abnormality, or other genetic syndrome, and ASD is associated with several genetic disorders.
Intellectual disability. The fraction of autistic individuals who also meet criteria for intellectual disability has been reported as anywhere from 25% to 70%, a wide variation illustrating the difficulty of assessing autistic intelligence.
Anxiety disorders are common among children with ASD, although there are no firm data. Symptoms include generalized anxiety and separation anxiety, and are likely affected by age, level of cognitive functioning, degree of social impairment, and ASD-specific difficulties. Many anxiety disorders, such as social phobia, are not commonly diagnosed in people with ASD because such symptoms are better explained by ASD itself, and it is often difficult to tell whether symptoms such as compulsive checking are part of ASD or a co-occurring anxiety problem. The prevalence of anxiety disorders in children with ASD has been reported to be anywhere between 11% and 84%.
Epilepsy, with variations in risk of epilepsy due to age, cognitive level, and type of language disorder; 5–38% of children with autism have comorbid epilepsy, and only 16% of these have remission in adulthood.
Several metabolic defects, such as phenylketonuria, are associated with autistic symptoms.
Minor physical anomalies are significantly increased in the autistic population.
Preempted diagnoses. Although the DSM-IV rules out concurrent diagnosis of many other conditions along with autism, the full criteria for ADHD, Tourette syndrome, and others of these conditions are often present, and these comorbid diagnoses are increasingly accepted. A 2008 study found that nearly 70% of children with ASD had at least one psychiatric disorder, including nearly 30% with social anxiety disorder and similar proportions with ADHD and oppositional defiant disorder. Childhood-onset schizophrenia, a rare and severe form, is another preempted diagnosis whose symptoms are often present along with the symptoms of autism.
References
Autism | Epidemiology of autism | Environmental_science | 5,959 |
48,522,089 | https://en.wikipedia.org/wiki/Hygrophoropsis%20panamensis | Hygrophoropsis panamensis is a species of fungus in the family Hygrophoropsidaceae. Found in Panama, it was described as new to science in 1983 by mycologist Rolf Singer.
References
External links
Hygrophoropsidaceae
Fungi described in 1983
Fungi of Central America
Taxa named by Rolf Singer
Fungus species | Hygrophoropsis panamensis | Biology | 74 |
34,348,533 | https://en.wikipedia.org/wiki/Muhammad%20Iqbal | Sir Muhammad Iqbal (; 9 November 187721 April 1938) was a South Asian Islamic philosopher, poet and politician. His poetry is considered to be among the greatest of the 20th century, and his vision of a cultural and political ideal for the Muslims of British-ruled India is widely regarded as having animated the impulse for the Pakistan Movement. He is commonly referred to by the honourific Allama (, ) and widely considered one of the most important and influential Muslim thinkers and Western religious philosophers of the 20th century.
Born and raised in Sialkot, Punjab, Iqbal completed his BA and MA at the Government College in Lahore. He taught Arabic at the Oriental College in Lahore from 1899 until 1903, during which time he wrote prolifically. Notable among his Urdu poems from this period are "Parinde ki Faryad" (translated as "A Bird's Prayer"), an early contemplation on animal rights, and "Tarana-e-Hindi" (translated as "Anthem of India"), a patriotic poem—both composed for children. In 1905, he departed from India to pursue further education in Europe, first in England and later in Germany. In England, he earned a second BA at Trinity College, Cambridge, and subsequently qualified as a barrister at Lincoln's Inn. In Germany, he obtained a PhD in philosophy at the University of Munich, with his thesis focusing on "The Development of Metaphysics in Persia" in 1908. Upon his return to Lahore in 1908, Iqbal established a law practice but primarily focused on producing scholarly works on politics, economics, history, philosophy, and religion. He is most renowned for his poetic compositions, including "Asrar-e-Khudi," for which he was honored with a British knighthood upon its publication, "Rumuz-e-Bekhudi," and "Bang-e-Dara." His literary works in the Persian language garnered him recognition in Iran, where he is commonly known as Eghbal-e Lahouri, meaning "Iqbal of Lahore."
An ardent proponent of the political and spiritual revival of the Muslim world, particularly of the Muslims in the Indian subcontinent, the series of lectures Iqbal delivered to this effect were published as The Reconstruction of Religious Thought in Islam in 1930. He was elected to the Punjab Legislative Council in 1927 and held several positions in the All-India Muslim League. In his Allahabad Address, delivered at the League's annual assembly in 1930, he formulated a political framework for the Muslim-majority regions spanning northwestern India, spurring the League's pursuit of the two-nation theory.
In August 1947, nine years after Iqbal's death, the partition of India gave way to the establishment of Pakistan, a newly independent Islamic state in which Iqbal was honoured as the national poet. He is also known in Pakistani society as Mufakkir-e-Pakistan ("The Thinker of Pakistan") and as Hakeem-ul-Ummat ("The Sage of the Ummah"). The anniversary of his birth (Yom-e Weladat-e Muḥammad Iqbal), 9 November, is observed as a public holiday in Pakistan.
Biography
Background
Iqbal was born on 9 November 1877 in a Punjabi-Kashmiri family from Sialkot in the Punjab Province of British India (now in Pakistan). His family traced their ancestry back to the Sapru clan of Kashmiri Pandits who were from a south Kashmiri village in Kulgam and converted to Islam in the 15th century. Iqbal's mother-tongue was Punjabi, and he conversed mostly in Punjabi and Urdu in his daily life. In the 19th century, when the Sikh Empire was conquering Kashmir, his grandfather's family migrated to Punjab. Iqbal's grandfather was an eighth cousin of Sir Tej Bahadur Sapru, an important lawyer and freedom fighter who would eventually become an admirer of Iqbal. Iqbal often mentioned and commemorated his Kashmiri lineage in his writings. According to scholar Annemarie Schimmel, Iqbal often wrote about his being "a son of Kashmiri-Brahmans but (being) acquainted with the wisdom of Rumi and Tabrizi."
Iqbal's father, Sheikh Noor Muhammad (died 1930), was a tailor, not formally educated, but a religious man. Iqbal's mother Imam Bibi, a Kashmiri from Sambrial, was described as a polite and humble woman who helped the poor and her neighbours with their problems. She died on 9 November 1914 in Sialkot. Iqbal loved his mother, and on her death he expressed his feelings of pathos in an elegy:
Early education
Iqbal was four years old when he was sent to a mosque to receive instruction in reading the Qur'an. He learned the Arabic language from his teacher, Syed Mir Hassan, the head of the madrasa and professor of Arabic at Scotch Mission College in Sialkot, where he matriculated in 1893. He received an Intermediate level with the Faculty of Arts diploma in 1895. The same year he enrolled at Government College University, where he obtained his Bachelor of Arts in philosophy, English literature and Arabic in 1897, and won the Khan Bahadurddin F.S. Jalaluddin medal for his performance in Arabic. In 1899, he received his Master of Arts degree from the same college and won first place in philosophy in the University of the Punjab.
Marriages
Iqbal married four times under different circumstances.
His first marriage was in 1895 when he was 18 years old. His bride, Karim Bibi, was the daughter of Khan Bahadur Ata Muhammad Khan, a leading civil surgeon and fellow Punjabi-Kashmiri based in Gujrat. Her sister was the mother of director and music composer Khwaja Khurshid Anwar. Their families arranged the marriage, and the couple had two children; a daughter, Miraj Begum (1895–1915), and a son, Aftab Iqbal (1899–1979), who became a barrister. Another son is said to have died after birth in 1901.
Iqbal and Karim Bibi separated somewhere between 1910 and 1913. Despite this, he continued to financially support her till his death.
Iqbal's second marriage took place on 26 August 1910 with the niece of Hakim Noor-ud-Din.
Iqbal's third marriage was with Mukhtar Begum, and it was held in December 1914, shortly after the death of Iqbal's mother the previous November. They had a son, but both the mother and son died shortly after birth in 1924.
Later, Iqbal married Sardar Begum, and they became the parents of a son, Javed Iqbal (1924–2015), who became Senior Justice of the Supreme Court of Pakistan, and a daughter, Muneera Bano (born 1930). One of Muneera's sons is the philanthropist-cum-socialite Yousuf Salahuddin.
Higher education in Europe
Iqbal was influenced by the teachings of Sir Thomas Arnold, his philosophy teacher at Government College Lahore, to pursue higher education in the West. In 1905, he travelled to England for that purpose. While already acquainted with Friedrich Nietzsche and Henri Bergson, Iqbal discovered Rumi shortly before his departure to England, and he taught the Masnavi to his friend Swami Rama Tirtha, who in return taught him Sanskrit. Iqbal qualified for a scholarship from Trinity College, University of Cambridge, and obtained a Bachelor of Arts in 1906; this degree made him eligible to practise as an advocate, as was the custom in those days. In the same year he was called to the bar as a barrister at Lincoln's Inn. In 1907, Iqbal moved to Germany to pursue his doctoral studies, and earned a Doctor of Philosophy degree from the Ludwig Maximilian University of Munich on 4 November 1907 (published in London in 1908). Working under the guidance of Friedrich Hommel, Iqbal wrote a doctoral thesis entitled The Development of Metaphysics in Persia. Among his fellow students in Munich was Hans-Hasso von Veltheim, who later happened to visit Iqbal the day before Iqbal died.
In 1907, he had a close friendship with the writer Atiya Fyzee in both Britain and Germany. Atiya would later publish their correspondence. While Iqbal was in Heidelberg in 1907, his German professor Emma Wegenast taught him about Goethe's Faust, Heine and Nietzsche. He mastered German in three months. A street in Heidelberg has been named in his memory, "Iqbal Ufer". During his study in Europe, Iqbal began to write poetry in Persian. He preferred to write in this language because doing so made it easier to express his thoughts. He would write continuously in Persian throughout his life.
Academic career
Iqbal began his career as a reader of Arabic after completing his Master of Arts degree in 1899, at Oriental College, and shortly afterward was selected as a junior professor of philosophy at Government College Lahore, where he had also been a student in the past. He worked there until he left for England in 1905. In 1907, he went to Germany for his PhD; in 1908, he returned and joined the same college again as a professor of philosophy and English literature. In the same period Iqbal began practising law at the Chief Court of Lahore, but he soon quit law practice and devoted himself to literary works, becoming an active member of Anjuman-e-Himayat-e-Islam. In 1919, he became the general secretary of the same organization. Iqbal's thoughts in his work primarily focus on the spiritual direction and development of human society, centered around experiences from his travels and stays in Western Europe and the Middle East. He was profoundly influenced by Western philosophers such as Nietzsche, Bergson, and Goethe. He also closely worked with Ibrahim Hisham during his stay at the Aligarh Muslim University.
The poetry and philosophy of Rumi strongly influenced Iqbal. Deeply grounded in religion since childhood, Iqbal began concentrating intensely on the study of Islam, the culture and history of Islamic civilization and its political future, while embracing Rumi as "his guide". Iqbal's works focus on reminding his readers of the past glories of Islamic civilization and delivering the message of a pure, spiritual focus on Islam as a source for socio-political liberation and greatness. Iqbal denounced political divisions within and amongst Muslim nations, and frequently alluded to and spoke in terms of the global Muslim community or the Ummah.
Iqbal's poetry was translated into many European languages in the early part of the 20th century. Iqbal's Asrar-i-Khudi and Javed Nama were translated into English by R. A. Nicholson and A. J. Arberry, respectively.
Legal career
Iqbal was not only a prolific writer but also a known advocate. He appeared before the Lahore High Court in both civil and criminal matters. There are more than 100 reported judgments to his name.
Final years and death
In 1933, after returning from a trip to Spain and Afghanistan, Iqbal suffered from a mysterious throat illness. He spent his final years helping Chaudhry Niaz Ali Khan to establish the Dar ul Islam Trust Institute at a Jamalpur estate near Pathankot, where there were plans to subsidize studies in classical Islam and contemporary social science. He also advocated for an independent Muslim state. Iqbal ceased practising law in 1934 and was granted a pension by the Nawab of Bhopal. In his final years, he frequently visited the Dargah of famous Sufi Ali Hujwiri in Lahore for spiritual guidance. After suffering for months from his illness, Iqbal died in Lahore on 21 April 1938. It is maintained that he breathed his last listening to a kafi of Bulleh Shah. His tomb is located in Hazuri Bagh, the enclosed garden between the entrance of the Badshahi Mosque and the Lahore Fort, and official guards are provided by the Government of Pakistan.
Efforts and influences
Political
Iqbal first became interested in national affairs in his youth. He received considerable recognition from the Punjabi elite after his return from England in 1908, and he was closely associated with Mian Muhammad Shafi. When the All-India Muslim League was expanded to the provincial level, and Shafi received a significant role in the structural organization of the Punjab Muslim League, Iqbal was made one of the first three joint secretaries along with Shaikh Abdul Aziz and Maulvi Mahbub Alam. While dividing his time between law practice and poetry, Iqbal remained active in the Muslim League. He did not support Indian involvement in World War I and stayed in close touch with Muslim political leaders such as Mohammad Ali Jouhar and Muhammad Ali Jinnah. He was a critic of the mainstream Indian National Congress, which he regarded as dominated by Hindus, and was disappointed with the League when, during the 1920s, it was absorbed in factional divides between the pro-British group led by Shafi and the centrist group led by Jinnah. He was active in the Khilafat Movement, and was among the founding fathers of Jamia Millia Islamia which was established at Aligarh in October 1920. He was also given the offer of being the first vice-chancellor of Jamia Millia Islamia by Mahatma Gandhi, which he refused.
In November 1926, with the encouragement of friends and supporters, Iqbal contested the election for a seat in the Punjab Legislative Assembly from the Muslim district of Lahore, and defeated his opponent by a margin of 3,177 votes. He supported the constitutional proposals presented by Jinnah to guarantee Muslim political rights and influence in a coalition with the Congress and worked with Aga Khan and other Muslim leaders to mend the factional divisions and achieve unity in the Muslim League. While in Lahore he was a friend of Abdul Sattar Ranjoor.
Iqbal, Jinnah, and the concept of "Pakistan"
Ideologically separated from Congress Muslim leaders, Iqbal had also been disillusioned with the politicians of the Muslim League, owing to the factional conflict that plagued the League in the 1920s. Discontent with factional leaders like Shafi and Fazl-ur-Rahman, Iqbal came to believe that only Jinnah was a political leader capable of preserving unity and fulfilling the League's objectives of Muslim political empowerment. Building a strong, personal correspondence with Jinnah, Iqbal was influential in convincing Jinnah to end his self-imposed exile in London, return to India and take charge of the League. Iqbal firmly believed that Jinnah was the only leader capable of drawing Indian Muslims to the League and maintaining party unity before the British and the Congress:
While Iqbal espoused the idea of Muslim-majority provinces in 1930, Jinnah would continue to hold talks with the Congress through the decade and only officially embraced the goal of Pakistan in 1940. Some historians postulate that Jinnah always remained hopeful for an agreement with the Congress and never fully desired the partition of India. Iqbal's close correspondence with Jinnah is speculated by some historians as having been responsible for Jinnah's embrace of the idea of Pakistan. Iqbal elucidated to Jinnah his vision of a separate Muslim state in a letter sent on 21 June 1937:
Iqbal, serving as president of the Punjab Muslim League, criticized Jinnah's political actions, including a political agreement with Punjabi leader Sikandar Hyat Khan, whom Iqbal saw as a representative of feudal classes and not committed to Islam as the core political philosophy. Nevertheless, Iqbal worked constantly to encourage Muslim leaders and masses to support Jinnah and the League. Speaking about the political future of Muslims in India, Iqbal said:
Madani–Iqbal debate
A famous debate was held between Iqbal and Hussain Ahmed Madani on the question of nationalism in the late 1930s. Madani's position throughout was to insist on the Islamic legitimacy of embracing a culturally plural, secular democracy as the best and the only realistic future for India's Muslims, whereas Iqbal insisted on a religiously defined, homogeneous Muslim society. Madani and Iqbal both appreciated this point, and neither advocated the creation of an absolute 'Islamic State'. They differed only in their first step. According to Madani, the first step was the freedom of India, for which composite nationalism was necessary. According to Iqbal, the first step was the creation of a community of Muslims in the Muslim-majority lands, i.e. a Muslim India within India.
Revival of Islamic policy
Iqbal's six English lectures were published in Lahore in 1930, and then by the Oxford University Press in 1934 in the book The Reconstruction of Religious Thought in Islam. The lectures had been delivered at Madras, Hyderabad and Aligarh. These lectures dwell on the role of Islam as a religion and as a political and legal philosophy in the modern age. In these lectures Iqbal firmly rejects the political attitudes and conduct of Muslim politicians, whom he saw as morally misguided, attached to power and without any standing with the Muslim masses.
Iqbal expressed fears that not only would secularism weaken the spiritual foundations of Islam and Muslim society but that India's Hindu-majority population would crowd out Muslim heritage, culture, and political influence. In his travels to Egypt, Afghanistan, Iran, and Turkey, he promoted ideas of greater Islamic political co-operation and unity, calling for the shedding of nationalist differences. He also speculated on different political arrangements to guarantee Muslim political power; in a dialogue with Dr. B. R. Ambedkar, Iqbal expressed his desire to see Indian provinces as autonomous units under the direct control of the British government and with no central Indian government. He envisaged autonomous Muslim regions in India. Under a single Indian union, he feared for Muslims, who would suffer in many respects, especially concerning their existentially separate entity as Muslims.
Iqbal was elected president of the Muslim League in 1930 at its session in Allahabad in the United Provinces, as well as for the session in Lahore in 1932. In his presidential address on 29 December 1930 he outlined a vision of an independent state for Muslim-majority provinces in north-western India:
In his speech, Iqbal emphasised that, unlike Christianity, Islam came with "legal concepts" with "civic significance", with its "religious ideals" considered as inseparable from social order: "Therefore, if it means a displacement of the Islamic principle of solidarity, the construction of a policy on national lines, is simply unthinkable to a Muslim." Iqbal thus stressed not only the need for the political unity of Muslim communities but the undesirability of blending the Muslim population into a wider society not based on Islamic principles.
Even as he rejected secularism and nationalism he would not elucidate or specify if his ideal Islamic state would be a theocracy, and criticized the "intellectual attitudes" of Islamic scholars (ulema) as having "reduced the Law of Islam practically to the state of immobility".
The latter part of Iqbal's life was concentrated on political activity. He travelled across Europe and West Asia to garner political and financial support for the League. He reiterated the ideas of his 1932 address, and, during the third Round Table Conference, he opposed the Congress and proposals for transfer of power without considerable autonomy for Muslim provinces.
He would serve as president of the Punjab Muslim League, and would deliver speeches and publish articles in an attempt to rally Muslims across India as a single political entity. Iqbal consistently criticized feudal classes in Punjab as well as Muslim politicians opposed to the League. Many accounts of Iqbal's frustration toward Congress leadership were also pivotal in providing a vision for the two-nation theory.
Patron of Tolu-e-Islam
Iqbal was the first patron of Tolu-e-Islam, a historical, political, religious and cultural journal of the Muslims of British India. For a long time, Iqbal wanted a journal to propagate his ideas and the aims and objectives of the All India Muslim League. In 1935, according to his instructions, Syed Nazeer Niazi initiated and edited the journal, named after Iqbal's poem "Tulu'i Islam". Niazi dedicated the first issue of the journal to Iqbal. The journal would play an important role in the Pakistan movement. Later, the journal was continued by Ghulam Ahmed Pervez, who had contributed many articles in its early editions.
Literary work
Persian
Iqbal's poetic works are written primarily in Persian rather than Urdu. Among his 12,000 verses of poetry, about 7,000 verses are in Persian. In 1915, he published his first collection of poetry, the Asrar-i-Khudi (Secrets of the Self) in Persian. The poems emphasise the spirit and self from a religious perspective. Many critics have called this Iqbal's finest poetic work. In Asrar-i-Khudi, Iqbal explains his philosophy of "Khudi", or "Self". Iqbal's use of the term "Khudi" is synonymous with the word "Rooh" used in the Quran for a divine spark which is present in every human being, and was said by Iqbal to be present in Adam, for which God ordered all of the angels to prostrate in front of Adam. Iqbal condemns self-destruction. For him, the aim of life is self-realization and self-knowledge. He charts the stages through which the "Self" has to pass before finally arriving at its point of perfection, enabling the knower of the "Self" to become a vice-regent of God.
In his Rumuz-i-Bekhudi (Hints of Selflessness), Iqbal seeks to prove the Islamic way of life is the best code of conduct for a nation's viability. A person must keep his characteristics intact, he asserts, but once this is achieved, he should sacrifice his ambitions for the needs of the nation. Man cannot realize the "Self" outside of society. Published in 1917, this group of poems has as its main themes the ideal community, Islamic ethical and social principles, and the relationship between the individual and society. Although he supports Islam, Iqbal also recognises the positive aspects of other religions. Rumuz-i-Bekhudi complements the emphasis on the self in Asrar-e-Khudi and the two collections are often put in the same volume under the title Asrar-i-Rumuz (Hinting Secrets). It is addressed to the world's Muslims.
Iqbal's 1924 publication, the Payam-e-Mashriq (The Message of the East), is closely connected to the West-östlicher Diwan by the German poet Goethe. Goethe bemoans the West having become too materialistic in outlook, and expects the East will provide a message of hope to resuscitate spiritual values. Iqbal styles his work as a reminder to the West of the importance of morality, religion, and civilization by underlining the need for cultivating feeling, ardor, and dynamism. He asserts that an individual can never aspire to higher dimensions unless he learns of the nature of spirituality. In his first visit to Afghanistan, he presented Payam-e Mashreq to King Amanullah Khan. In it, he admired the uprising of Afghanistan against the British Empire. In 1933, he was officially invited to Afghanistan to join the meetings regarding the establishment of Kabul University.
The Zabur-e-Ajam (Persian Psalms), published in 1927, includes the poems "Gulshan-e-Raz-e-Jadeed" ("Garden of New Secrets") and "Bandagi Nama" ("Book of Slavery"). In "Gulshan-e-Raz-e-Jadeed", Iqbal first poses questions, then answers them with the help of ancient and modern insight. "Bandagi Nama" denounces slavery and attempts to explain the spirit behind the fine arts of enslaved societies. Here, as in other books, Iqbal insists on remembering the past, doing well in the present and preparing for the future, while emphasising love, enthusiasm and energy to fulfill the ideal life.
Iqbal's 1932 work, the Javed Nama (Book of Javed), is named after and in a manner addressed to his son, who is featured in the poems. It follows the examples of the works of Ibn Arabi and Dante's The Divine Comedy, through mystical and exaggerated depictions across time. Iqbal depicts himself as Zinda Rud ("A stream full of life") guided by Rumi, "the master", through various heavens and spheres and has the honour of approaching divinity and coming in contact with divine illuminations. In a passage reliving a historical period, Iqbal condemns the Muslims who were instrumental in the defeat and death of Nawab Siraj-ud-Daula of Bengal and Tipu Sultan of Mysore by betraying them for the benefit of the British colonists, and thus delivering their country to the shackles of slavery. In the end, by addressing his son Javed, he speaks to the young people at large, and guides the "new generation".
Pas Chih Bayed Kard Ay Aqwam-e-Sharq includes the poem "Musafir" ("The Traveller"). Again, Iqbal depicts Rumi as a character and gives an exposition of the mysteries of Islamic laws and Sufi perceptions. Iqbal laments the dissension and disunity among the Indian Muslims as well as Muslim nations. "Musafir" is an account of one of Iqbal's journeys to Afghanistan, in which the Pashtun people are counselled to learn the "secret of Islam" and to "build up the self" within themselves.
His love of the Persian language is evident in his works and poetry. He says in one of his poems:
Translation: Even though in sweetness Hindi [an archaic name for Urdu, lit. "language of India"] is sugar, the speech method of Dari [the variety of Persian spoken in Afghanistan] is sweeter.
Throughout his life, Iqbal would prefer writing in Persian as he believed it allowed him to fully express philosophical concepts, and it gave him a wider audience.
Urdu
Muhammad Iqbal's The Call of the Marching Bell (bang-e-dara), his first collection of Urdu poetry, was published in 1924. It was written in three distinct phases of his life. The poems he wrote up to 1905, the year he left for England, reflect patriotism and the imagery of nature, including the Urdu-language patriotic poem "Saare Jahan se Accha". The second set of poems date from 1905 to 1908, when Iqbal studied in Europe, and dwell upon the nature of European society, which he emphasised had lost spiritual and religious values. This inspired Iqbal to write poems on the historical and cultural heritage of Islam and the Muslim community, with a global perspective. Iqbal urges the entire Muslim community, addressed as the Ummah, to define personal, social and political existence by the values and teachings of Islam.
Iqbal's works were in Persian for most of his career, but after 1930 his works were mainly in Urdu. His works in this period were often specifically directed at the Muslim masses of India, with an even stronger emphasis on Islam and Muslim spiritual and political reawakening. Published in 1935, Bal-e-Jibril (Wings of Gabriel) is considered by many critics as his finest Urdu poetry and was inspired by his visit to Spain, where he visited the monuments and legacy of the kingdom of the Moors. It consists of ghazals, poems, quatrains and epigrams and carries a strong sense of religious passion.
Zarb-i-Kalim (The Rod of Moses), another philosophical poetry book by Allama Iqbal in Urdu, was published in 1936, two years before his death; Iqbal described it as his political manifesto. It was published with the subtitle "A Declaration of War Against the Present Times". Muhammad Iqbal argues that modern problems are due to the godlessness, materialism, and injustice of modern civilization, which feeds on the subjugation and exploitation of weak nations, especially the Indian Muslims.
Iqbal's final work was Armughan-e-Hijaz (The Gift of Hijaz), published posthumously in 1938. The first part contains quatrains in Persian, and the second part contains some poems and epigrams in Urdu. The Persian quatrains convey the impression that the poet is travelling through the Hijaz in his imagination. The profundity of ideas and intensity of passion are the salient features of these short poems.
Iqbal's vision of mystical experience is clear in one of his Urdu ghazals, which was written in London during his student days. Some verses of that ghazal are:
English
Iqbal wrote two books, The Development of Metaphysics in Persia (1908) and The Reconstruction of Religious Thought in Islam (1930), and many letters in the English language. He also wrote a book on economics, Ilm ul Iqtisad (1903), that is now rare. In these, he revealed his thoughts regarding Persian ideology and Islamic Sufism – in particular, his beliefs that Islamic Sufism activates the searching soul to a superior perception of life. He also discussed philosophy, God and the meaning of prayer, human spirit and Muslim culture, as well as other political, social and religious problems.
Iqbal was invited to Cambridge to participate in a conference in 1931, where he expressed his views, including those on the separation of church and state, to students and other participants:
Punjabi
Iqbal also wrote some poems in Punjabi, such as "Piyaara Jedi" and "Baba Bakri Wala", which he penned in 1929 on the occasion of his son Javed's birthday. A collection of his Punjabi poetry was put on display at the Iqbal Manzil in Sialkot.
Iqbal was deeply influenced by Punjabi Sufis. Once a comrade recited a poem by Bulleh Shah and he was "so much touched and overwhelmed...that tears rolled down his cheeks."
Modern reputation
"Poet of the East"
Iqbal has been referred to as the "Poet of the East" by academics, institutions and the media.
The Vice-Chancellor of Quaid-e-Azam University, Dr. Masoom Yasinzai, stated in a seminar addressing a distinguished gathering of educators and intellectuals that Iqbal is not only a poet of the East but a universal poet, one not restricted to any specific segment of the world community but belonging to all humanity.
Iqbal's revolutionary works, expressed through his poetry, affected the Muslims of the subcontinent. Iqbal thought that Muslims had long been suppressed by the colonial expansion and growth of the West. For this, Iqbal is recognised as the "Poet of the East".
The Urdu world is very familiar with Iqbal as the "Poet of the East". Iqbal is also called Muffakir-e-Pakistan ("The Thinker of Pakistan") and Hakeem-ul-Ummat ("The Sage of the Ummah"). The Pakistan government officially named him Pakistan's "national poet".
Iran
In Iran, Iqbal is known as Iqbāl-e Lāhorī () (Iqbal of Lahore). His Asrar-i-Khudi and Bal-i-Jibril are particularly popular in Iran. At the same time, many scholars in Iran have recognised the importance of Iqbal's poetry in inspiring and sustaining the Iranian Revolution of 1979. During the early phases of the revolutionary movement, it was common to see people gathering in a park or corner to listen to someone reciting Iqbal's Persian poetry, which is why people of all ages in Iran today are familiar with at least some of his poetry, notably Zabur-i-Ajam.
Ayatollah Ali Khamenei has stated, "We have a large number of non-Persian-speaking poets in the history of our literature, but I cannot point out any of them whose poetry possesses the qualities of Iqbal's Persian poetry. Iqbal was not acquainted with Persian idiom, as he spoke Urdu at home and talked to his friends in Urdu or English. He did not know the rules of Persian prose writing. [...] In spite of not having tasted the Persian way of life, never living in the cradle of Persian culture, and never having any direct association with it, he cast with great mastery the most delicate, the most subtle and radically new philosophical themes into the mould of Persian poetry, some of which are unsurpassable yet."
By the early 1950s, Iqbal had become known among the intelligentsia of Iran. The Iranian poet laureate Muhammad Taqi Bahar popularised Iqbal in Iran and highly praised his work in Persian.
In 1952, Iranian Prime Minister Mohammad Mossadeq, a national hero because of his oil nationalization policy, broadcast a special radio message on Iqbal Day and praised his role in the struggle of the Indian Muslims against British imperialism. At the end of the 1950s, Iranians published his complete Persian works. In the 1960s, Iqbal's thesis on Persian philosophy was translated from English to Persian. Ali Shariati, a Sorbonne-educated sociologist, took Iqbal as his role model, as Iqbal had taken Rumi. An example of Iran's admiration and appreciation for Iqbal is the place of honour he received in the pantheon of the Persian elegy writers.
Iqbal became even more popular in Iran in the 1970s. His verses appeared on banners, and his poetry was recited at meetings of intellectuals. Iqbal inspired many intellectuals, including Ali Shariati, Mehdi Bazargan and Abdulkarim Soroush. His book The Reconstruction of Religious Thought in Islam was translated by Mohammad Masud Noruzi.
Key Iranian thinkers and leaders who were influenced by Iqbal's poetry during the rise of the Iranian revolution include Khamenei, Shariati and Soroush, and much of the revolutionary guard was familiar with Iqbal's poetry. At the inauguration of the First Iqbal Summit in Tehran (1986), Khamenei stated that in its "conviction that the Quran and Islam are to be made the basis of all revolutions and movements", Iran was "exactly following the path that was shown to us by Iqbal". Shariati, who has been described as a core ideologue for the Iranian Revolution, described Iqbal as a figure who brought a message of "rejuvenation", "awakening" and "power" to the Muslim world.
Arab countries
Iqbal has an audience in the Arab world. In Egypt, one of his poems has been sung by Umm Kulthum, the most famous modern Egyptian artist, and his modern admirers include influential literary figures such as Farouk Shousha. In Saudi Arabia, important personalities influenced by Iqbal included Abdullah bin Faisal Al Saud, a member of the Saudi royal family and himself a poet.
Turkey
Mehmet Akif Ersoy, considered the national poet of Turkey for having composed its national anthem, was directly influenced by Iqbal.
In 2016, Turkey's Minister for Culture and Tourism Nabi Avcı presented the Dost Award to Walid Iqbal, the grandson of Iqbal, in order to honour Iqbal's "services to Islam", the ceremony being held in Konya, the resting place of Rumi.
Western countries
Iqbal's views on the Western world have been applauded by Westerners, including United States Supreme Court Associate Justice William O. Douglas, who said that Iqbal's beliefs had "universal appeal". Soviet biographer N. P. Anikoy wrote:
Others, including Wilfred Cantwell Smith, stated that with Iqbal's anti-capitalist position he was "anti-intellect", because "capitalism fosters intellect". Freeland Abbott objected to Iqbal's views of the West, saying that they were based on the role of imperialism and that Iqbal was not immersed enough in Western culture to learn about the various benefits of the modern democracies, economic practices and science. Critics of Abbott's viewpoint note that Iqbal was raised and educated in the European way of life, and spent enough time there to grasp the general concepts of Western civilization.
Legacy
Iqbal is widely commemorated in Pakistan, where he is regarded as the ideological founder of the state. Iqbal is the namesake of many public institutions, including the Allama Iqbal Campus Punjab University in Lahore, the Allama Iqbal Medical College in Lahore, Iqbal Stadium in Faisalabad, Allama Iqbal Open University in Pakistan, Iqbal Memorial Institute in Srinagar, Allama Iqbal Library in the University of Kashmir, the Allama Iqbal International Airport in Lahore, Iqbal Hostel in Government College University, Lahore, the Allama Iqbal Hall at Nishtar Medical College in Multan, Gulshan-e-Iqbal Town in Karachi, Allama Iqbal Town in Lahore, Allama Iqbal Hall at Aligarh Muslim University, Allama Iqbal Hostel at Jamia Millia Islamia in New Delhi and Iqbal Hall at the University of Engineering and Technology, Lahore. Iqbal Academy Lahore has published magazines on Iqbal in Persian, English and Urdu.
In India, his song "Tarana-e-Hind" is frequently played as a patriotic song speaking of communal harmony. Dr. Mohammad Iqbal, an Indian documentary film directed by K.A. Abbas and written by Ali Sardar Jafri, was released in 1978. It was produced by the Government of India's Films Division.
The Government of Madhya Pradesh in India awards the Iqbal Samman, named in honour of the poet, every year at the Bharat Bhavan to Indian writers for their contributions to Urdu literature and poetry.
The Pakistani government and public organizations have sponsored the establishment of educational institutions, colleges, and schools dedicated to Iqbal and have established the Iqbal Academy Pakistan to research, teach and preserve his works, literature and philosophy. The Allama Iqbal Stamps Society was established for the promotion of Iqbal in philately and in other hobbies. His son Javed Iqbal served as a justice of the Supreme Court of Pakistan. Javaid Manzil was Iqbal's last residence.
Bibliography
Prose book in Urdu
Ilm ul Iqtisad (1903)
Prose books in English
The Development of Metaphysics in Persia (1908)
The Reconstruction of Religious Thought in Islam (1930)
Poetic books in Persian
Asrar-i-Khudi (1915)
Rumuz-i-Bekhudi (1917)
Payam-i-Mashriq (1923)
Zabur-i-Ajam (1927)
Javid Nama (1932)
Pas Cheh Bayed Kard ai Aqwam-e-Sharq (1936)
Armughan-e-Hijaz (1938) (in Persian and Urdu)
Poetic books in Urdu
Bang-i-Dara (1924)
Bal-i-Jibril (1935)
Zarb-i Kalim (1936)
See also
Index of Muhammad Iqbal–related articles
References
Further reading
Burzine Waghmar, Annemarie Schimmel: Iqbal and Indo-Muslim Studies, Encyclopædia Iranica, New York: Encyclopædia Iranica Foundation, published online, 16 April 2018.
Md Mahmudul Hasan, "Iqbal's and Hassan's Complaints: A Study of 'To the Holy Prophet' and 'SMS to Sir Muhammad Iqbal'", The Muslim World 110.2 (2020): 195–216.
Online
Muhammad Iqbal: poet and philosopher, in Encyclopædia Britannica Online, by Sheila D. McDonough, The Editors of Encyclopædia Britannica, Aakanksha Gaur, Gloria Lotha, J.E. Luebering, Kenneth Pletcher and Grace Young
External links
The collection of Urdu poems: Columbia University
E-Books of Allama Iqbal on Rekhta
1877 births
1938 deaths
Indian Muslims
Leaders of the Pakistan Movement
Urdu-language poets
Indian male poets
Indian Knights Bachelor
20th-century Muslim scholars of Islam
Government College University, Lahore alumni
Heidelberg University alumni
Persian-language poets
Indian Persian-language writers
Islamic philosophers
20th-century Indian philosophers
Kashmiri people
Alumni of Trinity College, Cambridge
Alumni of the Inns of Court School of Law
Members of Lincoln's Inn
Muhammad
People from Sialkot
Writers from Lahore
Urdu-language theologians
Urdu-language children's writers
Urdu-language letter writers
Urdu-language writers from British India
Urdu-language religious writers
20th-century Urdu-language writers
Pakistan Movement
National symbols of Pakistan
Academic staff of the Government College University, Lahore
Oriental College alumni
Murray College alumni
20th-century Indian poets
Indian Arabic-language poets
Islam in India
Founders of Indian schools and colleges
Ludwig Maximilian University of Munich alumni
People from Lahore
People from Punjab Province (British India)
Muslim critics of atheism
Theistic evolutionists
Muslim evolutionists | Muhammad Iqbal | Biology | 8,308 |
1,081,538 | https://en.wikipedia.org/wiki/Complex%20polygon | The term complex polygon can mean two different things:
In geometry, a polygon in the unitary plane, which has two complex dimensions.
In computer graphics, a polygon whose boundary is not simple.
Geometry
In geometry, a complex polygon is a polygon in the complex Hilbert plane, which has two complex dimensions.
A complex number may be represented in the form a + ib, where a and b are real numbers, and i is the square root of −1. Multiples of i such as 2i are called imaginary numbers. A complex number lies in a complex plane having one real and one imaginary dimension, which may be represented as an Argand diagram. So a single complex dimension comprises two spatial dimensions, but of different kinds - one real and the other imaginary.
The unitary plane comprises two such complex planes, which are orthogonal to each other. Thus it has two real dimensions and two imaginary dimensions.
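As a minimal illustration (a sketch of our own, not from the article), a point in the unitary plane can be modelled as an ordered pair of complex numbers, whose real and imaginary parts supply the four real coordinates:

```python
# A point in the unitary plane: two complex coordinates,
# i.e. four real spatial coordinates in total.
z1 = complex(1.0, 2.0)   # first complex dimension  (1 real + 1 imaginary part)
z2 = complex(0.5, -3.0)  # second complex dimension (orthogonal to the first)

point = (z1, z2)
real_coordinates = (z1.real, z1.imag, z2.real, z2.imag)
print(point, real_coordinates)  # ((1+2j), (0.5-3j)) (1.0, 2.0, 0.5, -3.0)
```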
A complex polygon is a (complex) two-dimensional (i.e. four spatial dimensions) analogue of a real polygon. As such it is an example of the more general complex polytope in any number of complex dimensions.
In a real plane, a visible figure can be constructed as the real conjugate of some complex polygon.
Computer graphics
In computer graphics, a complex polygon is a polygon which has a boundary comprising discrete circuits, such as a polygon with a hole in it.
Self-intersecting polygons are also sometimes included among the complex polygons. Vertices are only counted at the ends of edges, not where edges intersect in space.
A formula relating an integral over a bounded region to a closed line integral may still apply when the "inside-out" parts of the region are counted negatively.
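The formula alluded to is the shoelace (surveyor's) formula for signed area, which follows from Green's theorem. A minimal sketch, where the "bowtie" quadrilateral is a hypothetical example of ours rather than one from the article:

```python
def signed_area(vertices):
    """Shoelace formula: 0.5 * sum(x_i*y_{i+1} - x_{i+1}*y_i).
    Oppositely wound ("inside-out") loops count negatively."""
    n = len(vertices)
    total = 0.0
    for i in range(n):
        x0, y0 = vertices[i]
        x1, y1 = vertices[(i + 1) % n]
        total += x0 * y1 - x1 * y0
    return total / 2.0

# Self-intersecting "bowtie": its two triangular loops have opposite
# orientation, so their signed areas cancel to zero.
print(signed_area([(0, 0), (1, 1), (1, 0), (0, 1)]))  # 0.0
```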
Moving around the polygon, the total amount one "turns" at the vertices can be any integer times 360°, e.g. 720° for a pentagram and 0° for an angular "eight".
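To make the turning claim concrete, the sketch below (the helper function is ours, not from the article) sums the signed exterior angles of a regular pentagram, built by visiting every second vertex of a pentagon; the total comes to 720°, i.e. twice around:

```python
import math

def total_turning_degrees(vertices):
    """Sum of signed exterior angles, each normalised to (-180, 180]."""
    n = len(vertices)
    total = 0.0
    for i in range(n):
        ax, ay = vertices[i - 1]           # previous vertex
        bx, by = vertices[i]               # current vertex
        cx, cy = vertices[(i + 1) % n]     # next vertex
        in_dir = math.atan2(by - ay, bx - ax)
        out_dir = math.atan2(cy - by, cx - bx)
        turn = math.degrees(out_dir - in_dir)
        turn = (turn + 180.0) % 360.0 - 180.0  # normalise to (-180, 180]
        total += turn
    return total

# Pentagram: connect every 2nd vertex of a regular pentagon.
star = [(math.cos(2 * math.pi * 2 * k / 5), math.sin(2 * math.pi * 2 * k / 5))
        for k in range(5)]
print(round(total_turning_degrees(star)))  # 720
```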
See also
Regular polygon
Convex hull
Nonzero-rule
List of self-intersecting polygons
References
Citations
Bibliography
Coxeter, H. S. M., Regular Complex Polytopes, Cambridge University Press, 1974.
External links
Introduction to Polygons
Types of polygons | Complex polygon | Mathematics | 448 |
1,901,895 | https://en.wikipedia.org/wiki/Qualitative%20property | Qualitative properties are properties that are observed and can generally not be measured with a numerical result, unlike quantitative properties, which have numerical characteristics.
Description
Qualitative properties are properties that are observed and can generally not be measured with a numerical result. They are contrasted to quantitative properties which have numerical characteristics.
Evaluation
Although measuring something in qualitative terms is difficult, most people can (and will) make a judgement about a behaviour on the basis of how they feel they have been treated. This indicates that qualitative properties are closely related to emotional impressions.
A test method can result in qualitative data about something. This can be a categorical result or a binary classification (e.g., pass/fail, go/no go, conform/non-conform). It can sometimes be an engineering judgement.
Categorization
The data that all share a qualitative property form a nominal category. A variable which codes for the presence or absence of such a property is called a binary categorical variable, or equivalently a dummy variable.
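As a minimal sketch (the data values are hypothetical, not from the article), a qualitative pass/fail result can be coded as a dummy variable by mapping the presence of the property to 1 and its absence to 0:

```python
# Qualitative test outcomes (a nominal category with two levels).
results = ["pass", "fail", "pass", "pass", "fail"]

# Dummy (binary categorical) variable: 1 = property present, 0 = absent.
dummy = [1 if r == "pass" else 0 for r in results]
print(dummy)  # [1, 0, 1, 1, 0]
```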
Types
Some engineering and scientific properties are qualitative.
Some important qualitative properties that concern businesses are:
Human factors and human work capital are important issues that deal with qualitative properties. Some common aspects are work, motivation, and general participation. Although these aspects are not measurable in terms of quantitative criteria, a general overview of them could be summarized as a quantitative property.
Environmental issues are in some cases quantitatively measurable, but other properties are qualitative, including environmentally friendly manufacturing, responsibility for the entire life of a product (from raw material to scrap), attitudes towards safety, efficiency, and minimum waste production.
Ethical issues are closely related to environmental and human issues, and may be covered in corporate governance. Child labour and illegal dumping of waste are examples of ethical issues.
The way a company deals with its stockholders (the 'acting' of a company).
See also
Categorical variable
Level of measurement
Qualitative research
Quantitative research
Statistical data type
References
Measurement
Mathematical terminology | Qualitative property | Physics,Mathematics | 416 |
981,465 | https://en.wikipedia.org/wiki/Yi%20Xing | Yi Xing (, 683–727), born Zhang Sui (), was a Chinese astronomer, Buddhist monk, inventor, mathematician, mechanical engineer, and philosopher during the Tang dynasty. His astronomical celestial globe featured a liquid-driven escapement, the first in a long tradition of Chinese astronomical clockworks.
Science and technology
Astrogeodetic survey
In the early 8th century, the Tang court put Yi Xing in charge of an astrogeodetic survey. This survey had several purposes: to obtain new astronomical data that would aid in the prediction of solar eclipses; to correct flaws in the calendar system so that a new, updated calendar could be installed in its place; and to determine the arc measurement, i.e., the length of a meridian arc, although Yi Xing, who did not know the Earth was spherical, did not conceptualize his measurements in these terms. This would resolve the confusion created by the earlier practice of using the difference between shadow lengths of the sun observed at the same time at two places to determine the ground distance between them.
Yi Xing had thirteen test sites established throughout the empire, extending from Jiaozhou in Vietnam — at latitude 17°N — to the region immediately south of Lake Baikal — latitude 50°N. Three observations were made at each site: one for the height of Polaris, one for the summer shadow length, and one for the winter shadow length. The latitudes were determined from this data, while the Tang calculation for the length of one degree of meridian was fairly accurate compared to modern calculations. Yi Xing understood the variations in the length of a degree of meridian, and criticized earlier scholars who permanently fixed an estimate for shadow lengths for the duration of the entire year.
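The arithmetic behind such a survey can be sketched as follows; the station values below are hypothetical placeholders of ours, not Yi Xing's recorded data. The altitude of Polaris approximates a site's latitude, so the north-south ground distance between two stations divided by their difference in Polaris altitude estimates the length of one degree of meridian:

```python
# Hypothetical survey stations (NOT the historical Tang measurements):
polaris_alt_north = 35.5    # degrees; Polaris altitude ~ latitude (assumed)
polaris_alt_south = 34.5    # degrees (assumed)
ground_distance_km = 112.0  # assumed north-south distance between stations

degree_length = ground_distance_km / (polaris_alt_north - polaris_alt_south)
print(f"~{degree_length:.0f} km per degree of meridian")  # ~112 km
```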
The escapement and celestial globe
Yi Xing was famed for his genius, known to have calculated the number of possible positions on a go board (though, lacking a symbol for zero, he had difficulty expressing the number). He, along with his associate, the mechanical engineer and politician Liang Lingzan, is best known for applying the earliest-known escapement mechanism to a water-powered celestial globe. However, Yi Xing's mechanical achievements were built upon the knowledge and efforts of previous Chinese mechanical engineers, such as the statesman and master of gear systems Zhang Heng (78–139) of the Han dynasty, the mechanical engineer Ma Jun (200–265) of the Three Kingdoms, and the Daoist Li Lan (c. 450) of the Southern and Northern Dynasties period.
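For scale, the naive count, assuming each of the 361 points of a 19×19 board is independently empty, black, or white (an over-count, since not all such positions are legal in play), is 3^361, a 173-digit number:

```python
# Naive position count for a 19x19 go board: each of the 361 points
# is empty, black, or white. (Legal positions are fewer.)
positions = 3 ** 361
print(len(str(positions)))   # 173 digits
print(str(positions)[:12])   # leading digits of the count
```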
It was the earlier Chinese inventor Zhang Heng during the Han dynasty who was the first to apply hydraulic power (i.e. a waterwheel and water clock) in mechanically-driving and rotating his equatorial armillary sphere. The arrangement followed the model of a water-wheel using the drip of a clepsydra (see water clock), which ultimately exerted force on a lug to rotate toothed-gears on a polar-axis shaft. With this, the slow computational movement rotated the armillary sphere according to the recorded movements of the planets and stars. Yi Xing also owed much to the scholarly followers of Ma Jun, who had employed horizontal jack-wheels and other mechanical toys worked by waterwheels. The Daoist Li Lan was an expert at working with water clocks, creating steelyard balances for weighing water that was used in the tank of the clepsydra, providing more inspiration for Yi Xing. Like the earlier water-power employed by Zhang Heng and the later escapement mechanism in the astronomical clock tower engineered and erected by Su Song (1020–1101), Yi Xing's celestial globe employed water-power in order for it to rotate and function properly.
The British biochemist, historian, and sinologist Joseph Needham states (Wade–Giles spelling):
In regard to mercury instead of water (as noted in the quote above), the first to apply liquid mercury for the motive power of an armillary sphere was Zhang Sixun in 979 AD (because mercury would not freeze during winter). From his era, the Song dynasty (960–1279) historical text Song Shi mentions Yi Xing and the reason why his armillary sphere did not survive the ages after the Tang (Wade–Giles spelling):
Earlier Tang era historical texts of the 9th century have this to say of Yi Xing's work in astronomical instruments in the 8th century (Wade–Giles spelling):
Buddhist scholarship
Yi Xing wrote a commentary on the Mahavairocana Tantra. This work had a strong influence on the Japanese monk Kūkai and was key in his establishment of Shingon Buddhism.
In his honor
At the Tiantai-Buddhist Guoqing Temple of Mount Tiantai in Zhejiang Province, there is a Chinese pagoda erected directly outside the temple known as the Memorial Pagoda of Monk Yi Xing. His tomb is also located on Mount Tiantai.
See also
List of Chinese people
List of inventors
List of mechanical engineers
Verge escapement
Villard de Honnecourt
Notes
References
Bowman, John S. (2000). Columbia Chronologies of Asian History and Culture. New York: Columbia University Press.
Fry, Tony (2001). The Architectural Theory Review: Archineering in Chinatime. Sydney: University of Sydney Press.
Ju, Zan, "Yixing". Encyclopedia of China (Religion Edition), 1st ed.
Needham, Joseph (1986). Science and Civilization in China: Volume 3. Taipei: Caves Books, Ltd.
Needham, Joseph (1986). Science and Civilization in China: Volume 4, Part 2. Taipei: Caves Books, Ltd.
Boscaro, Adriana (2003) Rethinking Japan: Social Sciences, Ideology and Thought. Routledge. 0-904404-79-x p. 330
External links
Yi Xing at Chinaculture.org
Yi Xing's Tomb Tiantai Mountain
Yi Xing at the University of Maine
683 births
727 deaths
8th-century Buddhists
8th-century Buddhist monks
8th-century Chinese astronomers
8th-century Chinese philosophers
8th-century Chinese writers
8th-century engineers
8th-century inventors
8th-century mathematicians
Astronomical instrument makers
Chinese Buddhists
Chinese inventors
Chinese mechanical engineers
Chinese scholars of Buddhism
Chinese science writers
Chinese scientific instrument makers
Engineers from Henan
Hydraulic engineers
Mathematicians from Henan
Medieval Chinese mathematicians
Philosophers from Henan
Tang dynasty Buddhist monks
Tang dynasty philosophers
Technical writers
Writers from Puyang | Yi Xing | Astronomy | 1,331 |
1,388,539 | https://en.wikipedia.org/wiki/Chagan%20%28nuclear%20test%29 | Chagan () was a Soviet underground nuclear test conducted at the Semipalatinsk Test Site on January 15, 1965.
Description
Chagan was the first and largest of the 124 detonations in the Nuclear Explosions for the National Economy program, designed to produce peaceful nuclear explosions (PNEs) for earth-moving purposes. The concept of using PNEs to create artificial lakes, harbors and canals was modeled after a United States program, Project Plowshare, which conducted the first peaceful nuclear explosion (the 104 kt Sedan shallow cratering test) at the Nevada Test Site in July 1962.
Described as a "near clone" of the Sedan shot, Chagan had a yield equivalent to 140 kilotons of TNT and was intended to produce a large conical crater suitable for a lake. The site was a dry bed of the Chagan River (a tributary of the Irtysh River) at the edge of the Semipalatinsk Test Site, and was chosen such that the lip of the crater would dam the river during its high spring flow. The resultant lake has a diameter of and is deep.
Shallow subsurface (open) cratering explosions such as Sedan or Chagan release a great deal of steam and pulverized rock along with approximately 20% of the device's fission products into the atmosphere. Although the vast majority of this fallout was deposited in the general area of the test, it also produced a small but measurable radioactive plume, which in Chagan's case was detected over Japan and initially prompted complaints from the US that the Soviets were violating the provisions of the October 1963 Limited Test Ban Treaty, which banned atmospheric tests and any vented (or "open") subsurface detonation which caused "radioactive debris to be present outside the territorial limits of the State under whose jurisdiction or control such explosion is conducted".
The device itself was a low fission-fraction design, meaning it produced only a small portion of its yield from fission and hence produced less fallout than a military device, which is generally designed for low weight and/or size rather than for fallout considerations. The device had a primary (fission) stage of and a purely thermonuclear secondary stage.
The photo of the Chagan shot is occasionally confused with that of the Soviet Joe 1 test. The correct image shows a squat, ground-level cloud similar to the Sedan shot rather than the tall mushroom cloud of the tower-detonated Joe-1.
Lake Chagan
Lake Chagan or Lake Shagan, also known as Balapan, is a lake created at the confluence of the rivers Shagan and Ashchysu by the Chagan nuclear test, roughly in size; it is still radioactive, and has been called the "Atomic Lake". As at the Trinity site of the first United States nuclear weapon test in Alamogordo, New Mexico, the exposed rock and sand were melted into a glassy substance called trinitite.
See also
Sedan (nuclear test) – an American cratering detonation
References
External links
The Soviet program for peaceful uses of nuclear weapons
On the Soviet nuclear program
1965 in the Kazakh Soviet Socialist Republic
1965 in military history
Explosions in 1965
Peaceful nuclear explosions
Soviet nuclear weapons testing
Underground nuclear weapons testing
January 1965 events in Asia | Chagan (nuclear test) | Chemistry | 651 |
342,078 | https://en.wikipedia.org/wiki/Gimbal | A gimbal is a pivoted support that permits rotation of an object about an axis. A set of three gimbals, one mounted on the other with orthogonal pivot axes, may be used to allow an object mounted on the innermost gimbal to remain independent of the rotation of its support (e.g. vertical in the first animation). For example, on a ship, the gyroscopes, shipboard compasses, stoves, and even drink holders typically use gimbals to keep them upright with respect to the horizon despite the ship's pitching and rolling.
The gimbal suspension used for mounting compasses and the like is sometimes called a Cardan suspension after Italian mathematician and physicist Gerolamo Cardano (1501–1576) who described it in detail. However, Cardano did not invent the gimbal, nor did he claim to. The device has been known since antiquity, first described in the 3rd c. BC by Philo of Byzantium, although some modern authors support the view that it may not have a single identifiable inventor.
History
The gimbal was first described by the Greek inventor Philo of Byzantium (280–220 BC). Philo described an eight-sided ink pot with an opening on each side, which can be turned so that while any face is on top, a pen can be dipped and inked — yet the ink never runs out through the holes of the other sides. This was done by the suspension of the inkwell at the center, which was mounted on a series of concentric metal rings so that it remained stationary no matter which way the pot is turned.
In Ancient China, the Han dynasty (202 BC – 220 AD) inventor and mechanical engineer Ding Huan created a gimbal incense burner around 180 AD. There is a hint in the writing of the earlier Sima Xiangru (179–117 BC) that the gimbal existed in China since the 2nd century BC. There is mention during the Liang dynasty (502–557) that gimbals were used for hinges of doors and windows, while an artisan once presented a portable warming stove to Empress Wu Zetian (r. 690–705) which employed gimbals. Extant specimens of Chinese gimbals used for incense burners date to the early Tang dynasty (618–907), and were part of the silver-smithing tradition in China.
The authenticity of Philo's description of a cardan suspension has been doubted by some authors on the ground that the part of Philo's Pneumatica which describes the use of the gimbal survived only in an Arabic translation of the early 9th century. Thus, as late as 1965, the sinologist Joseph Needham suspected Arab interpolation. However, Carra de Vaux, author of the French translation which still provides the basis for modern scholars, regards the Pneumatics as essentially genuine. The historian of technology George Sarton (1959) also asserts that it is safe to assume the Arabic version is a faithful copying of Philo's original, and credits Philon explicitly with the invention. So does his colleague Michael Lewis (2001). In fact, research by the latter scholar (1997) demonstrates that the Arab copy contains sequences of Greek letters which fell out of use after the 1st century, thereby strengthening the case that it is a faithful copy of the Hellenistic original, a view recently also shared by the classicist Andrew Wilson (2002).
The ancient Roman author Athenaeus Mechanicus, writing during the reign of Augustus (30 BC–14 AD), described the military use of a gimbal-like mechanism, calling it "little ape" (pithêkion). When preparing to attack coastal towns from the sea-side, military engineers used to yoke merchant-ships together to take the siege machines up to the walls. But to prevent the shipborne machinery from rolling around the deck in heavy seas, Athenaeus advises that "you must fix the pithêkion on the platform attached to the merchant-ships in the middle, so that the machine stays upright in any angle".
After antiquity, gimbals remained widely known in the Near East. In the Latin West, reference to the device appeared again in the 9th-century recipe book called the Little Key of Painting (Mappae clavicula). The French inventor Villard de Honnecourt depicts a set of gimbals in his sketchbook. In the early modern period, dry compasses were suspended in gimbals.
Applications
Inertial navigation
In inertial navigation, as applied to ships and submarines, a minimum of three gimbals are needed to allow an inertial navigation system (stable table) to remain fixed in inertial space, compensating for changes in the ship's yaw, pitch, and roll. In this application, the inertial measurement unit (IMU) is equipped with three orthogonally mounted gyros to sense rotation about all axes in three-dimensional space. The gyro outputs are kept to a null through drive motors on each gimbal axis, to maintain the orientation of the IMU. To accomplish this, the gyro error signals are passed through "resolvers" mounted on the three gimbals, roll, pitch and yaw. These resolvers perform an automatic matrix transformation according to each gimbal angle, so that the required torques are delivered to the appropriate gimbal axis. The yaw torques must be resolved by roll and pitch transformations. The gimbal angle is never measured.
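A toy sketch of the resolver step follows; the angles, helper names, and frame conventions are all illustrative assumptions of ours, and a real stable-platform loop is considerably more involved. A torque demanded about the platform's yaw axis must be re-expressed through the current roll and pitch gimbal angles before it can be applied to the physical gimbal motors:

```python
import numpy as np

def rx(a):  # rotation about the roll (x) axis
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def ry(a):  # rotation about the pitch (y) axis
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

# Hypothetical gimbal angles read from the roll and pitch gimbals.
roll, pitch = np.radians(20.0), np.radians(35.0)

# Torque demanded about the platform yaw (z) axis, in platform coordinates.
torque_platform = np.array([0.0, 0.0, 1.0])

# "Resolve" it through the roll and pitch transformations to find the
# equivalent torque components along the physical gimbal axes.
torque_gimbal = rx(roll).T @ ry(pitch).T @ torque_platform
print(torque_gimbal.round(3))
```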
Similar sensing platforms are used on aircraft.
In inertial navigation systems, gimbal lock may occur when vehicle rotation causes two of the three gimbal rings to align with their pivot axes in a single plane. When this occurs, it is no longer possible to maintain the sensing platform's orientation.
Rocket engines
In spacecraft propulsion, rocket engines are generally mounted on a pair of gimbals to allow a single engine to vector thrust about both the pitch and yaw axes; or sometimes just one axis is provided per engine. To control roll, twin engines with differential pitch or yaw control signals are used to provide torque about the vehicle's roll axis.
Photography and imaging
Gimbals are also used to mount everything from small camera lenses to large photographic telescopes.
In portable photography equipment, single-axis gimbal heads are used in order to allow a balanced movement for camera and lenses. This proves useful in wildlife photography as well as in any other case where very long and heavy telephoto lenses are adopted: a gimbal head rotates a lens around its center of gravity, thus allowing for easy and smooth manipulation while tracking moving subjects.
Very large gimbal mounts in the form 2 or 3 axis altitude-altitude mounts are used in satellite photography for tracking purposes.
Gyrostabilized gimbals which house multiple sensors are also used for airborne surveillance applications including airborne law enforcement, pipe and power line inspection, mapping, and ISR (intelligence, surveillance, and reconnaissance). Sensors include thermal imaging, daylight and low-light cameras, as well as laser rangefinders and illuminators.
Gimbal systems are also used in scientific optics equipment. For example, they are used to rotate a material sample along an axis to study their angular dependence of optical properties.
Film and video
Handheld 3-axis gimbals are used in stabilization systems designed to give the camera operator the independence of handheld shooting without camera vibration or shake. There are two versions of such stabilization systems: mechanical and motorized.
Mechanical gimbals have the sled, which includes the top stage where the camera is attached, the post, which in most models can be extended, with the monitor and batteries at the bottom to counterbalance the camera weight. This is how the Steadicam stays upright, by simply making the bottom slightly heavier than the top, pivoting at the gimbal. This leaves the center of gravity of the whole rig, however heavy it may be, exactly at the operator's fingertip, allowing deft and finite control of the whole system with the lightest of touches on the gimbal.
Powered by three brushless motors, motorized gimbals have the ability to keep the camera level on all axes as the camera operator moves the camera. An inertial measurement unit (IMU) responds to movement and utilizes its three separate motors to stabilize the camera. With the guidance of algorithms, the stabilizer is able to notice the difference between deliberate movement such as pans and tracking shots from unwanted shake. This allows the camera to seem as if it is floating through the air, an effect achieved by a Steadicam in the past. Gimbals can be mounted to cars and other vehicles such as drones, where vibrations or other unexpected movements would make tripods or other camera mounts unacceptable. An example which is popular in the live TV broadcast industry is the Newton 3-axis camera gimbal.
Marine chronometers
The rate of a mechanical marine chronometer is sensitive to its orientation. Because of this, chronometers were normally mounted on gimbals, in order to isolate them from the rocking motions of a ship at sea.
Gimbal lock
Gimbal lock is the loss of one degree of freedom in a three-dimensional, three-gimbal mechanism that occurs when the axes of two of the three gimbals are driven into a parallel configuration, "locking" the system into rotation in a degenerate two-dimensional space.
The word lock is misleading: no gimbal is restrained. All three gimbals can still rotate freely about their respective axes of suspension. Nevertheless, because of the parallel orientation of two of the gimbals' axes there is no gimbal available to accommodate rotation about one axis.
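The degeneracy can be seen numerically; the sketch below is a minimal illustration of ours, not from the article. It builds yaw-pitch-roll rotation matrices and shows that at 90° of pitch, only the difference between roll and yaw matters, so one degree of freedom is lost:

```python
import numpy as np

def rot(yaw, pitch, roll):
    """Rotation matrix for intrinsic yaw (z), pitch (y), roll (x), in radians."""
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    return Rz @ Ry @ Rx

p = np.pi / 2  # 90 degrees of pitch: the gimbal-lock configuration
a = rot(np.radians(10), p, np.radians(20))  # roll - yaw = 10 degrees
b = rot(np.radians(40), p, np.radians(50))  # roll - yaw = 10 degrees again
print(np.allclose(a, b))  # True: distinct (yaw, roll) pairs, same attitude
```

Each axis can still be driven freely, as the text says; the loss is that distinct settings of two angles now produce the same attitude, so rotation about one axis can no longer be accommodated independently.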
See also
Canfield joint
Heligimbal
Universal joint
Cardan shaft
Keyhole problem
Trunnion
References
External links
Ancient Roman technology
Chinese inventions
Greek inventions
Gyroscopes
Hellenistic engineering
Mechanisms (engineering) | Gimbal | Engineering | 2,083 |
62,571,269 | https://en.wikipedia.org/wiki/Single-entity%20electrochemistry | Single-Entity Electrochemistry (SEE) refers to the electroanalysis of an individual unit of interest. A unique feature of SEE is that it unifies multiple different branches of electrochemistry. Single-Entity Electrochemistry pushes the bounds of the field, as it can measure entities on scales from 100 microns down to angstroms. Single-Entity Electrochemistry is important because it gives the ability to view how a single molecule, cell, or "thing" affects the bulk response, and thus reveals chemistry that might otherwise have gone unknown. The ability to monitor the movement of one electron or ion from one unit to another is valuable, as many vital reactions and mechanisms undergo this process. Electrochemistry is well suited for this measurement due to its incredible sensitivity. Single-Entity Electrochemistry can be used to investigate nanoparticles, wires, vesicles, nanobubbles, nanotubes, cells, viruses, and other small molecules and ions. Single-entity electrochemistry has been successfully used to determine the size distribution of particles as well as the number of particles present inside a vesicle or other similar structures.
Early history
Coulter Counter
The Coulter Counter was created by Wallace H. Coulter in 1949. The Coulter counter consists of two electrolyte reservoirs that are connected by a small channel, through which a current of ions flows. Each particle drawn through the channel causes a brief change to the electrical resistance of the liquid. The change in the electrical resistance causes a disturbance in the electric field. The counter detects these changes in electrical resistance; the size of a particle is proportional to the magnitude of the disturbance it produces in the electric field.
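A rough order-of-magnitude sketch of the resistive pulse follows; the small-particle approximation ΔR ≈ 4ρd³/(πD⁴) and all numerical values here are assumptions of ours for illustration, not figures from the article:

```python
import math

rho = 0.7  # electrolyte resistivity, ohm*m (assumed, saline-like)
D = 50e-6  # aperture diameter, m (assumed)
L = 60e-6  # aperture length, m (assumed)
d = 5e-6   # particle diameter, m (assumed)

R_base = 4 * rho * L / (math.pi * D ** 2)   # open-aperture resistance
dR = 4 * rho * d ** 3 / (math.pi * D ** 4)  # small-particle pulse height
print(f"baseline {R_base:.0f} ohm, pulse {dR:.2f} ohm, "
      f"relative {dR / R_base:.2e}")        # relative ~ d^3 / (L * D^2)
```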
Patch-Clamp Electrophysiology
Patch-Clamp Electrophysiology was developed by Neher and Sakmann in 1976. This technique allowed measurements of individual proteins through ion channels. A glass pipette was fixed to the cell membrane, and the ion currents though the ion channels were measured. The Patch-Clamp method increased the sensitivity of detection by three orders of magnitude over previous methods, and the time resolution for the measurements was decreased to nearly 10 microseconds. The success of this method was a result of the ability to create a high resistance seal between the glass micropipette and the cell membrane; isolating the system chemically and electrically.
Single-Cell Electrochemistry
While it is useful to study bulk cell entities, there is an underlying need to study an individual cell, as this provides a better understanding of how it contributes to the entity as a whole. It was found that electrochemical techniques could analyze cells without interrupting cellular activity and could provide a highly resolved signal. This analysis was first performed by Wightman in 1982. In this method, a carbon microfiber electrode is placed near the studied cell; this electrode can monitor the cell via voltammetry or amperometry. Before the measurement can be taken, the cell must be stimulated by an ejection pipette to cause a cellular release, which can then be measured via the aforementioned methods. From this method, it was seen that instrumental advances were needed in order to perform quality SEE measurements.
Single-Molecule Redox Cycling
Single-molecule electrochemistry is an electrochemical technique used to study the faradaic response of redox molecules in electrochemical environments. The ability to study single molecules gives rise to the potential of developing ultra-sensitive sensors, which are necessary in SEE. Building on the work of Bard and Fan, this technique has advanced greatly through the use of redox cycling. Redox cycling amplifies a charge transfer by reducing and oxidizing a molecule multiple times as it diffuses between electrodes. Specifically, in this technique an insulated nano-electrode tip is placed near a substrate electrode to form an ultra-small electrochemical chamber. Molecules become trapped in this chamber, where the redox cycling and charge amplification occur, allowing for detection of single molecules. The charge amplification of redox reactions provided by this technique helped improve SEE measurements, lowering the detection limits, which need to be very low for SEE.
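To see why redox cycling makes a single molecule detectable, consider the following order-of-magnitude sketch (all values are illustrative assumptions of ours): a molecule shuttling across a nanometre-scale gap transfers one electron per crossing, and the diffusion-limited crossing rate turns into a picoampere-scale current:

```python
e = 1.602e-19  # elementary charge, C
D = 5e-10      # diffusion coefficient, m^2/s (assumed, typical small molecule)
gap = 10e-9    # electrode separation, m (assumed)

transit_time = gap ** 2 / (2 * D)  # mean 1-D diffusion time across the gap
current = e / transit_time         # one electron shuttled per crossing
print(f"{current * 1e12:.1f} pA per molecule")  # ~1.6 pA
```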
Applications
Single-Cell Electrochemistry
With the advance of nanoscale electrodes, the resolution of SEE has advanced from being able to detect single cells to detecting single molecules within cells. Nanoscale electrodes are small enough they can be inserted into the synapses between neurons, which can be used to detect neurotransmitter concentrations. If the electrode is thin enough, it can be inserted directly into a cell and used to detect concentrations of intracellular molecules, such as metabolites or even DNA.
Optoelectrochemical Imaging
Plasmonic nanoparticles can be individually analyzed through optoelectrochemical imaging (in which electrochemical processes are measured by optical means). When electrochemistry is performed on a nanoparticle, the refractive index of its environment will change resulting in a shift of the localized surface plasmon resonance. The spectral difference can be measured through characterization techniques such as darkfield microscopy to monitor electrochemical reactions at the surface of plasmonic nanoparticles.
Plasmonics-based electrochemical current microscopy (PECM) measures the contrast that appears from the interference of localized surface plasmon scattered light and reflected light that, like above, is sensitive to changes in the refractive index. This can be used to quantify the electrocatalytic reactions occurring at Pt nanoparticles. Since nanoparticles are inherently heterogenous (which affects catalytic activity), SEE methods can provide more information than traditional methods that measure the average of an ensemble of nanoparticles.
Single Enzyme Electron Transferring
At present, single entity electrochemistry is not sensitive enough to quantify the turnover of a single enzyme.
References
Electrochemistry | Single-entity electrochemistry | Chemistry | 1,202 |
4,053,506 | https://en.wikipedia.org/wiki/Sanguinarine | Sanguinarine is a polycyclic quaternary alkaloid. It is extracted from some plants, including the bloodroot plant, from whose scientific name, Sanguinaria canadensis, its name is derived; the Mexican prickly poppy (Argemone mexicana); Chelidonium majus; and Macleaya cordata.
Toxicity
Sanguinarine is a toxin that kills animal cells through its action on the Na+/K+-ATPase transmembrane protein. Epidemic dropsy is a disease that results from ingesting sanguinarine.
If applied to the skin, sanguinarine may cause a massive scab of dead flesh where it killed the cells where it was applied, called an eschar. For this reason, sanguinarine is termed an escharotic.
It is said to be 2.5 times more toxic than dihydrosanguinarine.
Alternative medicine
Native Americans once used sanguinarine in the form of bloodroot as a medical remedy, believing it had curative properties as an emetic, respiratory aid, and for a variety of ailments. In Colonial America, sanguinarine from bloodroot was used as a wart remedy. Later, in 1869, William Cook's The Physiomedical Dispensatory included information on the preparation and uses of sanguinarine. During the 1920s and 1930s, sanguinarine was the chief component of "Pinkard's Sanguinaria Compound," a drug sold by Dr. John Henry Pinkard. Pinkard advertised the compound as "a treatment, remedy, and cure for pneumonia, coughs, weak lungs, asthma, kidney, liver, bladder, or any stomach troubles, and effective as a great blood and nerve tonic." In 1931, several samples of the compound were seized by federal officials who determined Pinkard's claims to be fraudulent. Pinkard pleaded guilty in court and accepted a fine of $25.00.
More recently, sanguinarine from bloodroot has been promoted by many alternative medicine companies as a treatment or cure for cancer; however, the U.S. Food and Drug Administration warns that products containing bloodroot, or other sanguinarine-based plants, have no proven anti-cancer effects, and that they should be avoided on those grounds. Meanwhile, Australian Therapeutic Goods Administration also advise consumers not to purchase or use products marketed as containing Sanguinaria canadensis to cure or treat cancer, including certain types of skin cancer. Indeed, oral use of such products has been associated with oral leukoplakia, a possible precursor of oral cancer. In addition, the escharotic form of sanguinarine, applied to the skin for skin cancers, may leave cancerous cells alive in the skin while creating a significant scar. For this reason it is not recommended as a skin cancer treatment.
Biosynthesis
In plants, sanguinarine biosynthesis begins with 4-hydroxyphenyl-acetaldehyde and dopamine. These two compounds are combined to form norcoclaurine. Next, methyl groups are added to form N-methylcoclaurine. The enzyme CYP80B1 subsequently adds a hydroxyl group, forming 3'-hydroxy-N-methylcoclaurine. The addition of another methyl group transforms this compound into reticuline.
Notably, biosynthesis of sanguinarine up to this point is virtually identical to that of morphine. However, instead of being converted to codeinone (as in the biosynthesis of morphine), reticuline is converted to scoulerine via berberine bridge enzyme (BBE). As such, this is the commitment step in the sanguinarine pathway. Although it is unknown exactly how scoulerine proceeds down the biosynthetic pathway, it is eventually converted to dihydrosanguinarine. The precursor to sanguinarine, dihydrosanguinarine is converted to the final toxin via the action of dihydrobenzophenanthridine oxidase.
See also
Berberine, a plant-derived compound having a chemical classification similar to that of sanguinarine.
Chelidonine
References
Isoquinoline alkaloids
Quinoline alkaloids
Quaternary ammonium compounds
Alkaloids found in Papaveraceae
Toxins | Sanguinarine | Chemistry,Environmental_science | 925 |
576,694 | https://en.wikipedia.org/wiki/C-terminus | The C-terminus (also known as the carboxyl-terminus, carboxy-terminus, C-terminal tail, carboxy tail, C-terminal end, or COOH-terminus) is the end of an amino acid chain (protein or polypeptide), terminated by a free carboxyl group (-COOH). When the protein is translated from messenger RNA, it is created from N-terminus to C-terminus. The convention for writing peptide sequences is to put the C-terminal end on the right and write the sequence from N- to C-terminus.
Chemistry
Each amino acid has a carboxyl group and an amine group. Amino acids link to one another to form a chain by a dehydration reaction which joins the amine group of one amino acid to the carboxyl group of the next. Thus polypeptide chains have an end with an unbound carboxyl group, the C-terminus, and an end with an unbound amine group, the N-terminus. Proteins are naturally synthesized starting from the N-terminus and ending at the C-terminus.
Function
C-terminal retention signals
While the N-terminus of a protein often contains targeting signals, the C-terminus can contain retention signals for protein sorting. The most common ER retention signal is the amino acid sequence -KDEL (Lys-Asp-Glu-Leu) or -HDEL (His-Asp-Glu-Leu) at the C-terminus. This keeps the protein in the endoplasmic reticulum and prevents it from entering the secretory pathway.
Peroxisomal targeting signal
The sequence -SKL (Ser-Lys-Leu) or similar near C-terminus serves as peroxisomal targeting signal 1, directing the protein into peroxisome.
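A minimal sketch of scanning a sequence for these C-terminal signals (the toy sequences and function name are hypothetical, not from the article):

```python
def c_terminal_signals(protein):
    """Report the C-terminal sorting motifs described above."""
    signals = []
    if protein.endswith(("KDEL", "HDEL")):
        signals.append("ER retention")
    if protein.endswith("SKL"):
        signals.append("peroxisomal targeting (PTS1)")
    return signals

# Hypothetical toy sequences ending in the motifs of interest.
print(c_terminal_signals("MKTAYIAKQRQISFVKDEL"))  # ['ER retention']
print(c_terminal_signals("MLSRAVCGTSKL"))         # ['peroxisomal targeting (PTS1)']
```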
C-terminal modifications
The C-terminus of proteins can be modified posttranslationally, most commonly by the addition of a lipid anchor to the C-terminus that allows the protein to be inserted into a membrane without having a transmembrane domain.
Prenylation
One form of C-terminal modification is prenylation. During prenylation, a farnesyl- or geranylgeranyl-isoprenoid membrane anchor is added to a cysteine residue near the C-terminus. Small, membrane-bound G proteins are often modified this way.
GPI anchors
Another form of C-terminal modification is the addition of a phosphoglycan, glycosylphosphatidylinositol (GPI), as a membrane anchor. The GPI anchor is attached to the C-terminus after proteolytic cleavage of a C-terminal propeptide. The most prominent example for this type of modification is the prion protein.
Methylation
C-terminal leucine is methylated at the carboxyl group by the enzyme leucine carboxyl methyltransferase 1 in vertebrates, forming a methyl ester.
C-terminal domain
The C-terminal domain of some proteins has specialized functions. In humans, the CTD of RNA polymerase II typically consists of up to 52 repeats of the sequence Tyr-Ser-Pro-Thr-Ser-Pro-Ser. This allows other proteins to bind to the C-terminal domain of RNA polymerase in order to activate polymerase activity. These domains are then involved in the initiation of DNA transcription, the capping of the RNA transcript, and attachment to the spliceosome for RNA splicing.
See also
N-terminus
TopFIND, a scientific database covering proteases, their cleavage site specificity, substrates, inhibitors and protein termini originating from their activity
References
Post-translational modification
Protein structure | C-terminus | Chemistry | 780 |
14,623,014 | https://en.wikipedia.org/wiki/Weierstrass%20ring | In mathematics, a Weierstrass ring, named by Nagata after Karl Weierstrass, is a commutative local ring that is Henselian, pseudo-geometric, and such that any quotient ring by a prime ideal is a finite extension of a regular local ring.
Examples
The Weierstrass preparation theorem can be used to show that the ring of convergent power series over the complex numbers in a finite number of variables is a Weierstrass ring. The same is true if the complex numbers are replaced by a perfect field with a valuation.
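For orientation, one standard statement of the preparation theorem invoked here (a sketch in our own notation, not quoted from the article):

```latex
% One common formulation of the Weierstrass preparation theorem.
% Notation: z' = (z_1, \dots, z_{n-1}) collects the first n-1 variables.
Let $f \in \mathbb{C}\{z_1,\dots,z_n\}$ with $f(0,\dots,0,z_n) \not\equiv 0$,
vanishing to order $k$ in $z_n$. Then there exist a unit
$u \in \mathbb{C}\{z_1,\dots,z_n\}$ and a Weierstrass polynomial
\[
  W(z', z_n) = z_n^{k} + a_{k-1}(z')\,z_n^{k-1} + \cdots + a_0(z'),
  \qquad a_i \in \mathbb{C}\{z_1,\dots,z_{n-1}\},\ a_i(0) = 0,
\]
such that $f = u \cdot W$, and this factorization is unique.
```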
Every ring that is a finitely-generated module over a Weierstrass ring is also a Weierstrass ring.
References
Bibliography
Commutative algebra | Weierstrass ring | Mathematics | 151 |
599,673 | https://en.wikipedia.org/wiki/New%20Civil%20Engineer | New Civil Engineer is the monthly magazine for members of the Institution of Civil Engineers (ICE), the UK chartered body that oversees the practice of civil engineering in the UK. First published in May 1972, it is today published by Metropolis. Under its previous publisher, Ascential, who, as Emap, acquired the title and editorial control from the ICE in 1995, the ICE regularly discussed the magazine's content through an editorial advisory board and a supervisory board.
Available in print and online after the appropriate subscription has been taken out (it is free for members of the ICE), the magazine is aimed at professionals in the civil engineering industry. It contains industry news and analysis, letters from subscribers, a directory of companies, with listings arranged by companies’ areas of work, and an appointments section. It also occasionally has details of university courses and graduate positions.
In 2013 it had a net circulation of more than 50,000 per issue. Two years later, this had dropped to 42,805, of which some 39,000 related to copies distributed to ICE members. Previously printed on a weekly basis, the magazine switched to a monthly format in December 2015.
New Civil Engineer was a co-founder of the British Construction Industry Awards.
In January 2017, Ascential announced its intention to sell 13 titles including New Civil Engineer; the 13 "heritage titles" were to be "hived off into a separate business while buyers are sought." The brands were purchased by Metropolis International Ltd (owner of the Property Week title since 2013) in a £23.5m cash deal, announced on 1 June 2017.
Jacqueline Whitelaw was the magazine's deputy editor from 1998 to 2009.
References
External links
1972 establishments in the United Kingdom
Ascential
Business magazines published in the United Kingdom
Engineering magazines
Civil engineering journals
Magazines established in 1972
Magazines published in London
Monthly magazines published in the United Kingdom
Science and technology magazines published in the United Kingdom
Professional and trade magazines published in the United Kingdom | New Civil Engineer | Engineering | 394 |
18,422,847 | https://en.wikipedia.org/wiki/Psilocybe%20galindoi | Psilocybe galindoi is a psychedelic mushroom in the section Mexicana, having psilocybin and psilocin as its main active compounds. It is also known as Psilocybe galindii. The species was named in honor of Mr. Carlos Galindo Arias and his family by Dr. Gastón Guzmán.
Description
Cap: 1.9 – 2 cm in diameter, conic to campanulate or umbonate, with a very slight papilla, glabrous, even to striate when moist, hygrophanous, brown or yellowish brown fading to pale ochraceous or straw color. Staining blue-green where injured.
Gills: Adnate, brown to dark purple brown, with whitish edges.
Stipe: 5 — 6.5 cm x 1 – 2 mm, equal, hollow, no annulus, reddish brown in the middle, darker towards the base with long rhizomorphic strands. Veil inconspicuous, except for some white appressed silky fibrils on the pileus.
Spores: Dark purple gray in deposit. (8.1)9.6 — 12(14) x 7.1 — 8 μm, subrhomboid in face view or subellipsoid in side view(around 1 μm), yellowish brown, thick walled with a broad germ pore.
Odor: Farinaceous
Taste: Farinaceous
Microscopic Features: Basidia: 18 — 24 x 7.2 — 9.6 μm, hyaline, 4-spored, ventricose. Pleurocystidia: 14.4 — 21 x 7 — 8.4 μm, hyaline, fusoid-ampullaceous, with short necks.
Distribution and habitat
Psilocybe galindoi is found growing gregariously in soil at higher elevations and in tall grass in or near Pinus-Quercus (pine with oak) forests in Mexico. The holotype location is Pie de la Cuesta, Jalisco, Mexico - a bit south of Guadalajara.
Consumption and cultivation
Like several other psilocybin mushrooms in the genus, a mushroom going under the name Psilocybe galindoi has been consumed by indigenous North American and Central American peoples for its entheogenic effects. This is a misidentification, as the mushroom they are cultivating is Psilocybe tampanensis, and the real Psilocybe galindoi is a synonym of Psilocybe mexicana.
In the Western world, sclerotia of a mushroom misidentified as Psilocybe galindoi are sometimes cultivated for entheogenic or medicinal use. The sclerotia usually have a lower content of active substances than the actual mushrooms themselves.
References
Guzman, G. The Genus Psilocybe: A Systematic Revision of the Known Species Including the History, Distribution and Chemistry of the Hallucinogenic Species. Beihefte zur Nova Hedwigia Heft 74. J. Cramer, Vaduz, Germany (1983) [now out of print].
External links
Mushroomobserver psilocybe galindoi
Entheogens
Psychoactive fungi
galindoi
Psychedelic tryptamine carriers
Fungi of North America
Fungi of South America
Taxa named by Gastón Guzmán
Fungus species | Psilocybe galindoi | Biology | 688 |
40,328,832 | https://en.wikipedia.org/wiki/Oleidesulfovibrio%20alaskensis | Oleidesulfovibrio alaskensis (formerly Desulfovibrio alaskensis) belongs to the sulfate-reducing bacteria. The type strain is Al1T (= NCIMB 13491T = DSM 16109T).
Biology
O. alaskensis has the ability to reduce radionuclides and heavy metals such as uranium and chromium to soluble and less toxic forms. The O. alaskensis strain G20 is an anaerobe with an optimal temperature range of 25°C–40°C. It is a Gram-negative, rod-shaped microbe that does not produce endospores and is arranged in single cells. This strain is not known to cause any disease.
Genomics
Several strains of O. alaskensis have been sequenced: G20 (DOE JGI, 2007-2011), RB2256, and DSM 16109 (DOE JGI 2013).
References
Further reading
Staley, James T., et al. "Bergey's manual of systematic bacteriology, vol. 3." Williams and Wilkins, Baltimore, MD (1989): 2250-2251.
Bélaich, Jean-Pierre, Mireille Bruschi, and Jean-Louis Garcia, eds. Microbiology and Biochemistry of Strict Anaerobes Involved in Interspecies Hydrogen Transfer. No. 54. Springer, 1990.
External links
Desulfovibrio in the List of Prokaryotic names with Standing in Nomenclature
Type strain of Oleidesulfovibrio alaskensis at BacDive - the Bacterial Diversity Metadatabase
Bacteria described in 2004
Desulfovibrio | Oleidesulfovibrio alaskensis | Biology | 340 |
44,948,385 | https://en.wikipedia.org/wiki/Tavis%20Ormandy | Tavis Ormandy is an English computer security white hat hacker. He is currently employed by Google and was formerly part of Google's Project Zero team.
Notable discoveries
Ormandy is credited with discovering severe vulnerabilities in LibTIFF, Sophos' antivirus software and Microsoft Windows.
With Natalie Silvanovich he discovered a severe vulnerability in FireEye products in 2015.
His findings with Sophos' products led him to write a 30-page paper entitled "Sophail: Applied attacks against Sophos Antivirus" in 2012, which concludes that the company was "working with good intentions" but is "ill-equipped to handle the output of one co-operative security researcher working in his spare time" and that its products shouldn't be used on high-value systems.
He also created an exploit in 2014 to demonstrate how a vulnerability in glibc known since 2005 could be used to gain root access on an affected machine running a 32-bit version of Fedora.
In 2016, he demonstrated multiple vulnerabilities in Trend Micro Antivirus on Windows related to the Password Manager, and vulnerabilities in Symantec security products.
In February 2017, he found and reported a critical bug in Cloudflare's infrastructure that leaked user-sensitive data along with requests, affecting millions of websites around the world. The bug has been referred to as Cloudbleed, in reference to the Heartbleed bug that Google co-discovered.
On or around May 15, 2023, he found and reported a vulnerability called Zenbleed (CVE-2023-20593) affecting all Zen 2 class processors.
References
External links
"Sophail: Applied attacks against Sophos Antivirus" - Ormandy's paper on insecurities in Sophos products
Google employees
Hackers
English computer programmers
Living people
Year of birth missing (living people) | Tavis Ormandy | Technology | 386 |
47,886,489 | https://en.wikipedia.org/wiki/Conservative%20transposition | Transposition is the process by which a specific genetic sequence, known as a transposon, is moved from one location of the genome to another. Simple, or conservative, transposition is a non-replicative mode of transposition: the transposon is completely removed from the genome and reintegrated into a new, non-homologous locus, so the same genetic sequence is conserved throughout the entire process. The site in which the transposon is reintegrated into the genome is called the target site. A target site can be in the same chromosome as the transposon or within a different chromosome. Conservative transposition uses the "cut-and-paste" mechanism driven by the catalytic activity of the enzyme transposase. Transposase acts like DNA scissors; it is an enzyme that cuts through double-stranded DNA to remove the transposon, then transfers and pastes it into a target site.
A simple, or conservative, transposon refers to the specific genetic sequence that is moved via conservative transposition. These sequences range in size from hundreds to thousands of nucleotide base pairs. A transposon contains genetic sequences that encode the proteins that mediate its own movement, but it can also carry genes for additional proteins. Transposase is encoded within the transposon DNA and used to facilitate its own movement, making the process self-sufficient within organisms. All simple transposons contain a transposase-encoding region flanked by terminal inverted repeats, but the additional genes within the transposon DNA can vary. Viruses, for example, encode the essential viral transposase needed for conservative transposition as well as protective coat proteins that allow them to survive outside of cells, thus promoting the spread of mobile genetic elements.
"Cut-and-paste" transposition method
The mechanism by which conservative transposition occurs is called the "cut-and-paste" method, which involves five main steps:
The transposase enzyme binds to the inverted repeat sequences flanking the ends of the transposon. Inverted repeats define the ends of transposons and provide recognition sites for transposase to bind.
The transposition complex forms. In this step the DNA bends and folds into a pre-excision synaptic complex so the two transposase enzymes can interact.
The interaction of these transposases activates the complex; transposase makes double-stranded breaks in the DNA and the transposon is fully excised.
The transposase enzymes locate, recognize and bind to the target site within the target DNA.
Transposase creates a double-stranded break in the DNA and integrates the transposon into the target site.
Both the excision and the insertion of the transposon leave single- or double-stranded gaps in the DNA, which are repaired by host enzymes such as DNA polymerase.
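The steps above can be mimicked with a toy string model. This is a sketch only: real transposition involves inverted-repeat recognition, strand chemistry, and host repair that a simple string edit ignores, and the genome, transposon sequence, and target coordinate below are invented for illustration.

```python
# Toy string model of conservative ("cut-and-paste") transposition.
# The genome, transposon sequence, and target index are invented examples.

def conservative_transposition(genome: str, transposon: str, target_index: int) -> str:
    """Excise `transposon` from `genome`, then reinsert it at `target_index`.

    Mirrors the non-replicative character of the mechanism: the element is
    removed before reintegration, so exactly one copy exists afterwards.
    """
    start = genome.find(transposon)
    if start == -1:
        raise ValueError("transposon not present in genome")

    # Step 3: double-stranded breaks at both ends; the transposon is excised.
    remainder = genome[:start] + genome[start + len(transposon):]

    # Steps 4-5: transposase binds the target site and integrates the element.
    if not 0 <= target_index <= len(remainder):
        raise ValueError("target site lies outside the genome")
    return remainder[:target_index] + transposon + remainder[target_index:]

genome = "AAAA" + "GGTTCCGGAACC" + "TTTT"      # transposon flanked by host DNA
moved = conservative_transposition(genome, "GGTTCCGGAACC", target_index=8)
assert moved.count("GGTTCCGGAACC") == 1        # copy number is conserved
print(moved)                                   # AAAATTTTGGTTCCGGAACC
```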
Scientific application
Researchers have developed gene transfer systems based on conservative transposition that can integrate new DNA into both invertebrate and vertebrate genomes. Scientists alter the genetic sequence of a transposon in a laboratory setting, then insert this sequence into a vector which is then inserted into a target cell. The transposase coding region of these transposons is replaced by a gene of interest intended to be integrated into the genome. Conservative transposition is induced by the expression of transposase from another source within the cell, since the transposon no longer contains the transposase coding region and is not self-sufficient. Generally a second vector is prepared and inserted into the cell for expression of transposase. This technique is used in the transgenesis and insertional mutagenesis research fields. The Sleeping Beauty transposon system is an example of a gene transfer system developed for use in vertebrates. Further development in integration-site preferences of transposable elements is expected to advance the technologies of human gene therapy.
References
Molecular biology | Conservative transposition | Chemistry,Biology | 816 |
39,976,047 | https://en.wikipedia.org/wiki/Rodica%20Simion | Rodica Eugenia Simion (January 18, 1955 – January 7, 2000) was a Romanian-American mathematician. She was the Columbian School Professor of Mathematics at George Washington University. Her research concerned combinatorics: she was a pioneer in the study of permutation patterns, and an expert on noncrossing partitions.
Biography
Simion was one of the top competitors in the Romanian national mathematical olympiads. She graduated from the University of Bucharest in 1974, and immigrated to the United States in 1976. She did her graduate studies at the University of Pennsylvania, earning a Ph.D. in 1981 under the supervision of Herbert Wilf. After teaching at Southern Illinois University and Bryn Mawr College, she moved to George Washington University in 1987, and became Columbian School Professor in 1997.
Recognition
She is included in a deck of playing cards featuring notable women mathematicians published by the Association of Women in Mathematics.
Research contributions
Simion's thesis research concerned the concavity and unimodality of certain combinatorially defined sequences, and included what Richard P. Stanley calls "a very influential result" that the zeros of certain polynomials are all real.
Next, with Frank Schmidt, she was one of the first to study the combinatorics of sets of permutations defined by forbidden patterns; she found a bijective proof that the stack-sortable permutations and the permutations formed by interleaving two monotonic sequences are equinumerous, and found combinatorial enumerations of many permutation classes. The "simsun permutations" were named after her and Sheila Sundaram, after their initial studies of these objects; a simsun permutation is a permutation in which, for all k, the subsequence of the smallest k elements has no three consecutive elements in decreasing order.
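The simsun condition is concrete enough to check by brute force. In the sketch below, the encoding of permutations as tuples of 1..n and the small-n enumeration are assumptions of the illustration; the printed counts agree with the Euler-number enumeration associated with these permutations.

```python
from itertools import permutations

# Brute-force check of the simsun property defined above: for every k, the
# subsequence of the k smallest values must contain no double descent
# (three consecutive entries in decreasing order).
def is_simsun(perm):
    n = len(perm)
    for k in range(1, n + 1):
        sub = [x for x in perm if x <= k]     # subsequence of the k smallest values
        if any(sub[i] > sub[i + 1] > sub[i + 2] for i in range(len(sub) - 2)):
            return False
    return True

# Prints 1, 2, 5, 16, 61 for n = 1..5 (the Euler numbers).
for n in range(1, 6):
    print(n, sum(is_simsun(p) for p in permutations(range(1, n + 1))))
```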
Simion also did extensive research on noncrossing partitions, and became "perhaps the world's leading authority" on them.
Other activities
Simion was the main organizer of an exhibit about mathematics, Beyond Numbers, at the Maryland Science Center, based in part on her earlier experience organizing a similar exhibit at George Washington University. She was also a leader in George Washington University's annual Summer Program for Women in Mathematics.
As well as being a mathematician, Simion was a poet and painter; her poem "Immigrant Complex" was published in a collection of mathematical poetry in 1979.
Selected publications
See also
Cyclohedron
References
1955 births
2000 deaths
20th-century Romanian mathematicians
20th-century American mathematicians
Romanian emigrants to the United States
Combinatorialists
University of Bucharest alumni
University of Pennsylvania alumni
Southern Illinois University faculty
Bryn Mawr College faculty
George Washington University faculty
20th-century American women scientists
20th-century American women mathematicians | Rodica Simion | Mathematics | 568 |
56,508,282 | https://en.wikipedia.org/wiki/Brigitte%20Servatius | Brigitte Irma Servatius (born 1954) is a mathematician specializing in matroids and structural rigidity. She is a professor of mathematics at Worcester Polytechnic Institute, and has been the editor-in-chief of the Pi Mu Epsilon Journal since 1999.
Education and career
Servatius is originally from Graz in Austria.
Her interest in mathematics was sparked by her participation in a national mathematical olympiad while she was a student at an all-girls gymnasium in Graz that specialized in language studies rather than mathematics, and she went on to earn master's degrees in mathematics and physics at the University of Graz.
She became a high school mathematics and science teacher in Leibnitz. She moved to the US in 1981, to begin doctoral studies at Syracuse University. She completed her Ph.D. in 1987, and joined the Worcester Polytechnic Institute faculty in the same year. Her dissertation, Planar Rigidity, was supervised by Jack Graver.
Contributions
While still in Austria, Servatius began working on combinatorial group theory, and her first publication (appearing while she was a graduate student) is in that subject.
She switched to the theory of structural rigidity for her doctoral research, and later became the author (with Jack Graver and Herman Servatius) of the book Combinatorial Rigidity (1993).
Another well-cited paper of hers in this area characterizes the planar Laman graphs, the minimally rigid graphs that can be embedded without crossings in the plane, as the graphs of pseudotriangulations, partitions of a plane region into subregions with three convex corners studied in computational geometry.
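The "minimally rigid" side of this characterization has a purely combinatorial description, Laman's counting condition, which can be checked by brute force. The sketch below is illustrative only (it is exponential in the number of vertices) and the example graphs are invented.

```python
from itertools import combinations

# Brute-force test of Laman's condition: a graph with n >= 2 vertices is
# generically minimally rigid in the plane iff it has exactly 2n - 3 edges
# and every subset of k >= 2 vertices spans at most 2k - 3 edges.
def is_laman(vertices, edges):
    if len(edges) != 2 * len(vertices) - 3:
        return False
    for k in range(2, len(vertices) + 1):
        for subset in combinations(vertices, k):
            s = set(subset)
            spanned = sum(1 for u, v in edges if u in s and v in s)
            if spanned > 2 * k - 3:
                return False
    return True

triangle = ([1, 2, 3], [(1, 2), (2, 3), (1, 3)])
square = ([1, 2, 3, 4], [(1, 2), (2, 3), (3, 4), (4, 1)])
print(is_laman(*triangle))  # True: the triangle is minimally rigid
print(is_laman(*square))    # False: a 4-cycle has too few edges (4 < 5)
```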
Servatius is also the co-editor of a book on matroid theory.
With Tomaž Pisanski she wrote the book Configurations from a Graphical Viewpoint (2013), on configurations of points and lines in the plane with the same number of points touching each two lines and the same number of lines touching each two points. Other topics in her research include graph duality and the triconnected components of infinite graphs.
Selected publications
References
External links
Home page
1954 births
Living people
Scientists from Graz
Austrian mathematicians
20th-century American mathematicians
21st-century American mathematicians
Group theorists
Graph theorists
University of Graz alumni
Syracuse University alumni
Worcester Polytechnic Institute faculty
20th-century American women mathematicians
21st-century American women mathematicians
Mathematicians from New York (state) | Brigitte Servatius | Mathematics | 474 |
2,054,659 | https://en.wikipedia.org/wiki/Aegirine | Aegirine is a member of the clinopyroxene group of inosilicate minerals. It is the sodium endmember of the aegirine–augite series. It has the chemical formula NaFeSi2O6, in which the iron is present as the ion Fe3+. In the aegirine–augite series, the sodium is variably replaced by calcium with iron(II) and magnesium replacing the iron(III) to balance the charge. Aluminum also substitutes for the iron(III). Acmite is a fibrous green-colored variety.
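The coupled substitution can be verified with a quick charge balance; this is a sketch following the formula given above, and writing it as a paired exchange is standard bookkeeping rather than a claim about specific crystallographic sites.

```latex
% Charge balance for the aegirine-augite coupled substitution:
% replacing Na+ with Ca2+ adds one positive charge, which is offset by
% replacing Fe3+ with Fe2+ (or Mg2+).
\[
\mathrm{Na^{+}} + \mathrm{Fe^{3+}} \;\rightleftharpoons\; \mathrm{Ca^{2+}} + \mathrm{(Fe^{2+}\ or\ Mg^{2+})},
\qquad (+1) + (+3) = (+2) + (+2) = +4 .
\]
```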
Aegirine occurs as dark green monoclinic prismatic crystals. It has a glassy luster and perfect cleavage. Its Mohs hardness varies from 5 to 6 and its specific gravity is between 3.2 and 3.4.
This mineral commonly occurs in alkalic igneous rocks, nepheline syenites, carbonatites and pegmatites. It also appears in regionally metamorphosed schists, gneisses, and iron formations; in blueschist facies rocks, and from sodium metasomatism in granulites. It may occur as an authigenic mineral in shales and marls. It occurs in association with potassic feldspar, nepheline, riebeckite, arfvedsonite, aenigmatite, astrophyllite, catapleiite, eudialyte, serandite and apophyllite.
Localities include Mont Saint-Hilaire, Quebec, Canada; Kongsberg, Norway; Narsarssuk, Greenland; Kola Peninsula, Russia; Magnet Cove, Arkansas, US; Kenya; Scotland and Nigeria.
The acmite variety was first described in 1821, at Kongsberg, Norway, and the aegirine variety in 1835 for an occurrence in Rundemyr, Øvre Eiker, Buskerud, Norway. Aegirine was named after Ægir, the Norse god of the sea. A synonym for the mineral is acmite (from Greek ἀκμή "point, edge") in reference to the typical pointed crystals.
It is sometimes used as a gemstone.
See also
List of minerals
References
External links
Mineral Galleries
Inosilicates
Sodium minerals
Iron(III) minerals
Pyroxene group
Monoclinic minerals
Minerals in space group 15
Gemstones
Minerals described in 1821 | Aegirine | Physics | 505 |
11,126,710 | https://en.wikipedia.org/wiki/List%20of%20AIGA%20medalists | Following is a list of AIGA medalists who have been awarded the American Institute of Graphic Arts medal.
On its website, AIGA says "The medal of the AIGA, the most distinguished in the field, is awarded to individuals in recognition of their exceptional achievements, services or other contributions to the field of graphic design and visual communication."
AIGA Medals have been awarded since 1920. Nine medals were awarded in the 1920s, seven in the 1930s, eight in the 1940s, 12 in the 1950s, 10 in the 1960s, 13 in the 1970s, 13 in the 1980s, 33 in the 1990s, and 45 in the 2000s.
2020s
2022
Andrew Satake Blauvelt
Emily Oberman
Louise Sandhaus
2021
Archie Boston, Jr.
Cheryl D. Miller
Terry Irwin
Thomas Miller (honorary)
2010s
2019
Alexander Girard
Geoff McFetridge
Debbie Millman
2018
Aaron Douglas
Arem Duplessis
Karin Fong
Susan Kare
Victor Moscoso
2017
Art Chantry
Emmett McBain
Rebeca Méndez
Mark Randall
Nancy Skolos and Tom Wedell
Lance Wyman
2016
Ruth Ansel
Richard Grefé
Maira Kalman
Gere Kavanaugh
Corita Kent
2015
Paola Antonelli
Hillman Curtis
Emory Douglas
Dan Friedman
Marcia Lausen
2014
Sean Adams and Noreen Morioka
Charles S. Anderson
Dana Arnett
Kenneth Carbone and Leslie Smolan
David Carson
Kyle Cooper
Michael Patrick Cronan
Richard Danne
Michael Donovan and Nancye Green
Stephen Doyle
Louise Fili
Bob Greenberg
Sylvia Harris
Cheryl Heller
Alexander Isley
Chip Kidd
Michael Mabry
J. Abbott Miller
Bill Moggridge
Gael Towey
Ann Willoughby
2013
John Bielenberg
William Drenttel
Tobias Frere-Jones
Jessica Helfand
Jonathan Hoefler
Stefan Sagmeister
Lucille Tenazas
Wolfgang Weingart
2011
Ralph Caplan
Elaine Lustig Cohen
Armin Hofmann
Robert Vogele
2010
Steve Frykholm
John Maeda
Jennifer Morla
2000s
2009
Pablo Ferro
Carin Goldberg
Doyald Young
2008
Gail Anderson
Clement Mok
LeRoy Winbush
2007
Edward Fella
Ellen Lupton
Bruce Mau
Georg Olden
2006
Michael Bierut
Rick Valicenti
Lorraine Wild
2005
Bart Crosby
Meredith Davis
Steff Geissbuhler
2004
Joseph Binder
Charles Coiner
Richard, Jean and Patrick Coyne
James Cross
Sheila Levrant de Bretteville
Jay Doblin
Joe Duffy
Martin Fox
Caroline Warner Hightower
Kit Hinrichs
Walter Landor
Philip Meggs
James Miho
Silas Rhodes
Jack Stauffacher
Alex Steinweiss
Deborah Sussman
Edward Tufte
Fred Woodward
Richard Saul Wurman
2003
B. Martin Pedersen
Woody Pirtle
2002
Robert Brownjohn
Chris Pullman
2001
Samuel Antupit
Paula Scher
2000
P. Scott Makela and Laurie Haycock Makela
Fred Seibert
Michael Vanderbyl
1990s
1999
Tibor Kalman
Steven Heller
Katherine McCoy
1998
Louis Danziger
April Greiman
1997
Lucian Bernhard
Zuzana Licko and Rudy VanderLans
1996
Cipe Pineles
George Lois
1995
Matthew Carter
Stan Richards
Ladislav Sutnar
1994
Muriel Cooper
John Massey
1993
Alvin Lustig
Tomoko Miho
1992
Rudolph de Harak
George Nelson
Lester Beall
1991
Colin Forbes
E. McKnight Kauffer
1990
Alvin Eisenman
Frank Zachary
1980s
Paul Davis, 1989
Bea Feitler, 1989
William Golden, 1988
George Tscherny, 1988
Alexey Brodovitch, 1987
Gene Federico, 1987
Walter Herdeg, 1986
Seymour Chwast, 1985
Leo Lionni, 1984
Herbert Matter, 1983
Massimo Vignelli and Lella Vignelli, 1982
Saul Bass, 1981
Herb Lubalin, 1980
1970s
Ivan Chermayeff and Thomas Geismar, 1979
Lou Dorfsman, 1978
Charles and Ray Eames, 1977
Henry Wolf, 1976
Jerome Snyder, 1976
Bradbury Thompson, 1975
Robert Rauschenberg, 1974
Richard Avedon, 1973
Allen Hurlburt, 1973
Philip Johnson, 1973
Milton Glaser, 1972
Will Burtin, 1971
Herbert Bayer, 1970
1960s
Dr. Robert L. Leslie, 1969
Dr. Giovanni Mardersteig, 1968
Romana Javitz, 1967
Paul Rand, 1966
Leonard Baskin, 1965
Josef Albers, 1964
Saul Steinberg, 1963
William Sandberg, 1962
Paul A. Bennett, 1961
Walter Paepcke, 1960
1950s
May Massee, 1959
Ben Shahn, 1958
Dr. M. F. Agha, 1957
Ray Nash, 1956
P. J. Conkwright, 1955
Will Bradley, 1954
Jan Tschichold, 1954
George Macy, 1953
Joseph Blumenthal, 1952
Harry L. Gage, 1951
Earnest Elmo Calkins, 1950
Alfred A. Knopf, 1950
1940s
Lawrence C. Wroth, 1948
Elmer Adler, 1947
Stanley Morison, 1946
Frederic G. Melcher, 1945
Edward Epstean, 1944
Edwin and Robert Grabhorn, 1942
Carl Purington Rollins, 1941
Thomas M. Cleland, 1940
1930s
William A. Kittredge, 1939
Rudolph Ruzicka, 1935
J. Thompson Willing, 1935
Henry Lewis Bullen, 1934
Porter Garnett, 1932
Dard Hunter, 1931
Henry Watson Kent, 1930
1920s
William A. Dwiggins, 1929
Timothy Cole, 1927
Frederic W. Goudy, 1927
Burton Emmett, 1926
Bruce Rogers, 1925
John G. Agar, 1924
Stephen H. Horgan, 1924
Daniel Berkeley Updike, 1922
Norman T. A. Munder, 1920
See also
Art Directors Club Hall of Fame
Masters Series (School of Visual Arts)
References
Design awards
AIGA | List of AIGA medalists | Engineering | 1,117 |
183,919 | https://en.wikipedia.org/wiki/Coccidioidomycosis | Coccidioidomycosis is a mammalian fungal disease caused by Coccidioides immitis or Coccidioides posadasii. It is commonly known as cocci, Valley fever, as well as California fever, desert rheumatism, or San Joaquin Valley fever. Coccidioidomycosis is endemic in certain parts of the United States in Arizona, California, Nevada, New Mexico, Texas, Utah, and northern Mexico.
Description
C. immitis is a dimorphic saprophytic fungus that grows as a mycelium in the soil and produces a spherule form in the host organism. It resides in the soil in certain parts of the southwestern United States, most notably in California and Arizona. It is also commonly found in northern Mexico, and parts of Central and South America. C. immitis is dormant during long dry spells, then develops as a mold with long filaments that break off into airborne spores when it rains. The spores, known as arthroconidia, are swept into the air by disruption of the soil, such as during construction, farming, low-wind or singular dust events, or an earthquake. Windstorms may also cause epidemics far from endemic areas. In December 1977, a windstorm in an endemic area around Arvin, California led to several hundred cases, including deaths, in non-endemic areas hundreds of miles away.
Coccidioidomycosis is a common cause of community-acquired pneumonia in the endemic areas of the United States. Infections usually occur due to inhalation of the arthroconidial spores after soil disruption. The disease is not contagious. In some cases the infection may recur or become chronic.
Classification
After Coccidioides infection, coccidioidomycosis begins with Valley fever, which is its initial acute form. Valley fever may progress to the chronic form and then to disseminated coccidioidomycosis. Therefore, coccidioidomycosis may be divided into the following types:
Acute coccidioidomycosis, sometimes described in literature as primary pulmonary coccidioidomycosis
Chronic coccidioidomycosis
Disseminated coccidioidomycosis, which includes primary cutaneous coccidioidomycosis
Signs and symptoms
An estimated 60% of people infected with the fungi responsible for coccidioidomycosis have minimal to no symptoms, while 40% will have a range of possible clinical symptoms. Of those who do develop symptoms, the primary infection is most often respiratory, with symptoms resembling bronchitis or pneumonia that resolve over a matter of a few weeks. In endemic regions, coccidioidomycosis is responsible for 20% of cases of community-acquired pneumonia. Notable coccidioidomycosis signs and symptoms include a profound feeling of tiredness, loss of smell and taste, fever, cough, headaches, rash, muscle pain, and joint pain. Fatigue can persist for many months after initial infection. The classic triad of coccidioidomycosis known as "desert rheumatism" includes the combination of fever, joint pains, and erythema nodosum.
A minority (3–5%) of infected individuals do not recover from the initial acute infection and develop a chronic infection. This can take the form of chronic lung infection or widespread disseminated infection (affecting the tissues lining the brain, soft tissues, joints, and bone). Chronic infection is responsible for most of the morbidity and mortality. Chronic fibrocavitary disease is manifested by cough (sometimes productive of mucus), fevers, night sweats and weight loss. Osteomyelitis, including involvement of the spine, and meningitis may occur months to years after initial infection. Severe lung disease may develop in HIV-infected persons.
Complications
Serious complications may occur in patients who have weakened immune systems, including severe pneumonia with respiratory failure and bronchopleural fistulas requiring resection, lung nodules, and possible disseminated form, where the infection spreads throughout the body. The disseminated form of coccidioidomycosis can devastate the body, causing skin ulcers, abscesses, bone lesions, swollen joints with severe pain, heart inflammation, urinary tract problems, and inflammation of the brain's lining, which can lead to death.
A particularly severe case of meningitis caused by valley fever in 2012 initially received several incorrect diagnoses such as sinus infections and cluster headaches. The patient became unable to work during diagnosis and original search for treatments. Eventually the right treatment was found—albeit with severe side effects—requiring four pills a day and medication administered directly into the brain every 16 weeks.
Cause
Rain starts the cycle of initial growth of the fungus in the soil. In soil (and in agar media), Coccidioides exist in filament form. It forms hyphae in both horizontal and vertical directions. Over a prolonged dry period, cells within hyphae degenerate to form alternating barrel-shaped cells (arthroconidia) which are light in weight and carried by air currents. This happens when the soil is disturbed, often by clearing trees, construction or farming. As the population grows, so do all these activities, causing a potential cascade effect. The more land that is cleared and the more arid the soil, the riper the environment for Coccidioides. These spores can be easily inhaled unknowingly. On reaching alveoli they enlarge in size to become spherules, and internal septations develop. This division of cells is made possible by the optimal temperature inside the body. Septations develop and form endospores within the spherule. Rupture of spherules release these endospores, which in turn repeat the cycle and spread the infection to adjacent tissues within the body of the infected individual. Nodules can form in lungs surrounding these spherules. When they rupture, they release their contents into bronchi, forming thin-walled cavities. These cavities can cause symptoms including characteristic chest pain, coughing up blood, and persistent cough. In individuals with a weakened immune system, the infection can spread through the blood. The fungus can also, rarely, enter the body through a break in the skin and cause infection.
Diagnosis
Coccidioidomycosis diagnosis relies on a combination of an infected person's signs and symptoms, findings on radiographic imaging, and laboratory results.
The disease is commonly misdiagnosed as bacterial community-acquired pneumonia. The fungal infection can be demonstrated by microscopic detection of diagnostic cells in body fluids, exudates, sputum and biopsy tissue by methods of Papanicolaou or Grocott's methenamine silver staining. These stains can demonstrate spherules and surrounding inflammation.
With specific nucleotide primers, C. immitis DNA can be amplified by polymerase chain reaction (PCR). It can also be detected in culture by morphological identification or by using molecular probes that hybridize with C. immitis RNA. C. immitis and C. posadasii cannot be distinguished on cytology or by symptoms, but only by DNA PCR.
An indirect demonstration of fungal infection can be achieved also by serologic analysis detecting fungal antigen or host IgM or IgG antibody produced against the fungus. The available tests include the tube-precipitin (TP) assays, complement fixation assays, and enzyme immunoassays. TP antibody is not found in cerebrospinal fluid (CSF). TP antibody is specific and is used as a confirmatory test, whereas ELISA is sensitive and thus used for initial testing.
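The screen-then-confirm logic (a sensitive ELISA first, a specific TP assay as confirmation) can be made concrete with a toy Bayes calculation. The sensitivities, specificities, and pre-test probability below are invented for illustration and are not published performance figures for these assays.

```python
# Toy positive-predictive-value calculation showing why a sensitive test is
# used for screening and a specific test for confirmation. All numbers are
# illustrative assumptions, not measured assay characteristics.

def positive_predictive_value(sensitivity: float, specificity: float,
                              pretest_probability: float) -> float:
    true_pos = sensitivity * pretest_probability
    false_pos = (1 - specificity) * (1 - pretest_probability)
    return true_pos / (true_pos + false_pos)

pretest = 0.05                                              # assumed prevalence
after_screen = positive_predictive_value(0.95, 0.85, pretest)
print(f"after sensitive screen: {after_screen:.0%}")        # ~25%

# Positives from the screen are retested; the screen's PPV becomes the new
# pre-test probability for the specific confirmatory test.
after_confirm = positive_predictive_value(0.75, 0.99, after_screen)
print(f"after specific confirm: {after_confirm:.0%}")       # ~96%
```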
If the meninges are affected, CSF will show abnormally low glucose levels, an increased level of protein, and lymphocytic pleocytosis. Rarely, CSF eosinophilia is present.
Imaging
Chest X-rays rarely demonstrate nodules or cavities in the lungs, but these images commonly demonstrate lung opacification, pleural effusions, or enlargement of lymph nodes associated with the lungs. Computed tomography scans of the chest are more sensitive than chest X-rays to detect these changes.
Prevention
Preventing coccidioidomycosis is challenging because it is difficult to avoid breathing in the fungus should it be present; however, understanding the public health effect of the disease is essential in areas where the fungus is endemic. Enhancing surveillance of coccidioidomycosis is key to preparedness in the medical field, in addition to improving diagnostics for early infections. There are no completely effective preventive measures available for people who live or travel through Valley fever-endemic areas. Recommended preventive measures include avoiding airborne dust or dirt, but this does not guarantee protection against infection. People in certain occupations may be advised to wear face masks. The use of air filtration indoors is also helpful, in addition to keeping skin injuries clean and covered to avoid skin infection.
From 1998 to 2011, there were 111,117 U.S. cases of coccidioidomycosis logged in the National Notifiable Diseases Surveillance System (NNDSS). Since many U.S. states do not require reporting of coccidioidomycosis, the actual numbers may be higher. The United States' Centers for Disease Control and Prevention (CDC) called the disease a "silent epidemic" and acknowledged that there is no proven anticoccidioidal vaccine available. A 2001 cost-effectiveness analysis indicated that a potential vaccine could improve health as well as reduce total health care expenditures among infants, teens, and immigrant adults, and more modestly improve health but increase total health care expenditures in older age groups.
Raising both surveillance and awareness of the disease while medical researchers develop a human vaccine can positively contribute towards prevention efforts. Research demonstrates that patients from endemic areas who are aware of the disease are most likely to request diagnostic testing for coccidioidomycosis. Presently, Meridian Bioscience manufactures an enzyme immunoassay (EIA) test used to diagnose Valley fever; however, the test is known for producing a fair quantity of false positives. Recommended prevention measures can include type-of-exposure-based respirator protection for persons engaged in agriculture, construction and other outdoor work in endemic areas. Dust control measures such as planting grass and wetting the soil, and limiting exposure to dust storms, are advisable for residential areas in endemic regions.
Treatment
Significant disease develops in fewer than 5% of those infected and typically occurs in those with a weakened immune system. Mild asymptomatic cases often do not require any treatment. Those with severe symptoms may benefit from antifungal therapy, which requires 3–6 months or more of treatment depending on the response to the treatment. There is a lack of prospective studies that examine optimal antifungal therapy for coccidioidomycosis.
On the whole, oral fluconazole and intravenous amphotericin B are used in progressive or disseminated disease, or in immunocompromised individuals. Amphotericin B was originally the only available treatment, but alternatives, including itraconazole and ketoconazole, became available for milder disease. Fluconazole is the preferred medication for coccidioidal meningitis, due to its penetration into CSF. Intrathecal or intraventricular amphotericin B therapy is used if infection persists after fluconazole treatment. Itraconazole is used for cases that involve treatment of infected persons' bones and joints. The antifungal medications posaconazole and voriconazole have also been used to treat coccidioidomycosis. Because the symptoms of coccidioidomycosis are similar to those of the common flu, pneumonia, and other respiratory diseases, it is important for public health professionals to be aware of the rise of coccidioidomycosis and the specifics of diagnosis. Greyhound dogs often get coccidioidomycosis; their treatment regimen involves 6–12 months of ketoconazole taken with food.
Toxicity
Conventional amphotericin B desoxycholate (AmB, used since the 1950s as a primary agent) is known to be associated with increased drug-induced nephrotoxicity impairing kidney function. Other formulations have been developed, such as lipid-based formulations, to mitigate side-effects such as direct proximal and distal tubular cytotoxicity. These include liposomal amphotericin B (AmBisome), amphotericin B lipid complex (Abelcet, an amphotericin B phospholipid complex), and amphotericin B colloidal dispersion (Amphotec, amphotericin B cholesteryl sulfate), all shown to exhibit a decrease in nephrotoxicity. The colloidal dispersion was not as effective in one study as amphotericin B desoxycholate, which had a 50% murine (rat and mouse) morbidity rate versus zero for the AmB colloidal dispersion.
The cost of the nephrotoxic AmB deoxycholate in 2015, for a patient at a 1 mg/kg/day dosage, was approximately US$63.80, compared to $1,318.80 for 5 mg/kg/day of the less toxic liposomal AmB.
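Reading the quoted figures as daily drug costs gives a rough sense of the price gap; that reading, and the 14-day course length below, are illustrative assumptions rather than clinical facts.

```python
# Rough comparison of the 2015 amphotericin B cost figures quoted above.
# Treating them as per-day costs and assuming a 14-day course are both
# illustrative assumptions, not clinical guidance.
daily_cost = {
    "AmB deoxycholate, 1 mg/kg/day": 63.80,
    "liposomal AmB, 5 mg/kg/day": 1318.80,
}
days = 14
for drug, cost in daily_cost.items():
    print(f"{drug}: ${cost * days:,.2f} per {days}-day course")

ratio = daily_cost["liposomal AmB, 5 mg/kg/day"] / daily_cost["AmB deoxycholate, 1 mg/kg/day"]
print(f"liposomal formulation costs about {ratio:.1f}x more per day")  # ~20.7x
```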
Epidemiology
Coccidioidomycosis is endemic to the western hemisphere between 40°N and 40°S, including certain parts of the United States in Arizona, California, Nevada, New Mexico, Texas, Utah, and northern Mexico. The ecological niches are characterized by hot summers and mild winters with an annual rainfall of 10–50 cm.
The species are found in alkaline sandy soil, typically 10–30 cm below the surface. In harmony with the mycelium life cycle, incidence increases with periods of dryness after a rainy season; this phenomenon, termed "grow and blow", refers to growth of the fungus in wet weather, producing spores which are spread by the wind during succeeding dry weather. While the majority of cases are observed in the endemic region, cases reported outside the area generally occur in visitors who contract the infection and return to their native areas before becoming symptomatic.
North America
In the United States, C. immitis is endemic to southern and central California, with the highest presence in the San Joaquin Valley. C. posadasii is most prevalent in Arizona, although it can be found in a wider region spanning Utah, New Mexico, Texas, and Nevada. Approximately 25,000 cases are reported every year, although the total number of infections is estimated to be around 150,000 per year; the disease is underreported because many cases are asymptomatic, and those who do have symptoms are often difficult to distinguish from other causes of pneumonia if they are not specifically tested for valley fever. The incidence of coccidioidomycosis in the United States in 2011 (42.6 per 100,000) was almost ten times higher than the incidence reported in 1998 (5.3 per 100,000). In the areas where it is most prevalent, the infection rate is 2–4%.
Incidence varies widely across the west and southwest. In Arizona, for instance, in 2007, there were 3,450 cases in Maricopa County, which in 2007 had an estimated population of 3,880,181 for an incidence of approximately 1 in 1,125. In contrast, though southern New Mexico is considered an endemic region, there were 35 cases in the entire state in 2008 and 23 in 2007, in a region that had an estimated 2008 population of 1,984,356, for an incidence of approximately 1 in 56,695.
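The "1 in N" figures above follow from simple division. The quick check below uses only numbers quoted in the text; the rounding convention is an assumption, and the article's own figure for New Mexico rounds slightly differently.

```python
# Back-of-the-envelope check of the incidence figures quoted above.
cases_population = [
    ("Maricopa County, AZ (2007)", 3450, 3_880_181),
    ("New Mexico (2008)", 35, 1_984_356),
]
for label, cases, population in cases_population:
    print(f"{label}: about 1 in {population / cases:,.0f} "
          f"({100_000 * cases / population:.1f} per 100,000)")
```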
Infection rates vary greatly by county, and although population density is important, so are other factors that have not yet been proven. Greater construction activity may disturb spores in the soil. In addition, the effect of altitude on fungal growth and morphology has not been studied, and altitude can range from sea level to 10,000 feet or higher across California, Arizona, Utah and New Mexico.
In California from 2000 to 2007, there were 16,970 reported cases (5.9 per 100,000 people) and 752 deaths of the 8,657 people hospitalized. The highest incidence was in the San Joaquin Valley with 76% of the 16,970 cases (12,855) occurring in the area. Following the 1994 Northridge earthquake, there was a sudden increase of cases in the areas affected by the quake, at a pace of over 10 times baseline.
There was an outbreak in the summer of 2001 in Colorado, away from where the disease was considered endemic. A group of archeologists visited Dinosaur National Monument, and eight members of the crew, along with two National Park Service workers, were diagnosed with Valley fever.
California state prisons, beginning in 1919, have been particularly affected by coccidioidomycosis. In 2005 and 2006, the Pleasant Valley State Prison near Coalinga and Avenal State Prison near Avenal, on the western side of the San Joaquin Valley, had the highest incidence, at least 3,000 per 100,000. The receiver appointed in Plata v. Schwarzenegger issued an order in May 2013 requiring relocation of vulnerable populations in those prisons.
The incidence rate has been increasing, with rates as high as 7% during 2006–2010. The cost of care and treatment is $23 million in California prisons. A lawsuit was filed against the state in 2014 on behalf of 58 inmates stating that the Avenal and Pleasant Valley state prisons did not take necessary steps to prevent infections.
Population risk factors
There are several populations that have a higher risk for contracting coccidioidomycosis and developing the advanced disseminated version of the disease. Populations with exposure to the airborne arthroconidia, such as workers in agriculture and construction, have a higher risk. Outbreaks have also been linked to earthquakes, windstorms and military training exercises where the ground is disturbed. Historically, an infection is more likely to occur in males than females, although this could be attributed to occupation rather than being sex-specific. Women who are pregnant and immediately postpartum are at a high risk of infection and dissemination. There is also an association between stage of pregnancy and severity of the disease, with third-trimester women being more likely to develop dissemination. Presumably this is related to highly elevated hormonal levels, which stimulate growth and maturation of spherules and subsequent release of endospores. Certain ethnic populations are more susceptible to disseminated coccidioidomycosis. The risk of dissemination is 175 times greater in Filipinos and 10 times greater in African Americans than in non-Hispanic whites. Individuals with a weakened immune system are also more susceptible to the disease, in particular individuals with HIV and diseases that impair T-cell function. Individuals with pre-existing conditions such as diabetes are also at a higher risk. Age also affects the severity of the disease, with more than one-third of deaths occurring in the 65–84 age group.
History
The first case of what was later named coccidioidomycosis was described in 1892 in Buenos Aires by Alejandro Posadas, a medical intern at the Hospital de Clínicas "José de San Martín". Posadas established an infectious character of the disease after being able to transfer it in laboratory conditions to lab animals. In the U.S., Dr. E. Rixford, a physician from a San Francisco hospital, and T. C. Gilchrist, a pathologist at Johns Hopkins Medical School, became early pioneers of clinical studies of the infection. They decided that the causative organism was a Coccidia-type protozoan and named it Coccidioides immitis (resembling Coccidia, not mild).
Dr. William Ophüls, a professor at Stanford University Hospital (San Francisco), discovered that the causative agent of the disease that was at first called Coccidioides infection and later coccidioidomycosis was a fungal pathogen, and coccidioidomycosis was also distinguished from Histoplasmosis and Blastomycosis. Further, Coccidioides immitis was identified as the culprit of respiratory disorders previously called San Joaquin Valley fever, desert fever, and Valley fever, and a serum precipitin test was developed by Charles E. Smith that was able to detect an acute form of the infection. In retrospect, Smith played a major role in both medical research and raising awareness about coccidioidomycosis, especially when he became dean of the School of Public Health at the University of California at Berkeley in 1951.
Coccidioides immitis was considered by the United States during the 1950s and 1960s as a potential biological weapon. The strain selected for investigation was designated with the military symbol OC, and initial expectations were for its deployment as a human incapacitant. Medical research suggested that OC might have had some lethal effects on the populace, and Coccidioides immitis started to be classified by the authorities as a threat to public health. Coccidioides immitis was never weaponized to the public's knowledge, and most of the military research in the mid-1960s was concentrated on developing a human vaccine. Coccidioides immitis is not on the U.S. Department of Health and Human Services' or Centers for Disease Control and Prevention's list of select agents and toxins.
In 2002, Coccidioides posadasii was identified as genetically distinct from Coccidioides immitis despite their morphologic similarities and can also cause coccidioidomycosis.
It was reported in 2022 that valley fever had been increasing in the Central Valley of California for years (1,000 cases in Kern County in 2014, 3,000 in 2021); experts said that cases could rise across the American West as the climate makes the landscape drier and hotter. Coccidioides flourishes due to the oscillation between extreme dryness and extreme wetness. The California Department of Public Health said the 9,280 new cases of Valley fever with onset dates in 2023 represented the highest number the department has ever documented.
Research
As of 2023, there is no vaccine available to prevent infection with Coccidioides immitis or Coccidioides posadasii, but efforts to develop such a vaccine are underway. Anivive Lifesciences and a team at the University of Arizona Medical School were developing a vaccine for use in dogs, which could eventually lead to a vaccine for humans.
Other animals
In dogs, the most common symptom of coccidioidomycosis is a chronic cough, which can be dry or moist. Other symptoms include fever (in approximately 50% of cases), weight loss, anorexia, lethargy, and depression. The disease can disseminate throughout the dog's body, most commonly causing osteomyelitis (infection of the bone), which leads to lameness. Dissemination can cause other symptoms, depending on which organs are infected. If the fungus infects the heart or pericardium, it can cause heart failure and death.
In cats, symptoms may include skin lesions, fever, and loss of appetite, with skin lesions being the most common.
Other species in which Valley fever has been found include livestock such as cattle and horses; llamas; marine mammals, including sea otters; zoo animals such as monkeys and apes, kangaroos, tigers, etc.; and wildlife native to the geographic area where the fungus is found, such as cougars, skunks, and javelinas.
In popular culture
In the Season 1 episode of Bones called "The Man in the Fallout Shelter", the entire lab is exposed to coccidioidomycosis through inhalation of bone dust. The team is forced to quarantine in the lab on Christmas Eve to prevent the disease from spreading to the public, although in reality the disease is not contagious. The lab is exposed to it again in the Season 2 episode "The Priest in the Churchyard" from contaminated graveyard soil, but this time the team only receives a series of injections rather than being forced to quarantine.
Everything in Between, a 2022 Australian feature film, contains references to coccidioidomycosis.
In House Season 3 Episode 4, "Lines in the Sand", a 17-year-old patient who has been exposed to Coccidioides immitis exhibits symptoms of coccidioidomycosis.
Thunderhead, a 1999 novel by Douglas Preston and Lincoln Child, uses the fungus and illness as a central plot point.
See also
Coccidioides
Coccidioides immitis
Coccidioides posadasii
Zygomycosis
Medical geology
List of cutaneous conditions
References
Further reading
(Review).
(Review).
External links
U.S. Centers for Disease Control and Prevention page on coccidioidomycosis
Medline Plus Entry for coccidioidomycosis
Biological agents
Animal fungal diseases
Neglected American diseases
Fungal diseases | Coccidioidomycosis | Biology,Environmental_science | 5,610 |
36,809,506 | https://en.wikipedia.org/wiki/World%20Computer%20Exchange | World Computer Exchange (WCE) is a United States- and Canada-based charity organization whose mission is "to reduce the digital divide for youth in developing countries, to use our global network of partnerships to enhance communities in these countries, and to promote the reuse of electronic equipment and its ultimate disposal in an environmentally responsible manner." According to UNESCO, it is North America's largest non-profit supplier of tested used computers to schools and community organizations in developing countries.
History
WCE was founded in 1999 by Timothy Anderson. It is a non-profit organization.
Its headquarters are in Hull, Massachusetts, and there are 15 chapters in the US and five in Canada.
In 2015, WCE opened a chapter in Puerto Rico.
By November 2002, the organization had shipped 4,000 computers to 585 schools in many developing countries.
By October 2011, along with partner organizations, WCE had shipped 30,000 computers and established 2,675 computer labs. In February 2012, the Boston Chapter sent out its 68th shipment, bringing its total to 13,503 computers.
Activities
WCE provides computers and technology, and the support to make them useful in developing communities. WCE delivers educational content and curriculum on agriculture, health, entrepreneurship, water, and energy. The program also ensures that teachers will know how to use the technology and content by providing staff and teacher training, as well as ongoing tech support.
Each chapter of WCE collects donated computers, refurbishes and prepares them for shipment. They also raise funds to ship the computers.
Volunteers inspect and repair each computer, then install the operating system and educational material onto each computer.
WCE calls recipients of its computers "partners." Requests for computer donations originate with the partners. Once the refurbished computers and the funds to ship them are secured, WCE initiates shipment. When possible, WCE coordinates shipments with other organizations, such as University of the People, Peace Corps, Computers4Africa.org, ADEA (Association for the Development of Education in Africa) and others.
In June 2013, WCE Chicago chapter sent 400 computers to Mexico, and 300 to the Dominican Republic with help of 85 volunteers.
In November 2015, WCE sent two Spanish speakers to Honduras for two weeks to pilot tech skills training for youth under a contract with World Vision.
The WCE Computers for Girls (C4G) initiative is field-testing eight tools that provide technological training and STEM education for teachers helping their girl students in four African countries (Ghana, Liberia, Mali, and Zambia) and Pakistan.
In September 2016, World Computer Exchange-Puerto Rico and 4GCommunity.org, two not-for-profit corporations, announced an alliance to improve public school and family access to technology where needed throughout Puerto Rico.
eCorps
To install computers at partner sites without access to experts, WCE recruits and supports volunteers from the USA under its eCorps initiative. To be eligible, volunteers must be 21 years of age, have the necessary tech skills, and be prepared to self-fund their travel and accommodation expenses. Eighteen training teams have worked in the Dominican Republic, Ethiopia, Georgia, Ghana, Honduras, Kenya, Liberia, Mali, Nepal, Nicaragua, Nigeria, the Philippines, Puerto Rico, Tanzania, and Zimbabwe.
The "Travelers" program is geared towards those already planning to go to one of the countries in the WCE network, to provide tech support during their trip. 79 "Travelers" have visited the following 41 developing countries including: Armenia, Bolivia, Cambodia, Cameroon, Democratic Republic of Congo, Dominican Republic, Ecuador, Ethiopia, Haiti, Honduras, India, Indonesia, Jordan, Kenya, Liberia, Malawi, Mexico, Namibia, Nepal, Pakistan, Palestine, Panama, Peru, Puerto Rico, Qatar, Senegal, Sierra Leone, South Africa, Swaziland, Tanzania, Togo, and Uganda. In 2015, "Travelers" visited: Cambodia, Haiti, Honduras, Mexico, Puerto Rico, and South Africa.
Computers
WCE uses the Ubuntu operating system on its computers, citing the lack of licensing costs and lower susceptibility to malware, while still providing a computing environment with tools such as a word processor and printer drivers.
Unlike One Laptop per Child, the computers do not contain specialized software. Each computer is loaded with educational materials to allow users to learn without an internet connection.
See also
Computer recycling
Electronic waste in the United States
Empower Up
Free Geek
Geekcorps
Geeks Without Bounds
Global digital divide
ICVolunteers
Inveneo
NetCorps
NetDay
Nonprofit Technology Resources
United Nations Information Technology Service (UNITeS)
External links
http://www.worldcomputerexchange.org - Official Website
https://www.linkedin.com/company/world-computer-exchange/ - Company LinkedIn
References
Computer recycling
Digital divide
Information and communication technologies for development
Charities
International volunteer organizations
Non-profit technology | World Computer Exchange | Technology | 993 |
28,097,633 | https://en.wikipedia.org/wiki/United%20States%20v.%20Riverside%20Bayview | United States v. Riverside Bayview, 474 U.S. 121 (1985), was a United States Supreme Court case challenging the scope of federal regulatory powers over waterways as pertaining to the definition of "waters of the United States" as written in the Clean Water Act of 1972. The Court ruled unanimously that the government does have the power to control intrastate wetlands as waters of the United States. This ruling was effectively revised in Rapanos v. United States (2006), in which the Court adopted a very narrow interpretation of "navigable waters."
Prior history
The case involves developer Riverside Bayview Homes Inc., which began placing fill materials on its property near the shores of Lake St. Clair, Michigan. The Army Corps of Engineers (Corps) filed suit in Federal District Court to prevent Riverside Bayview from filling its property without a dredge-and-fill permit from the Corps, as required under Clean Water Act §404.
The Eastern Michigan District Court held that the property was freshwater wetlands under the Corps's regulatory definition, which reads, "those areas that are inundated or saturated by surface or ground water at a frequency and duration sufficient to support, and that under normal circumstances do support, a prevalence of vegetation typically adapted for life in saturated soil conditions", and as such was subject to the Corps's permit authority, because the land was characterized by those conditions and the property was adjacent to a body of navigable water. The Court of Appeals reversed, holding that the Corps had overstepped the definition of "waters of the United States"; it took the view that the Corps's authority under the Clean Water Act and its implementing regulations must be narrowly construed to avoid a taking without just compensation in violation of the Fifth Amendment, and that therefore Riverside Bayview was free to fill its property without obtaining a permit.
Decision
Writing the opinion for a unanimous Court, Justice Byron White ruled that neither the imposition of the permit requirement itself nor the denial of a permit would constitute a taking, and that other legislation such as the Tucker Act exists to provide compensation for takings that may result. The Court ruled that the District Court did not err in its finding that the property falls within the Corps's regulatory definition of wetlands. White added that the Clean Water Act's language, policies, and history compel a holding that the Corps acted reasonably in its interpretation of its authority over discharge material in wetlands.
See also
List of United States Supreme Court cases, volume 474
List of United States Supreme Court cases
References
External links
United States environmental case law
United States Supreme Court cases
United States Supreme Court cases of the Burger Court
1985 in the environment
1985 in United States case law
Takings Clause case law
United States Army Corps of Engineers
Legal history of Michigan
United States water case law | United States v. Riverside Bayview | Engineering | 559 |
42,763,530 | https://en.wikipedia.org/wiki/Confidential%20incident%20reporting | A confidential incident reporting system is a mechanism which allows problems in safety-critical fields such as aviation and medicine to be reported in confidence. This allows events to be reported which otherwise might not be reported through fear of blame or reprisals against the reporter. Analysis of the reported incidents can provide insight into how those events occurred, which can spur the development of measures to make the system safer.
Examples
The Aviation Safety Reporting System, created by the US aviation industry in 1976, was one of the earliest confidential reporting systems. The International Confidential Aviation Safety Systems Group is an umbrella organization for confidential reporting systems in the airline industry.
Other examples include:
CIRAS, (Confidential Incident Reporting and Analysis System), the confidential reporting system for the British railway industry
CHIRP, (Confidential Human Factors Incident Reporting Programme / Confidential Hazardous Incident Reporting Programme) a confidential reporting system for the British aviation and maritime industries
CROSS (Confidential Reporting on Structural Safety), a confidential reporting system for the structural and civil engineering industry
It has been suggested that medical organizations also adopt the confidential reporting model. Examples of confidential reporting in medicine include CORESS, a confidential reporting system for surgery in the United Kingdom.
References
See also
Mandatory reporting
Near miss (safety)
Root cause analysis
Safety engineering
Confidentiality
Error detection and correction
Aviation safety
Railway safety
Emergency medicine | Confidential incident reporting | Engineering | 260 |
16,796,463 | https://en.wikipedia.org/wiki/Annual%20International%20Conference%20on%20Real%20Options | The Annual International Conference on Real Options: Theory Meets Practice is a yearly conference organized by the Real Options Group in cooperation with various top universities. Its stated aim is to "bring together academics and practitioners at the forefront of real options and investment under uncertainty to discuss recent developments and applications."
The conference has taken place in a different location every year since its inception in 1997. Notable keynote speakers have included Robert C. Merton of Harvard University, Myron Scholes of Stanford University, Robert Pindyck and Stewart Myers of MIT, and Stephen A. Ross of Yale.
In 2008, the conference was held in Rio de Janeiro, Brazil; in 2009, it was held in Portugal and Spain; in 2010, it was held in Rome, Italy; and in 2011, it was held in Turku, Finland.
External links
Annual International Conference on Real Options, official website of the conference
Academic conferences
Real options | Annual International Conference on Real Options | Engineering | 182 |
832,482 | https://en.wikipedia.org/wiki/Rotten%20Tomatoes | Rotten Tomatoes is an American review-aggregation website for film and television. The company was launched in August 1998 by three undergraduate students at the University of California, Berkeley: Senh Duong, Patrick Y. Lee, and Stephen Wang. Although the name "Rotten Tomatoes" connects to the practice of audiences throwing rotten tomatoes in disapproval of a poor stage performance, the direct inspiration for the name from Duong, Lee, and Wang came from an equivalent scene in the 1992 Canadian film Léolo.
Since January 2010, Rotten Tomatoes has been owned by Flixster, which was in turn acquired by Warner Bros. in 2011. In February 2016, Rotten Tomatoes and its parent site Flixster were sold to Comcast's Fandango ticketing company. Warner Bros. retained a minority stake in the merged entities, including Fandango.
The site is influential among U.S. moviegoers, a third of whom say they consult it before going to the cinema. It has been criticized for oversimplifying reviews by flattening them into a fresh-versus-rotten dichotomy. It has also been criticized as easy for studios to manipulate, for example by limiting early screenings to critics inclined to be favorable.
History
Rotten Tomatoes was launched on August 12, 1998, as a spare-time project by Senh Duong. His objective in creating Rotten Tomatoes was "to create a site where people can get access to reviews from a variety of critics in the U.S". As a fan of Jackie Chan, Duong was inspired to create the website after collecting all the reviews of Chan's Hong Kong action movies as they were being released in the United States. The catalyst for the creation of the website was Rush Hour (1998), Chan's first major Hollywood crossover, which was originally planned for release in August 1998. Duong coded the website in two weeks and the site went live the same month, but the release of Rush Hour was delayed until September 1998. Besides Jackie Chan films, he began including other films on Rotten Tomatoes, extending it beyond Chan's fandom. The first non-Chan Hollywood movie whose reviews were featured on Rotten Tomatoes was Your Friends & Neighbors (1998). The website was an immediate success, receiving mentions from Netscape, Yahoo!, and USA Today within the first week of its launch; it attracted "600–1,000 daily unique visitors" as a result.
Duong teamed up with University of California, Berkeley classmates Patrick Y. Lee and Stephen Wang, his former partners at the Berkeley, California-based web design firm Design Reactor, to pursue Rotten Tomatoes on a full-time basis. They officially launched it on April 1, 2000.
In June 2004, IGN Entertainment acquired Rotten Tomatoes for an undisclosed sum. In September 2005, IGN was bought by News Corp's Fox Interactive Media. In January 2010, IGN sold the website to Flixster. The combined reach of both companies is 30 million unique visitors a month across all different platforms, according to the companies. In 2011, Warner Bros. acquired Rotten Tomatoes.
In early 2009, Current Television launched The Rotten Tomatoes Show, a televised version of the web review site. It was hosted by Brett Erlich and Ellen Fox and written by Mark Ganek. The show aired Thursdays at 10:30 EST until September 16, 2010. It returned as a much shorter segment of InfoMania, a satirical news show that ended in 2011.
By late 2009, the website was designed to enable Rotten Tomatoes users to create and join groups to discuss various aspects of film. One group, "The Golden Oyster Awards", accepted votes of members for various awards, spoofing the better-known Academy Awards or Golden Globes. When Flixster bought the company, they disbanded the groups.
In February 2011, new community features were added and others removed. For example, users can no longer separate films with Fresh ratings from those with Rotten ratings when sorting.
On September 17, 2013, a section devoted to scripted television series, called TV Zone, was created as a subsection of the website.
In February 2016, Rotten Tomatoes and its parent site Flixster were sold to Comcast's Fandango Media. Warner Bros retained a minority stake in the merged entities, including Fandango.
In December 2016, Fandango and all its various websites moved to Fox Interactive Media's former headquarters in Beverly Hills, California.
In July 2017, the website's editor-in-chief since 2007, Matt Atchity, left to join The Young Turks YouTube channel. On November 1, 2017, the site launched a new web series on Facebook, See It/Skip It, hosted by Jacqueline Coley and Segun Oduolowu.
In March 2018, the site announced its new design, icons and logo for the first time in 19 years at South by Southwest.
On May 19, 2020, Rotten Tomatoes won the 2020 Webby People's Voice Award for Entertainment in the Web category.
In February 2021, the Rotten Tomatoes staff made an entry on their Product Blog, announcing several design changes to the site: Each film's 'Score Box' at the top of the page would now also include its release year, genre, and runtime, with an MPAA rating to be added soon; the number of ratings would be shown in groupings – from 50+ up to 250,000+ ratings, for easier visualization. Links to critics and viewers are included underneath the ratings. By clicking on either the Tomatometer Score or the Audience Score, the users can access "Score Details" information, such as the number of Fresh and Rotten reviews, average rating, and Top Critics' score. The team also added a new "What to Know" section for each film entry page, which could combine the "Critics Consensus" blurb with a new "Audience Says" blurb, so users can see an at-a-glance summary of the sentiments of both certified critics and verified audience members.
Features
Critics' aggregate score
Rotten Tomatoes staff first collect online reviews from writers who are certified members of various writing guilds or film critic associations. To be accepted as a critic on the website, a critic's original reviews must garner a specific number of "likes" from users. Those classified as "Top Critics" generally write for major newspapers. The critics upload their reviews to the movie page on the website and must mark each review "fresh" if it is generally favorable or "rotten" otherwise. This step is necessary because some reviews are qualitative and do not assign a numeric score, making fully automatic classification impossible.
The website keeps track of all the reviews counted for each film and calculates the percentage of positive reviews. If the positive reviews make up 60% or more, the film is considered "fresh". If the positive reviews are less than 60%, the film is considered "rotten". An average score on a 0 to 10 scale is also calculated. With each review, a short excerpt is quoted that also serves as a hyperlink to the complete review for anyone interested in reading the critic's full thoughts on the subject.
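To make the arithmetic concrete, here is a minimal Python sketch of the calculation described above; the function name, data layout, and rounding behavior are illustrative assumptions, not Rotten Tomatoes' actual code.

def tomatometer(reviews):
    """Each review is a (is_fresh, rating) pair, where is_fresh is a bool
    and rating is a float on a 0-10 scale, or None for qualitative reviews."""
    if not reviews:
        return None
    fresh = sum(1 for is_fresh, _ in reviews if is_fresh)
    score = round(100 * fresh / len(reviews))         # percent positive reviews
    rated = [r for _, r in reviews if r is not None]  # qualitative reviews carry no number
    average = sum(rated) / len(rated) if rated else None
    label = "fresh" if score >= 60 else "rotten"      # the 60% threshold from the text
    return score, average, label

print(tomatometer([(True, 8.0), (True, None), (False, 4.5)]))  # (67, 6.25, 'fresh')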
"Top Critics", such as Roger Ebert, Desson Thomson, Stephen Hunter, Owen Gleiberman, Lisa Schwarzbaum, Peter Travers and Michael Phillips are identified in a sub-listing that calculates their reviews separately. Their opinions are also included in the general rating. When there are sufficient reviews, the staff creates and posts a consensus statement to express the general reasons for the collective opinion of the film.
This rating is indicated by an equivalent icon at the film listing, to give the reader a one-glance look at the general critical opinion about the work. The "Certified Fresh" seal is reserved for movies that satisfy two criteria: a "Tomatometer" of 75% or better and at least 80 reviews (40 for limited release movies) from "Tomatometer" critics (including 5 Top Critics). Films earning this status will keep it unless the positive critical percentage drops below 70%. Films with 100% positive ratings that lack the required number of reviews do not receive the "Certified Fresh" seal.
When a film or TV show reaches the requirements for the "Certified Fresh", it is not automatically granted the seal; "the Tomatometer score must be consistent and unlikely to deviate significantly" before it is thus marked. Once certified, if a film's score drops and remains consistently below 70%, it loses its Certified Fresh designation.
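The certification logic above reduces to a pair of simple checks. The following Python sketch encodes them; the thresholds are taken from the text, while the function names and signatures are hypothetical.

def qualifies_certified_fresh(score, n_reviews, n_top_critics, limited_release=False):
    # 75% score, 80 reviews (40 for limited releases), and 5 Top Critics required
    required_reviews = 40 if limited_release else 80
    return score >= 75 and n_reviews >= required_reviews and n_top_critics >= 5

def keeps_certified_fresh(current_score):
    # Once certified, a film loses the seal only if its score drops
    # and stays consistently below 70%.
    return current_score >= 70

print(qualifies_certified_fresh(82, n_reviews=120, n_top_critics=9))  # True
print(keeps_certified_fresh(68))                                      # False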
Golden Tomato Awards
In 2000, Rotten Tomatoes announced the RT Awards honoring the best-reviewed films of the year according to the website's rating system. The awards were later renamed the Golden Tomato Awards. The nominees and winners are announced on the website, although there is no actual awards ceremony.
The films are divided into wide release and limited release categories. Limited releases are defined as opening in 599 or fewer theaters at initial release. Platform releases, movies initially released in fewer than 600 theaters but later receiving wider distribution, fall under this definition. Any film opening in 600 or more theaters is considered a wide release. There are also two categories purely for British and Australian films. The "User" category represents the highest-rated film among users, and the "Mouldy" award represents the worst-reviewed films of the year. A movie must have 40 (originally 20) or more rated reviews to be considered for domestic categories, and 500 or more user ratings to be considered for the "User" category.
Films are further classified based on film genre. Each movie is eligible in only one genre, aside from non-English-language films, which can be included in both their genre and the respective "Foreign" category.
Once a film is considered eligible, its "votes" are counted. Each critic from the website's list gets one vote (as determined by their review), all weighted equally. Because reviews are continually added, manually and otherwise, a cutoff date after which new reviews do not count toward the Golden Tomato Awards is set each year, usually the first day of the new year. Reviews without ratings are not counted toward the results of the Golden Tomato Awards.
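Taken together, the eligibility rules amount to a small classification routine. The Python sketch below illustrates them; the thresholds mirror the text, but the function names and code itself are assumptions, not an official implementation.

def release_category(opening_theaters):
    # 599 or fewer theaters at initial release counts as limited,
    # even if the film later platforms into wider distribution.
    return "limited" if opening_theaters <= 599 else "wide"

def eligible_for_domestic(n_rated_reviews):
    return n_rated_reviews >= 40   # originally 20

def eligible_for_user_category(n_user_ratings):
    return n_user_ratings >= 500

print(release_category(450), eligible_for_domestic(55))  # limited True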
Audience score and reviews
Each movie features a "user average", the percentage of registered users who have rated the film positively on a 5-star scale, calculated in a manner similar to the critics' aggregate score.
On May 24, 2019, Rotten Tomatoes introduced a verified rating system, replacing the earlier system in which users merely had to register to submit a rating. In addition to creating an account, users now have to verify their ticket purchase through the ticketing company Fandango Media, parent company of Rotten Tomatoes. Users can still leave reviews without verifying, but those reviews do not count toward the average audience score displayed next to the Tomatometer.
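In effect, the verified score filters out unverified ratings before averaging. Below is a hedged Python sketch; the record format and the 3.5-star cutoff for counting a rating as positive are assumptions, since the text does not specify the exact threshold.

def audience_score(ratings):
    """ratings: list of (stars, verified) pairs, stars on a 5-star scale.
    Only verified ratings count toward the displayed score."""
    verified = [stars for stars, is_verified in ratings if is_verified]
    if not verified:
        return None
    positive = sum(1 for s in verified if s >= 3.5)  # assumed positivity cutoff
    return round(100 * positive / len(verified))

print(audience_score([(5.0, True), (2.0, True), (4.0, False)]))  # 50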
On August 21, 2024, Rotten Tomatoes rebranded its audience score as the Popcornmeter and introduced a new "Verified Hot" badge. The designation is only given to films which have reached an audience score of 90 percent or higher among users whom Rotten Tomatoes has verified as having purchased a ticket to the film through Fandango. A representative for Rotten Tomatoes stated that their goal is to include other services in the future for users who do not use Fandango. Upon its creation, the "Verified Hot" badge was installed retroactively on over 200 films which achieved a verified audience score of 90% or higher since the launch of Rotten Tomatoes' verified audience ratings in May 2019.
"What to Know"
In February 2021, a new "What to Know" section was created for each film entry, combining the "Critics Consensus" and a new "Audience Says" blurbs within it, to give users an at-a-glance summary of the general sentiments of a film as experienced by critics and audiences. Prior to February 2021, only the "Critics Consensus" blurb was posted for each entry, after enough certified critics had submitted reviews. When the "Audience Says" blurbs were added, Rotten Tomatoes initially included them only for newer films and those with a significant audience rating, but suggested that they may later add them for older films as well.
"Critics Consensus" / "Audience Says"
Each movie features a brief blurb summarizing the reviews counted toward that entry's Tomatometer aggregate score, called the "Critics Consensus".
In February 2021, Rotten Tomatoes added an "Audience Says" section; similar to the "Critics Consensus", it summarizes the reviews noted by registered users into a concise blurb. The Rotten Tomatoes staff noted that for any given film, if there were any external factors such as controversies or issues affecting the sentiments of a film, they may address it in the "Audience Says" section to give users the most relevant info regarding their viewing choices.
Localized versions
Localized versions of the site available in the United Kingdom, India, and Australia were discontinued following the acquisition of Rotten Tomatoes by Fandango. The Mexican version of the site, Tomatazos, remains active.
API
The Rotten Tomatoes API provides limited access to critic and audience ratings and reviews, allowing developers to incorporate Rotten Tomatoes data on other websites. The free service is intended for use in the US only; permission is required for use elsewhere. As of 2022, API access is restricted to approved developers that must go through an application process.
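For illustration only, a client consuming such an API might look like the sketch below; the endpoint URL, query parameters, and response fields are entirely hypothetical, since real access requires an approved developer key and the actual interface is not documented here.

import requests

API_KEY = "your-approved-key"  # issued after the application process (hypothetical)
BASE_URL = "https://api.example-rottentomatoes.com"  # placeholder, not a real endpoint

def fetch_scores(title):
    # Query a hypothetical movie-search endpoint and pull two score fields
    resp = requests.get(f"{BASE_URL}/movies",
                        params={"q": title, "apikey": API_KEY},
                        timeout=10)
    resp.raise_for_status()
    movie = resp.json()["movies"][0]  # assumed response shape
    return movie.get("tomatometer"), movie.get("audience_score")

print(fetch_scores("Rush Hour"))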
Influence
Major Hollywood studios have come to see Rotten Tomatoes as a potential threat to their marketing. In 2017, several blockbuster films like Pirates of the Caribbean: Dead Men Tell No Tales, Baywatch and The Mummy were projected to open with gross receipts of $90 million, $50 million and $45 million, respectively, but ended up debuting with $62.6 million, $23.1 million and $31.6 million. Rotten Tomatoes, which scored the films at 30%, 19% and 16%, respectively, was blamed for undermining them. That same summer, films like Wonder Woman and Spider-Man: Homecoming (both 92%) received high scores and opened at or above their $100 million-plus tracking estimates.
As a result of this concern, 20th Century Fox commissioned a 2015 study, titled "Rotten Tomatoes and Box Office", that stated the website combined with social media was going to be an increasingly serious complication for the film business: "The power of Rotten Tomatoes and fast-breaking word of mouth will only get stronger. Many Millennials and even Gen X-ers now vet every purchase through the Internet, whether it's restaurants, video games, make-up, consumer electronics or movies. As they get older and comprise an even larger share of total moviegoers, this behavior is unlikely to change". Other studios have commissioned a number of studies on the subject, which found that seven in ten people said they would be less interested in seeing a film if its Rotten Tomatoes score was below 25%, and that the site has the most influence on people aged 25 and younger.
The scores have reached a level of online ubiquity which film companies have found threatening. For instance, the scores are regularly posted in Google search results for films so reviewed. Furthermore, the scores are prominently featured in Fandango's popular ticket purchasing website, on its mobile app, on popular streaming services like Peacock, and on Flixster, which led to complaints that "rotten" scores damaged films' performances.
Others have argued that filmmakers and studios have only themselves to blame if Rotten Tomatoes produces a bad score, as this only reflects a poor reception among film critics. As one independent film distributor marketing executive noted, "To me, it's a ridiculous argument that Rotten Tomatoes is the problem ... make a good movie!". ComScore's Paul Dergarabedian had similar comments, saying: "The best way for studios to combat the 'Rotten Tomatoes Effect' is to make better movies, plain and simple".
Some studios have suggested embargoing or cancelling early critic screenings in response to poor reviews prior to a film's release affecting pre-sales and opening weekend numbers. In July 2017, Sony embargoed critic reviews for The Emoji Movie until mid-day the Thursday before its release. The film ended up with a 9% rating (including 0% after the first 25 reviews), but still opened to $24 million, on par with projections. Josh Greenstein, Sony Pictures President of Worldwide Marketing and Distribution, said, "The Emoji Movie was built for people under 18 ... so we wanted to give the movie its best chance. What other wide release with a score under 8 percent has opened north of $20 million? I don't think there is one". Warner Bros. similarly withheld critic pre-screenings for The House, which held a score of 16% until the day of its release and opened to just $8.7 million, the lowest of star Will Ferrell's career.
That marketing tactic can backfire: it drew the vocal disgust of influential critics such as Roger Ebert, who derisively condemned such moves on At the Movies with gestures such as "The Wagging Finger of Shame". Furthermore, withholding reviews can lead the public to conclude early on that the film is of poor quality.
On February 26, 2019, in response to issues surrounding coordinated "bombing" of user reviews for several films, most notably Captain Marvel and Star Wars: The Rise of Skywalker, prior to their release, the site announced that user reviews would no longer be accepted until a film is publicly released. The site also announced plans to introduce a system for "verified" reviews, and that the "Want to See" statistic would now be expressed as a number so that it would not be confused with the audience score.
Despite arguments on how Rotten Tomatoes scores impact the box office, academic researchers so far have not found evidence that Rotten Tomatoes ratings affect box office performance.
Criticism
Oversimplification
In January 2010, on the occasion of the 75th anniversary of the New York Film Critics Circle, its chairman Armond White cited Rotten Tomatoes in particular and film review aggregators in general as examples of how "the Internet takes revenge on individual expression". He said they work by "dumping reviewers onto one website and assigning spurious percentage-enthusiasm points to the discrete reviews". According to White, such websites "offer consensus as a substitute for assessment". Landon Palmer, a film and media historian and an assistant professor in the Department of Journalism and Creative Media within the College of Communication and Information Sciences at the University of Alabama, agreed with White, stating that "[Rotten Tomatoes applies a] problematic algorithm to pretty much all avenues of modern media art and entertainment".
Director and producer Brett Ratner has criticized the website for "reducing hundreds of reviews culled from print and online sources into a popularized aggregate score", while expressing respect for traditional film critics. Writer Max Landis, following his film Victor Frankenstein receiving an approval rating of 24% on the site, wrote that the site "breaks down entire reviews into just the word 'yes' or 'no', making criticism binary in a destructive arbitrary way".
Review manipulation
Vulture ran an article in September 2023 that raised several criticisms of Rotten Tomatoes's system, including the ease with which large companies are able to manipulate reviewer ratings. The article cited publicity company Bunker 15's work on 2018's Ophelia as an example of how scores can be boosted by recruiting obscure, often self-published reviewers.
Rotten Tomatoes responded by delisting several Bunker 15 films, including Ophelia. It told Vulture in a statement, "We take the integrity of our scores seriously and do not tolerate any attempts to manipulate them. We have a dedicated team who monitors our platforms regularly and thoroughly investigates and resolves any suspicious activity."
WIRED published an article in February 2024 written by Christopher Null, a former film critic, that argued such methods are standard activities performed by all PR agencies. In particular, Null points out that sponsoring legitimate, honest reviews has a long history in other industries and is a "common tactic employed by indie titles to get visibility."
Other criticisms
American director Martin Scorsese wrote a column in The Hollywood Reporter criticizing both Rotten Tomatoes and CinemaScore for promoting the idea that films like Mother! had to be "instantly liked" to be successful. Scorsese later continued his criticism at a dedication for the Roger Ebert Center for Film Studies at the University of Illinois, saying that Rotten Tomatoes and other review services "devalue cinema on streaming platforms to the level of content".
In 2015, while promoting the film Suffragette (which has a 73% approval rating) actress Meryl Streep accused Rotten Tomatoes of disproportionately representing the opinions of male film critics, resulting in a skewed ratio that adversely affected the commercial performances of female-driven films. "I submit to you that men and women are not the same, they like different things," she said. "Sometimes they like the same thing, but sometimes their tastes diverge. If the Tomatometer is slighted so completely to one set of tastes that drives box office in the United States, absolutely". Critics took issue with the sentiment that someone's gender or ethnic background would dictate their response to art.
Rotten Tomatoes deliberately withheld Justice League's critic score, based on early reviews, until the premiere of its See It/Skip It episode on the Thursday before the film's release. Some critics viewed the move as a ploy to promote the web series, while others argued it represented a conflict of interest, given Warner Bros.' ownership of both the film and Rotten Tomatoes and the tepid critical reception of the DC Extended Universe films at the time.
The New York Times aggregated statistics comparing audience scores with critic scores and noticed that in almost every genre, "The public rates a movie more positively than do the critics. The only exceptions are black comedies and documentaries. Critics systematically rate films in these genres more highly than do Rotten Tomatoes users". Slate magazine collected data in a similar survey that revealed a noticeable favor for movies released before the 1990s, which "may be explained by a bias toward reviewers reviewing, or Rotten Tomatoes scoring, only the best movies from bygone eras".
See also
Metacritic
Internet Movie Database (IMDb)
List of films with a 0% rating on Rotten Tomatoes
List of films with a 100% rating on Rotten Tomatoes
"Splatty Tomato"
References
Further reading
External links
American film review websites
Fandango
Former News Corporation subsidiaries
Internet properties established in 1998
Online film databases
Recommender systems
Television websites
1998 establishments in the United States
Tomatoes in popular culture | Rotten Tomatoes | Technology | 4,638 |