Gay pornography is the representation of sexual activity between males with the primary goal of sexually arousing its audience. Softcore gay pornography, which at one time constituted the whole of the genre, also exists; it may be produced as beefcake pornography directed toward heterosexual female, homosexual male, and bisexual audiences of any gender.
Homoerotic art and artifacts have a long history, reaching back to Greek antiquity. Every medium has been used to represent sexual acts between men. In contemporary mass media, this is mostly shared through home videos (including DVDs), cable broadcast and emerging video on demand and wireless markets, as well as online picture sites and gay pulp fiction.
== History ==
=== Early modern in the United States ===
Homoeroticism has been present in photography and film since their invention. During much of that time, any sexual depiction had to remain underground because of obscenity laws. In particular, gay material might constitute evidence of an illegal act under sodomy laws in many jurisdictions. This is no longer the case in the United States, since such laws were ruled unconstitutional by the Supreme Court in 2003 in Lawrence v. Texas.
Hardcore pornographic motion pictures (stag films, as they were called prior to their legalization in 1970) were produced relatively early in the history of film. The first known pornographic film appears to have been made in Europe in 1908. The earliest known film to depict hardcore gay (and bisexual) sexual activity was the French film Le ménage moderne du Madame Butterfly, produced and released in 1920. Most historians consider the first American stag film to be A Free Ride, produced and released in 1915. In the United States, hardcore gay sexual activity did not make it onto film until 1929's The Surprise of a Knight. Other American examples include A Stiff Game from the early 1930s, which features interracial homosexual acts as part of its plot, and Three Comrades (1950s), which features exclusively homosexual activity.
Legal restrictions meant that early hardcore gay pornography was underground and that commercially available gay pornography primarily consisted of pictures of individual men either fully naked or wearing a G-string. Pornography in the 1940s and 1950s focused on athletic men or bodybuilders in statuesque poses. They were generally young, muscular, and with little or no visible body hair. These pictures were sold in physique magazines, also known as beefcake magazines, allowing the reader to pass as a fitness enthusiast.
The Athletic Model Guild (AMG), founded by photographer Bob Mizer in 1945 in Los Angeles, was arguably the first studio to commercially produce material specifically for gay men, and in 1951 it published the first such magazine, Physique Pictorial. Tom of Finland drawings are featured in many of its issues. Mizer produced about a million images and thousands of films and videos before he died on May 12, 1992. During the late 1960s and early 1970s, the advent of 16 mm film cameras enabled these photographers to produce underground movies of gay sex, male masturbation, or both. Sales of these products were either by mail-order or through more discreet channels. Some of the early gay pornographers would travel around the country selling their photographs and films out of their hotel rooms, with advertising only through word of mouth and magazine ads.
The 1960s were also a period where many underground art-film makers integrated suggestive or overtly gay content in their work. Kenneth Anger's Scorpio Rising (1963), Andy Warhol's Blow Job (1963) and My Hustler (1965), or Paul Morrissey's Flesh (1968) are examples of experimental films that are known to have influenced further gay pornographic films with their formal qualities and narratives. Also of note is Joe Dallesandro, who acted in hardcore gay pornographic films in his early 20s, posed nude for Francesco Scavullo, Bruce of L.A. and Bob Mizer, and later acted for Warhol in films such as Flesh. Dallesandro was well known to the public. In 1969 Time called him one of the most beautiful people of the 1960s, and he appeared on the cover of Rolling Stone magazine in April 1971. Dallesandro also appeared on the cover of The Smiths' eponymous debut album, The Smiths.
=== Sexual revolution ===
During the 1960s, a series of United States Supreme Court rulings created a more liberalized legal environment that allowed the commercialization of pornography. MANual Enterprises, Inc. v. Day was the first decision by the United States Supreme Court to hold that magazines consisting largely of photographs of nude or near-nude male models are not obscene within the meaning of 18 U.S.C. § 1461. It was the first case in which the Court engaged in plenary review of a Post Office Department order holding obscene matter "nonmailable." The case is notable for its ruling that photographs of nude men are not obscene, an implication which opened up the U.S. Postal Service to nude male pornographic magazines, especially those catering to gay men.
Wakefield Poole's Boys in the Sand, starring Casey Donovan, was the first gay pornographic feature film, following the works of filmmakers such as Pat Rocco shown at the Park Theatre in Los Angeles around 1970; it has also been described as the first pornographic feature film of any sort. Boys in the Sand opened in a theater in New York City in December 1971 and played to a packed house with record-breaking box office receipts, preceding Deep Throat, the first commercial straight pornography film in America, which opened in June 1972. This success launched gay pornographic film as a popular phenomenon.
The production of gay pornography films expanded during the 1970s. A few studios released films for the growing number of gay adult movie theaters, where men could also have sexual encounters. Often, the films reflected the sexual liberation that gay men were experiencing at the time, depicting the numerous public spaces where men engaged in sex: bathhouses, sex clubs, beaches, etc.
Peter Berlin's 1973 film Nights in Black Leather was the first major pornographic film designed to appeal to the gay leather subculture and drew some mainstream gays into this culture.
The 1960s and 1970s also saw the rise of gay publishing with After Dark and Michael's Thing. During this time many more magazines were founded, including In Touch and Blueboy. Playgirl, ostensibly produced for women, was purchased and enjoyed by gay men, and featured full frontal nudity (the posing straps and fig leaves were removed).
Gay pornography of the 1950s through the production date of the movie is reviewed, with many excerpts, in Fred Halsted's documentary Erotikus: A History of the Gay Movie (1974).
=== 1970–1985 ===
From 1970 to 1985, commercial gay pornography was only beginning to develop into the large industry it would later become. Because it was in this fledgling stage, it recruited actors from the only network it had access to: the gay community. Even among members of the gay community, people willing to act in gay porn were hard to come by because of the social stigma and the social risk involved in being publicly out.
=== 1980s ===
The 1980s were a period of transition for gay pornography film. The proliferation of VCRs made pornography videos easily accessible, and, as their prices fell, the market for home videos aimed at adult viewers became more and more lucrative. By the mid-1980s, the standard was to release pornography movies directly on video, which meant the wide disappearance of pornography theaters. Furthermore, video recording being more affordable, a multitude of producers entered the market, making low-budget pornography videos.
This shift from watching pornography as a public activity to doing so in private was also influenced by the discovery of HIV and the subsequent AIDS crisis. Public spaces for sex, such as theaters, became less frequented in the early 1980s, when sex in such venues became a much riskier behavior. Masturbatory activities in the privacy of the home became a safe sex practice in the midst of this health crisis.
Gay movies of the 1970s had contained some exploration of novel ways to represent the sexual act. In the 1980s, by contrast, all movies seemed to be made under an unwritten set of rules and conventions. Most scenes would start with a few lines of dialogue, have performers engage in foreplay (fellatio), followed by anal penetration, and ending with a visual climax close-up of ejaculating penises, called a money shot or cum shot. Video technology allowed the recording of longer scenes than did the costly film stock. Scenes were often composed of extended footage of the same act filmed from different shots using multiple cameras. The quality of the picture and sound were often very poor.
Major directors such as Matt Sterling, Eric Peterson, John Travis, and William Higgins set the standard for the models of the decade. The performers they cast were especially young, usually appearing to be around the ages of 22 or 23. Their bodies were slender and hairless, of the "swimmer's build" type, which contrasted with the older, bigger, and hairier men of 1970s gay pornography. Performer roles also evolved into the tight divisions of tops and bottoms. The top in anal sex is the penetrating partner, who, in these films, typically has a more muscular body and the larger penis. The bottom, or receiver of anal sex, in the films, is often smaller and sometimes more effeminate. The stars of the decade were almost always tops, while the bottoms were interchangeable (with the exception of Joey Stefano, a popular star, who was more of a bottom).
This strict division between tops and bottoms may have reflected a preference by some of the popular directors of the decade to hire heterosexual men for their movies. Heterosexual men who perform gay sex for monetary reasons (commonly labeled gay-for-pay) were considered a rare commodity in the gay sex trade, but the biggest producers of the decade could afford them. Many critics attributed the conventionalization of gay pornography of the 1980s to this trend.
=== After 1985 ===
1985 was a pivotal year for gay porn because by then the market had grown enough to make it a desirable field of work not only for gay men but also for straight men. According to one estimate by porn director Chi Chi LaRue, 60% of the actors in gay porn are actually straight. This incidence of straight men in gay porn is known as gay-for-pay, and its ethics and implications are highly disputed.
=== 1990s ===
The gay pornography industry diversified steadily during the 1990s.
In 1989, director Kristen Bjorn started a pornographic business which was considered to set a standard for gay pornography producers. He was a professional photographer, and the images in his videos were considered to be of high quality. As a former porn star himself, he directed his models with care, which helped improve the actors' believability. Other directors had to improve their technical quality to keep up with demands from their audiences.
Another significant change during this decade was the explosion of the niche market. Many videos began to be produced for viewers with specific tastes (i.e. for amateur pornography, military (men in uniform) pornography, transgender performers, bondage fetishes, performers belonging to specific ethnic groups, etc.), and this led to a diversification of the people involved in pornography production and consumption.
The gay pornography industry grew substantially in popularity during the 1990s, evolving into a complex and interactive subculture. Professional directors (such as Chi Chi LaRue and John Rutherford), technicians or deck operators during the U-matic phase of video technology, and performers started to engage in pornography as a career, their work sustained by emerging pornographic media and critics, such as Mikey Skee.
=== 21st century ===
In the 21st century, gay pornography has become a highly profitable enterprise, ranging from the "straight-guy" pornography of Active Duty and Sean Cody to the 'twinks' of BelAmi. Many niche genres and online delivery sites cater to various and changing interests. For instance, much of Van Darkholme's work contains bondage, particularly shibari, the Japanese art of bondage and knot-tying, a specialty within BDSM cultures.
On the other hand, Lucas Kazan Productions successfully adapted literary classics: Decameron: Two Naughty Tales is based on two tales by Boccaccio, and The Innkeeper on Goldoni's La Locandiera. Lucas Kazan also found inspiration in opera, combining gay porn and melodrama: The School for Lovers, the 2007 GayVN Award winner for Best Foreign Picture, is inspired by Mozart's Così fan tutte.
Some controversy currently exists regarding studios that produce bareback videos (videos of sexual penetration by the penis without a condom). Mainstream companies (such as Falcon Entertainment, Hot House Entertainment, Channel 1 Releasing, Lucas Entertainment, Raging Stallion Studios, Lucas Kazan Productions, and Titan Media) and LGBTQ health advocates assert that condomless videos promote unsafe sex and contribute to the HIV/AIDS pandemic, both in the pornography industry and in the gay community as a whole. The controversy dates back to the first few years of the HIV crisis, when nearly all gay pornography production companies voluntarily required their models to wear condoms for anal sex.
The premise of industry figures, notably Chi Chi LaRue, is that gay pornography serves as a leading forum for teaching safer sex skills and modelling healthy sexual behaviors. At least one bareback studio agrees that porn should promote healthy sexual behaviors, but disagrees on the definition of healthy in this context. Speaking about the AIDS crisis, Treasure Island Media owner and founder Paul Morris has expressed his belief that: "To a great extent, the current gay mindset surrounding HIV is a result of a generation of men living with PTSD and not getting the support and help they need now that the war is over. [...] As a pornographer, all I can do in response is to produce work that features men who are openly positive (or negative) and happily living their lives honestly and fully."
=== Sex education ===
Emerging research has suggested that pornography is a possible source of education about sex and relationships. In the absence of inclusive same-sex relationship education in traditional sources (i.e., schools, parents, friends, and mainstream media), gay pornography may be used by men who have sex with men as a source of information about intimacy, while serving its main purpose as a masturbatory aid. Contrary to popular views that pornography does not depict intimacy, a recent study showed that gay pornography depicts both physical and verbal intimacy.
=== Gay-for-pay ===
The authenticity and ethics behind gay-for-pay porn are highly disputed, even within the gay community. Viewers of gay porn in a survey by Escoffier reported a preference for authentic porn, which they define as exhibiting both erections and orgasms. Escoffier argues that if straight-identifying actors are able to deliver erections and orgasms on set, their performance is classified as situational homosexuality, and the porn itself is therefore authentic gay porn. Simon and Gagnon examine authenticity through scripts, arguing that porn actors follow learned behavioral sex scripts, so no porn is any more or less authentic than any other porn.
Because the term "gay-for-pay" implies a motivation that is solely economic, Escoffier argues it is not a fitting title. Other reasons certain gay-for-pay actors report for their career choice include latent homosexual fantasy and curiosity.
Among gay-for-pay actors, there is divided preference for the performance roles of top vs. bottom. It is common for gay-for-pay porn actors to start out as tops before they eventually give in to fan and industry pressure to shoot a scene or more as a bottom. Gay-for-pay actors are typically more comfortable being tops because the role of top is analogous to the "less gay" penetrator role of the man in straight sex. On the other hand, some gay-for-pay porn actors prefer to act as bottoms because they can do so without maintaining an erection. The implication here is that they are not even necessarily aroused during sex, making this the "less gay" of the two positions.
Even though they are acting in gay porn, some gay-for-pay actors hold homophobic views, causing tension in the workplace. Additionally, gay actors often find it difficult to perform with straight actors due to the lack of attraction. Tommy Cruise, a bisexual actor in gay porn, is quoted saying, "A lot of straight guys, they don't even want me touching them. I'm like 'Why are you even in this business?'" Some gay and bisexual porn actors, such as Buddy Jones, enjoy working with straight men in some circumstances.
== Audience ==
In August 2005, adult star Jenna Jameson launched "Club Thrust", an interactive website featuring gay male pornographic videos, which was shown to attract a female audience as well. Yaoi comic books and slash fiction are both genres featuring gay men, but primarily written by and for straight women. Some lesbian and bisexual women are also fans of gay male pornography, specifically yaoi, for its feminine-styled men.
== Bareback ==
Bareback gay pornography was standard in "pre-condom" films from the 1970s and early 1980s. As awareness of the risk of AIDS developed, pornography producers came under pressure to use condoms, both for the health of the performers and to serve as role models for their viewers. By the early 1990s, new pornographic videos usually featured the use of condoms for anal sex. Beginning in the 1990s, an increasing number of studios have been devoted to producing new films featuring men engaging in unprotected sex. For example, San Francisco-based studio Treasure Island Media, whose work focuses on this area, has produced bareback films since 1999. Other companies that do so include SEVP and Eurocreme. Mainstream gay pornographic studios such as Kristen Bjorn Productions have featured the occasional bareback scene, such as in "El Rancho" between performers who are real-life partners. Other studios such as Falcon Entertainment have also reissued older pre-condom films. Also, mainstream studios that consistently use condoms for anal sex scenes may sometimes choose editing techniques that make the presence of condoms somewhat ambiguous and less visually evident, and thus may encourage viewers to fantasize that barebacking is taking place, even though the performers are following safer-sex protocols. (In contrast, some mainstream directors are conscientious about using close-up shots of condom packets being opened, etc., to help clearly establish for the viewer that the sex is not bareback.)
Some scholars argue that while "barebacking" and "UAI" technically both mean the same thing, they have different undertones. With the increased use of the term "barebacking", the term has been adopted for marketing purposes. This is because "Unprotected Anal Intercourse" makes a direct connection between unprotected sex and the risk of contracting diseases like HIV/AIDS. In a study where participants were shown two different scenes featuring anal sex, the significance of the words "bareback" and "UAI" became apparent.
The first scene featured group sex in which several men were on top engaging in intercourse with one man on the bottom. The men on top were in their mid-30s and of varying ethnicities, while the man on the bottom was around 18 years old. The second scene featured two men, both in their 20s, in a living room setting. During the interview, the participants were much more reluctant to classify the second scene as "bareback" or "UAI" than they were for the first scene. Participants readily used "bareback" to describe the first scene, in which there were clear contrasts in race, age, and power. The participants described the second scene as being more "meaningful and romantic" and hence more likely to involve a condom being used to protect the other partner. The implication of this study is that the term "bareback" ultimately carries a dark meaning related to HIV/AIDS, even though the term itself makes no mention of protection. Studies have also shown that barebacking is decreasing in popularity within the gay subculture. Bareback pornography does not necessarily encourage more unprotected anal sex in reality, nor do all men who participate in anal sex necessarily want to have unprotected sex. What is clear is that there is still a sense of risk among participants of anal sex.
== Notable movies ==
=== 1970s ===
Boys in the Sand (Wakefield Poole, 1971) is the first gay pornographic feature film to achieve mainstream crossover success and helped usher in "porn chic". It has been called "a textbook example of gay erotic filmmaking" and was screened in film festivals all over the world.
The Back Row (Jerry Douglas, 1972) is the first feature from Douglas. Re-made by Chi Chi LaRue in 2001. Featured in Unzipped Magazine's The 100 Greatest Gay Adult Films Ever Made (2005).
L.A. Plays Itself (Fred Halsted, 1972) is archived at the Museum of Modern Art (MoMA), New York.
Nights in Black Leather (Richard Abel and Peter Berlin, 1973) is a movie starring Peter Berlin.
Falconhead (Michael Zen, 1977) is still acclaimed by cultural critics as one of a few gay pornographic movies that tried to bring complexity to the blue movie. Inspired many contemporary pornographic directors (Morris, 2004). Featured in Unzipped Magazine's The 100 Greatest Gay Adult Films Ever Made (2005).
Dune Buddies (Jack Deveau, 1978) Hand in Hand Films, is a film by a prominent director and studio of the 1970s. Shot on the historically gay-friendly Fire Island, the film (and others of the company) document well the sexual lives of New York City's gay men of the period. Excerpts displayed in the documentary Gay Sex in the 70s.
New York City Inferno (Jacques Scandelari, 1978), a French experimental gay pornography film featuring a licensed soundtrack by the Village People.
The Other Side of Aspen series, beginning in 1978, is among the Adult Video News' top ten all time gay movies.
Joe Gage wrote a trilogy of gay films, collectively referred to as either "The Kansas City Trilogy" or "The Working Man Trilogy" in the late 1970s. The films, Kansas City Trucking Co. (1976), El Paso Wrecking Corp. (1978) and L.A. Tool & Die (1979) were praised for their consistent portrayals of male/male sex occurring between rugged, masculine men who came from blue-collar and rural backgrounds and who related as "equal partners" – avoiding the frequent stereotypes of such men as effeminate inhabitants of urban gay neighborhoods, or who were caught up in a constraining "you play the woman, I'll be the man" mindset of dominant/submissive roles.
=== 1980s ===
The Bigger The Better (Matt Sterling, 1984); one of Adult Video News' 10 Great Gay Movies.
Les Minets Sauvages (Jean-Daniel Cadinot, 1984) is one of the biggest films by the French pornographic director.
My Masters (Christopher Rage, 1986) is one movie by a director who has influenced numerous gay artists.
Powertool (John Travis, 1986) is one of Adult Video News' 10 Great Gay Movies.
Big Guns (William Higgins, 1988) Catalina Video; is one of Adult Video News' 10 Great Gay Movies.
Carnival in Rio (Kristen Bjorn, 1989); see History, 1990s section above.
=== 1990s ===
Idol Eyes (Matt Sterling, 1990) Huge Video is a movie with Ryan Idol. Read Dyer, 1994 for more.
More of a Man (Jerry Douglas, 1994) All Worlds Video is a popular film with Joey Stefano (see History, 1980s section) also featuring Chi Chi LaRue in a non-sexual role. Read Burger, 1995 chapter for an extensive analysis.
Flashpoint (John Rutherford, 1994) Falcon Studios is a film by major director Rutherford. Featured in Unzipped Magazine's The 100 Greatest Gay Adult Films Ever Made (2005).
Frisky Summer 1–4 (George Duroy, 1995–2002) Bel Ami is one of Adult Video News' 10 Great Gay Movies.
Flesh and Blood (Jerry Douglas, 1996) All Worlds Video is one of Adult Video News' 10 Great Gay Movies.
Naked Highway (Wash West, 1997). The narrative and aesthetic qualities of this movie are representative of a new generation of pornographic directors. (Thomas, 2000:66) One of Adult Video News' 10 Great Gay Movies.
Three Brothers (Gino Colbert, 1998) Gino Pictures is a movie by director Colbert, starring the real-life Rockland brothers (Hal, Vince, and Shane). Featured in Unzipped Magazine's The 100 Greatest Gay Adult Films Ever Made (2005).
Descent (Steven Scarborough, 1999) Hot House Entertainment is a popular gay pornographic video with infrequent artistic qualities, by a prominent director and studio. Created legal dispute in Canada when the government tried to forbid its distribution in the name of obscenity rules.
Skin Gang (Bruce LaBruce, 1999) Cazzo Film is a film by art/porn director LaBruce. Aired in gay film festivals around the world.
Fallen Angel (Bruce Cam, 1997) Titan Media is a major film by prominent director and studio. Featured in Unzipped Magazine's The 100 Greatest Gay Adult Films Ever Made (2005).
=== 2000s ===
DreamBoy (Max Lincoln, 2003) Eurocreme. Spawned a whole series of similarly titled films (for example, OfficeBoy, SpyBoy and RentBoy)
Michael Lucas' Dangerous Liaisons (Michael Lucas, 2005) Lucas Entertainment is the biggest production by this director and studio. Variously described as a film adaptation of Les liaisons dangereuses (1782), and a remake of Dangerous Liaisons (1988).
Dawson's 20 Load Weekend (Paul Morris, 2004) Treasure Island Media is a major production by infamous director Paul Morris. Created huge controversy because it is mainly composed of bareback sex.
Michael Lucas' La Dolce Vita (Michael Lucas, 2006) At a budget of $250,000, Lucas Entertainment claims it to be the most expensive gay porn film ever made. It contained celebrity cameos and attracted controversy with a lawsuit.
== See also ==
Bara
Bisexual pornography
Boyd McDonald
Boys' love
David Hurles
Erotic literature
Gay pulp fiction
Gay sex roles
Gay sexual practices
List of actors in gay pornographic films
Sex industry
== Further reading and information ==
=== Academic works ===
Adams-Thies, Brian (July 2015). "Choosing the right partner means choosing the right porn: how gay porn communicates in the home". Porn Studies. 2 (2–3): 123–136. doi:10.1080/23268743.2015.1060007.
Bronski, Michael (2003). Pulp friction: uncovering the golden age of gay male pulps. New York: St. Martin's Griffin. ISBN 9780312252670.
Burger, John R. (1995). One-handed histories: the eroto-politics of gay male video pornography. New York: Haworth Press. ISBN 9781560238522.
Cante, Richard C. (2008), "Chapters 4, 5 and 6", in Cante, Richard C. (ed.), Gay men and the forms of contemporary US culture, Burlington, Vermont: Ashgate Publishing, ISBN 9780754672302.
Delany, Samuel R. (1999). Times Square Red, Times Square Blue. New York: New York University Press. ISBN 9780814719206.
Dyer, Richard (Spring 1994). "Idol thoughts: orgasm and self-reflexivity in gay pornography". Critical Quarterly. 36 (1): 49–62. doi:10.1111/j.1467-8705.1994.tb01012.x.
Dyer, Richard (2002) [1992], "Coming to terms: gay pornography", in Dyer, Richard (ed.), Only entertainment (2nd ed.), New York: Routledge, pp. 138–150, ISBN 9780415254977.
Eisenberg, Daniel (1990), "Pornography (definition)", in Dynes, Wayne R.; Johansson, Warren; Percy, William A; Donaldson, Stephen (eds.), Encyclopedia of homosexuality, Garland reference library of social science 492, New York: Garland Pub, pp. 1023–1028, ISBN 9781558621473, OCLC 835916402. Pdf. Abridged pdf.
Kendall, Christopher N. (2004). Gay male pornography: an issue of sex discrimination. Vancouver, British Columbia, Canada: UBC Press. ISBN 9780774851152.
Kendall, Christopher N.; Funk, Rus Ervin (January 2004). "Gay male pornography's "actors": when "fantasy" isn't". Journal of Trauma Practice. 2 (3–4): 93–114. doi:10.1300/J189v02n03_05. S2CID 141304973.
Kendall, Christopher N. (2011), "The harms of gay male pornography", in Tankard Reist, Melinda; Bray, Abigail (eds.), Big Porn Inc.: exposing the harms of the global pornography industry, North Melbourne, Victoria: Spinifex Press, pp. 53–62, ISBN 9781876756895.
Moore, Patrick (2004). Beyond shame: reclaiming the abandoned history of radical gay sexuality. Boston: Beacon Press. ISBN 9780807079577.
Morrison, Todd G. (2004). Eclectic views on gay male pornography: pornucopia. Binghamton, New York: Harrington Park Press. ISBN 9781560232919.
Neville, Lucy (July 2015). "Male gays in the female gaze: women who watch m/m pornography" (PDF). Porn Studies. 2 (2–3): 192–207. doi:10.1080/23268743.2015.1052937. Archived from the original (PDF) on February 16, 2021. Retrieved January 30, 2019.
Ryberg, Ingrid (July 2015). "Carnal fantasizing: embodied spectatorship of queer, feminist and lesbian pornography". Porn Studies. 2 (2–3): 161–173. doi:10.1080/23268743.2015.1059012.
Slade, Joseph W. (2001). Pornography and sexual representation: a reference guide. Westport, Connecticut: Greenwood Press. ISBN 9780313315213.
Stevenson, Jack (Fall 1997). "From the bedroom to the bijou: a secret history of American gay sex cinema". Film Quarterly. 51 (1): 24–31. doi:10.2307/1213528. JSTOR 1213528.
Thomas, Joe A. (2000). "Gay male video pornography: past, present, and future". In Weitzer, Ronald (ed.). Sex for sale: prostitution, pornography, and the sex industry. New York: Routledge. pp. 49–66. ISBN 9780415922951.
Waugh, Thomas; Walker, Willie (2004). Lust unearthed: vintage gay graphics from the DuBek collection. Vancouver, British Columbia, Canada: Arsenal Pulp Press. ISBN 9781551521657.
Waugh, Thomas (1996). Hard to imagine: gay male eroticism in photography and film from their beginnings to Stonewall. New York: Columbia University Press. ISBN 9780231099981.
Williams, Linda (2004). Porn studies. Durham, North Carolina: Duke University Press. ISBN 9780822333128.
=== Biographies ===
Edmonson, Roger (2000). Clone: The Life and Legacy of Al Parker, Gay Superstar. Alyson Books. ISBN 978-1-55583-529-3.
Edmonson, Roger; Jerry Douglas (1998). Boy in the Sand: Casey Donovan, All-American Sex Star. Alyson Books. ISBN 978-1-55583-457-9.
Isherwood, Charles (1996). Wonder Bread & Ecstasy : The Life and Death of Joey Stefano. Alyson Books. ISBN 978-1-55583-383-1.
Larue, Chi Chi; John Erich (1997). Making It Big: Sex Stars, Porn Films and Me. Alyson Publications. ISBN 978-1-55583-392-3.
=== Documentaries ===
Beyond Vanilla (Claes Lilja, 2001)
Gay Sex in the 70s. (Joseph F. Lovett, 2005)
That Man: Peter Berlin. (Jim Tushinski, 2005)
Island. (Ryan Sullivan, 2010)
== References == | Wikipedia/Gay_pornography |
Parental controls are features which may be included in digital television services, computers, video games, mobile devices and software to assist parents in restricting certain content viewable by their children. This may be content they deem inappropriate for their children's age or maturity level, or feel is aimed more at an adult audience. Parental controls fall into roughly four categories: content filters, which limit access to age-inappropriate content; usage controls, which constrain the usage of these devices, such as placing time limits on usage or forbidding certain types of usage; computer usage management tools, which enforce the use of certain software; and monitoring, which can track location and activity when the devices are used.
Content filters were the first popular type of parental control used to limit access to Internet content. Television stations also began to introduce V-chip technology to limit access to television content. Modern usage controls are able to restrict a range of explicit content such as explicit songs and movies, turn devices off during specific times of the day, and limit the volume output of devices; with GPS technology becoming affordable, it is now possible to easily locate devices such as mobile phones. UNICEF emphasizes the responsibility of parents and teachers in this role.
The demand for parental control methods that restrict content has increased over the decades due to the rising availability of the Internet. A 2014 ICM survey showed that almost a quarter of people under the age of 12 had been exposed to online pornography. Content restriction especially helps in cases where children are exposed to inappropriate content by accident. Monitoring may be effective for lessening acts of cyberbullying on the internet. It is unclear whether parental controls affect online harassment of children, as little is known about the role the family plays in protecting children from undesirable experiences online. Psychologically, cyberbullying could be more harmful to the victim than traditional bullying. Past studies have shown that about 75% of adolescents were subjected to cyberbullying. A lack of parental controls in the household could enable kids to take part in cyberbullying or to become its victims.
== Overview ==
Behavioral control consists of controlling the amount of time a child spends online, or how much the child can view. Psychological control involves parents trying to influence children's behavior.
Several techniques exist for creating parental controls for blocking websites. Add-on parental control software may monitor APIs in order to observe applications such as a web browser or Internet chat application and to intervene according to certain criteria, such as a match in a database of banned words. Virtually all parental control software includes a password or other form of authentication to prevent unauthorized users from disabling it.
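The banned-word mechanism just described can be illustrated with a minimal sketch. This is not code from any actual product: the word list, the password, and the function names are hypothetical placeholders, and real add-on filters hook browser or chat APIs rather than receiving text directly.

```python
# Minimal sketch of an add-on content filter (hypothetical, not from any product).
import hashlib

BANNED_WORDS = {"banned-term-a", "banned-term-b"}  # placeholder banned-word database
PARENT_PASSWORD_HASH = hashlib.sha256(b"parent-secret").hexdigest()  # set by the parent

def is_blocked(text: str) -> bool:
    """Intervene when observed text matches an entry in the banned-word list."""
    lowered = text.lower()
    return any(word in lowered for word in BANNED_WORDS)

def disable_filter(password: str) -> bool:
    """Only a user who can authenticate with the parent password may disable the filter."""
    return hashlib.sha256(password.encode()).hexdigest() == PARENT_PASSWORD_HASH

print(is_blocked("a page mentioning banned-term-a"))  # True -> block or log the page
print(disable_filter("wrong-guess"))                  # False -> filter stays enabled
```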
Techniques involving a proxy server are also used. A web browser is set to send requests for web content to the proxy server rather than directly to the intended web server. The proxy server then fetches the web page from the server on the browser's behalf and passes on the content to the browser. Proxy servers can inspect the data being sent and received and intervene depending on various criteria relating to the content of the page or the URL being requested, for example, using a database of banned words or banned URLs. The proxy method's major disadvantage is that it requires the client application to be configured to use the proxy; if it is possible for the user to reconfigure applications to access the Internet directly rather than going through the proxy, then this control is easily bypassed. Proxy servers themselves may also be used to circumvent parental controls, and there are other techniques for bypassing parental controls as well.
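As a rough illustration of the proxy approach, the sketch below assumes the browser has been configured to route requests through a function like this one; the blocklist entries are hypothetical, and a real proxy would also speak HTTP to the client and filter the returned page content, not just the URL.

```python
# Minimal sketch of proxy-side filtering (hypothetical blocklist, simplified flow).
from urllib.parse import urlparse
from urllib.request import urlopen

BANNED_HOSTS = {"blocked.example.com"}   # placeholder banned-URL database
BANNED_URL_WORDS = {"banned-term"}       # placeholder banned-word database

def fetch_through_proxy(url: str) -> bytes:
    """Fetch a page on the browser's behalf, intervening if the URL is disallowed."""
    host = urlparse(url).hostname or ""
    if host in BANNED_HOSTS or any(word in url.lower() for word in BANNED_URL_WORDS):
        return b"<html><body>Blocked by parental controls</body></html>"
    with urlopen(url) as response:       # request the page from the real web server
        return response.read()           # pass the content back to the browser
```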
The computer usage management method, unlike content filters, is focused on empowering parents to balance the computing environment for children by regulating gaming. The main idea of these applications is to allow parents to introduce a learning component into children's computing time: children must earn gaming time by working through educational content, as in the sketch below.
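A minimal sketch of that idea follows; the exchange rate between completed lessons and gaming minutes is a hypothetical parameter, not one taken from any particular application.

```python
# Minimal sketch of "earn gaming time through educational content" (hypothetical rates).
class UsageBudget:
    def __init__(self, minutes_per_lesson: int = 15):
        self.minutes_per_lesson = minutes_per_lesson  # gaming minutes earned per lesson
        self.gaming_minutes = 0

    def complete_lesson(self) -> None:
        """Credit gaming time when the child finishes an educational unit."""
        self.gaming_minutes += self.minutes_per_lesson

    def request_gaming(self, minutes: int) -> bool:
        """Allow a gaming session only if enough time has been earned."""
        if minutes > self.gaming_minutes:
            return False
        self.gaming_minutes -= minutes
        return True

budget = UsageBudget()
budget.complete_lesson()
print(budget.request_gaming(30))  # False: only 15 minutes earned so far
print(budget.request_gaming(10))  # True: 5 minutes remain afterwards
```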
Network-based parental control devices have been developed which work as a firewall router, using packet filtering, DNS response policy zones (RPZ) and deep packet inspection (DPI) to block inappropriate web content. These methods have been used in commercial and governmental communication networks. Another form of these devices, made for home networks, has also been developed: such devices plug into the home router and create a new wireless network specifically designed for kids to connect to.
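The DNS response policy zone idea can be sketched as follows: names under a blocked zone (including subdomains) are answered as if they did not exist, while everything else is passed to an ordinary resolver. The zone list is hypothetical, and a real device would answer actual DNS queries rather than expose a Python function.

```python
# Minimal sketch of DNS response-policy-style blocking (hypothetical policy zone).
import socket
from typing import Optional

POLICY_ZONE = {"adult.example", "gambling.example"}  # placeholder blocked zones

def resolve(name: str) -> Optional[str]:
    """Return None (behaving like NXDOMAIN) for blocked names, otherwise resolve normally."""
    labels = name.lower().rstrip(".").split(".")
    for i in range(len(labels) - 1):
        if ".".join(labels[i:]) in POLICY_ZONE:  # the name or one of its parent domains
            return None
    return socket.gethostbyname(name)            # forward to the normal resolver
```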
== Parental controls on mobile devices ==
The increased use of mobile devices with fully featured web browsers and downloadable applications has created a demand for parental controls on these devices. Examples of mobile devices that contain parental controls include cell phones, tablets, and e-readers. In November 2007, Verizon was the first carrier to offer age-appropriate content filters as well as the first to offer generic content filters, recognizing that mobile devices were used to access all manner of content, from movies and music to short-code programs and websites. In June 2009, with iPhone OS 3.0, Apple became the first company to provide a built-in mechanism on mobile devices for creating age brackets for users that would block unwanted applications from being downloaded to the device. In the following years, the developers of all major operating systems presented built-in tools for parental control, including Linux, Android, Windows, and even the more business-oriented platform BlackBerry. There are also applications that allow parents to monitor real-time conversations on their children's phones via access to text messages, browser history, and application history. One example is Trend Micro, which offers not only protection from viruses but also parental controls for phones and tablets of almost all brands. Most of these apps build on the parental control features mobile devices already have, adding the ability to monitor and filter texts and calls, protection while surfing the web, and blocking of access to specific websites. Applications of this sort have created rising competition in their market.
Mobile device software enables parents to restrict which applications their child can access; to monitor text messages, phone logs, MMS pictures, and other transactions occurring on their child's mobile device; to set a time limit on the usage of mobile devices; and to track the exact location of their children as well as monitor calls and the content of texts. This software also allows parents to monitor social media accounts: parents are able to view posts, pictures, and any interactions in real time. Another function of this software is to keep track of bullying.
Most internet service providers offer no-cost filtering options to limit internet browsing options and block unsuitable content. Implementing parental controls and discussing internet safety are useful steps to protect children from inappropriate information.
Although parental controls can protect children, they also come with some negative factors. Children's anxiety may increase due to parental controls. In extreme cases, a child may become so angry that they destroy their device, defeating the purpose of parental controls entirely. In that case, it might be a better idea to forgo installing parental controls.
== Bypassing parental controls ==
If the filtering software is installed locally on the computer, it can easily be bypassed by booting the computer in question from alternative media, with an alternative operating system, or (on Windows) in Safe Mode. However, if the computer's BIOS is configured to disallow booting from removable media, and if changes to the BIOS are prohibited without proper authentication, then booting into an alternative operating system is not possible without circumventing the BIOS security by partially disassembling the computer and resetting the BIOS configuration using a button or jumper, or by removing and replacing the internal button cell battery.
Using external proxy servers or other servers: the user sends requests to the external server, which retrieves content on the user's behalf. The filtering software may then never know which URLs the user is accessing, as all communications are with the one external server and the filtering software never sees any communications with the web servers from which the content really originated. To counter this, filtering software may also block access to popular proxies. Additionally, filtering systems which only permit access to a set of allowed URLs (whitelisting) will not permit access to anything outside this list, including proxy servers.
Resetting passwords using exploits
Modifying the software's files
Brute-force attacks on software passwords
'Incognito/InPrivate' modes with the 'image' tab: Users, parental control software, and parental control routers may use 'safe search' (SafeSearch) to enforce filtering at most major search engines. However, in most browsers a user may select 'Incognito' or 'InPrivate' browsing, enter search terms for content, and select the 'image' tab to effectively bypass 'safe search' and many parental control filters. See below for router based considerations and solutions.
Filtering that occurs outside of the individual's computer (such as at the router) cannot be bypassed using the above methods (except for 'Incognito/InPrivate' modes). However:
The major search engines cache and serve content on their own servers. As a result, domain filters, such as many third-party DNS servers, also fail to filter the 'Incognito/InPrivate' with 'image' tab bypass.
Most commercially available routers with parental controls do not enforce safe search at the router, and therefore do not filter the 'Incognito/InPrivate' with 'image' tab bypass; a sketch of DNS-level SafeSearch enforcement follows below.
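One way a router can close this gap is to enforce SafeSearch at the DNS layer: queries for a search engine's hostname are answered with the provider's "force SafeSearch" address instead (Google publishes forcesafesearch.google.com for this purpose). The sketch below shows the idea in simplified form; an actual router would apply the rewrite inside its DNS forwarder, and the alias table here is only a partial, assumed example.

```python
# Minimal sketch of DNS-level SafeSearch enforcement (simplified, partial alias table).
import socket

SAFESEARCH_ALIASES = {
    "www.google.com": "forcesafesearch.google.com",  # Google's documented SafeSearch host
}

def resolve_with_safesearch(name: str) -> str:
    """Answer search-engine lookups with the SafeSearch-enforcing address."""
    target = SAFESEARCH_ALIASES.get(name.lower(), name)
    return socket.gethostbyname(target)  # clients then reach the engine with SafeSearch on
```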
== Criticism ==
While parental controls have been added to various electronic media and have increased in popularity, the question has been raised whether they are enough to protect and deter children from exposure to inappropriate material. Researchers have speculated that a strict focus on control may hinder a child's ability to learn self-governing skills and restrict the growth of open communication between parent and child.
== Operating systems with parental controls ==
Below is a list of popular operating systems which currently have built-in parental control features:
Android operating system
iOS (12 or later)
macOS (10.3 and later)
DoudouLinux (built-in web filter)
Sabily (built-in web filter)
Ubuntu Christian edition (built-in web filter)
Windows (Vista, 7, 10 and later)
ChromeOS (65 or later)
== See also ==
Adultism
David Burt
Internet censorship
List of parental control software
Motion picture rating system
Television rating system
Videogame Rating Council
Retina-X Studios
Smart Sheriff
Restricted to Adults
== References == | Wikipedia/Parental_controls |
Softcore pornography or softcore porn is commercial still photography, film, imagery, or even audio that has a pornographic or erotic component but is less sexually graphic or intrusive than hardcore pornography, being distinguished by the absence of explicit depictions of sexual penetration and other explicit sexual acts. It typically contains nude or semi-nude actors involved in suggestive poses or scenes, and is intended to be sexually arousing and aesthetically pleasing.
The distinction between softcore pornography and erotic photography, or erotic art such as Vargas girl pin-ups, is largely a matter of debate. When the subject is naked, the image must be differentiated from nude art, and photos belong within the broader category of nude photography.
== Components ==
Softcore pornography may include sexual activity between two people or masturbation. It does not contain explicit depictions of sexual penetration, cunnilingus, fellatio, fingering, handjobs, or ejaculation. Depictions of erections of the penis may not be allowed, although attitudes towards this are ever-changing.
Commercial pornography can be differentiated from erotica, which has high-art standards and aspirations.
Portions of an image that are considered too graphic may be hidden or obscured in a variety of ways, as by hair or clothing, intentionally-positioned hands or other body parts, artfully located foreground elements such as plants, pillows, furniture, or drapery, or by carefully chosen camera angles.
Pornographic filmmakers sometimes make both hardcore and softcore versions of a given film, with the softcore version using less explicit views of sex scenes or using other techniques to tone down any objectionable features. For example, the softcore version of a given film may have been edited for the in-house hotel pay-per-view market.
Total nudity is currently commonplace in several magazines, as well as in photography and on the Internet.
== Regulation and censorship ==
Softcore films are commonly less regulated and restricted than hardcore pornography, and cater to a different market. In most countries, softcore films are eligible for movie ratings, usually a restricted rating, though many such films are also released unrated. As with hardcore films, the availability of softcore films varies depending on local laws. The exhibition of such films may also be restricted to those above a certain age, typically 18. At least one country, Germany, has different age limits for hardcore and softcore pornography: softcore material usually receives an FSK-16 rating (no one under 16 is allowed to buy it), while hardcore material receives an FSK-18 rating (no one under 18 is allowed to buy it). In some countries, broadcasting of softcore films is widespread on cable television networks, with some, such as Cinemax, producing their own in-house softcore films and television series.
In some countries, images of women's genitals are digitally manipulated so that they are not too "detailed". An Australian pornographic actress says that images of her own genitals sold to pornographic magazines in different countries are digitally manipulated to change the size and shape of the labia according to censorship standards in different countries.
== History ==
Originally, softcore pornography was presented mainly in the form of men's magazines, in both still photos and art drawings (such as Vargas girls), when it was barely acceptable to show a glimpse of a woman's nipple in the 1950s. By the 1970s, mainstream magazines such as Playboy, Penthouse, and especially Hustler showcased nudity.
After the formation of the MPAA rating system in the United States and prior to the 1980s, numerous softcore films, with a wide range of production costs, were released to mainstream movie theatres, especially drive-ins. Emmanuelle and Alice in Wonderland received positive reviews from noted critics such as Roger Ebert.
== See also ==
Ecchi
Erotic photography
Fan service
Sexploitation film
== References == | Wikipedia/Softcore_pornography |
The X-Rated Critics Organization (XRCO) is a group of writers and editors from the American adult entertainment industry who each year present awards in recognition of achievement within the industry. After the controversy and criticism of the Best Erotic Scene win for the movie Virgin in 1984 at the Adult Film Association of America awards, the XRCO and its "Heart-On Awards" were founded.
== History ==
The organization was founded in 1984, consisting of writers from Los Angeles, New York City and Philadelphia. Jim Holliday, AVN Award-winning producer and historian, is considered the founding father of the X-Rated Critics Organization. After Holliday's death, the position of XRCO Historian was temporarily filled by XRCO founding member Bill Margold until 2006. James Avalon, a former editor of Adam Film World’s special editions, was also a founding member of XRCO.
XRCO's original Chairman, Jared Rutter, stepped down in 2004 and is now recognized as an "Honorary Chairman". The current co-Chairmen are "Dirty Bob" Krotts and Dick Freeman.
== Members and management ==
In 2005, XRCO added its first European members. Its members now include writers from a wide range of adult publications and Internet sites. Many members work full-time at this occupation; some have university degrees with an emphasis on film criticism. XRCO members remain active members after being evaluated yearly to determine whether they are still active in the adult business, still qualified, and still participating in the XRCO Award nomination and voting processes. Anyone not participating is placed on an "inactive" list for one year and, if found to still be lacking and not participating after that time, is then dropped. There is no membership fee to be an XRCO member.
There are currently 27 award categories, including the XRCO Hall of Fame, which honor the achievements of performers, directors and movies.
== Award program ==
The first XRCO Awards were presented in Hollywood on February 14, 1985. Until 1991, the awards were presented on Valentine's Day each year.
The award program also includes a Hall of Fame ceremony and inductions.
== References ==
== External links ==
Official website
X-Rated Critics Organization, USA at IMDb
Krotts, Bob (March 2, 2007). "XRCO 2006 Nominations Announced". 23rd Annual | Wikipedia/X-Rated_Critics_Organization |
The Survivors Network of those Abused by Priests, known as SNAP, established in 1989, is a 501(c)(3) non-profit organization support group of survivors of clergy sexual abuse and their supporters, founded in the United States. Barbara Blaine, a survivor of sex abuse by a priest, was the founding president. SNAP, which initially focused on the Roman Catholic Church, had 12,000 members in 56 countries as of 2012. It has branches for religious groups, such as SNAP Baptist, SNAP Orthodox, and SNAP Presbyterian, for non-religious groups (Scouts, families), and for geographic regions, e.g., SNAP Australia and SNAP Germany.
Shaun Dougherty was elected to serve as the president in July 2021 and remained president as of April 2024. Tim Lennon was a past president.
== History ==
SNAP's history and a list of its current staff and directors are available on its website.
== Activities ==
On June 13, 2002, SNAP's David Clohessy addressed the U.S. Conference of Catholic Bishops at its high-profile meeting in Dallas, Texas. He asserted that many church-going Catholics had strong concerns about the way in which bishops were handling the growing child sexual abuse scandal. Clohessy said, "We're not here because you want us to be. We're not here because we've earned it or have fought hard for it. We're here because children are a gift from God, and Catholic parents know this! That's why 87% of them think that if you've helped molesters commit their crimes, you should resign." In 2004, SNAP acknowledged accepting donations from leading attorneys who had represented clients in abuse cases, but maintained that it did not direct clients to these attorneys.
On August 8, 2009, former Oklahoma Governor Frank Keating, who served as the first chair of the National Review Board established by the U.S. Catholic bishops to investigate clergy sex abuse, addressed SNAP's annual gathering. He admitted he was at first naïve about the scope of child sexual abuse in the Catholic Church and urged bishops who covered up crimes to be prosecuted.
In 2009 SNAP supported a legislative bill in New York that would push Catholic Church dioceses to disclose the names of all clergy who have been transferred or retired due to "credible allegations" of abuse.
On June 9, 2009, a group of survivors of clergy abuse protested the appointment of Joseph Cistone as bishop of the Saginaw, Michigan diocese.
Retired Auxiliary Bishop Thomas Gumbleton of the Archdiocese of Detroit is a member and strong supporter of SNAP and has helped SNAP do fundraising work. According to the National Catholic Reporter, Gumbleton was punished by the Vatican and removed as a parish pastor because of work he did with SNAP and concerns he had about the Church's response to child sexual abuse.
SNAP's president, Barbara Blaine, and national director, David Clohessy, resigned from their SNAP positions, effective February 4, 2017, and December 31, 2016, respectively. According to the Chicago Tribune, "Barbara Dorris, SNAP's outreach director, has become the managing director". Three other longtime leaders, board president Mary Ellen Kruger and outreach director Barbara Dorris, both of St. Louis, and board member Mary Dispenza, left in March 2018.
In 2025 SNAP launched a database on cardinals’ records on clergy sex abuse.
== Defamation lawsuit and sanctions ==
In 2015 SNAP was ordered by US District Court Judge Carol E. Jackson to release information on alleged sex abuse victims, during the discovery process of a defamation suit by an accused priest against whom charges were dropped.
According to David Clohessy, the director and spokesman, it was the most significant legal battle the organization had faced in its 23 years, and he said that he personally might be fined or jailed. SNAP refused to fully comply with the judge's order, claiming a "rape crisis center privilege". In August 2016, Judge Jackson found that no such privilege exists and imposed sanctions against SNAP. The judge found that SNAP had defamed the priest and conspired against him, and ordered SNAP to pay the priest's legal fees. SNAP's attorney stated they were considering an appeal.
== Hammond v. SNAP ==
On January 18, 2017, a former fundraiser for SNAP, Gretchen Rachel Hammond, filed a whistleblower lawsuit against the organization in Cook County, Illinois. Hammond had been employed by SNAP as a Director of Development from July 2011 through February 2013. In the lawsuit, Hammond alleged that SNAP fired her in retaliation for confronting the organization for "colluding with survivors' attorneys." The lawsuit stated that "SNAP does not focus on protecting or helping survivors—it exploits them. SNAP routinely accepts financial kickbacks from attorneys in the form of 'donations.' In exchange for the kickbacks, SNAP refers survivors as potential clients to attorneys, who then file lawsuits on behalf of the survivors against the Catholic Church." According to the Catholic News Agency, the lawsuit claimed that SNAP "receives 'substantial contributions' from attorneys sometimes totaling more than 40 or 50 percent of its annual contributions. A prominent Minnesota attorney who represents clergy abuse survivors reportedly donated several six-figure annual sums, including over $415,000 in 2008. Other unnamed attorney-donors who represent abuse survivors reportedly came from California, Chicago, Seattle, and Delaware." The lawsuit also cited emails sent by David Clohessy and Barbara Blaine to survivors and "prominent attorneys".
In one such email, Clohessy urges a survivor to sue the Wisconsin archdiocese "i sure hope you DO pursue the WI [Wisconsin] bankruptcy ... Every nickle (sic) they don't have is a nickle (sic) that they can't spend on defense lawyers, PR staff,gay-bashing, women-hating, contraceptive-battling, etc."
SNAP denied the allegations. "That's simply just not true," outreach director Barbara Dorris told the St. Louis Post-Dispatch of the claim that SNAP misrepresented the best interests of abuse victims. "We have been and always will be a self-help support group for victims." Dorris added that she could not remember whether Hammond, who is currently a journalist for the LGBT paper Windy City Times in Chicago, had been fired or not. SNAP president Barbara Blaine issued a statement which read: "The allegations are not true. This will be proven in court. SNAP leaders are now, and always have been, devoted to following the SNAP mission: To help victims heal and to prevent further sexual abuse." On January 24, 2017, the Chicago Sun-Times reported that Clohessy had "voluntarily resigned" from SNAP "effective Dec. 31", according to a two-paragraph email from SNAP board chairwoman Mary Ellen Kruger. Clohessy told the Kansas City Star "that the lawsuit had nothing to do with his resignation and called the allegations in the case 'preposterous.'" Blaine died in 2017. The lawsuit was settled in early 2018. Clohessy returned to SNAP as a spokesperson.
== See also ==
Sexual abuse cases in church
Abuses in the Baptist Faith
Jehovah's Witnesses and child sex abuse
Catholic Church sex abuse cases
Catholic Church sex abuse cases in the United States
Critique and consequences related topics
Debate on the causes of clerical child abuse
Ecclesiastical response to Catholic sex abuse cases
Instruction Concerning the Criteria for the Discernment of Vocations with Regard to Persons with Homosexual Tendencies in View of Their Admission to the Seminary and to Holy Orders
Settlements and bankruptcies in Catholic sex abuse cases
Sex Crimes and the Vatican, BBC documentary
Spotlight, a 2015 film about The Boston Globe's "Spotlight" team, and its 2001 investigation into cases of widespread and systemic child sex abuse in the Boston area by numerous Catholic priests. It features Phil Saviano, founder of the New England chapter of SNAP.
Investigation, prevention and victim support related topics
Charter for the Protection of Children and Young People, US
National Review Board, US
Pontifical Commission for the Protection of Minors, Vatican
Virtus (program), church initiative in US
Vos estis lux mundi, church procedure for abuse cases
== References ==
== External links ==
Home Page of SNAP
Hammond v. SNAP
Rev. Xiu Hui "Joseph" Jiang vs. Tonya Levette Porter, et al. | Wikipedia/Survivors_Network_of_those_Abused_by_Priests |
Rape pornography is a subgenre of pornography involving the description or depiction of rape. Such pornography either involves simulated rape, wherein sexually consenting adults feign rape, or it involves actual rape. Victims of actual rape may be coerced to feign consent such that the pornography produced deceptively appears as simulated rape or non-rape pornography. The depiction of rape in non-pornographic media is not considered rape pornography. Simulated scenes of rape and other forms of sexual violence have appeared in mainstream cinema, including rape and revenge films, almost since its advent.
The legality of simulated rape pornography varies across legal jurisdictions. It is controversial because of the argument that it encourages people to commit rape. However, studies of the effects of pornography depicting sexual violence produce conflicting results. The creation of real rape pornography is a sex crime in countries where rape is illegal. Real rape pornography, including statutory rape in child pornography, is created for profit and other reasons. Rape pornography, as well as revenge porn and other similar subgenres depicting violence, has been associated with rape culture.
== Legality ==
=== United Kingdom ===
The possession of rape pornography is illegal in Scotland, England and Wales.
In Scotland, the Criminal Justice and Licensing (Scotland) Act 2010 criminalised possession of "extreme" pornography. This included depictions of rape, and "other non-consensual penetrative sexual activity, whether violent or otherwise", including those involving consenting adults and images that were faked. The maximum penalty is an unlimited fine and 3 years imprisonment. The law is not often used, and it resulted in only one prosecution during the first four years that it was in force.
In England and Wales, it took another five years before pornography which depicts rape (including simulations involving consenting adults) was made illegal, bringing the law into line with that of Scotland. Section 63 of the Criminal Justice and Immigration Act 2008 had already criminalised possession of "extreme pornography" but it did not explicitly specify depictions of rape. At that time it was thought that the sale of rape pornography might already be illegal in England and Wales as a result of the Obscene Publications Act 1959, but the ruling in R v Peacock in January 2012 demonstrated that this was not the case. The introduction of a new law was first announced in 2013 by the UK Prime Minister David Cameron. In a speech to the NSPCC he stated that pornography that depicts simulated rape "normalise(s) sexual violence against women", although the Ministry of Justice criminal policy unit had previously stated that "we have no evidence to show that the creation of staged rape images involves any harm to the participants or causes harm to society at large".
In February 2015, Section 16 of the Criminal Justice and Courts Act 2015 amended the Criminal Justice and Immigration Act 2008 to criminalise the possession of pornographic imagery depicting acts of rape. The law only applies to consensual, simulated, fantasy material. The possession of an image capturing an actual rape, for example CCTV footage, is not illegal; but a "make believe" image created by and for consenting adults is open to prosecution. In January 2014 sexual freedom campaign groups criticised Section 16 as being poorly defined and liable to criminalise a wider range of material than originally suggested. However, in April 2014 the BBFC's presentation to Parliament suggested that the proposed legislation would not cover "clearly fictional depictions of rape and other sexual violence in which participants are clearly actors, acting to a script".
=== Germany ===
In Germany, the distribution of pornography featuring real or faked rape is illegal.
=== United States ===
There are few practical legal restrictions on rape pornography in the United States. Law enforcement agencies concentrate on cases where they believe a crime has been committed in the production. "Fantasy" rape pornography depicting rape simulations involving consenting adults is not a priority for the police.
In response to the verdict of the People v. Turner sexual assault case, xHamster instituted a "Brock Turner rule", which banned videos involving rape, including those involving sex with an unconscious or hypnotised partner.
== Real rape cases ==
=== Non-internet ===
American porn actress Linda Lovelace wrote in her autobiography, Ordeal, that she was coerced and raped in pornographic films in the 1970s.
=== Internet ===
Internet policing with respect to investigating actual crime has been made increasingly difficult by rape pornography websites operating anonymously, ignoring ICANN regulations and providing false information for the Whois database.
From 2009 to 2020, the pornographic company GirlsDoPorn created hundreds of pornographic videos in which the women depicted were manipulated, coerced, lied to, given marijuana or other drugs or physically forced to have sex, according to the accounts of victims and material from a lawsuit against the company. Six people involved in the website were charged with sex trafficking by force, fraud and coercion in November 2019. Official videos from the company were viewed over a billion times, including a paid subscription service on its website, and an estimated 680 million views on the tube site Pornhub, where the official channel was among the site's top 20 most viewed. Pirated copies of the videos were also viewed hundreds of millions of times. According to a lawsuit, the videos could still be found on mainstream pornography websites up to at least December 2020.
Japanese women were forced to be in pornographic videos in the 2010s.
Real rape videos of women and girls were filmed in the Doctor's Room and Nth Room cases in South Korea in the late 2010s and early 2020s.
Videos showing real rape have been hosted on popular pornographic video sharing and pornography websites. These websites have been criticized by petitioners.
==== Cybersex trafficking ====
Victims of cybersex trafficking have been forced into live-streaming rape pornography, which can be recorded and later sold. They are raped by traffickers in front of a webcam or forced to perform sex acts on themselves or other victims. The traffickers film and broadcast the sex crimes in real time. Victims are frequently forced to watch the paying consumers on shared screens and follow their orders. It occurs in locations commonly referred to as "cybersex dens", which can be homes, hotels, offices, internet cafes, and other businesses.
== References ==
== Further reading ==
Bridges, Ana J. (October 2019). "Chapter 7: Pornography and Sexual Assault". In O'Donohue, William T.; Schewe, Paul A. (eds.). Handbook of Sexual Assault and Sexual Assault Prevention. Routledge. pp. 129–149. ISBN 978-3030236441.
Diamond, Milton (October 2009). "Pornography, public acceptance and sex related crime: A review". International Journal of Law and Psychiatry. 32 (5): 304–314. doi:10.1016/j.ijlp.2009.06.004. PMID 19665229. Abstract.
Diamond, Milton & Uchiyama, Ayako (1999). "Pornography, Rape and Sex Crimes in Japan". International Journal of Law and Psychiatry. 22 (1): 1–22. doi:10.1016/s0160-2527(98)00035-1. PMID 10086287. Abstract.
Makin, David A.; Morczek, Amber L. (June 2015). "The dark side of internet searches: a macro level assessment of rape culture". International Journal of Cyber Criminology. 9 (1): 1–23. Abstract.
Makin, David A. & Morczek, Amber L. (February 2015). "X Views and Counting: Interest in Rape-Oriented Pornography as Gendered Microaggression". Journal of Interpersonal Violence. 25 (3): 244–257. Abstract.
Malamuth, Neal M. (2014). Pornography and Sexual Aggression. Elsevier Science. ISBN 9781483295794.
Mowlabocus, Sharif & Wood, Rachel (September 2015). "Introduction: audiences and consumers of porn". Porn Studies. 2 (3): 118–122. doi:10.1080/23268743.2015.1056465. Abstract.
Palermo, Alisia M. & Dadgardoust, Laleh (May 2019). "Examining the role of pornography and rape supportive cognitions in lone and multiple perpetrator rape proclivity". Journal of Sexual Aggression. 31 (12): 2131–2155. Abstract.
Purcell, Natalie (2012). Violence and the Pornographic Imaginary: The Politics of Sex, Gender, and Aggression in Hardcore Pornography. Routledge. ISBN 9780415523127. | Wikipedia/Rape_pornography |
Amateur pornography is a category of pornography that features models, actors or non-professionals performing without pay, or actors for whom this material is not their only paid modeling work. Reality pornography is professionally made pornography that seeks to emulate the style of amateur pornography. Amateur pornography has been called one of the most profitable and long-lasting genres of pornography.
== History ==
=== Photographs ===
The introduction of Polaroid cameras in 1948 allowed amateurs to self-produce pornographic photographs immediately and without the need for sending them to a film processor, who might have reported them as violations of obscenity laws. One of the more significant increases in amateur pornographic photography came with the advent of the internet, image scanners, digital cameras, and more recently camera phones. These have enabled people to take private photos and then share the images almost instantly, without the need for expensive distribution, and this has resulted in an ever-growing variety and quantity of material. It has also been argued that in the Internet age it has become more socially acceptable to make and view amateur porn. Starting in the 1990s, pornographic images were shared and exchanged via online services such as America Online (AOL). Photo sharing sites such as Flickr and social networking sites such as MySpace have also been used to share amateur pornographic photographs – usually nudes but also hardcore photos. A more private and easy to control method of sharing photos is through Yahoo or Google Groups which have access restricted to group members.
The general public has become more aware in recent years of the potential dangers when teenagers or children, who may be unaware of the consequences, use their camera phones to make videos and images that are then shared amongst their friends, as in sexting. Images initially meant to be shared between couples can now be spread around the world. The result is a small but growing amount of online amateur porn depicting underage models, created by the young people themselves.
=== Home movies and videos ===
Before the advent of camcorders and VHS tapes, couples had to film themselves using Super 8 film, which then had to be sent away for processing. This was both expensive and risky, as the processing laboratory might report the film to the police, depending on local laws.
Amateur pornography began to rapidly increase in the 1980s, with the camcorder revolution, when people began recording their sex lives and watching the results on VCRs. These home movies were initially shared for free, often under the counter at the local video store. Homegrown Video was the first company to release and distribute these types of amateur adult videos commercially. They were established in 1982, and AVN magazine ranked Homegrown Video #1 among the 50 most influential adult titles ever made because it resulted in the creation of the amateur pornography genre in adult video. Several people who sent their tapes to Homegrown Video became professional porn stars, including Stephanie Swift, Melissa Hill, Rayveness, and Meggan Mallone. In 1991, in response to a Boston Globe investigation, video store proprietors reported that between 20 and 60% of video rentals and sales were of adult amateur home video films.
One highly publicized case was that of Kathy Willets and her husband Jeffrey in 1991. Jeffrey was a deputy sheriff in Broward County, Florida who had recorded his "nymphomaniac" wife's sexual exploits with up to eight men a day. He was charging up to $150 an hour and had also taped some significant local figures, so the two were arrested and charged with prostitution. Ellis Rubin acted as defense counsel and contended that Willets' nymphomania was caused by the use of Prozac. In the end, they pleaded guilty and both were convicted, although Kathy has gone on to a career in the adult film industry.
The term "realcore" has been used to describe digital amateur porn, which arose due to the combination of cheap digital cameras and the World Wide Web in the late 90s. The term refers both to how porn is made, with simple cameras and a documentary style, and how it is distributed, mostly for free, in web communities or Usenet newsgroups. The term was invented by Sergio Messina, who first used it at the Ars Electronica Symposium in 2000, and was subsequently adopted by a number of authors and experts. Messina has written a book on the subject, entitled Realcore, the digital porno revolution.
Amateur porn has also influenced the rise of the celebrity sex tape, featuring stars like Scott Stapp, Kid Rock, Pamela Anderson, Paris Hilton, and Kim Kardashian. The increase of free amateur porn "tube sites" has allowed homemade films to be uploaded across multiple tube sites on the internet, like Pornhub or XVideos. Due to the popularity of social networks, people can also connect with other amateur porn enthusiasts to discuss and share their sex life on platforms solely for this purpose. There are sites with an open or "closed until verification" community where people can freely share their own pictures or watch amateurs videos directly from those who record them.
=== Literature: sex stories ===
The internet has also affected amateur authors sharing their pornographic stories. Text is much easier to disseminate than images and so from the early 1990s amateurs were contributing stories to usenet groups such as alt.sex.stories and also to online repositories. While most commercial sites charge for image content, story content is usually free to view and is funded by pop-up or banner advertising. Story submission and rating depends on registration as a user, but this is also usually free. Example sites include Literotica, True Dirty Stories and Lust Library.
=== Revenge porn ===
The advent of amateur and self-produced pornography has given rise to civil suits and newly identified and defined criminal activity. So called "revenge porn" gained awareness in the late 2000s in the press through initial lawsuits by victims who had images and video of them either nude or in intimate acts posted on the internet.
=== Minors ===
If the video or images in question are of individuals who are minors, including material created by the subject (ex. selfies, etc.), investigation by law enforcement can lead to charges for child pornography as has happened in cases involving sexting.
== User-generated online content ==
Like traditional magazine and VHS/DVD-based pornography, Internet pornography has long been a profitable venture. However, with the rise of Web 2.0 ventures and amateur pornography, websites based upon the YouTube platform of user-generated content and video sharing have become highly popular. By January 2008 a search for "porn" and "tube" returned 8.3 million results on Yahoo and 8.5 million on MSN (by October 2017 the same search returned 23 million results on Google, and by March 2017, 1,420 million). Video hosting service "tube" websites featuring free user-uploaded amateur pornography became the most visited pornography websites on the internet.
Since the content of these websites is entirely free and of reasonably high quality, and because most of the videos are full-length instead of short clips, these websites sharply cut into the profits of pornographic paysites and traditional magazine and DVD-based pornography. The profits of tube-site owners have also been squeezed in an increasingly crowded market, with the number of sites constantly growing.
== See also ==
Alt porn
== References ==
== External links ==
Media related to Amateur pornography at Wikimedia Commons | Wikipedia/Amateur_pornography |
Cartoon pornography, or animated pornography, is the portrayal of illustrated or animated fictional cartoon characters in erotic or sexual situations. Animated cartoon pornography or erotic animation, is a subset of the larger field of adult animation, not all of which is sexually explicit.
Because historically most cartoons have been produced for child and all-ages audiences, cartoon pornography has sometimes been subject to criticism and extra scrutiny compared to live-action erotic films or photographs. It is somewhat common in Japan, where it is part of a genre of entertainment commonly referred to outside of Japan as hentai.
Cartoon pornography has significantly increased in production since the introduction of the internet, with the creation of websites dedicated to adult animation. The internet has also led to animated pornography being distributed on social media.
== History ==
One of the earliest examples of erotic animation is The Virgin with the Hot Pants, a stag film that opens with an animated sequence featuring an independent penis and testicles pursuing a naked woman and having sex with her, then another sequence of a mouse sexually penetrating a cat. Another early example is Eveready Harton in Buried Treasure, a 6.5-minute silent black-and-white animated film produced in 1928 by three US animation studios, allegedly for a private party in honor of Winsor McCay. It features a man with a large, perpetually erect penis who has various misadventures with other characters and farm animals, plus his penis detaching and doing things on its own. In 1932, Hakusan Kimura (木村白山) completed Japan's first erotic animation, Suzumi bune, which used touches of ukiyo-e style.
The Golden Age of Porn, which saw mainstream filmmakers and cinemas tentatively experiment with sexually explicit material with fully developed plots and storytelling themes, also saw some renewed interest in similar erotic animation. Examples include Out of an Old Man's Head (1968) by Per Åhlin and Tage Danielsson, Tarzoon: Shame of the Jungle (1975) by Picha and Boris Szulzinger, and Historias de amor y masacre (1979) by Jorge Amorós. Animator Ralph Bakshi produced Fritz the Cat (1972) (based loosely on the comic of Robert Crumb), which was the first animated film to receive an "X" rating in the US. The Italian film Il nano e la strega (released in English as King Dick, 1973) was a medieval fantasy story told entirely by hand-drawn animation. Once Upon a Girl (1976) featured live-action framing sequences around pornographic versions of well-known fairy tales. Animerama was a series of animated erotic films begun by Osamu Tezuka: A Thousand and One Nights (1969), Cleopatra (1970), and Belladonna of Sadness (1973). In addition, mockbusters such as Maruhi Gekiga: Ukiyoe Senichiya (1969) and Do It! Yasuji's Pornorama (1971) were released.
Since the 1980s, erotica has been a popular genre of animation in Japan. Erotic Japanese anime – some based on erotic manga, often released as original video animation – feature sexually suggestive and explicit sex scenes. (See also: Hentai)
In the early 21st century, producers began applying digital animation technology to erotic material. In 2000, Playboy TV began running the erotic dystopian sci-fi series Dark Justice, which used 3D animation, and ran for 20 episodes. In 2001, illustrator Joe Phillips released The House of Morecock, a comedic erotic feature film for gay and bisexual male audiences, made using 2D digital animation.
The 2006 short Sex Life of Robots turned to the traditional technique of stop-motion animation to depict the imagined sexual activities of living robots. In 2013 Savita Bhabhi, an Indian Hindi-language animated film directed by Deshmukh (Puneet Agarwal) was released as a web film. It was based on Agarwal's Kirtu webcomic character Savita Bhabhi (published online since 2008) and was the first adult animated film from India.
Animated content has become popular on pornographic video services, which sometimes report terms such as "anime", "hentai", and "cartoon" – all of which are commonly associated with animation – among the top search terms. In November 2020, Ana Valens of The Daily Dot highlighted the popularity of "Source Filmmaker porn", referring to SFM porn which is created using Source Filmmaker by "adult creators."
== Legal status ==
The legal status of cartoon pornography varies from country to country. In addition to the normal legal status of pornography, some cartoon pornography depicts potentially minor (that is, underage) characters engaging in sexual acts. One of the primary reasons for this may be that many cartoons feature major characters who are not adults. Cartoon pornography does not always depict minors in sexual acts or situations, but material that does may fall under laws concerning child pornography in some jurisdictions. Drawings of pre-existing characters can in theory be in violation of copyright law, no matter what situation the characters are shown in.
== See also ==
Adult animation
Clop – Cartoon pornography that depicts anthropomorphic animals from the show My Little Pony: Friendship is Magic
Erotic comics
Ecchi
Elsagate
Fan service
Hentai – Cartoon pornography that depicts anime and manga characters
Rule 34
Rule 63
Yiff – Cartoon pornography that depicts anthropomorphic animals
== References ==
== External links ==
Brunker, Mike. "'Toon porn' pushes erotic envelope online". NBC News. NBC News. Retrieved 10 December 2018. | Wikipedia/Cartoon_pornography |
Generative AI pornography, or simply AI pornography, is digitally created pornography produced through generative artificial intelligence (AI) technologies. Unlike traditional pornography, which involves real actors and cameras, this content is synthesized entirely by AI algorithms. These algorithms, including generative adversarial networks (GANs) and text-to-image models, generate lifelike images, videos, or animations from textual descriptions or datasets.
== History ==
The use of generative AI in the adult industry began in the late 2010s, initially focusing on AI-generated art, music, and visual content. This trend accelerated in 2022 with Stability AI's release of Stable Diffusion (SD), an open-source text-to-image model that enables users to generate images, including NSFW content, from text prompts using the LAION-Aesthetics subset of the LAION-5B dataset. Despite Stability AI's warnings against sexual imagery, SD's public release led to dedicated communities exploring both artistic and explicit content, sparking ethical debates over open-access AI and its use in adult media. By 2020, AI tools had advanced to generate highly realistic adult content, amplifying calls for regulation.
=== AI-generated influencers ===
One application of generative AI technology is the creation of AI-generated influencers on platforms such as OnlyFans and Instagram. These AI personas interact with users in ways that can mimic real human engagement, offering an entirely synthetic but convincing experience. While popular among niche audiences, these virtual influencers have prompted discussions about authenticity, consent, and the blurring line between human and AI-generated content, especially in adult entertainment.
=== The growth of AI porn sites ===
By 2023, websites dedicated to AI-generated adult content had gained traction, catering to audiences seeking customizable experiences. These platforms enable users to create or view AI-generated pornography tailored to different preferences through prompts and tags, customizing body type, facial features, and art styles. Tags further refine the output, creating niche and diverse content. Many sites feature extensive image libraries and continuous content feeds, combining personalization with discovery and enhancing user engagement. AI porn sites therefore attract those seeking unique or niche experiences, sparking debates on creativity and the ethical boundaries of AI in adult media.
== Ethical concerns and misuse ==
The growth of generative AI pornography has attracted criticism. AI technology can be exploited to create non-consensual pornographic material, posing risks similar to those seen with deepfake revenge porn and AI-generated NCII (non-consensual intimate imagery). A 2023 analysis found that 98% of deepfake videos online are pornographic, with 99% of the victims being women. Notable celebrity victims of deepfakes include Scarlett Johansson, Taylor Swift, and Maisie Williams.
OpenAI is exploring whether NSFW content, such as erotica, can be responsibly generated in age-appropriate contexts while maintaining its ban on deepfakes. This proposal has attracted criticism from child safety campaigners who argue it undermines OpenAI's mission to develop "safe and beneficial" AI. Additionally, the Internet Watch Foundation has raised concerns about AI being used to generate sexual abuse content involving children.
=== AI-generated non-consensual intimate imagery (AI Undress) ===
Several US states are taking actions against using deepfake apps and sharing them on the internet. In 2024, San Francisco filed a landmark lawsuit to shut down "undress" apps that allow users to generate non-consensual AI nude images, citing violations of state laws. The case aligns with California's recent legislation—SB 926, SB 942, and SB 981—championed by Senators Aisha Wahab and Josh Becker and signed by Governor Gavin Newsom. These bills aim to protect individuals from AI-generated explicit images by criminalizing non-consensual distribution, mandating disclosures, and empowering victims to report and remove harmful content from platforms.
=== Differences from deepfake pornography ===
While both generative AI pornography and deepfake pornography rely on synthetic media, they differ in their methods and ethical considerations. Deepfake pornography typically involves altering existing footage of real individuals, often without their consent, using AI to superimpose faces or modify scenes. In contrast, generative AI pornography is created using algorithms, producing hyper-realistic content without the need to upload real pictures of people. Hany Farid, digital image analysis expert, also described the difference between "AI porn" and "deepfake porn."
== References == | Wikipedia/Generative_AI_pornography |
Ethnic pornography is a genre of pornography featuring performers of specific ethnic groups, or depictions of interracial sexual activity.
Productions can feature any type of ethnic group; however, the most commonly marketed ethnic genres involve Asian women, Latina women, and black women, most often paired with white men.
== Demographics ==
The most prevalent form of ethnic pornography is that which involves Asian females. According to Christopher McGahan, pornographic websites depicting Asian female actresses outnumber almost all other forms of hardcore pornography. Websites explicitly depicting Latina or black women are also commonly found; however, ethnic pornography featuring white women tends to be more obscure and is found within the ambiguous "interracial" category; few websites mark "white" as a distinct racial category.
According to a 2019 study published in Archives of Sexual Behavior, the most common form of interracial pornography involves white men paired with either Asian, Latina, or black women. In terms of "most watched" videos, the most common form was white men with Latina women. Interracial videos involving black and white individuals were equally distributed across gender pairings, at 15.1%. White women were present in 37.2% of all videos (including non-interracial pornography), while white male actors were present in 55.2% of all videos.
=== Hijab pornography ===
==== 2022 content analysis study ====
In a 2022 study published in Violence Against Women (Sage Journals), Mirzaei et al. investigated the increase in interest for HPVs ("hijab pornographic videos", defined by the authors as films portraying at least one female performer as wearing a head covering in a way that "accentuates Muslim women's culturally specific way of dressing"). The authors contended this type of pornography to be distinct from "race porn", due to the potency of head coverings in such films as a marker of religious identity rather than what the authors identify as inherently racial (skin colour given as the example).
Using the search term "hijab porn", the authors gathered 50 professional-looking HPVs from four named popular porn sites. The videos were restricted to exclude non-English-language speech.: 1435 The authors analysed aggression, objectification, exploitation, and agency, reporting specific numerical data for each. The following comparisons against previous studies were made:
With regard to aggression, the authors noted that the prominence of spanking and gagging as the most frequent acts was consistent with similar studies. In contrast, the authors noted that the proportion of aggression targets who were women was "much higher than figures given by recent studies".: 1431–1432
With regard to objectification, the authors noted that the prevalence of fellatio was "significantly higher than the proportions reported in previous studies" and that in "almost all" depictions of a cum shot, semen was ejaculated onto the female's face whilst she was wearing a headscarf. The authors argued that "hijab seems to be the target of objectification".: 1432–1433
With regard to exploitation, the authors contended that the results differed from those of other studies. In reference to a 2015 study, they noted the higher likelihood for women to be portrayed as lower in status. They noted that depictions of female submission in HPVs comprised housewives, cleaners, shoplifters, and impoverished people, but – unlike a referenced 2014 study – not as students, models, tenants, waitresses, or employees. They noted that 16% of the depictions contained survival sex and 32% contained nonconsensual sex. They noted that nonconsensual sex "rarely occurred" in other studies.: 1433
With regard to agency, the authors noted that the androcentrism was "in accordance with the current literature". They noted that unlike in previous studies, female self-touch was "rarely observed" in the HPVs. They noted that the gap between male and female orgasm aligned with results in other studies. They noted that sex initiation skewed towards men more than in other studies. They noted that the gender disparity in sexual experience differed from that in other studies, referring to one female assertion of virginity, and one instance of male anger at a female performer "too clumsy at fellatio".: 1433–1434
== Interracial pornography in the United States ==
Interracial pornography features performers of differing racial and ethnic backgrounds and often employs ethnic and racial stereotypes in its depiction of performers.
American stag films dated to the 1930s depict acts between black and white performers: Di Lauro and Rabkin point to The Handy Man, The Hypnotist, and A Stiff Game, the last of which identifies its only male character as "Sambo".
Behind the Green Door (1972) was one of the first pornographic films to feature sex between a white actress (Marilyn Chambers) and a black actor (Johnnie Keyes).
In the past, some of American pornography's white actresses were allegedly warned to avoid African American males, both on-screen and in their personal lives. One rationale was the purportedly widespread belief that appearing in interracial pornography would ruin a white performer's career, although some observers have said that there is no evidence that this is true. Adult Video News critic Sheldon Ranz wrote in 1997 that:
We keep hearing a lot about "the powers that be" that tell white women that it's not in their "interest" to work with blacks. Is there any proof that Ginger's scene with Tony El-Lay in Undressed Rehearsal hurt her career? Nina Hartley still gets lots of bookings in Southern strip clubs, especially Texas, even though she is an avowed interracialist.
Lexington Steele told The Root in a 2013 interview that white female performers who appear in interracial pornography may conceal their careers due to social pressure from their intimates. According to a survey by Jon Millward, while 87% of porn actresses are willing to take a facial, only 53% will do interracial porn.
=== Alleged role of agents ===
Sophie Dee, a prominent figure of the genre, said in a 2010 interview that she thought agents often pressure white female performers not to appear in interracial pornography. Dee said that they will be paid better for performing with black men and that their careers will not be damaged in any way, pointing to positive examples of some Vivid Entertainment actresses.
Aurora Snow noted in a 2013 article that the major factor preventing several white actresses from doing interracial scenes is "career anxiety" imposed by agents rather than their own racial bias. Tee Reel, male porn star and one of the few black agents in the U.S. industry, had a concurring opinion, saying, "In the business, some girls who say they don't do interracial, I've actually had sex with, off-camera." Porn star Kristina Rose has alleged that some agents tell younger actresses that they will earn less from performing in interracial pornography to bar their involvement, although the opposite is true on a global level.
== Scholarly criticism ==
In Chapter 3 of her book Porn Studies, Linda Williams, professor at the University of California, Berkeley, examines the film Crossing the Color Line starring Sean Michaels, a black actor, and Christi Lake, a white actress.: 273 In the interviews portion of the film, Michaels and Lake express how being "color-blind" is a progressive approach to interracial porn.: 273 Williams identifies a contradiction between these interviews and the subsequent performance, in which both actors make several references to the differences in skin color between them.: 273–277 For example, Lake refers to Michaels' genitalia as a "big black dick".: 274 Williams argues that by pointing out racial differences, race is being made the main point of intrigue for the audience, which perpetuates the exotification of racial differences.: 275–276 She argues that the eroticized sexual tension in interracial pornography dates back in American history to slavery.: 271
Mireille Miller-Young, professor of feminist studies at University of California, Santa Barbara, argues that while the porn industry hypersexualizes African-American pornographic actresses, they are often paid less, hired less, and given less attention during health checks than their white counterparts.
== See also ==
Asian fetish
Cuckold fetish
Miscegenation
Misogynoir
Pornography by region
Racial fetishism
== References ==
== External links ==
American Porn
Mireille Miller-Young, Hardcore Desire: Black Women Laboring in Porn | Wikipedia/Ethnic_pornography |
Fans of X-Rated Entertainment (F.O.X.E., also known as FOXE) is a United States-based pornography fan organization founded by adult film actor, director, and critic William Margold and actress Viper. It advocates against censorship of pornography and gives annual adult film awards.
== Awards ==
The annual FOXE awards ceremony presents three standard awards decided by fan vote: Male Fan Favorite, Female Fan Favorite, and Video Vixen for a new female performer. Additional special awards, including Fan of the Year are presented in some years. In the 1990s, the Fan Favorite awards were often shared but Vixen was always for one recipient. Since the 11th FOXE awards, the ceremony has included a "Broast" (a "benign roast") of a well-known performer who also receives a lifetime achievement award. Any performer winning Fan Favorite three times is "retired" with the FOXE X award, and they become ineligible for further awards. Holders of the FOXE X include Tera Patrick, Nina Hartley, Ashlyn Gere, and Jill Kelly. Ceremony attendance fees go to support anti-censorship causes, like the Protecting Adult Welfare Foundation. The FOXE award winners are decided by a vote from members of FOXE.
== 1990 ==
The first awards were presented on February 14, 1990, at the XRCO Awards ceremony:
Female Fan Favorite: Nina Hartley
Male Fan Favorite: Peter North
== 1991 ==
Vixen: Selena Steele
Female Fan Favorite: Christy Canyon, Nina Hartley & Tori Welles
Male Fan Favorite: Tom Byron & Peter North
== 1992 ==
Starlet of the Year (Vixen): Teri Weigel
Female Fan Favorite: Christy Canyon, Ashlyn Gere & Nina Hartley
Male Fan Favorite: Tom Byron & Peter North
== 1993 ==
Video Vixen: Alex Jordan
Female Fan Favorite: Ashlyn Gere, Hyapatia Lee & Madison
Male Fan Favorite: Rocco Siffredi & Randy Spears
== 1994 ==
Male Fan Favorite: Randy West & Rocco Siffredi
Female Fan Favorite: Nikki Dial, Ashlyn Gere & Tiffany Mynx
Vixen: Danyel Cheeks
== 1995 ==
Male Fan Favorite: Randy West & Rocco Siffredi
Vixen: Kylie Ireland
Female Fan Favorite: Debi Diamond & Leena
== 1996 ==
Video Vixen: Jenna Jameson
Male Fan Favorite: Randy West & Sean Michaels
Female Fan Favorite: Kylie Ireland, Alicia Rio & Shane
== 1997 ==
Male Fan Favorite: T. T. Boy & Sean Michaels
Female Fan Favorite: Jeanna Fine, Jenna Jameson & Shane
Vixen: Stephanie Swift
== 1998 ==
Held at the Mayan Theater in Los Angeles:
Male Fan Favorite: Sean Michaels & T. T. Boy
Female Fan Favorite: Jenna Jameson, Stacy Valentine, Tiffany Mynx & Stephanie Swift
Special awards were the "Lady Liberty" award for free speech activist Mara Epstein, the "Friend of F.O.X.E." award for Kitty Foxx, and the "Fan of the Year" award for Jay Holnar.
== 1999 ==
Male Fan Favorite: T. T. Boy & Tom Byron
Female Fan Favorite: Alisha Klass, Christi Lake & Stacy Valentine
Vixen: Cherry Mirage
== 2001 ==
Held on July 8, 2001, at the Mayan Theater with the Mistress of Ceremonies being Christi Lake:
Male Fan Favorites: Mr. Marcus & Randy Spears
Female Fan Favorites: Kim Chambers, Bridgette Kerkove & Jill Kelly
Vixen: Tera Patrick
== 2002 ==
Date: June 9, 2002
Location: Mayan Theater
Host: Christi Lake
Broast subject: Ron Jeremy
=== Winners ===
Male Fan Favorites: Mr. Marcus & Evan Stone
Female Fan Favorites: Tera Patrick, Jill Kelly, & Christi Lake
Vixen: Monica Mayhem
Friend of FOXE: Ron Jeremy
== 2003 ==
Date: June 21, 2003
Location: Mayflower Ballroom
Host: Bill Margold
Emcee: Christi Lake
Broast subject: Amber Lynn
=== Winners ===
Male Performer of the Year: Lexington Steele
Female Performer of the Year: Belladonna & Jill Kelly
Vixen of the Year: Taylor Rain
== 2004 ==
The 13th awards presentation was held June 17, 2004, in Inglewood, California. Seka was the guest of honor, and Broast subject. Teri Weigel was Mistress of Ceremonies. Tera Patrick won Female Fan Favorite, Mary Carey won the FOXE Vixen award, and Lexington Steele won Male Fan Favorite.
== 2005 ==
The 14th F.O.X.E. Awards were held on Sunday, February 20, 2005, at the Mayflower Ballroom in Inglewood, California:
Video Vixen: Teagan Presley
Male Fan Favorite: Lexington Steele
Female Fan Favorite: Tera Patrick
Marilyn Chambers was the Broast subject and received a lifetime achievement award.
== 2006 ==
The 15th award ceremonies were held on Sunday, February 19, 2006:
Male Fan Favorite: Randy Spears
Female Fan Favorite: Jesse Jane
Vixen: Sunny Lane
Randy Spears, Lexington Steele, and Tera Patrick were all formally "retired", after winning three previous FOXE awards. Christy Canyon was the Broast subject.
== References ==
"Dieter Gone Wild" Archived 2007-09-30 at the Wayback Machine, Harmon Leon, January 11, 2006, from SF Weekly
Bill Margold; John B. (2004). "GOD CREATED MAN, WILLIAM MARGOLD CREATED HIMSELF: AN INTERVIEW WITH THE RENAISSANCE MAN OF PORN" (Interview). Interviewed by Ian Jane. Sherman Oaks, CA: dvdmaniacs.net. Archived from the original on 10 June 2007.
"Fans of X-Rated Entertainment (F.O.X.E.) Awards", 2008 Adam Film World Guide Directory, pg. 304
== External links ==
"Canada's Best Adult Entertainment Super Site - Listing of 1990-99 "FANS OF X-RATED ENTERTAINMENT" Award winners". canbest.com. Archived from the original on 14 October 2007.
Archived AVN article: "FOXE Awards Showcase Porn’s Past and Future", February 20, 2006 | Wikipedia/Fans_of_X-Rated_Entertainment |
Pokémon, a media franchise developed by Game Freak and published by Nintendo, has received a notable amount of fan-made pornography (also known as poképorn and poképhilia). The Pokémon games feature Pokémon trainers and creatures known as Pokémon; both are subject to pornography. The content can usually be found on imageboards and on Pornhub. In 1999, in what was named the Pokémon doujinshi incident, a Japanese artist was arrested for producing erotic doujinshi of the Pokémon characters, inciting media furor. In the late 2010s, Pokémon-themed live-action porn parodies received media attention and, after the release of Pokémon Go in 2016, searches for pornography of the franchise increased significantly. Pokémon species such as Lucario, Lopunny, Eevee's evolutions, and Gardevoir are particularly known for being sexualized.
== Context ==
Developed by Game Freak and published by Nintendo, the Pokémon franchise began in Japan in 1996 with the release of the video games Pokémon Red and Blue for the Game Boy. In these games, the player assumes the role of a Pokémon Trainer whose goal is to capture and train Pokémon.
Pokémon pornography, also shortened as "poképorn", can involve either the Pokémon trainers or the Pokémon themselves, and can present interspecies acts between both. The Pokémon can either show human characteristics, known as "anthro", or show more animalistic features, known as "feral". The content can be in the form of drawings, on imageboards like Gelbooru or e621, or animated, live-action and 3D videos, on websites like Pornhub. Erotic Adobe Flash games related to Pokémon also exist. Collections of Pokémon pornography can be found on websites like DeviantArt, Twitter, Pixiv, and Newgrounds, and could be found on Tumblr before its nudity ban. There are also Reddit communities dedicated to sharing Pokémon pornography, the main subreddit being r/PokePorn. Lolicon and shotacon content related to Pokémon also exists, though it is banned on many websites. Erotic Pokémon-related fan fictions also exist. Artists may use Patreon to monetize their work and share exclusive content.
== History and popularity ==
In 1999, in what was named the Pokémon doujinshi incident, an artist was arrested on suspicion of violating Japan's copyright laws for creating a manga featuring erotic acts between Pikachu and Ash Ketchum, the main characters of the Pokémon anime. According to copyright holder Nintendo, the manga was "destructive of the Pokémon image". The incident incited media furor as well as an academic analysis in Japan on the copyright issues around doujinshi. In the late 2010s, live-action porn parodies received media attention, such as the 2015 parody Strokémon, the 2016 Pokémon Go parody by Brazzers, Pornstar Go XXX Parody, and the VR porn parody of Pokémon Go, Poke a Ho: Misty. Chuck Tingle, an author of gay erotica, released a book called Pokebutt Go: Pounded By 'Em All.
After the release of Pokémon Go on July 6, 2016, Pornhub reported on July 11 that searches for Pokémon porn had increased by 136%, with men being 62% more likely to search for the term and the term being 336% more popular among 18–24 year olds. The data showed that Latin American countries were far more likely to search for Pokémon porn. Porn website xHamster reported the same day that the most searched terms since the release of Pokémon Go were "Pokemon", "Pikachu", "Hentai", and "Anime". YouPorn said that "Pokemon" became a more popular search term than "porn" on their website.
In December 2018, Pornhub Communications Director Chris Jackson said that the top three Pokémon-related searches were Gardevoir, Eevee, and Lopunny, with the top humans being trainers Misty and Serena, as well as Team Rocket's Jessie. According to Vice, the dominance of Gardevoir and Lopunny is reflected on other websites, with Lucario not far behind. In June 2023, data compiled on Rule 34 websites Rule34.xxx and Sankaku Channel showed that Pokémon was the most pornified media franchise, with a large lead over the others. It also showed that the Pokémon themselves were preferred over the trainers, with the top choice being Lucario, followed by Gardevoir. Fan-made erotica often depicts Gardevoir with human sexual characteristics, such as breasts and/or fetish attire.
== See also ==
Cartoon pornography
Clop (erotic fan art)
Doujinshi
Hentai
Overwatch and pornography
Rule 34
Yiff
== References == | Wikipedia/Pokémon_and_pornography |
A pornographic parody film is a subgenre of the pornographic film industry genre where the basis for the production's story or plotline is the parody of a mainstream television show, feature film, public figure, video game or literary works. This subgenre also includes parody of historical or contemporary events such as political scandals. The subgenre has gained acceptance by the adult industry to the extent that major awards are presented in this category by organizations such as AVN and XRCO.
== Origin ==
PornParody.com, a website dedicated to reviewing porn parodies, cites the Batman spoof Bat Pussy (c. 1970) as possibly the earliest known pornographic parody film, a distinction it shares with the 1973 German animated short Snow White and the Seven Perverts. The subgenre began taking off in the 1990s, experiencing a surge in popularity during the 2000s and 2010s. William Shakespeare's works have been used as inspiration; titles include Hamlet: For the Love of Ophelia and Othello: Dangerous Desire.
== Types ==
In 2002, Shaving Ryan's Privates, a movie starring actor-turned-infosec analyst Jeff Bardin, was released. The movie documented porn films that parody classic Hollywood movies. There have been porn parodies produced of sitcoms such as Who's The Boss and Parks and Recreation, horror and drama movies such as Edward Scissorhands and The Silence of the Lambs, sci-fi and action movie blockbusters such as Star Wars and The Avengers, and of period drama such as Downton Abbey, titled Down on Abby. The genre includes both heterosexual and homosexual versions, such as a "gay porn parody" of the reality television series Duck Dynasty. Parodies also include non-story or event specific productions such as the porn parody film Who's Nailin' Paylin? that features characters impersonating the real life politicians Sarah Palin, Hillary Clinton, and Condoleezza Rice as well as media figure Bill O'Reilly.
== Production ==
Some parodies use techniques such as CGI. One movie reviewer commented that the costume of a particular comic book character in a porn parody was a "better version" of the one used in the mainstream movie.
== Reception and review ==
Journalist and author Charlie Jane Anders wrote, before blogging on the subject, "I didn't realize just how many porn parodies there are — and how terrible most of them actually are."
A New York magazine writer commented that although the parody of the PBS series Downton Abbey titled Down on Abby starring Lexi Lowe is humorous, it is "rife with historical inaccuracies" and "not recommended for those who get distracted by historically inaccurate details like squared-off French tips and thongs."
=== Industry acceptance ===
Acknowledging its origins and public appeal, several industry publications and trade groups have created award categories for the genre. Two of the industry's major award programs offer recognition for this category of adult film.
The AVN Award added the categories of "Best Parody - Comedy" and "Best Parody - Drama" in 2009. In 2013, the Star Wars parody Star Wars XXX: A Porn Parody was that year's most nominated production as well as the winner of the Comedy category. In 2012, the categories of "Best Director - Parody" and "Best Celebrity Sex Tape" were added.
The XRCO Award added its category of "Best Parody" in 2003. Coincidentally, the award went to the Wicked Pictures-produced and Jonathan Morgan-directed film Space Nuts, a parody of the Mel Brooks film Spaceballs - itself a parody of Star Wars. Space Nuts also won the 2004 AVN award for Best Parody - Comedy.
=== Mainstream media ===
The genre has gained the attention of the mainstream press. In 2009 director Jeff Mullen (a.k.a. Will Ryder) was interviewed by Newsweek writer Joshua Alston about his leading role in the resurgence of porn parodies.
In 2009, Adult Video News writer Tom Hymes observed, "I have come to the realization that a very real expectation is being created among the celebrity elite: Will my show be the next XXX parody... please?" "What is kind of brilliant about the whole thing is the fact that these mainstream shows and the actors on them now get to rub shoulders with the porn industry without any actual rubbing. That's why I call them the new celebrity-safe sex tapes for safe-sex celebrities." Alan Ball, creator of the HBO series True Blood stated in 2010 that "We just found out that they're doing a porn parody of True Blood. That's certainly a moment of going 'Wow, we've arrived.'"
In 2010, actress Anna Paquin talked about the porn parody of True Blood on the late night talk show Lopez Tonight. Paquin said "You know, actually what's interesting is that [fellow cast member] Steve [Moyer] and I were so amused by it that we got it for everyone on our cast and crew as a wrap gift so I think we probably bought every copy in existence." Copies of Tru: A XXX Parody were given to the entire cast and crew at the wrap party for the show's third season.
== Fictional pornography ==
Fictional pornography is sometimes used for plot or humor in mainstream film and TV. In Friends, Phoebe's twin sister starred in Buffay, the Vampire Layer among others. Lyndsey in Two and a Half Men was the title role of Cinnamon's Buns, and Wilson in House acted in Feral Pleasures. Peter Stormare's character in The Big Lebowski can be seen in Logjammin'.
== See also ==
Axel Braun, director known for parody productions
Tijuana bible, pornographic comic books that often featured parodies of comic strips, cartoons and celebrities
Rule 34 (Internet meme)
Slash fiction
== References ==
== External links ==
The Ultimate Guide to Science Fiction and Fantasy Porn Parodies (NSFW) by Gizmodo
Ultimate Guide to Porn Spoofs Pt. 2: Superheroes, Star Trek, Star Wars and Horror (NSFW) by Gizmodo | Wikipedia/Pornographic_parody_film |
MILF pornography (short for mother I'd like/love to fuck) is a genre of pornography in which the actresses are usually women ages 30 to 60, though many actresses have started making this type of pornographic film at age 25. Central to the typical MILF narrative is an age-play dynamic of older women and younger lovers of any gender. A related term is cougar, which implies an older woman is acting as a predator.
== History ==
The term MILF was first documented in Internet newsgroups during the 1990s. The earliest known online reference is a 1995 Usenet post about a Playboy pictorial of attractive mothers. It was popularized by the 1999 film American Pie referring to Jennifer Coolidge's character, "Stifler's mom".
In the UK, the term yummy mummy is used as well as MILF. Oxford Dictionaries defines a yummy mummy as "an attractive and stylish young mother".
== Awards ==
Actresses of the MILF porn genre have their own awards: the X-Rated Critics Organization "MILF of the Year Award", the AVN "MILF/Cougar Performer of the Year Award", and the Urban X Award for "Best MILF Performer" among others. In Japan, Sky PerfecTV! Adult Broadcasting Awards also has an award for the "Best Mature Actress".
=== Notable performers ===
== See also ==
Age disparity in sexual relationships
== References ==
== External links == | Wikipedia/MILF_pornography |
Hardcore pornography or hardcore porn is pornography that features detailed depictions of sexual organs or sexual acts such as vaginal, anal, oral, or manual intercourse; ejaculation; or fetish play. The term is in contrast with less-explicit softcore pornography. Hardcore pornography usually takes the form of magazines, photographs, films, and cartoons. Since the mid-1990s, hardcore pornography has become widely available on the internet, much of it without cost, making it more accessible than ever before.
== Etymology ==
A distinction between "hardcore pornography" and "borderline pornography" (or "borderline obscenity") was made in the 1950s and 1960s by American jurists discussing obscenity laws. "Borderline pornography" appealed to sexual prurience, but had other positive qualities, such as literary or artistic merit, and so was arguably permitted by obscenity laws; "hardcore pornography" lacked such merits and was definitely prohibited. In Roth v. United States (1957), the government brief distinguished three classes of sexual material: "novels of apparently serious literary intent"; "borderline entertainment ... magazines, cartoons, nudist publications, etc."; and "hard core pornography, which no one would suggest had literary merit". Eberhard and Phyllis Kronhausen in 1959 distinguished "erotic realism" from "pornography"; in the latter "the main purpose is to stimulate erotic response in the reader. And that is all." Most famously, in Jacobellis v. Ohio (1964), Potter Stewart wrote:
I shall not today attempt further to define the kinds of material I understand to be embraced within that shorthand description ["hard-core pornography"], and perhaps I could never succeed in intelligibly doing so. But I know it when I see it, and the motion picture involved in this case [The Lovers] is not that.
In Jacobellis v. Ohio and other cases, the United States Supreme Court ruled that only "hardcore" pornography could be prohibited by obscenity laws, with the rest protected by the First Amendment. The category of "borderline obscenity" thus became obsolete. The 1970 report of the President's Commission on Obscenity and Pornography said:
[M]ailers dealing in sexually oriented materials define "hard-core pornography" as "photographic depictions of actual sexual intercourse with camera focus on the genitals and no accompanying text to provide a legal defense". This, of course, is not a legal definition.... Some judges have employed the term "hard-core pornography" as a synonym for "material which can be legally suppressed". In this Report, the term is used as a synonym for "under-the-counter" or covertly sold materials. This is, in effect, the definition of hard-core applied in the marketplace. It can be argued that because of the confusion about the meaning of the term, which stems primarily from an undefined legal concept, it would be well to avoid the use of the term altogether.... There is one genre of sexually oriented material which is almost universally sold under-the-counter in the United States: wholly photographic reproductions of actual sexual intercourse graphically depicting vaginal and/or oral penetration.... A[t] present, distinctions between materials sold openly and those sold covertly have become extremely unclear.
From the 1970s, the salient distinction was between hardcore pornography and softcore pornography, which may use simulated sex and limits the range and intensity of depictions of sexual activities. For example, William Rotsler's 1973 classification subdivided the X rating for erotic films: "The XXX-rating means hard-core, the XX-rating is for simulation, and an X-rating is for comparatively cool films."
== History ==
The prehistory of modern pornography is the classical American stag film, also known as blue movies, a body of clandestine short pornographic films produced during the first two-thirds of the 20th century. While the exact corpus of the distinctive stag film remains unknown, scholars at the Kinsey Institute believe there are approximately 2000 films produced between 1915 and 1968. Stag cinema is a form of hardcore film and is characterized as silent, usually filling a single reel or less, and was illegally made and exhibited because of censorship laws in America. Women were excluded from these private screenings, which were shown in American "smoker" houses such as fraternities or other exclusive institutions. In Europe, films of the same kind were screened in brothels. The mode of reception of the all-male audience of stag films was raucous, collective sexual banter and sexual arousal. Film historians describe stag films as a primitive form of cinema because they were produced by anonymous and amateur male artists who failed to achieve narrative coherence and continuity. Today, many of these films have been archived by the Kinsey Institute, but most are in a state of decay and have no copyright, real credits, or acknowledged authorship. The stag film era inevitably ended with the beginnings of the sexual revolution in the fifties in combination with the new technologies of the post-war era, such as 16 mm, 8 mm, and the Super 8. American stag cinema first received scholarly attention in the mid-seventies from heterosexual males, e.g. Di Lauro and Gerald Rabkin's Dirty Movies (1976), and more recently from feminist and queer cultural historians, e.g. Linda M. Williams' Hard Core: Power, Pleasure, and the "Frenzy of the Visible" (1989) and Thomas Waugh's Homosociality in the Classical American Stag Film: Off-Screen, On-screen (2001).
== Legality ==
The distribution of hardcore pornography was widely prohibited in many countries until the second half of the 20th century, when many countries began to allow some dissemination of softcore material. Supply is now usually regulated by a motion picture rating system as well as by direct regulation of points of sale. Restrictions, where applicable, apply to the screening, rental, sale, or giving of a movie in the form of a DVD, video, computer file, etc. Public display and advertising of hardcore pornography is often prohibited, as is its supply to minors.
Most countries have since eased restrictions on the distribution of pornography, either by general or restricted legalization or by failing to enforce prohibitive legislation; most of this easing has come through changes to the criteria of national movie classification systems. The anti-pornography movement often vigorously opposes legalization. In 1969, Denmark became the first country in the world to legalize pornography. In the U.S., legal interpretations of pornography in relation to the constitutional right to free speech differ from state to state and from city to city. Hardcore pornography was legalized in the UK in May 2000.
=== United Kingdom ===
The Independent reported in 2006 that Nielsen NetRatings found that more than nine million British male adults used Internet porn services. The study also reported a one-third rise in the number of women visiting X-rated sites, from 1.05 million to 1.38 million. A 2003 study found that one third of all British Internet users accessed hardcore porn.
=== United States ===
A 2005 study by Eric Schlosser estimated that revenues from hardcore porn roughly matched Hollywood's domestic box office takings: hardcore videos, Internet sites, live sex acts, and cable TV programming together generated about US$10 billion.
== Impact on society ==
Berl Kutchinsky's Studies on Pornography and Sex Crimes in Denmark (1970), a scientific report commissioned by the United States' Presidential Commission on Obscenity and Pornography, found that the legalization of pornography in Denmark had not, as had been expected, resulted in an increase in sex crimes.
A study conducted in Denmark in 2003 and later published in Archives of Sexual Behavior found that Danish men and women generally believe that hardcore pornography has a positive influence on their lives.
== See also ==
List of pornography laws by country
== References ==
== Further reading ==
O'Toole, L. (1998). Pornocopia: Sex, Technology and Desire. London: Serpent's Tail. ISBN 1-85242-395-1.
Religious views on pornography are based on the broader views of religions on topics such as modesty, dignity, and sexuality. Different religious groups view pornography and sexuality differently.
== Christianity ==
=== Biblical scholarship ===
There is no direct prohibition of pornography in the Bible. However, many Christians base their views on pornography on Matthew 5:27–28 (part of the Expounding of the Law):
Ye have heard that it was said by them of old time, Thou shalt not commit adultery: But I say unto you, That whosoever looketh on a woman to lust after her hath committed adultery with her already in his heart.
The Matthew passage quotes one of the Ten Commandments, found in Exodus 20:14 and Deuteronomy 5:18, which is also often used as a supporting verse to condemn pornography.
Michael Coogan reports that, in his experience, the campaign against lustful thoughts led to the Song of Songs being razored out of the Bibles available to seminarians.
Ken Stone cites Origen's words to say that reading the Song of Songs may stimulate lust in 'fleshly' readers: "But if any man who lives only after the flesh shall approach [the Song of Songs], to such a one the reading of this Scripture will be the occasion of no small hazard and danger. For he, not knowing how to hear love's language in purity and with chaste ears, will twist the whole manner of his hearing of it away from the inner spiritual man and on to the outward and carnal; and he will be turned away from the spirit to the flesh, and will foster carnal desires in himself, and it will seem to be the Divine Scriptures that are thus urging and egging him on to fleshly lust!" Stone adds that "the heavy use of food imagery in the book is no barrier to a positive 'pornographic' interpretation."
Richard Hess explains Carey E. Walsh's view as follows: "Walsh has determined that the emphasis of the Song lies in the expression of desire between two lovers. It is not sexual consummation that is most important, but the desire itself that drives the lovers together. In this she distinguishes erotica from pornography. The latter is concerned only with sex, and in this it is qualitatively different from the Song. Here sex plays a secondary role to desire. Whether there is any sexual activity at all in the poem—and as a fantasy there may be no such reality here—the key to the Song remains with the desire that drives the reader to appreciate the time of waiting. Hebrew experience placed the greatest value on passion." Arguing that the Song of Songs is "erotica", not pornography, Walsh distinguishes the two in at least three ways: (1) erotica inclines toward the "emotions and internal worlds" of a subject to seek empathy, while pornography's "emotional flatness" aims at sexual gratification; (2) erotica focuses on the yearning to reach "consummation", which may occur in "tortuous delay", while pornography is concerned only with the "acts" to reach it as soon as possible and "a frenzy of repetition"; (3) erotica uses imagination as the "invisible and ever-active participant" without revealing the "mystery of love", while pornography is no more than an explicit story of sexual intercourse.
=== Roman Catholicism ===
The magisterium of the Catholic Church interprets Matthew 5:27–28 to mean that since the purpose of pornography is to create lust, it is sinful, because lusting is equivalent to adultery. As the Catechism of the Catholic Church explains: "Pornography consists in removing real or simulated sexual acts from the intimacy of the partners, in order to display them deliberately to third parties. It offends against chastity because it perverts the conjugal act, the intimate giving of spouses to each other. It does grave injury to the dignity of its participants (actors, vendors, the public), since each one becomes an object of base pleasure and illicit profit for others. It immerses all who are involved in the illusion of a fantasy world. It is a grave offense. Civil authorities should prevent the production and distribution of pornographic materials."
Cardinal Karol Wojtyla, before he became Pope John Paul II, wrote in Love and Responsibility: "Pornography is a marked tendency to accentuate the sexual element when reproducing the human body or human love in a work of art, with the object of inducing the reader or viewer to believe that sexual values are the only real values of the person, and that love is nothing more than the experience, individual or shared, of those values alone." Commenting on the book's discussion of art and pornography, which contrasts Michelangelo's works with Playboy, Edward Sri explains that "good art leads us to a peaceful contemplation of the true, the good and the beautiful, including the truth, goodness and beauty of the human body", while pornography "stirs in us a sensuous craving for the body of another person as an object to be exploited for our own pleasure" and, if left uncontrolled, "we will become enslaved to everything that stimulates our sensual desire". People who constantly view pornography, which is focused merely on "the visible and the erotic" and reduces the human person to what is visible to the eyes, will have difficulty relating to people of a different gender in real life, since they will have become accustomed to seeing them as "objects to be used".
In a series of lectures called the Theology of the Body, Pope John Paul II argues that some works of art depict naked individuals without evoking lust; such art "makes it possible to concentrate, in a way, on the whole truth of man, and the dignity and beauty—also the 'suprasensual' beauty—of his masculinity and femininity", and such works "bear within them, almost hidden, an element of sublimation". He insists that pornography is problematic because "it fails to portray everything that is human".
=== Eastern Orthodox ===
The Eastern Orthodox Church forbids pornography, along with premarital sex. By Christ's teaching, looking at someone lustfully is equated with adultery, and it is also linked to prostitution.
=== Protestantism ===
Harry Reid notes that, during the Reformation, "Calvin's aim was straightforward, if ambitious; he wanted to create a perfect Christian community where everyone looked after everyone else... He persuaded the council to legislate against adultery, prostitution, pornography, gambling, drunkenness and much else."
According to Manetsch, the Genevan Consistory regularly censured literature they deemed dangerous for public morals, including pornography. Oboler also notes that pornography was censored in Geneva and Puritan New England in order to defend against "ungodly eroticism".
Max Weber argued that there was a concern that eroticism was a kind of idolatry that went against God's rational regulation of sexuality through marriage alone.
Martin E. Marty notes that, today, both mainstream and evangelical Protestants remain overwhelmingly opposed to pornography and that there are "very, very few theological statements that go light on pornography."
According to Addicted to Lust: Pornography in the Lives of Conservative Protestants (2019), written by Samuel L. Perry, professor of sociology and religious studies at the University of Oklahoma, Conservative Protestants in the United States are characterized by a "sexual exceptionalism" in relation to their consumption of pornography. Pervasive beliefs within the Conservative Protestant subculture foster cognitive dissonance tied to the unfounded conviction that one is addicted to pornography, along with psychological distress, intense feelings of guilt, shame, self-loathing, and depression, and sometimes withdrawal from faith altogether.
Perry's book received widespread media coverage, and his findings were criticized by Lyman Stone of the Evangelical magazine Christianity Today, who asserted that both the quantitative and qualitative data collected by Perry demonstrate that consumption of pornography in the United States is significantly lower among church-attending Protestant Christians than among other religious groups, and who declared that "Protestant men today who attend church regularly are basically the only men in America still resisting the cultural norm of regularized pornography use".
==== Lutheranism ====
In 1990, the Lutheran Church of Australia condemned pornography by publishing an official position on "X-rated videos".
Doctor John Kleinig, Lecturer Emeritus at the Australian Lutheran College, argues that, "The regular use of pornography for masturbation is a kind of sexual addiction. When Paul speaks about impurity and sexual greed as idolatry in Ephesians 5:3-7 and Colossians 3:5, he accurately describes how it works. It begins with sexual impurity, the defilement of our imagination by depictions of sexual intercourse that present naked bodies as idols for us to admire. Our fixation on these images arouses disordered desires and make us more and more greedy for sexual satisfaction from things that God has not given to us for our enjoyment. Yet they fail to satisfy us and serve only to feed our growing appetite for them... Where masturbation is involved... the more ashamed we become, the more secretive we become; the more secretive we become and the more we hide in the darkness, the more vulnerable we become to the accusation and condemnation of Satan... You need to be careful that Satan does not distort your perception by making a fool of you and getting you to focus on the wrong thing. Nowhere in the Bible is masturbation explicitly forbidden. There is good reason for this because the problem does not come from masturbation, which is in itself neither good or bad, but the adulterous sexual fantasies that accompany it, as Christ makes clear in Matthew 5:28. That’s the problem spiritually! ... That’s how Satan gets a hold on us through our imagination. If you use pornography to masturbate, you put another woman, an idol that promises heaven and gives you hell, sexually, in the place of your wife. It arouses your greed for what you don’t have, greed for what God has not given for you to enjoy, greed that increases as you give in to it. The more you indulge it, the more dissatisfied and empty you become."
The Evangelical Church in Germany (EKD) condemns pornography. The Church refuses to invest in any company producing pornography, stating that, "Human dignity is based on the belief that women and men are created in God's image. This results in the mission to protect this dignity against derogatory, denigrating, or degrading portrayals. The analysis of this criterion should not only consider pornographic products, but also the producers of videos depicting violence and likewise computer games glamourizing violence."
In a statement on the role of the media, the EKD's 1997 synod stated, "According to the churches, public broadcasting plays a vital role in guaranteeing the freedom of opinion and access to a variety of information in Germany. Maintaining and strengthening this system, which is "almost unique" in the world, is "imperative", said Bishop Engelhardt... In view of its greater freedom in producing programmes, it has "a special obligation to practise voluntary control". Now that "violations of taboos and norms" are constantly increasing, public control and self-regulation of the media must be increased. This also applies to the protection of "religious convictions", said Bishop Lehmann. First attempts made by German Internet providers to introduce voluntary self-regulation need to be improved. Nothing glorifying violence, instigating racial hatred, violating human dignity, glorifying war or pornography must be available on-line..."
The EKD's 1997 synod also "spoke out for action against child pornography on the Internet. All legal methods must be applied to counteract it. Additionally, the German government should pursue sharper criminal proceedings against people who traffic in women."
The Church of Sweden declared, "It is not permissible [for us] to invest in media companies that have a clear link to pornography."
==== Calvinism ====
The United Protestant Church of France (EPUdf) teaches that pornography is a sin. According to Pastor Gilles Boucomont, "The Bible does not talk about pornography. It says that sexuality is lived properly as part of a commitment, marriage. Outside of this context, sexuality is adultery if one is already married or sexual misconduct if one is not yet married. Misconduct and adultery break the unity that God wants for the married man and woman (1 Corinthians 6 and 7, Mark 10: 1-12)... Watching pornography is not doing a sexual act with anyone, certainly. However, Jesus considers that adultery begins in the eye (Matthew 5,28). There is also no need for a large dissertation to explain that viewing such films does not favor the formation or continuity of stable couples. The second aspect of our problem lies in our responsibility for the people who shoot such films. Jesus asks us to love our neighbor as ourselves. If we do not think we have to surrender our bodies for money, because that is not God's good will for the human, we can not support this practice." Masturbation is also considered sinful.
On the topic of non-sexual nudity, the EPUdf teaches, "Sin is what separates me from God or comes from my separation from Him. Therefore, nudity, whether sexual or not, must be considered in the context of relationships with God and others. Going to a mixed sauna ... if it's only with my partner, in my opinion, I do not see the problem... If it is with other people, it seems to me legitimate to ask questions about the temptation that can arise and the reasons that make me go. If there is already a habit of nudity in the civilization to which I belong, it may also not be of a nature to distract me from God. I think it's the same for the nude."
The Calvinist Protestant Church of Switzerland also condemns pornography. Pastor Jean-Charles Bichet writes, "Pornography... gives a distorted image of sexuality. It can cause more problems than it solves... And the sad thing about all this is that it gives the opportunity for a juicy trade. People who agree to pose for photos or to play X-rated scenes have not all chosen to do so. By consuming porn, humans are also taking responsibility for this kind of trafficking. In this sense, we are sinners because we find ourselves involved in this general problem of society. The Bible does not specifically refer to pornography, but it often makes harsh judgments about the exploitation of people and prostitution. It reminds us that we are all created in the image of God, that we are little brothers and sisters of Jesus, the Son of God, called to live as liberated and happy people. The Bible constantly reminds us that none of us is a commodity. None of us should be treated like a shirt that is thrown away."
In the 1980s, the liberal Presbyterian Church (USA) produced a report entitled, Pornography: Far from the Song of Songs, which states, "Through words and images, pornography debases God’s intended gifts of love and dignity in human sexuality... we live in an age also marked by the shattering of many norms of behavior and the subsequent loss of moral restraints. In such a time pornography has proliferated. The task force believes that the church is called to give serious attention to this issue... Reflected in the title of this report is the conviction that pornography represents human discord, far from the mutual sexual delight depicted biblically in the Song of Songs. Pornography is a striking sign of human brokenness and alienation from God and from one another... From the perspective of biblical understanding and the Reformed tradition, pornography represents a vivid expression of human alienation: from the creator God who makes covenants and from one another as covenant partners of God."
In its church magazine, the Presbyterian Church in Canada argues against pornography on the grounds that it is addictive, that it alters the brain, that pornographers are trying to normalize the industry, and that it can distract Christians who are trying to perform nightly devotions.
==== Methodism ====
The United Methodist Church teaches that pornography is "about violence, degradation, exploitation, and coercion" and it "deplore[s] all forms of commercialization, abuse, and exploitation of sex". It defines pornography as "sexually explicit material that portrays violence, abuse, coercion, domination, humiliation, or degradation for the purpose of arousal. In addition, any sexually explicit material that depicts children is pornographic". The Sexual Ethics Task Force of The United Methodist Church states that "Research shows that [pornography] is not an 'innocent activity.' It is harmful and is generally addictive. Persons who are addicted to pornography are physiologically altered, as is their perspective, relationships with parishioners and family, and their perceptions of girls and women."
The liberal-leaning Uniting Church in Australia also condemns pornography and works within society to address the problem.
==== Quakers ====
In 1990, Quakers declared, "Since pornographic materials promote and propagate a lifestyle that includes activities which are condemned by God's Word and tempt viewers to commit the sin of lust (Matthew 5:27-28; Romans 13:14; II Peter 2:14, 18-19), Friends therefore are urged to carefully avoid exposure to such materials. Because of our responsibility as Christian citizens (Matthew 5:13; Proverbs 14:34) and in view of the evil, exploitative, and destructive effects of pornography on individuals, families, and our society, Friends are encouraged to prayerfully and boldly oppose the production and distribution of pornographic materials in their local communities, as well as at the state and national levels (Ephesians 5:11)."
==== Mennonites ====
Mennonites believe pornography is sinful. In a major report, the Mennonite Central Committee notes, "One misconception is that adult pornography has no victims: it’s a harmless, pleasurable activity which damages no one. Yet, research and experience increasingly show that pornography does cause harm: to one’s relationship with God, to human relationships, to the user, to those in the industry and to society in general." The report goes on to elaborate how it causes this harm.
==== Evangelicalism ====
In 2002, researcher Martin E. Marty noted that, "... most of the anti-porn activity is indeed from fundamentalist and evangelical groups... What keeps [American] mainstream Protestants from being consistently up front on an issue that... quite possibly leads to rape and other crimes, demeans the "user," and benefits the billions-per-year exploiter, has little to do with pornography. Instead it connects with their heavy commitment to free speech, their fear lest countering pornography in communications might erode the defenses against intrusions on precious liberties. Such Protestants are slow to promote boycotts and extremely cautious about legislation in areas so necessarily ill-defined as pornography. Caught between their abhorrence of pornography and their passion for liberties and rights, mainline Protestants, most Catholics, Reform Jews, and others have not found effective ways to be up front on this important front."
Two other American researchers, Sherkat and Ellison, produced a study showing "that Conservative Protestant opposition to pornography is rooted in commitments to Biblical inerrancy and solidified by high rates of religious participation. Inerrancy serves as a cognitive resource informing two separate paths to pornography opposition: moral absolutism and beliefs in the threat of social contamination."
Jerry Falwell criticized pornography, saying that sex is reserved for heterosexual married couples and is to be used only in accordance with God's will: specifically, to solidify the emotional bonds between a man and his lawfully wedded wife and to help propagate the human race ("Be fruitful, and multiply"). He asserted that the use of pornography involves indulgence in lust toward people other than one's spouse, which in Christianity is a sin, and that it leads to an overall increase in sexually immoral behavior, including adultery, rape, and even child molestation.
William M. Struthers in his book, Wired for Intimacy, has criticized pornography from a scientific viewpoint, suggesting that the viewing and use of pornography embeds abnormal neural pathways in the brain such that the desire for physical sexual relations may become subverted over time.
==== Anglicanism ====
The 1998 Lambeth Conference Resolution I.10 states, "... Clearly some expressions of sexuality are inherently contrary to the Christian way and are sinful. Such unacceptable expression of sexuality include promiscuity, prostitution, incest, pornography, paedophilia, predatory sexual behaviour, and sadomasochism (all of which may be heterosexual and homosexual), adultery, violence against wives, and female circumcision. From a Christian perspective these forms of sexual expression remain sinful in any context. We are particularly concerned about the pressures on young people to engage in sexual activity at an early age, and we urge our Churches to teach the virtue of abstinence."
The Anglican Diocese of Melbourne published an article which noted that, "Pornography is prostitution, as men pay money, time and dignity for gratification. This payment might not be direct, but the support of pornography feeds the human trafficking industry, in which some 27 million women and children are trapped worldwide. The harlot that used to be on the street corner is now our computer."
The Anglican Diocese of Sydney has established an "Archbishop's Taskforce for Resisting Pornography", which at the end of 2017, was seeking to establish a "resistingporn.org" website. The taskforce chairman has noted, "We know porn can alter desires and can affect self-control and compulsive behaviours. Studies have shown that the brains of long-term porn users behave similarly to those of drug addicts. There are also other effects... such as increasing incidences of adultery and earlier first-time sexual activity. It also goes without saying pornography is incompatible with God’s good purposes for the world."
=== Mormonism ===
Gordon B. Hinckley, president of the Church of Jesus Christ of Latter-day Saints (LDS Church) from 1995 to 2008, was known within the faith for expounding his organization's sentiments against pornography.
The LDS Church teaches that pornography is "any material depicting or describing the human body or sexual conduct in a way that arouses sexual feelings. It is as harmful to the spirit as tobacco, alcohol, and drugs are to the body. Members of the Church should avoid pornography in any form and should oppose its production, distribution, and use."
As part of teaching the law of chastity, LDS Church leaders have repeatedly condemned the use of sexually arousing literature and visual material for decades.
== Other Abrahamic religions ==
=== Judaism ===
Maimonides, in his Mishneh Torah, writes, based on the Talmud, that "A person who stares at even a small finger of a woman with the intent of deriving pleasure is considered as if he looked at her genitalia. It is even forbidden to hear the voice of a woman with whom sexual relations are prohibited, or to look at her hair." This is further codified in the Code of Jewish Law, which includes further prohibitions (based on the Talmud) such as "watching women as they do the laundry." Accordingly, pornography would be forbidden a fortiori.
Additionally, Jewish laws of modesty and humility (tzniut) require Jewish men and women to dress modestly. Jewish law thus precludes Jewish men and women from engaging in pornographic modelling or acting, among other acts of immodesty.
According to Chabad.org, the issue is also one of personal control over one's urges, which, it is asserted, pornography takes away.
Michael Coogan stated that the Tanakh does not have any specific laws relating to pornography and Judaism has always had a positive attitude to sex. Some contemporary thinkers opine that the Bible itself contains erotica, such as the Song of Songs. However, most commentators understand that Song of Songs was meant as an allegory, and that it is forbidden to think of such holy things as erotica, which can arouse impure thoughts.
=== Islam ===
In the 151st verse of the chapter Al-An'am in the Qur'an, among the five chief commandments of Allah, the fourth states: "do not even draw near to things shameful - be they open or secret." Nudity is considered shameful and fahisha in Islam.
The Qur'an 24:30 states: "Say to the believers to lower the gaze and to guard their carnal desires."
The Qur'an 24:31 states: "And tell the believing women to lower their gaze and keep covered their private parts, and that they should not show-off their beauty except what is apparent, and let them cast their shawls over their cleavage. And let them not show off their beauty except to their husbands... "
Muhammad is quoted as saying: "Looking at women is a poisoned arrow from the arrows of the devil. Whoever restrains his eye out of fear of God Most High is given a faith that he perceives in his soul…. The eye commits adultery just as the genitals. The adultery of the eye is looking."
According to Indonesia's foremost Islamic preacher, Abdullah Gymnastiar, shame is a noble emotion commanded in the Qur'an and was held high by Muhammad himself, who was quoted as saying, "Faith is compiled of seventy branches… and shame is one of them." In order to cultivate shame in the believer's heart, the sexual gaze needs to be checked, as an unchecked gaze is believed to be the door through which Satan enters and soils the believer's heart. In 2006, during the anti-pornography protests in Indonesia (the world's most populous Muslim-majority country) over publication of the inaugural Indonesian edition of Playboy magazine, Gymnastiar called for legislation to ban pornography and embarked on a mission to shroud the state with a sense of shame, saying "the more shameful, the more faithful."
Indonesia's foremost Islamic newspaper Republika ran daily front-page editorials featuring a logo of the word pornografi crossed out with a red X. Playboy's Jakarta office was ransacked by members of the Islamic Defenders Front (Front Pembela Islam or FPI), and bookstore owners were threatened in order to deter them from selling any issue of the magazine. Consequently, in December 2008, Indonesian lawmakers signed an anti-pornography bill into law with overwhelming political support.
According to some Shafi‘i jurists, it is not forbidden to look at the image of a woman's body reflected in water or in a mirror, even with lust. Ibn Abidin, a Hanafi jurist, wrote: "I couldn't find anything about the disadvantage of looking at pictured private parts, let them be investigated." It has been speculated that some of the Ottoman sultans regarded as caliphs kept obscene miniatures in their miniature collections.
== Indian religions ==
=== Sikhism ===
Although there is no direct prohibition of pornography in Sikhism, Sikhs argue that pornographic books and films, prostitution, and lust lead to adultery. Pornography is said to encourage lust (Kaam), a concept described as an unhealthy obsession with sex and sexual activity. Kaam is classed as one of the 'Five Thieves', personality traits which are heavily discouraged for Sikhs, as they "can build barriers against God in their lives".
=== Buddhism ===
In the Buddhist Pali Canon, Gautama Buddha taught renunciation (Pali: nekkhamma) of sensuality (kama) as a route to Enlightenment. Some Buddhists recite daily the Five Precepts, a commitment to abstain from "sexual misconduct" (kāmesu micchacara, กาเมสุ มิจฺฉาจารา). The Dhammika Sutta (Sn 2.14) includes a precept in which the Buddha enjoins a follower to "observe celibacy".
=== Hinduism ===
One of the central concepts of Hinduism is Purushartha, which is understood as the meaning or purpose of human existence. It advocates the pursuit of four proper goals for a happy life: Dharma (righteous living, performance of one's duty), Artha (money, wealth), Kama (sensual delight, sensory pleasures), and Moksha (spiritual knowledge, self-actualization).
The pursuit of Kama is elaborated by Vatsyayana in his treatise the Kamasutra, in which he opined that, just as good food is necessary for the well-being of the body, good pleasure is necessary for the healthy existence of a human being. A life devoid of pleasure and enjoyment—sexual, artistic, of nature—is hollow and empty. Just as no one stops farming crops even though everyone knows that birds, animals, and insects will try to eat the crop, so too, claims Vatsyayana, one should not abandon the pursuit of kama just because dangers exist. Kama should be pursued with thought, care, caution, and enthusiasm, like farming or any other pursuit in life.
== Others ==
=== Satanism ===
Contemporary Satanism is generally supportive of the production and consumption of pornography, and Satanists have even been involved in producing pornographic films. Many members of the Church of Satan, however, criticise mainstream contemporary pornography on aesthetic rather than moral grounds.
== See also ==
Chastity
Fornication
Religious views on masturbation
Religion and sexuality
== References ==
In late January 2024, sexually explicit AI-generated deepfake images of American musician Taylor Swift proliferated on the social media platforms 4chan and X (formerly Twitter). Several artificial images of Swift of a sexual or violent nature spread quickly, with one post reported to have been seen over 47 million times before its eventual removal. The images led Microsoft to enhance Microsoft Designer's text-to-image model to prevent future abuse. They also prompted responses from anti-sexual assault advocacy groups, US politicians, Swifties, and Microsoft CEO Satya Nadella, among others, and it has been suggested that Swift's influence could result in new legislation regarding the creation of deepfake pornography.
== Background ==
American musician Taylor Swift has reportedly been the target of misogyny and slut-shaming throughout her career. American technology corporation Microsoft offers AI image creators called Microsoft Designer and Bing Image Creator, which employ censorship safeguards to prevent users from generating unsafe or objectionable content. Members of a Telegram group discussed ways to circumvent these censors to create pornographic images of celebrities. Graphika, a disinformation research firm, traced the creation of the images back to a 4chan community.
== Reactions ==
For some, the deepfake images of Swift immediately became a source of controversy and outrage, while other internet users found them humorous and absurd, such as an image that made it appear as though Swift was about to engage in sexual intercourse with Oscar the Grouch. The images drew condemnations from the Rape, Abuse & Incest National Network and SAG-AFTRA. The latter group, which had been following issues regarding AI-generated media prior to Swift's involvement, considered the images "upsetting, harmful and deeply concerning." Microsoft CEO Satya Nadella, whose company's products were believed to have been used to make the images, described the situation as "alarming and terrible", further stating his belief that "we all benefit when the online world is a safe world." The content also sparked debates about race relations, with some questioning whether it was racist to be offended by deepfaked images in which Swift appears ready to engage in sexual acts with the entire Kansas City Chiefs team, most of whom are African American.
=== Taylor Swift ===
A source close to Swift told the Daily Mail that she would be considering legal action, saying, "Whether or not legal action will be taken is being decided, but there is one thing that is clear: These fake AI-generated images are abusive, offensive, exploitative, and done without Taylor's consent and/or knowledge."
=== Politicians ===
White House press secretary Karine Jean-Pierre expressed concern over the counterfeit images, deeming them "alarming", and emphasized the obligation of social media platforms to curb the dissemination of misinformation. Several American politicians called for legislation against AI-generated pornography. Later in the month, a bipartisan bill was introduced by US senators Dick Durbin, Lindsey Graham, Amy Klobuchar and Josh Hawley. The bill would allow victims to sue individuals who produced or possessed "digital forgeries" with intent to distribute, or who received the material knowing it was made without consent. The European Union struck a deal in February 2024 on a similar bill that would criminalize deepfake pornography, as well as online harassment and revenge porn, by mid-2027.
=== Social media platforms ===
X responded to the sharing of the images on its platform by stating that it would suspend accounts that participated in their spread. Despite this, the photos continued to be reshared by accounts on X and spread to other platforms, including Instagram and Reddit. X enforces a "synthetic and manipulated media policy", whose effectiveness has been criticized. The platform briefly blocked searches of Swift's name on January 27, 2024, reinstating them two days later.
=== Swifties ===
Fans of Taylor Swift, known as Swifties, responded to the circulation of these images by pushing the hashtag #ProtectTaylorSwift to trend on X. They also flooded other hashtags related to the images with more positive images and videos of her live performances.
== Cultural significance ==
Deepfake pornography has remained highly controversial and has affected people ranging from other celebrities to ordinary individuals, most of whom are women. Journalists have opined that the involvement of a prominent public figure such as Swift in the dissemination of AI-generated pornography could bring public awareness and political reform to the issue.
== References ==
Pornographic magazines or erotic magazines, sometimes known as adult magazines or sex magazines, are magazines that contain content of an explicitly sexual nature. Publications of this kind may contain images of attractive naked subjects, as is the case in softcore pornography, and, in the usual case of hardcore pornography, depictions of masturbation, oral, manual, vaginal, or anal sex.
They primarily serve to stimulate sexual arousal and are often used as an aid to masturbation. Some magazines are general in their content, while others may be more specific and focus on a particular pornographic niche, part of the anatomy, or model characteristics. Examples include Asian Babes, which focuses on Asian women, and Leg Show, which concentrates on women's legs. Well-known adult magazines include Playboy, Penthouse, Playgirl, and Hustler. Magazines may also carry articles on topics including cars, humor, science, computers, culture, and politics. With the continued progression of print media to digital, retailers have also had to adapt. Software such as Apple's discontinued Newsstand enabled the downloading and displaying of digital versions of magazines but did not allow pornographic material. However, there are specific digital newsstands for pornographic magazines.
== History ==
Pornographic magazines form part of the history of erotic depictions, serving as a medium for the display and dissemination of such material.
In 1880, halftone printing was used to reproduce photographs inexpensively for the first time. The invention of halftone printing took pornography and erotica in new directions at the beginning of the 20th century. The new printing processes allowed photographic images to be reproduced easily in black and white, whereas printers had previously been limited to engravings, woodcuts, and line cuts for illustrations. Halftone printing allowed pornography to become a mass-market phenomenon, more affordable and more easily acquired than any previous form.
First appearing in France, the new magazines featured nude and semi-nude photographs on the cover and throughout; often, burlesque actresses were hired as models. While they would now be termed softcore, they were quite shocking for the time. The publications soon masqueraded either as "art magazines" or as publications celebrating the new cult of naturism, with titles such as Photo Bits, Body in Art, Figure Photography, Nude Living, and Modern Art for Men. The British magazine Health & Efficiency (now H&E naturist, often known simply as H&E) was first published in 1900, and began to include articles about naturism in the late 1920s. Gradually, this material came to dominate – particularly as other magazines were taken over and absorbed. At times in its post-WWII history, H&E has catered primarily to the soft-porn market.
Another early form of pornography was the comic books known as Tijuana bibles, which began appearing in the U.S. in the 1920s and lasted until the publishing of glossy colour men's magazines. They featured crude, hand-drawn scenes, often using popular characters from cartoons and popular culture.
In the 1940s, the word "pinup" was coined to describe pictures torn from men's magazines and calendars and "pinned up" on the wall by U.S. soldiers in World War II. While the 1940s images focused mostly on legs, by the 1950s, the emphasis shifted to breasts. Betty Grable and Marilyn Monroe were two of the most popular pinup models. Monroe continued to be a popular model for the men's magazines in the 1950s.
The 1950s saw the rise of the first mass market softcore pornographic magazines: Modern Man in 1952 and Playboy in 1953. Hugh Hefner's Playboy started a new style of the men's glossy magazine (or girlie magazine). Hefner coined the term centerfold, and in the first edition of his Playboy used a photograph of a nude Monroe, despite her objections. Another term that became popular with Playboy readers was the "Playboy Playmate". These new-style magazines featured nude or semi-nude women, sometimes simulating masturbation, although their genitals or pubic hair were not actually displayed.
In 1963, Lui was launched in France to compete against Playboy, while Bob Guccione did the same in the United Kingdom in 1965 with Penthouse. Penthouse's style was different from other magazines, with women looking indirectly at the camera, as if they were going about their private idylls. This change of emphasis influenced erotic depictions of women. Penthouse was also the first magazine to publish pictures that included pubic hair and full frontal nudity, both of which were considered beyond the bounds of the erotic and in the realm of pornography at the time. In 1965, Mayfair was launched in the UK in competition with Playboy and Penthouse. In September 1969 Penthouse was launched in the U.S., bringing new competition to Playboy. In order to retain its market share, Playboy followed Penthouse in displaying pubic hair, risking obscenity charges and launching the "Pubic Wars". As competition between the two magazines escalated, their photos became increasingly explicit. In the late 1960s, some magazines began to move into more explicit displays, often focusing on the buttocks, as standards of what could be legally depicted and what readers wanted to see shifted.
By the 1970s, magazines containing images of the pubic area became increasingly common. In the UK, Paul Raymond acquired and relaunched Men Only as a pornographic magazine in 1971, and launched Club International in 1972. Playboy first clearly showed visible pubic hair in its January 1971 issue, and its first full frontal nude centerfold was Miss January 1972. In 1974, Larry Flynt first published Hustler in the US, which contained more explicit material. Some researchers have detected increasingly violent images in magazines like Playboy and Penthouse over the course of the 1970s, with the magazines then returning to their more upscale style by the end of the decade. Paul Raymond Publications relaunched Escort in 1980 in the UK, Razzle in 1983, and Men's World in 1988.
Sales of pornographic magazines in the U.S. have declined significantly since 1979, with a nearly 50% reduction in circulation between 1980 and 1989. The fact that the U.S. incidence of rape had increased over the same period has cast doubt on any correlation between magazine sales and sex crimes. Studies from the mid-1980s to the early 1990s nearly all confirmed that pornographic magazines contained significantly less violent imagery than pornographic films.
In the 1990s, magazines such as Hustler began to feature more hardcore material such as sexual penetration, lesbianism and homosexuality, group sex, masturbation, and fetishes. In the late 1990s and 2000s the pornographic magazine market declined, as it was challenged by new "lad mags" such as FHM and Loaded, which featured softcore photos. The availability of pornographic DVDs and internet pornography also led to a decline in magazine sales. Many magazines developed their own websites which also show pornographic films. Despite falling sales, the top-selling U.S. adult magazines still maintain high circulations compared to most mainstream magazines, and are amongst the top-selling magazines of any type.
Paul Raymond Publications dominates the British adult magazine market today, distributing eight of the ten top selling adult magazines in the UK. There were about 100 adult magazine titles in the UK in 2001.
== Common features ==
Several magazines feature photos of "ordinary" women submitted by readers, for example the Readers Wives sections of several British magazines and Beaver Hunt in the US. Many magazines also feature supposed accounts of their readers' sexual exploits, many of which are actually written by the magazines' writers. Many magazines contain a large number of advertisements for phone sex lines, which provide an important source of revenue.
== Gay magazines ==
An early example of borderline gay pornography was the physique magazine, a genre which had wide circulation in the 1950s and 1960s. Physique magazines mostly consisted of photographs of attractive, scantily-clad young men, and occasionally homoerotic illustrations by gay artists like George Quaintance and Tom of Finland. The magazines contained no overt mentions or depictions of homosexuality and used the pretense of demonstrating bodybuilding techniques or providing photos as visual references for artists, but it was widely understood that they were purchased almost exclusively by gay men. Major examples of the genre include Physique Pictorial (the first of its kind, debuting in 1951), Tomorrow's Man, and Grecian Guild Pictorial.
Shifts in the judicial interpretation of obscenity in the US and elsewhere led to physique magazines being supplanted in the mid-to-late 1960s by new publications which openly acknowledged a gay audience and featured nudity, and later hardcore sex.
== Production, distribution, and retail ==
A successful magazine requires significant investment in production facilities and a distribution network, including large printing presses and numerous specialized employees such as graphic designers and typesetters. Today a new magazine start-up can cost as much as $20 million, and magazines are significantly more expensive to produce than pornographic films, and more expensive still than internet pornography.
Like all magazines, pornographic magazines are dependent on advertising revenue, which may force a magazine to tone down its content.
Depending on the laws in each jurisdiction, pornographic magazines may be sold in convenience stores, newsagents and petrol stations. They may need to be sold on the top shelf of a retail display to prevent children reaching them, hence their euphemistic name top shelf magazines. Alternatively it may be necessary to sell them under the counter or in plastic wrappers. Some retail chains and many independent retail outlets do not stock pornographic magazines. They may also be sold in sex shops or by mail order.
== See also ==
Fetish magazine
Glamour photography
History of erotic depictions
List of pornographic magazines
Pubic Wars
== References ==
== Bibliography ==
Hanson, Dian (2004). Dian Hanson's The History of Men's Magazines vol. 1 From 1900 to Post WW II. Taschen. ISBN 978-3822822296.
Hanson, Dian (2004). Dian Hanson's The History of Men's Magazines vol. 2 From Post-War to 1959. Taschen. ISBN 978-3822826256.
Hanson, Dian (2005). Dian Hanson's The History of Men's Magazines vol. 3 1960s at the newsstand. Taschen. ISBN 978-3822829769.
Hanson, Dian (2005). Dian Hanson's The History of Men's Magazines vol. 4 1960s under the counter. Taschen. ISBN 978-3822836354.
Hanson, Dian (2005). Dian Hanson's The History of Men's Magazines vol. 5 1970s at the newsstand. Taschen. ISBN 978-3822836361.
Hanson, Dian (2005). Dian Hanson's The History of Men's Magazines vol. 6 1970s under the counter. Taschen. ISBN 978-3822836378.
Kimmel, Michael S. (2005). The gender of desire: essays on male sexuality. SUNY Press. ISBN 978-0-7914-6337-6.
Pendergast, Tom (2000). Creating the Modern Man: American Magazines and Consumer Culture, 1900-1950. University of Missouri Press. ISBN 9780826262240.
Women Against Pornography (WAP) was a radical feminist activist group based out of New York City that was influential in the anti-pornography movement of the late 1970s and the 1980s.
WAP was the best known of the many feminist anti-pornography groups active throughout the United States and the anglophone world, primarily from the late 1970s through the early 1990s. After previous failed attempts to start a broad feminist anti-pornography group in New York City, WAP was finally established in 1978. WAP quickly drew widespread support for its anti-pornography campaign, and in late 1979 held a March on Times Square that drew over 5,000 supporters. Through the march, as well as other forms of activism, WAP was able to bring in unexpected financial support from the Mayor's office, theater owners, and other parties with an interest in the gentrification of Times Square.
WAP became known for its anti-pornography informational tours of sex shops and pornographic theaters in Times Square. In the 1980s, WAP began to focus more on lobbying and legislative efforts against pornography, particularly in support of civil-rights-oriented antipornography legislation. It was also active in testifying before the Meese Commission, and some of its advocacy of a civil-rights-based anti-pornography model found its way into the commission's final recommendations. In the late 1980s, WAP's leadership changed focus again, this time to the issue of international sex trafficking, which led to the founding of the Coalition Against Trafficking in Women. In the 1990s WAP became less active, and it eventually faded out of existence in the middle of the decade.
The positions of Women Against Pornography were controversial. Civil liberties advocates opposed WAP and similar groups, holding that the legislative approaches WAP advocated amounted to censorship. In addition to this, WAP faced conflict with sex-positive feminists, who held that feminist campaigns against pornography were misdirected and ultimately threatened sexual freedoms and free speech rights in a way that would be detrimental toward women and sexual minorities. WAP and sex-positive feminists were involved in conflict in the events surrounding the 1982 Barnard Conference. These events were battles in what became known as the Feminist Sex Wars of the late 1970s and 1980s.
== Formation ==
The group that eventually became Women Against Pornography emerged from the efforts of New York radical activists in fall 1976, after the public controversy and pickets organized by Andrea Dworkin and other radical feminists over the public debut of Snuff. It was part of a larger wave of radical feminist organizing around the issue of pornography, which included protests by the Los Angeles group Women Against Violence Against Women against The Rolling Stones' sadomasochistic advertisements for their album Black and Blue (see below). Founding members of the New York group included Adrienne Rich, Grace Paley, Gloria Steinem, Shere Hite, Lois Gould, Barbara Deming, Karla Jay, Andrea Dworkin, Letty Cottin Pogrebin, and Robin Morgan. These initial efforts stalled after a year of meetings and resolutions over a position paper, which they hoped to place as a paid advertisement in The New York Times, expressing feminist objections to pornography and distinguishing them from conservative complaints against "obscenity".
In November 1978, a group of New York feminists participated in a national feminist antipornography conference, organized by Women Against Violence in Pornography and Media (WAVPM) in San Francisco. After the conference, Susan Brownmiller approached WAVPM organizers Laura Lederer and Lynn Campbell, and encouraged them to come to New York City to help with anti-pornography organizing there. Lederer decided to stay in San Francisco to edit an anthology based on the conference presentations, but Campbell took up the offer. She arrived in New York in April 1979, with Brownmiller, Adrienne Rich, and Frances Whyatt contributing money to help her cover her living expenses while the organizing work progressed. Dolores Alexander was soon recruited as a fundraiser, and Barbara Mehrhof was hired as an organizer soon thereafter with the money that Alexander was able to raise. Brownmiller soon took an unpaid position as the fourth organizer.
== Membership and support ==
The original organizers of Women Against Pornography came primarily from the New York radical feminist groups that had developed during the 1970s, but once their organization began they found unexpected sources of membership and support from across New York. According to Susan Brownmiller,
The group that became Women Against Pornography was livelier and more diverse than any I'd ever worked with in the movement. Maggie Smith's bar, with "I Will Survive" blaring on the jukebox, was a pit stop for the neighborhood prostitutes she was trying to keep off junk. Amina Abdur Rahman, education director for the New York Urban League, had been with Malcolm X in the Audubon Ballroom on the night he was murdered. Dianne Levitt was the student organizer of an anti-Playboy protest at Barnard, Dorchen Leidholdt had founded New York WAVAW, Frances Patai was a former actress and model, Marilyn Kaskel was a TV production assistant, Angela Bonavoglia did freelance magazine writing, Jessica James was starring off Broadway, Janet Lawson was a jazz singer, Alexandra Matusinka's family ran a nearby plumbing supply store, Sheila Roher was a playwright, Ann Jones was writing Women Who Kill, Anne Bowen had played guitar with The Deadly Nightshade, and Myra Terry was an interior decorator and a NOW chapter president in New Jersey.
The diversity in perspectives within the group was the source of considerable debate and some acrimony. WAP originally did not take a stance on the issue of prostitution, for example, since there was a division between members who opposed prostitution as a form of male domination and those who wanted to bring prostitutes into the movement. (WAP later came to strongly oppose prostitution as a form of exploitation of women, and critiqued pornography as a "system of prostitution".) There was also considerable tension between heterosexual feminists and lesbian separatists.
WAP's decision to focus attention on pornography and prostitution in Times Square drew unexpected support from Broadway theater owners and city development agencies despairing at the increasing crime and urban blight in the neighborhood of Times Square. Carl Weisbrod, the head of the Mayor's Midtown Enforcement Project, helped them secure rent-free office space from the 42nd Street Redevelopment Corporation, in an empty bar and restaurant storefront that they were able to use until a buyer could be found (they occupied the storefront for more than two years, until two adjacent buildings collapsed during a renovation). St. Malachy's, a Midtown actors' chapel, contributed surplus desks. When Bob Guccione tried to buy the storefront space (in order to open an establishment to be named the Meat Rack), WAP alerted neighborhood residents, who protested and defeated the proposed deal.
However, the wider involvement sometimes created conflicts with supporters who did not realize that the group's goals extended beyond Times Square:
The League of New York Theater Owners wrote us a check for ten thousand dollars, although Gerry Schoenfeld of the Shubert Organization, the czar behind the generous gift, threw a fit when he saw that our mission was somewhat broader than "clean up Times Square." "Playboy?" he yelled one day, barging into the office. "You're against Playboy? Where's Gloria Steinem? Does she know what you're doing?"
== March on Times Square ==
Women Against Pornography also organized a March on Times Square, held October 20, 1979. The march drew between five and seven thousand demonstrators, who marched behind a huge stitched banner reading "Women Against Pornography / Stop Violence Against Women," including Brownmiller, Alexander, Campbell, Mehrhof, Bella Abzug, Gloria Steinem, Robin Morgan, Andrea Dworkin, Charlotte Bunch, Judy Sullivan, and Amina Abdur-Rahman. The march drew extensive coverage on the CBS evening news and in the morning papers.
== Later history ==
After the March on Times Square, Lynn Campbell resigned her position as an organizer (due to her failing health) and Brownmiller resigned to finish work on her book Femininity, while Dorchen Leidholdt took on a new leadership role in the organization.
In 1988, WAP organized a conference titled "Trafficking in Women", co-sponsored with Evelina Giobbe's feminist anti-prostitution group Women Hurt in Systems of Prostitution Engaged in Revolt (WHISPER). The conference explored the alleged role of sex trafficking in bringing women into the sex industry. As a result of this conference, Leidholdt felt it would be more productive to focus on combatting the international sex industry, and founded the Coalition Against Trafficking in Women (CATW) for that purpose. She also soon stepped down as leader of Women Against Pornography in order to focus her efforts on this new campaign.
After the departure of Leidholdt, WAP became much less active. The group was led by Norma Ramos, who continued to make appearances in the name of WAP through the early 1990s. WAP faded out of existence during the mid-1990s, closing in 1996–'97, though Leidholdt and Ramos both continued to be active in CATW into the 2000s.
== Campaigns ==
Throughout the late 1970s and early 1980s, Women Against Pornography focused on educational campaigns to raise awareness of what they viewed as the harms caused by pornography and the sex industry. Their activism took on many forms, including exposé slide shows, tours of sex industry outlets in Times Square, conferences, and public demonstrations.
=== Slide shows ===
The group's earliest educational efforts were a series of slide shows of hardcore and softcore pornography, which were shown with critical commentary by a WAP presenter. The format of a slide show with critical commentary had been used earlier by Julia London of the Los Angeles group Women Against Violence Against Women to illustrate soft-core pornographic themes in rock album covers; WAP adapted the format to discuss pornography in general, including hardcore pornography. Slide shows were generally organized by local feminist groups, and held in women's homes as part of consciousness-raising meetings. The anti-pornography movement has continued to use slide shows as an educational tactic for feminist group meetings and public events.
Opponents of anti-pornography feminism have criticized the slide shows of WAP and similar groups, claiming that they disproportionately emphasized violent and sadomasochistic materials and presented these themes as being typical of all pornography.
=== Times Square tours ===
Women Against Pornography's best-known tactic was a guided tour of the pornography and prostitution outlets in Times Square, which they led twice a week for a suggested contribution of $5.00. (In San Francisco, WAVPM had conducted similar tours in the red-light districts of that city.) Lynn Campbell suggested that people who did not consume pornography knew very little about the content of the pornography or the atmosphere in sex shops and live sex shows, and that actual guided tours of the sex industry in Times Square would provide an excellent educational tool. Susan Brownmiller planned an itinerary for the tour and wrote a script for the guides (with the help of information supplied by Carl Weisbrod, a police officer tasked with finding and closing down underground brothels in Midtown, and Maggie Smith, the owner of a neighborhood bar). The tours often involved unplanned encounters—being physically thrown out by enraged store managers, watching businessmen try to hide from the tourists, or talking briefly with nude performers while they took their breaks. After a reporter for The New York Times took one of the first tours and wrote a feature article for the Style section, WAP received coverage in People, Time, The Philadelphia Inquirer, European newspapers, local TV news programs and talk shows in New York City, and The Phil Donahue Show in Chicago.
=== Demonstrations ===
Women Against Pornography also organized a number of large demonstrations against pornography, most notably the March on Times Square (see above).
=== Later campaigns ===
During the era of Dorchen Leidholdt's leadership, the group continued the Times Square tours and slide shows, organized smaller-scale protest demonstrations, sent out speakers, and held public panel discussions on pornography. It also announced "WAP zaps," a series of publicly announced awards and condemnations focused on the advertising industry, and expressed public support for Linda Boreman after she publicly stated that Chuck Traynor had violently coerced her into making Deep Throat and other pornographic films as "Linda Lovelace". WAP also became more active in political lobbying during this time.
WAP was among several groups that protested the release of pornographic video games by Mystique during the 1980s, especially against their game Custer's Revenge, which was seen by many as racist.
=== Lobbying ===
WAP also focused on lobbying for anti-pornography legislation, particularly legislation such as the Dworkin-MacKinnon Antipornography Civil Rights Ordinance that adhered to the feminist "civil rights" approach rather than the older "obscenity" approach. In accordance with this, in 1984 WAP lobbied to change a proposed Suffolk County, New York anti-pornography ordinance to reflect their approach; when these changes were not forthcoming, WAP, along with several anti-censorship groups, successfully lobbied against passage of the measure.
In 1986, the group played an important role in the Meese Commission hearings, helping the commission locate witnesses and having Dorchen Leidholdt testify during the commission hearings. In spite of this, WAP sought to distance itself from the commission, which took a conservative anti-obscenity approach to pornography, even holding a demonstration against the commission immediately before Leidholdt's appearance as a friendly witness. Much of their language of pornography as a civil rights violation against women found its way into the final report of the Meese Commission.
=== Advertising awards ===
WAP held an annual awards ceremony in which plastic pigs were handed out for advertising campaigns that WAP considered "demeaning to women and girls" and "Ms. Liberty awards" were awarded for "prowoman ads". Many advertisers disagreed with WAP's interpretation of their ad campaigns, though at least one recipient of a "pig" award, the shoemaker Famolare, responded by changing its ads, and was rewarded with a "Ms. Liberty" award the next year.
=== Conferences ===
In 1987, WAP organized a conference titled "The Sexual Liberals and the Attack on Feminism", a forum in which various notable radical feminist writers stated their opposition to the newly emerging school of sex-positive feminism. In 1988, WAP (along with WHISPER), organized a conference titled "Trafficking in Women" (see above), addressing the question of the role of trafficking in the international sex industry.
=== Case support ===
According to Dworkin, in about 1988 WAP established a criminal defense fund for Jayne Stamen. Stamen had been convicted of manslaughter for arranging a beating of her husband, in which he died, and of criminal solicitation for trying to have him murdered after he threatened violence; her actions followed her experience of her husband's use of pornography. The fund was unable to raise bail money for her appeal.
== Opposition and controversies ==
The late former magazine editor and porn actress Gloria Leonard was an outspoken advocate for the adult industry and for several years in the 1980s debated representatives from WAP at numerous college campuses.
=== Civil liberties and sexual liberalism ===
Many of Women Against Pornography's campaigns for legal remedies against pornography brought them into direct confrontation with civil liberties advocates such as the ACLU, who argued that laws such as the Dworkin/MacKinnon Ordinance were simply another form of censorship. WAP was particularly criticized for what was seen by many as its friendly stance toward the Meese Commission, which was viewed by many as a government attack on civil liberties. For its part, WAP argued that an absolutist free speech doctrine ended up compromising the civil rights of women. WAP also charged that monetary contributions from pornographers to groups like the ACLU had compromised the ability of such groups to view legal tactics against pornography objectively.
From its beginnings, the group was controversial in feminist circles; many feminists felt that campaigns against pornography were misdirected and ultimately threatened sexual freedoms and free speech rights in a way that would be detrimental to women, gay people, and sexual minorities. Ellen Willis was particularly outspoken in her criticism of WAP and other feminist anti-pornography campaigns. Opposition to the kind of feminist anti-pornography politics espoused by WAP led to the rise of an opposing movement within feminism known as "pro-sex feminism" (a term coined by Willis). For its part, WAP viewed sex-positive feminists as "sexual liberals" and "sexual liberationists" who were not real feminists and were blind to (or possibly even in collusion with) male sexual oppression of women and the central role of such oppression in upholding male dominance.
These controversies came to a head in an event known as the Barnard Conference on Sexuality, a 1982 academic conference on feminist perspectives on sexuality. The conference was organized by "pro-sex" and other feminists who felt that their perspectives were excluded by the dominance of the anti-pornography radical feminist position in feminist circles. The latter were in turn excluded from participation in the Barnard Conference. WAP responded by picketing the conference. It is also alleged that WAP engaged in a campaign of harassment against several of the conference organizers (among them author Dorothy Allison), publishing their home addresses and phone numbers on leaflets that were distributed publicly, engaging in telephone harassment, and calling the employers of these individuals in an attempt to get them fired from their jobs. In 1984, feminists opposed to Women Against Pornography and feminist anti-pornography politics coalesced in the Feminist Anti-Censorship Taskforce (FACT).
The often-acerbic confrontations between sex-positive and anti-porn feminists (in which WAP played a central role) during the 1980s became known as the Feminist Sex Wars.
=== Coalescing with nonfeminists ===
A criticism is that by coalescing with nonfeminists against pornography, specifically the Christian right, feminists are co-opted and the movement itself becomes nonfeminist. According to Alice Echols in 1983, "[t]he cultural feminists of WAP appeal to women's sense of sexual vulnerability and the resilience of gender stereotypes in their struggle to organize all women into a grand and virtuous sisterhood to combat male lasciviousness. Thus, when Judith Bat-Ada argues that to fight pornography 'a coalition of all women needs to be established, regardless of ... political persuasion,' she abandons feminism for female moral outrage."
== Similar groups ==
A number of feminist anti-pornography groups sprang up throughout the United States, as well as internationally, particularly during the late 1970s and early 1980s. Some histories of the anti-pornography movement mistakenly refer to the activities of these groups as those of "Women Against Pornography", by far the best-known of them.
Among the first such groups was Women Against Violence Against Women (WAVAW), which was founded in Los Angeles in 1976 and was led by Marcia Womongold. This group was best known for holding a demonstration in 1977 in response to a BDSM-themed billboard for the Rolling Stones album Black and Blue, which showed a bound and bruised woman with the caption "I'm 'Black and Blue' from the Rolling Stones — and I love it!". The billboard was removed in response to WAVAW's protests. WAVAW went on to start a number of chapters in several cities throughout North America and the United Kingdom, with a particularly active chapter in Boston. (A New York City chapter headed by Dorchen Leidholdt also existed prior to the founding of WAP.) The group was active until 1984.
Women Against Violence in Pornography and Media (WAVPM) was a San Francisco group that played a very important role in the founding of WAP. According to Alice Echols, "the two groups share[d] the same analysis." WAVPM pioneered many of WAP's tactics (such as slide shows, porn shop tours, and mass demonstrations in red light districts). It was active from 1976 to 1983 and was led by Lynn Campbell (who went on to become the first head of WAP) and Laura Lederer.
Feminists Fighting Pornography, led by Page Mellish, did organizing in New York City.
Feminists Against Pornography was a different group, active in Washington, D.C. during the late 1970s and early 1980s.
The Pornography Resource Center, a Minneapolis group, was founded in 1984 to support Catharine MacKinnon's campaign to pass the Antipornography Civil Rights Ordinance in Minneapolis. The group changed its name to Organizing Against Pornography in 1985 and was active until 1990.
In the United Kingdom, the feminist Campaign Against Pornography (CAP) was launched by British MP Clare Short in 1986 and was best known for its "Off the Shelf" campaign against "Page Three girls" in British tabloids. A breakaway group, Campaign Against Pornography and Censorship (CPC), started by Catherine Itzin in 1989, adhered more closely to the civil rights anti-pornography approach favored by Women Against Pornography. CPC was active in Ireland as well as the UK. Both groups were active until the mid-1990s.
In New Zealand, groups calling themselves "Women Against Pornography" were active during the eighties and early nineties (1983–1995). A Wellington group was formed in 1983 and an Auckland group in 1984. Their work focussed on depictions of sexual exploitation and sexual violence in film, video and art. They had no formal connection to the American group. They are best known for their 1984 attempt to force the resignation of New Zealand Chief Censor Arthur Everard after he allowed the horror film I Spit on Your Grave to be shown in that country. In this national context, the Society for Promotion of Community Standards had tried to prevent the criminalisation of spousal rape in 1982, so there were tensions between the Christian Right and feminist anti-pornography activists. There was also a strengthened movement for LGBT rights in New Zealand, which benefited from prevalent social liberalism and argued that gay pornography did not operate according to the same psychological and sociological parameters as its heterosexual equivalent. When it dissolved in 1995, Women Against Pornography had not adopted a strategy that converged with the New Zealand Christian Right, unlike many of its national counterparts abroad. Much of this was due to the weakness of the New Zealand Society for Promotion of Community Standards after its co-belligerency against the Homosexual Law Reform Act 1986.
The group Scottish Women Against Pornography (SWAP) was started in 1999 and was still active as of 2008. It also has no formal connection with the American group and was started well after its demise.
In 2002, anti-pornography feminist Diana Russell and several cohorts informally used the name "Women Against Pornography" for a demonstration against the opening of the Hustler Club, a San Francisco strip club.
== See also ==
WAP (song)
Financial Coalition Against Child Pornography
Gail Dines
== Bibliography ==
MacKinnon, Catharine A., & Andrea Dworkin, eds., In Harm's Way: The Pornography Civil Rights Hearings (Cambridge, Mass.: Harvard Univ. Press, pbk. 1997 (ISBN 0-674-44579-1)) (includes discussion of WAP)
== References ==
== External links ==
Women Against Pornography Records, Schlesinger Library, Radcliffe Institute, Harvard University. Archived 2012-05-09 at the Wayback Machine.
"Women's War on Porn", Time, August 27, 1979.
"Anti-Porn March in Times Square (1979)", WPIX. (Archived on YouTube.)
The Women Against Violence Against Women records, 1972–1985 are located in the Northeastern University Libraries, Archives and Special Collections Department, Boston, MA.
The papers of the British Campaign Against Pornography (CAP) are held at The Women's Library at London Metropolitan University, ref 5CAP
Bisexual pornography is a genre of pornography that most typically depicts two or more men and at least one woman who all perform sex acts on each other. While a sex scene involving two or more women and one man who all perform sex acts on each other might occasionally be identified or labeled as bisexual, it typically is not labeled that way.
== History ==
Representations of bisexual eroticism have been found in ancient Etruscan tomb paintings that feature homosexual activity combined with heterosexual activity.
== Pornographic film industry ==
In the first half of the twentieth century, some stag films presented bisexual content. Polissons et galipettes features several French examples, primarily L'heure du thé and Madame Butterfly (both c. 1925), which feature acts performed between both men and women. Kenneth Tynan documents many examples, including the French The Chiropodist from the 1920s and the American A Stiff Game (which features a rare interracial male-male pairing).
Bisexual pornography began to develop as a genre in the early to mid-1980s. While bisexual content featuring lesbianism had been prevalent during the 1960s and 1970s, bisexual content featuring male homosexuality was first introduced by major pornographic studios in the early 1980s. Paul Norman was one of the earliest directors to gain a reputation for creating bisexual films, with his "Bi and Beyond" series debuting in 1988. Content featuring male bisexuality has been a growing trend since the advent of internet pornography. However, the genre remains a very small proportion of the pornographic DVD market; for example, at porn retailer HotMovies.com, there are only 655 bisexual titles out of a catalogue of more than 90,000 films. Bisexual DVDs sell much better online than in adult video stores, possibly due to customers in stores feeling embarrassed to buy them. Most bisexual pornography is made by small production companies rather than the major studios. Actors are mostly amateur; any well-known actors in bisexual porn tend to be from the gay pornography industry. The sex columnist Violet Blue states that bisexual pornography usually features gay male actors who are straight-for-pay. Blue says that many bisexual productions suffer from poor casting, lack of enthusiasm from homophobic directors, and lackluster performances because "all too many bisexual videos feature two men having sex with each other while desperately trying not to enjoy the female participants."
The market for bisexual pornography is not completely understood. Viewers include self-identified bisexuals, but a larger viewership may be self-identified heterosexual men who are curious about sex between men. Many viewers are also heterosexual women who enjoy gay male pornography. Although it includes heterosexual content, bisexual pornography is often considered a "gay" genre by the pornographic industry, and many of its viewers are gay men. Bisexual pornography is often placed near the gay section in adult video stores, since many consumers are gay men. As the genre has developed, it has become increasingly associated with the gay male pornographic industry. By the 2000s, bisexual productions featured fewer scenes with women together, and bisexual scenes frequently resembled gay scenes with heterosexual content added.
Pornography featuring two or more men and a woman (e.g., gang bang or double penetration pornography) is generally classified as "straight pornography" as long as "there is little to no contact between the men" in order to "straighten" what otherwise could be a homoerotic encounter. Only when the men have sex with each other is the scene considered bisexual pornography. Bisexual pornography is generally not marketed to heterosexual men. According to the writer Jeffrey Escoffier, bisexual pornography is usually considered a gay genre.
Male performers in heterosexual porn who have appeared in bisexual porn have had their sexuality questioned, have been stigmatized due to homophobia, and have been accused by the gay community of being in denial about their sexual orientation, while male performers in gay porn who have appeared in bisexual porn have been accused of being heteronormative.
In August 2018, MindGeek's gay pornographic website Men.com created controversy by releasing its first scene featuring MMF bisexual porn, sparking a discussion over whether bisexual porn belongs on a gay porn website. In reaction to the controversy, MindGeek decided to stop featuring bisexual pornography on Men.com and created a separate bisexual website instead called WhyNotBi.com.
By 2019, bisexual pornography was a fast-growing genre. In 2021, "bisexual male" was one of Pornhub's top trending searches and bisexual videos were viewed more often by women than by men.
Jim Powers, a director of bisexual pornographic films for the studio BiPhoria, says that stigma exists against bisexual pornography. Powers says that the genre is rapidly increasing in popularity, but that prejudice persists in the industry: heterosexual men who appear in bisexual pornography are stigmatized, women who work with bisexual or gay men may be shunned due to perceived risk of HIV infection, and gay viewers may object to gay performers working with women because heterosexual sex is perceived as reinforcing a "heteronormative paradigm". Powers says that homophobic straight agents try to stop straight men from appearing in bisexual pornography, because gay sex is considered "verboten", while many gay and bisexual performers have stopped identifying as either gay or straight because the genre can be a "battlefield" due to gay viewers who want to "keep their gay men ‘pure’ and not to mix with women."
== Notable directors and performers ==
Kurt Lockwood
Danny Wylde
Sharon Kane
Chi Chi LaRue
Shy Love
Paul Norman (director)
Ona Zee
== See also ==
Gay pornography
List of LGBT-related films
Pornographic film
== Bibliography ==
Marjorie Garber: Bisexuality and the Eroticism of Everyday Life (Routledge, 2000). ISBN 0-415-92661-0
== References ==
Lesbian pornography is a form of adult entertainment that features sexual activity between women. Its primary goal is the sexual arousal of its audience, and it is most often produced as erotic content aimed at heterosexual male, homosexual female, and bisexual audiences. It has also been found that many heterosexual women prefer this genre of porn due to its greater focus on women's pleasure.
Homoerotic art and artifacts depicting women have a long history, reaching back to various ancient civilizations. Every medium has been used to represent women having sex with each other. In contemporary mass media, this content is primarily disseminated through home videos (including DVDs), cable broadcasts, emerging video-on-demand and wireless markets, as well as online photo sites and lesbian pulp fiction.
== Audience ==
Deborah Swedberg, in an analysis published in the NWSA Journal in 1989, argues that it is possible for lesbian viewers to reappropriate lesbian porn. Swedberg notes that, typically, all-women films differ from mixed porn (with men and women) in, among other things, the settings (less anonymous and more intimate) and the very acts performed (more realistic and emotionally involved, and with a focus on the whole body rather than just the genitals): "the subject of the heterosexually produced all-women videos is female pleasure". She argues (against Laura Mulvey's "Visual Pleasure and Narrative Cinema" and Susanne Kappeler's Pornography and Representation, for example) that such movies allow for female subjectivity since the women are more than just objects of exchange. Appropriation by women of male-made lesbian erotica (such as by David Hamilton) was signaled also by Tee Corinne.
Starting in 2013, Pornhub has published annual reports of user activities and found that the lesbian category has been consistently the most popular among female viewers since 2014, when gender statistics were first gathered (except in 2020, when the data was limited), and that women in general, regardless of sexual orientation, are more likely to search for lesbian-associated terms such as "scissoring" than men. Several articles, including those by Cosmopolitan, Glamour, and Women's Health magazines, have supported these findings through research of their own.
== Mainstream inauthenticity ==
Mainstream lesbian pornography is criticized by some members of the lesbian community for its inauthenticity. According to author Elizabeth Whitney, "lesbianism is not acknowledged as legitimate" in lesbian porn due to the prevalence of "heteronormatively feminine women", the experimental nature, and the constant catering to the male gaze, all of which counter real life lesbianism.
A study conducted by Valerie Webber found that most actors in lesbian porn consider their own pornographic sex somewhere on a spectrum between real and fake sex, depending on several factors. They were more likely to consider it authentic if there was a real attraction between themselves and the other actor(s) in the scene, and if they felt mutual respect between themselves and the producers.
Authenticity in porn is disputed because some assert that the only authentic sex has no motive other than sex itself. Porn sex, being shot for a camera, automatically has other motives than sex itself. On the other side, some assert that all porn sex is authentic since the sex is an occurrence that took place, and that is all that is needed to classify it as authentic.
With regard to the authenticity of their performance, some lesbian porn actors describe their performance as an exaggerated, altered version of their real personality, providing some authenticity to the performance. Authenticity depends on real life experiences, so some lesbian porn actors feel the need to create an entirely different persona to feel safe. Webber writes of Agatha, a queer actor in lesbian porn who "prefers that the activity and ambiance of her performances be very inauthentic, because otherwise it feels 'too close to home'", referring to the oppression and verbal abuse she is subject to by homophobic men in her daily life.
== Penetration ==
As in straight and gay male porn, there is an emphasis on penetration in lesbian porn. Even though studies have found that dildos have minimal use in real life lesbian sexual activity, lesbian porn prominently features dildos. According to Lydon, the ability to achieve orgasm clitorally, as opposed to penetratively, eliminates the need for a phallus and, by extension, for a man. For this reason, male producers continue to include, and male viewers continue to demand, a phallus as a central feature in lesbian porn.
== See also ==
Bara
Bisexual pornography
Boyd McDonald
David Hurles
Erotic literature
Gay pulp fiction
Gay sex roles
Gay sexual practices
List of actors in gay pornographic films
Sex industry
Yuri
== References ==
Aspects of Scientific Explanation and other Essays in the Philosophy of Science is a 1965 book by the philosopher Carl Gustav Hempel. It is regarded as one of the most important works in the philosophy of science written after World War II.
== Reception ==
The historian Peter Gay called Aspects of Scientific Explanation "seminal" and "indispensable", writing that Hempel persuasively argued that "the logic of history and that of the natural sciences are the same." Gay observed that Hempel's essay "The Function of General Laws in History" is a "much debated classic". The philosopher Michael Friedman described the book as one of the most important works in philosophy of science written after World War II.
== See also ==
Models of scientific inquiry
== References ==
=== Bibliography ===
Books
Atomic theory is the scientific theory that matter is composed of particles called atoms. The definition of the word "atom" has changed over the years in response to scientific discoveries. Initially, it referred to a hypothetical concept of there being some fundamental particle of matter, too small to be seen by the naked eye, that could not be divided. Then the definition was refined to being the basic particles of the chemical elements, when chemists observed that elements seemed to combine with each other in ratios of small whole numbers. Then physicists discovered that these particles had an internal structure of their own and therefore perhaps did not deserve to be called "atoms", but renaming atoms would have been impractical by that point.
Atomic theory is one of the most important scientific developments in history, crucial to all the physical sciences. At the start of The Feynman Lectures on Physics, physicist and Nobel laureate Richard Feynman offers the atomic hypothesis as the single most prolific scientific concept.
== Philosophical atomism ==
The basic idea that matter is made up of tiny indivisible particles is an old idea that appeared in many ancient cultures. The word atom is derived from the ancient Greek word atomos, which means "uncuttable". This ancient idea was based in philosophical reasoning rather than scientific reasoning. Modern atomic theory is not based on these old concepts. In the early 19th century, the scientist John Dalton noticed that chemical substances seemed to combine with each other by discrete and consistent units of weight, and he decided to use the word atom to refer to these units.
== Groundwork ==
Working in the late 17th century, Robert Boyle developed the concept of a chemical element as a substance different from a compound.
Near the end of the 18th century, a number of important developments in chemistry emerged without referring to the notion of an atomic theory. The first was Antoine Lavoisier, who showed that compounds consist of elements in constant proportion, redefining an element as a substance which scientists could not decompose into simpler substances by experimentation. This brought an end to the ancient idea of the elements of matter being fire, earth, air, and water, which had no experimental support. Lavoisier showed that water can be decomposed into hydrogen and oxygen, which in turn he could not decompose into anything simpler, thereby proving these are elements. Lavoisier also defined the law of conservation of mass, which states that in a chemical reaction matter neither appears nor disappears; the total mass remains the same even if the substances involved are transformed. Finally, there was the law of definite proportions, established by the French chemist Joseph Proust in 1797, which states that if a compound is broken down into its constituent chemical elements, then the masses of those constituents will always have the same proportions by weight, regardless of the quantity or source of the original compound. This definition distinguished compounds from mixtures.
== Dalton's law of multiple proportions ==
John Dalton studied data gathered by himself and by other scientists. He noticed a pattern that later came to be known as the law of multiple proportions: in compounds which contain two particular elements, the amount of Element A per measure of Element B will differ across these compounds by ratios of small whole numbers. This suggested that each element combines with other elements in multiples of a basic quantity.
In 1804, Dalton explained his atomic theory to his friend and fellow chemist Thomas Thomson, who published an explanation of Dalton's theory in his book A System of Chemistry in 1807. According to Thomson, Dalton's idea first occurred to him when experimenting with "olefiant gas" (ethylene) and "carburetted hydrogen gas" (methane). Dalton found that "carburetted hydrogen gas" contains twice as much hydrogen per measure of carbon as "olefiant gas", and concluded that a molecule of "olefiant gas" is one carbon atom and one hydrogen atom, and a molecule of "carburetted hydrogen gas" is one carbon atom and two hydrogen atoms. In reality, an ethylene molecule has two carbon atoms and four hydrogen atoms (C2H4), and a methane molecule has one carbon atom and four hydrogen atoms (CH4). In this particular case, Dalton was mistaken about the formulas of these compounds, but he got them right in the following examples:
Example 1 — tin oxides: Dalton identified two types of tin oxide. One is a grey powder that Dalton referred to as "the protoxide of tin", which is 88.1% tin and 11.9% oxygen. The other is a white powder which Dalton referred to as "the deutoxide of tin", which is 78.7% tin and 21.3% oxygen. Adjusting these figures, in the grey powder there is about 13.5 g of oxygen for every 100 g of tin, and in the white powder there is about 27 g of oxygen for every 100 g of tin. 13.5 and 27 form a ratio of 1:2. These compounds are known today as tin(II) oxide (SnO) and tin(IV) oxide (SnO2). In Dalton's terminology, a "protoxide" is a molecule containing a single oxygen atom, and a "deutoxide" molecule has two. The modern equivalents of his terms would be monoxide and dioxide.
Example 2 — iron oxides: Dalton identified two oxides of iron. There is one type of iron oxide that is a black powder which Dalton referred to as "the protoxide of iron", which is 78.1% iron and 21.9% oxygen. The other iron oxide is a red powder, which Dalton referred to as "the intermediate or red oxide of iron" which is 70.4% iron and 29.6% oxygen. Adjusting these figures, in the black powder there is about 28 g of oxygen for every 100 g of iron, and in the red powder there is about 42 g of oxygen for every 100 g of iron. 28 and 42 form a ratio of 2:3. These compounds are iron(II) oxide and iron(III) oxide and their formulas are FeO and Fe2O3 respectively. Iron(II) oxide's formula is normally written as FeO, but since it is a crystalline substance one could alternately write it as Fe2O2, and when we contrast that with Fe2O3, the 2:3 ratio stands out plainly. Dalton described the "intermediate oxide" as being "2 atoms protoxide and 1 of oxygen", which adds up to two atoms of iron and three of oxygen. That averages to one and a half atoms of oxygen for every iron atom, putting it midway between a "protoxide" and a "deutoxide".
Example 3 — nitrogen oxides: Dalton was aware of three oxides of nitrogen: "nitrous oxide", "nitrous gas", and "nitric acid". These compounds are known today as nitrous oxide, nitric oxide, and nitrogen dioxide respectively. "Nitrous oxide" is 63.3% nitrogen and 36.7% oxygen, which means it has 80 g of oxygen for every 140 g of nitrogen. "Nitrous gas" is 44.05% nitrogen and 55.95% oxygen, which means there is 160 g of oxygen for every 140 g of nitrogen. "Nitric acid" is 29.5% nitrogen and 70.5% oxygen, which means it has 320 g of oxygen for every 140 g of nitrogen. 80 g, 160 g, and 320 g form a ratio of 1:2:4. The formulas for these compounds are N2O, NO, and NO2.
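The whole-number ratios in the three examples above can be re-derived directly from the quoted mass percentages. The following sketch is only illustrative (the helper name and the rounding choices are mine); it normalizes each compound to grams of oxygen per 100 g of the other element and compares the results:

```python
# Minimal sketch re-deriving the ratios above from the quoted mass
# percentages. The only inputs are the percentages given in the examples.

def oxygen_per_100(other_pct, oxygen_pct):
    """Grams of oxygen combined with 100 g of the other element."""
    return 100.0 * oxygen_pct / other_pct

tin = [oxygen_per_100(88.1, 11.9), oxygen_per_100(78.7, 21.3)]
iron = [oxygen_per_100(78.1, 21.9), oxygen_per_100(70.4, 29.6)]
nitrogen = [oxygen_per_100(63.3, 36.7), oxygen_per_100(44.05, 55.95),
            oxygen_per_100(29.5, 70.5)]

for name, series in [("tin", tin), ("iron", iron), ("nitrogen", nitrogen)]:
    ratios = [x / series[0] for x in series]
    print(name, [round(x, 1) for x in series], [round(r, 2) for r in ratios])

# tin      -> [13.5, 27.1]          ratios ~ 1 : 2
# iron     -> [28.0, 42.0]          ratios ~ 1 : 1.5, i.e. 2 : 3
# nitrogen -> [58.0, 127.0, 239.0]  ratios ~ 1 : 2.2 : 4.1, i.e. roughly
#             1 : 2 : 4 given the crude measurements of the era
```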
Dalton defined an atom as being the "ultimate particle" of a chemical substance, and he used the term "compound atom" to refer to "ultimate particles" which contain two or more elements. This is inconsistent with the modern definition, wherein an atom is the basic particle of a chemical element and a molecule is an agglomeration of atoms. The term "compound atom" was confusing to some of Dalton's contemporaries as the word "atom" implies indivisibility, but he responded that if a carbon dioxide "atom" is divided, it ceases to be carbon dioxide. The carbon dioxide "atom" is indivisible in the sense that it cannot be divided into smaller carbon dioxide particles.
Dalton made the following assumptions on how "elementary atoms" combined to form "compound atoms" (what we today refer to as molecules). When two elements can only form one compound, he assumed it was one atom of each, which he called a "binary compound". If two elements can form two compounds, the first compound is a binary compound and the second is a "ternary compound" consisting of one atom of the first element and two of the second. If two elements can form three compounds between them, then the third compound is a "quaternary" compound containing one atom of the first element and three of the second. Dalton thought that water was a "binary compound", i.e. one hydrogen atom and one oxygen atom. Dalton did not know that in their natural gaseous state, the ultimate particles of oxygen, nitrogen, and hydrogen exist in pairs (O2, N2, and H2). Nor was he aware of valencies. These properties of atoms were discovered later in the 19th century.
Because atoms were too small to be directly weighed using the methods of the 19th century, Dalton instead expressed the weights of the myriad atoms as multiples of the hydrogen atom's weight, which Dalton knew was the lightest element. By his measurements, 7 grams of oxygen will combine with 1 gram of hydrogen to make 8 grams of water with nothing left over, and assuming a water molecule to be one oxygen atom and one hydrogen atom, he concluded that oxygen's atomic weight is 7. In reality it is 16. Aside from the crudity of early 19th century measurement tools, the main reason for this error was that Dalton didn't know that the water molecule in fact has two hydrogen atoms, not one. Had he known, he would have doubled his estimate to a more accurate 14. This error was corrected in 1811 by Amedeo Avogadro. Avogadro proposed that equal volumes of any two gases, at equal temperature and pressure, contain equal numbers of molecules (in other words, the mass of a gas's particles does not affect the volume that it occupies). Avogadro's hypothesis, now usually called Avogadro's law, provided a method for deducing the relative weights of the molecules of gaseous elements, for if the hypothesis is correct relative gas densities directly indicate the relative weights of the particles that compose the gases. This way of thinking led directly to a second hypothesis: the particles of certain elemental gases were pairs of atoms, and when reacting chemically these molecules often split in two. For instance, the fact that two liters of hydrogen will react with just one liter of oxygen to produce two liters of water vapor (at constant pressure and temperature) suggested that a single oxygen molecule splits in two in order to form two molecules of water. The formula of water is H2O, not HO. Avogadro measured oxygen's atomic weight to be 15.074.
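A small illustrative calculation (the function name is mine, not Dalton's) makes the dependence on the assumed formula explicit: with hydrogen's weight taken as 1, the inferred atomic weight of oxygen is the measured mass ratio of oxygen to hydrogen multiplied by the number of hydrogen atoms assumed per oxygen atom.

```python
# Sketch of how the assumed formula of water changes the inferred atomic
# weight of oxygen (hydrogen = 1). The 7:1 figure is Dalton's measurement
# quoted above; 8:1 is the modern value.

def oxygen_atomic_weight(grams_O_per_gram_H, H_atoms_per_O_atom):
    return grams_O_per_gram_H * H_atoms_per_O_atom

print(oxygen_atomic_weight(7, 1))  # Dalton's data, formula HO   ->  7
print(oxygen_atomic_weight(7, 2))  # Dalton's data, formula H2O  -> 14
print(oxygen_atomic_weight(8, 2))  # modern data,   formula H2O  -> 16
```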
== Opposition to atomic theory ==
Dalton's atomic theory attracted widespread interest but not everyone accepted it at first. The law of multiple proportions was shown not to be a universal law when it came to organic substances, whose molecules can be quite large. For instance, in oleic acid there is 34 g of hydrogen for every 216 g of carbon, and in methane there is 72 g of hydrogen for every 216 g of carbon. 34 and 72 form a ratio of 17:36, which is not a ratio of small whole numbers. We know now that carbon-based substances can have very large molecules, larger than any that the other elements can form. Oleic acid's formula is C18H34O2 and methane's is CH4. The law of multiple proportions by itself was not complete proof, and atomic theory was not universally accepted until the end of the 19th century.
One problem was the lack of uniform nomenclature. The word "atom" implied indivisibility, but Dalton defined an atom as being the ultimate particle of any chemical substance, not just the elements or even matter per se. This meant that "compound atoms" such as carbon dioxide could be divided, as opposed to "elementary atoms". Dalton disliked the word "molecule", regarding it as "diminutive". Amedeo Avogadro did the opposite: he exclusively used the word "molecule" in his writings, eschewing the word "atom", instead using the term "elementary molecule". Jöns Jacob Berzelius used the term "organic atoms" to refer to particles containing three or more elements, because he thought this only existed in organic compounds. Jean-Baptiste Dumas used the terms "physical atoms" and "chemical atoms"; a "physical atom" was a particle that cannot be divided by physical means such as temperature and pressure, and a "chemical atom" was a particle that could not be divided by chemical reactions.
The modern definitions of atom and molecule—an atom being the basic particle of an element, and a molecule being an agglomeration of atoms—were established in the latter half of the 19th century. A key event was the Karlsruhe Congress in Germany in 1860. As the first international congress of chemists, its goal was to establish some standards in the community. A major proponent of the modern distinction between atoms and molecules was Stanislao Cannizzaro.
The various quantities of a particular element involved in the constitution of different molecules are integral multiples of a fundamental quantity that always manifests itself as an indivisible entity and which must properly be named atom.
Cannizzaro criticized past chemists such as Berzelius for not accepting that the particles of certain gaseous elements are actually pairs of atoms, which led to mistakes in their formulation of certain compounds. Berzelius believed that hydrogen gas and chlorine gas particles are solitary atoms. But he observed that when one liter of hydrogen reacts with one liter of chlorine, they form two liters of hydrogen chloride instead of one. Berzelius decided that Avogadro's law does not apply to compounds. Cannizzaro preached that if scientists just accepted the existence of single-element molecules, such discrepancies in their findings would be easily resolved. But Berzelius did not even have a word for that. Berzelius used the term "elementary atom" for a gas particle which contained just one element and "compound atom" for particles which contained two or more elements, but there was nothing to distinguish H2 from H since Berzelius did not believe in H2. So Cannizzaro called for a redefinition so that scientists could understand that a hydrogen molecule can split into two hydrogen atoms in the course of a chemical reaction.
A second objection to atomic theory was philosophical. Scientists in the 19th century had no way of directly observing atoms. They inferred the existence of atoms through indirect observations, such as Dalton's law of multiple proportions. Some scientists adopted positions aligned with the philosophy of positivism, arguing that scientists should not attempt to deduce the deeper reality of the universe, but only systemize what patterns they could directly observe.
This generation of anti-atomists can be grouped in two camps.
The "equivalentists", like Marcellin Berthelot, believed the theory of equivalent weights was adequate for scientific purposes. This generalization of Proust's law of definite proportions summarized observations. For example, 1 gram of hydrogen will combine with 8 grams of oxygen to form 9 grams of water, therefore the "equivalent weight" of oxygen is 8 grams. The "energeticist", like Ernst Mach and Wilhelm Ostwald, were philosophically opposed to hypothesis about reality altogether. In their view, only energy as part of thermodynamics should be the basis of physical models.: 237
These positions were eventually quashed by two important advancements that happened later in the 19th century: the development of the periodic table and the discovery that molecules have an internal architecture that determines their properties.
== Isomerism ==
Scientists discovered some substances have the exact same chemical content but different properties. For instance, in 1827, Friedrich Wöhler discovered that silver fulminate and silver cyanate are both 107 parts silver, 12 parts carbon, 14 parts nitrogen, and 16 parts oxygen (we now know their formulas as both AgCNO). In 1830 Jöns Jacob Berzelius introduced the term isomerism to describe the phenomenon. In 1860, Louis Pasteur hypothesized that the molecules of isomers might have the same set of atoms but in different arrangements.
In 1874, Jacobus Henricus van 't Hoff proposed that the carbon atom bonds to other atoms in a tetrahedral arrangement. Working from this, he explained the structures of organic molecules in such a way that he could predict how many isomers a compound could have. Consider, for example, pentane (C5H12). In van 't Hoff's way of modelling molecules, there are three possible configurations for pentane, and scientists did go on to discover three and only three isomers of pentane.
Isomerism was not something that could be fully explained by alternative theories to atomic theory, such as radical theory and the theory of types.
== Mendeleev's periodic table ==
Dmitrii Mendeleev noticed that when he arranged the elements in a row according to their atomic weights, there was a certain periodicity to them. For instance, the second element, lithium, had similar properties to the ninth element, sodium, and the sixteenth element, potassium — a period of seven. Likewise, beryllium, magnesium, and calcium were similar and all were seven places apart from each other on Mendeleev's table. Using these patterns, Mendeleev predicted the existence and properties of new elements, which were later discovered in nature: scandium, gallium, and germanium. Moreover, the periodic table could predict how many atoms of other elements an atom could bond with — e.g., germanium and carbon are in the same group on the table and their atoms both combine with two oxygen atoms each (GeO2 and CO2). Mendeleev found these patterns validated atomic theory because they showed that the elements could be categorized by their atomic weight. Inserting a new element into the middle of a period would break the parallel between that period and the next, and would also violate Dalton's law of multiple proportions.
The elements on the periodic table were originally arranged in order of increasing atomic weight. However, in a number of places chemists chose to swap the positions of certain adjacent elements so that they appeared in a group with other elements with similar properties. For instance, tellurium is placed before iodine even though tellurium is heavier (127.6 vs 126.9) so that iodine can be in the same column as the other halogens. The modern periodic table is instead based on atomic number, which is equivalent to the nuclear charge; this change had to wait for the discovery of the nucleus.
In addition, an entire row of the table was not shown because the noble gases had not been discovered when Mendeleev devised his table.
== Statistical mechanics ==
In 1738, Swiss physicist and mathematician Daniel Bernoulli postulated that the pressure of gases and heat were both caused by the underlying motion of particles. Using his model he could predict the ideal gas law at constant temperature and suggested that the temperature was proportional to the velocity of the particles. These results were largely ignored for a century.
James Clerk Maxwell, a vocal proponent of atomism, revived the kinetic theory in 1860 and 1867. His key insight was that the velocity of particles in a gas would vary around an average value, introducing the concept of a distribution function. Ludwig Boltzmann and Rudolf Clausius expanded his work on gases and the laws of thermodynamics, especially the second law relating to entropy. In the 1870s, Josiah Willard Gibbs extended the laws of entropy and thermodynamics and coined the term "statistical mechanics."
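In modern notation, the speed distribution Maxwell introduced is usually written as

$$ f(v)\,\mathrm{d}v \;=\; 4\pi \left(\frac{m}{2\pi k_{\mathrm B} T}\right)^{3/2} v^{2}\, e^{-m v^{2}/(2 k_{\mathrm B} T)}\,\mathrm{d}v , $$

the fraction of molecules of mass m whose speeds lie between v and v + dv at temperature T, where k_B is the Boltzmann constant.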
Boltzmann defended the atomistic hypothesis against major detractors from the time like Ernst Mach or energeticists like Wilhelm Ostwald, who considered that energy was the elementary quantity of reality.
At the beginning of the 20th century, Albert Einstein independently reinvented Gibbs' laws, unaware of them because they had only been printed in an obscure American journal. Einstein later commented that had he known of Gibbs' work, he would "not have published those papers at all, but confined myself to the treatment of some few points [that were distinct]." All of statistical mechanics and the laws of heat, gas, and entropy took the existence of atoms as a necessary postulate.
=== Brownian motion ===
In 1827, the British botanist Robert Brown observed that dust particles inside pollen grains floating in water constantly jiggled about for no apparent reason. In 1905, Einstein theorized that this Brownian motion was caused by the water molecules continuously knocking the grains about, and developed a mathematical model to describe it. This model was validated experimentally in 1908 by French physicist Jean Perrin, who used Einstein's equations to measure the size of atoms.
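In its standard modern form, Einstein's result says that the mean squared displacement of a suspended grain grows linearly with time,

$$ \langle x^{2}\rangle = 2Dt, \qquad D = \frac{R T}{N_{\mathrm A}\,6\pi\eta r}, $$

where η is the fluid's viscosity and r the grain's radius, so that measuring the jiggling of visible grains, as Perrin did, yields Avogadro's number N_A and hence the size of atoms.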
== Discovery of the electron ==
Atoms were thought to be the smallest possible division of matter until 1899 when J. J. Thomson discovered the electron through his work on cathode rays.
A Crookes tube is a sealed glass container in which two electrodes are separated by a vacuum. When a voltage is applied across the electrodes, cathode rays are generated, creating a glowing patch where they strike the glass at the opposite end of the tube. Through experimentation, Thomson discovered that the rays could be deflected by electric fields and magnetic fields, which meant that these rays were not a form of light but were composed of very light charged particles, and their charge was negative. Thomson called these particles "corpuscles". He measured their mass-to-charge ratio to be several orders of magnitude smaller than that of the hydrogen atom, the smallest atom. This ratio was the same regardless of what the electrodes were made of and what the trace gas in the tube was.
In contrast to those corpuscles, positive ions created by electrolysis or X-ray radiation had mass-to-charge ratios that varied depending on the material of the electrodes and the type of gas in the reaction chamber, indicating they were different kinds of particles.
In 1898, Thomson measured the charge on ions to be roughly 6 × 10−10 electrostatic units (2 × 10−19 coulombs). In 1899, he showed that negative electricity created by ultraviolet light landing on a metal (known now as the photoelectric effect) has the same mass-to-charge ratio as cathode rays; then he applied his previous method for determining the charge on ions to the negative electric particles created by ultraviolet light. By this combination he showed that the electron's mass was 0.0014 times that of hydrogen ions. These "corpuscles" were so light yet carried so much charge that Thomson concluded they must be the basic particles of electricity, and for that reason other scientists decided that these "corpuscles" should instead be called electrons, following an 1894 suggestion by George Johnstone Stoney for naming the basic unit of electrical charge.
In 1904, Thomson published a paper describing a new model of the atom. Electrons reside within atoms, and they transplant themselves from one atom to the next in a chain in the action of an electrical current. When electrons do not flow, their negative charge logically must be balanced out by some source of positive charge within the atom so as to render the atom electrically neutral. Having no clue as to the source of this positive charge, Thomson tentatively proposed that the positive charge was everywhere in the atom, the atom being shaped like a sphere—this was the mathematically simplest model to fit the available evidence (or lack of it). The balance of electrostatic forces would distribute the electrons throughout this sphere in a more or less even manner. Thomson further explained that ions are atoms that have a surplus or shortage of electrons.
Thomson's model is popularly known as the plum pudding model, based on the idea that the electrons are distributed throughout the sphere of positive charge with the same density as raisins in a plum pudding. Neither Thomson nor his colleagues ever used this analogy. It seems to have been a conceit of popular science writers. The analogy suggests that the positive sphere is like a solid, but Thomson likened it to a liquid, as he proposed that the electrons moved around in it in patterns governed by the electrostatic forces. Thus the positive electrification in Thomson's model was a temporary concept. Thomson's model was incomplete; it could not predict any of the known properties of the atom such as emission spectra or valencies.
In 1906, Robert A. Millikan and Harvey Fletcher performed the oil drop experiment in which they measured the charge of an electron to be about -1.6 × 10−19 coulombs, a value now defined as -1 e. Since the hydrogen ion and the electron were known to be indivisible and a hydrogen atom is neutral in charge, it followed that the positive charge in hydrogen was equal to this value, i.e. 1 e.
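A brief illustrative sketch of how such an experiment exposes charge quantization; the droplet charges below are hypothetical values chosen for illustration, not Millikan's data:

```python
# Hypothetical droplet charges (in coulombs), used only for illustration.
e = 1.602e-19  # modern value of the elementary charge

drops = [3.21e-19, 4.79e-19, 8.03e-19, 1.121e-18]

for q in drops:
    n = round(q / e)
    print(f"{q:.3e} C  ~  {n} x e")

# Each measured charge comes out close to a whole-number multiple of a
# single unit, which is what identifies e as the fundamental unit of charge.
```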
== Discovery of the nucleus ==
Thomson's plum pudding model was challenged in 1911 by one of his former students, Ernest Rutherford, who presented a new model to explain new experimental data. The new model proposed a concentrated center of charge and mass that was later dubbed the atomic nucleus.
Ernest Rutherford and his colleagues Hans Geiger and Ernest Marsden came to have doubts about the Thomson model after they encountered difficulties when they tried to build an instrument to measure the charge-to-mass ratio of alpha particles (these are positively-charged particles emitted by certain radioactive substances such as radium). The alpha particles were being scattered by the air in the detection chamber, which made the measurements unreliable. Thomson had encountered a similar problem in his work on cathode rays, which he solved by creating a near-perfect vacuum in his instruments. Rutherford didn't think he'd run into this same problem because alpha particles usually have much more momentum than electrons. According to Thomson's model of the atom, the positive charge in the atom is not concentrated enough to produce an electric field strong enough to deflect an alpha particle. Yet there was scattering, so Rutherford and his colleagues decided to investigate this scattering carefully.
Between 1908 and 1913, Rutherford and his colleagues performed a series of experiments in which they bombarded thin foils of metal with a beam of alpha particles. They spotted alpha particles being deflected by angles greater than 90°. According to Thomson's model, all of the alpha particles should have passed through with negligible deflection. Rutherford deduced that the positive charge of the atom is not distributed throughout the atom's volume as Thomson believed, but is concentrated in a tiny nucleus at the center. This nucleus also carries most of the atom's mass. Only such an intense concentration of charge, anchored by its high mass, could produce an electric field strong enough to deflect the alpha particles as observed. Rutherford's model, being supported primarily by scattering data unfamiliar to many scientists, did not catch on until Niels Bohr joined Rutherford's lab and developed a new model for the electrons.
Rutherford's model predicted that the scattering of alpha particles would be proportional to the square of the atomic charge. Geiger and Marsden based their analysis on setting the charge to half of the atomic weight of the foil's material (gold, aluminium, etc.). Amateur physicist Antonius van den Broek noted that there was a more precise relation between the charge and the element's numeric sequence in the order of atomic weights. The sequence number came to be called the atomic number, and it replaced atomic weight in organizing the periodic table.
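In modern notation, the scattering law Rutherford derived is usually written as

$$ \frac{\mathrm{d}\sigma}{\mathrm{d}\Omega} \;=\; \left(\frac{Z_{1} Z_{2}\, e^{2}}{16\pi\varepsilon_{0} E}\right)^{\!2} \frac{1}{\sin^{4}(\theta/2)}, $$

where E is the kinetic energy of the alpha particle and θ the scattering angle, so the number of particles scattered to a given angle grows as the square of the nuclear charge, the dependence Geiger and Marsden used in their analysis.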
== Bohr model ==
Rutherford deduced the existence of the atomic nucleus through his experiments but he had nothing to say about how the electrons were arranged around it. In 1912, Niels Bohr joined Rutherford's lab and began his work on a quantum model of the atom.
Max Planck in 1900 and Albert Einstein in 1905 had postulated that light energy is emitted or absorbed in discrete amounts known as quanta (singular, quantum). This led to a series of atomic models with some quantum aspects, such as that of Arthur Erich Haas in 1910 and the 1912 John William Nicholson atomic model with angular momentum quantized in units of h/2π. The dynamical structure of these models was still classical, but in 1913 Bohr abandoned the classical approach. He started his Bohr model of the atom with a quantum hypothesis: an electron could only orbit the nucleus in particular circular orbits with fixed angular momentum and energy, its distance from the nucleus (i.e., the orbital radius) being determined by its energy. Under this model an electron could not lose energy in a continuous manner; instead, it could only make instantaneous "quantum leaps" between the fixed energy levels. When this occurred, light was emitted or absorbed at a frequency proportional to the change in energy (hence the absorption and emission of light in discrete spectra).
In a trilogy of papers Bohr described and applied his model to derive the Balmer series of lines in the atomic spectrum of hydrogen and the related spectrum of He+. He also used the model to describe the structure of the periodic table and aspects of chemical bonding. Together these results led to Bohr's model being widely accepted by the end of 1915.
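As a concrete illustration (the sketch and its rounding are mine), the visible Balmer lines that Bohr's model reproduces follow from the Rydberg formula 1/λ = R_H(1/2² − 1/n²):

```python
# Sketch: hydrogen's visible Balmer lines from the Rydberg formula,
# which the Bohr model derives from its quantized energy levels.

R_H = 1.0968e7  # Rydberg constant for hydrogen, per metre

def balmer_wavelength_nm(n):
    """Wavelength of the n -> 2 transition, in nanometres."""
    inv_wavelength = R_H * (1 / 2**2 - 1 / n**2)
    return 1e9 / inv_wavelength

for n in range(3, 7):
    print(n, round(balmer_wavelength_nm(n), 1), "nm")

# 3 -> ~656 nm (red), 4 -> ~486 nm, 5 -> ~434 nm, 6 -> ~410 nm (violet)
```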
Bohr's model was not perfect. It could only predict the spectral lines of hydrogen, not those of multielectron atoms. Worse still, it could not even account for all features of the hydrogen spectrum: as spectrographic technology improved, it was discovered that applying a magnetic field caused spectral lines to multiply in a way that Bohr's model couldn't explain. In 1916, Arnold Sommerfeld added elliptical orbits to the Bohr model to explain the extra emission lines, but this made the model very difficult to use, and it still couldn't explain more complex atoms.
== Discovery of isotopes ==
While experimenting with the products of radioactive decay, in 1913 radiochemist Frederick Soddy discovered that there appeared to be more than one variety of some elements. The term isotope was coined by Margaret Todd as a suitable name for these varieties.
That same year, J. J. Thomson conducted an experiment in which he channeled a stream of neon ions through magnetic and electric fields, striking a photographic plate at the other end. He observed two glowing patches on the plate, which suggested two different deflection trajectories. Thomson concluded this was because some of the neon ions had a different mass. The nature of this differing mass would later be explained by the discovery of neutrons in 1932: all atoms of the same element contain the same number of protons, while different isotopes have different numbers of neutrons.
== Discovery of the proton ==
Back in 1815, William Prout observed that the atomic weights of the known elements were multiples of hydrogen's atomic weight, so he hypothesized that all atoms are agglomerations of hydrogen, a particle which he dubbed "the protyle". Prout's hypothesis was put into doubt when some elements were found to deviate from this pattern—e.g. chlorine atoms on average weigh 35.45 daltons—but when isotopes were discovered in 1913, Prout's observation gained renewed attention.
In 1898, J. J. Thomson found that the positive charge of a hydrogen ion was equal to the negative charge of a single electron.
In an April 1911 paper concerning his studies on alpha particle scattering, Ernest Rutherford estimated that the charge of an atomic nucleus, expressed as a multiplier of hydrogen's nuclear charge (qe), is roughly half the atom's atomic weight.
In June 1911, Van den Broek noted that on the periodic table, each successive chemical element increased in atomic weight on average by 2, which in turn suggested that each successive element's nuclear charge increased by 1 qe. In 1913, van den Broek further proposed that the electric charge of an atom's nucleus, expressed as a multiplier of the elementary charge, is equal to the element's sequential position on the periodic table. Rutherford defined this position as being the element's atomic number.
In 1913, Henry Moseley measured the X-ray emissions of all the elements on the periodic table and found that the frequency of the X-ray emissions was a mathematical function of the element's atomic number and the charge of a hydrogen nucleus (see Moseley's law).
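As a rough numerical illustration of Moseley's law, the snippet below assumes the usual textbook approximation for Kα lines, f ≈ (3/4)·R_c·(Z − 1)², where R_c is the Rydberg frequency; the screening constant of 1 is an assumed simplification, not something stated above.

<syntaxhighlight lang="python">
# Rough sketch of Moseley's law for K-alpha X-ray lines, in the common textbook
# approximation f = (3/4) * R_c * (Z - 1)^2, where R_c is the Rydberg frequency
# and the screening constant of 1 is an assumed simplification.

RYDBERG_FREQ = 3.2898e15  # Rydberg frequency, Hz

def k_alpha_frequency(z: int) -> float:
    """Approximate K-alpha emission frequency for an element with atomic number z."""
    return 0.75 * RYDBERG_FREQ * (z - 1) ** 2

for name, z in [("Al", 13), ("Fe", 26), ("Cu", 29), ("Mo", 42)]:
    print(f"{name} (Z = {z}): f ≈ {k_alpha_frequency(z):.3e} Hz")
# The square root of the frequency grows linearly with Z, which is how Moseley
# read atomic numbers directly off his plots.
</syntaxhighlight>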
In 1917 Rutherford bombarded nitrogen gas with alpha particles and observed hydrogen ions being emitted from the gas. Rutherford concluded that the alpha particles struck the nuclei of the nitrogen atoms, causing hydrogen ions to split off.
These observations led Rutherford to conclude that the hydrogen nucleus was a singular particle with a positive charge equal to that of the electron's negative charge. The name "proton" was suggested by Rutherford at an informal meeting of fellow physicists in Cardiff in 1920.
The charge number of an atomic nucleus was found to be equal to the element's ordinal position on the periodic table. The nuclear charge number thus provided a simple and clear-cut way of distinguishing the chemical elements from each other, as opposed to Lavoisier's classic definition of a chemical element being a substance that cannot be broken down into simpler substances by chemical reactions. The charge number or proton number was thereafter referred to as the atomic number of the element. In 1923, the International Committee on Chemical Elements officially declared the atomic number to be the distinguishing quality of a chemical element.
During the 1920s, some writers defined the atomic number as being the number of "excess protons" in a nucleus. Before the discovery of the neutron, scientists believed that the atomic nucleus contained a number of "nuclear electrons" which cancelled out the positive charge of some of its protons. This explained why the atomic weights of most atoms were higher than their atomic numbers. Helium, for instance, was thought to have four protons and two nuclear electrons in the nucleus, leaving two excess protons and a net nuclear charge of 2+. After the neutron was discovered, scientists realized the helium nucleus in fact contained two protons and two neutrons.
== Discovery of the neutron ==
Physicists in the 1920s believed that the atomic nucleus contained protons plus a number of "nuclear electrons" that reduced the overall charge. These "nuclear electrons" were distinct from the electrons that orbited the nucleus. This incorrect hypothesis would have explained why the atomic numbers of the elements were less than their atomic weights, and why radioactive elements emit electrons (beta radiation) in the process of nuclear decay. Rutherford even hypothesized that a proton and an electron could bind tightly together into a "neutral doublet". Rutherford wrote that the existence of such "neutral doublets" moving freely through space would provide a more plausible explanation for how the heavier elements could have formed in the genesis of the Universe, given that it is hard for a lone proton to fuse with a large atomic nucleus because of the repulsive electric field.
In 1928, Walter Bothe observed that beryllium emitted a highly penetrating, electrically neutral radiation when bombarded with alpha particles. It was later discovered that this radiation could knock hydrogen atoms out of paraffin wax. Initially it was thought to be high-energy gamma radiation, since gamma radiation had a similar effect on electrons in metals, but James Chadwick found that the ionization effect was too strong for it to be due to electromagnetic radiation, so long as energy and momentum were conserved in the interaction. In 1932, Chadwick exposed various elements, such as hydrogen and nitrogen, to the mysterious "beryllium radiation", and by measuring the energies of the recoiling charged particles, he deduced that the radiation was actually composed of electrically neutral particles which could not be massless like the gamma ray, but instead were required to have a mass similar to that of a proton. Chadwick called this new particle "the neutron" and believed it to be a proton and electron fused together because the neutron had about the same mass as a proton and an electron's mass is negligible by comparison. Neutrons are not in fact a fusion of a proton and an electron.
== Modern quantum mechanical models ==
In 1924, Louis de Broglie proposed that all particles—particularly subatomic particles such as electrons—have an associated wave. Erwin Schrödinger, fascinated by this idea, developed an equation that describes an electron as a wave function instead of a point. This approach predicted many of the spectral phenomena that Bohr's model failed to explain, but it was difficult to visualize, and faced opposition. One of its critics, Max Born, proposed instead that Schrödinger's wave function did not describe the physical extent of an electron (like a charge distribution in classical electromagnetism), but rather gave the probability that an electron would, when measured, be found at a particular point. This reconciled the ideas of wave-like and particle-like electrons: the behavior of an electron, or of any other subatomic entity, has both wave-like and particle-like aspects, and whether one aspect or the other is observed depends upon the experiment.
A consequence of describing particles as waveforms rather than points is that it is mathematically impossible to calculate with precision both the position and momentum of a particle at a given point in time. This became known as the uncertainty principle, a concept first introduced by Werner Heisenberg in 1927.
Schrödinger's wave model for hydrogen replaced Bohr's model, with its neat, clearly defined circular orbits. The modern model of the atom describes the positions of electrons in an atom in terms of probabilities. An electron can potentially be found at any distance from the nucleus, but, depending on its energy level and angular momentum, exists more frequently in certain regions around the nucleus than others; this pattern is referred to as its atomic orbital. The orbitals come in a variety of shapes—sphere, dumbbell, torus, etc.—with the nucleus in the middle. The shapes of atomic orbitals are found by solving the Schrödinger equation. Analytic solutions of the Schrödinger equation are known for very few relatively simple model Hamiltonians including the hydrogen atom and the hydrogen molecular ion. Beginning with the helium atom—which contains just two electrons—numerical methods are used to solve the Schrödinger equation.
Qualitatively the shape of the atomic orbitals of multi-electron atoms resemble the states of the hydrogen atom. The Pauli principle requires the distribution of these electrons within the atomic orbitals such that no more than two electrons are assigned to any one orbital; this requirement profoundly affects the atomic properties and ultimately the bonding of atoms into molecules.: 182
== See also ==
== Footnotes ==
== Bibliography ==
Feynman, R.P.; Leighton, R.B.; Sands, M. (1963). The Feynman Lectures on Physics. Vol. 1. ISBN 978-0-201-02116-5.
Andrew G. van Melsen (1960) [First published 1952]. From Atomos to Atom: The History of the Concept Atom. Translated by Henry J. Koren. Dover Publications. ISBN 0-486-49584-1.
J. P. Millington (1906). John Dalton. J. M. Dent & Co. (London); E. P. Dutton & Co. (New York).
Jaume Navarro (2012). A History of the Electron: J. J. and G. P. Thomson. Cambridge University Press. ISBN 978-1-107-00522-8.
Trusted, Jennifer (1999). The Mystery of Matter. MacMillan. ISBN 0-333-76002-6.
Bernard Pullman (1998). The Atom in the History of Human Thought. Translated by Axel Reisinger. Oxford University Press. ISBN 0-19-511447-7.
Jean Perrin (1910) [1909]. Brownian Movement and Molecular Reality. Translated by F. Soddy. Taylor and Francis.
Ida Freund (1904). The Study of Chemical Composition. Cambridge University Press.
Thomas Thomson (1807). A System of Chemistry: In Five Volumes, Volume 3. John Brown.
Thomas Thomson (1831). The History of Chemistry, Volume 2. H. Colburn, and R. Bentley.
John Dalton (1808). A New System of Chemical Philosophy vol. 1.
John Dalton (1817). A New System of Chemical Philosophy vol. 2.
Stanislao Cannizzaro (1858). Sketch of a Course of Chemical Philosophy. The Alembic Club.
== Further reading ==
Charles Adolphe Wurtz (1881) The Atomic Theory, D. Appleton and Company, New York.
Alan J. Rocke (1984) Chemical Atomism in the Nineteenth Century: From Dalton to Cannizzaro, Ohio State University Press, Columbus (open access full text at http://digital.case.edu/islandora/object/ksl%3Ax633gj985).
== External links ==
Atomism by S. Mark Cohen.
Atomic Theory – detailed information on atomic theory with respect to electrons and electricity.
The Feynman Lectures on Physics Vol. I Ch. 1: Atoms in Motion
Wave–particle duality is the concept in quantum mechanics that fundamental entities of the universe, like photons and electrons, exhibit particle or wave properties according to the experimental circumstances.: 59 It expresses the inability of the classical concepts such as particle or wave to fully describe the behavior of quantum objects.: III:1-1 During the 19th and early 20th centuries, light was found to behave as a wave then later was discovered to have a particle-like behavior, whereas electrons behaved like particles in early experiments then were later discovered to have wave-like behavior. The concept of duality arose to name these seeming contradictions.
== History ==
=== Wave-particle duality of light ===
In the late 17th century, Sir Isaac Newton had advocated that light was corpuscular (particulate), but Christiaan Huygens took an opposing wave description. While Newton had favored a particle approach, he was the first to attempt to reconcile both wave and particle theories of light, and the only one in his time to consider both, thereby anticipating modern wave-particle duality. Thomas Young's interference experiments in 1801, and François Arago's detection of the Poisson spot in 1819, validated Huygens' wave models. However, the wave model was challenged in 1901 by Planck's law for black-body radiation. Max Planck heuristically derived a formula for the observed spectrum by assuming that a hypothetical electrically charged oscillator in a cavity that contained black-body radiation could only change its energy in a minimal increment, E, that was proportional to the frequency of its associated electromagnetic wave. In 1905 Albert Einstein interpreted the photoelectric effect also with discrete energies for photons. These both indicate particle behavior. Despite confirmation by various experimental observations, the photon theory (as it came to be called) remained controversial until Arthur Compton performed a series of experiments from 1922 to 1924 demonstrating the momentum of light.: 211 The experimental evidence of particle-like momentum and energy seemingly contradicted the earlier work demonstrating wave-like interference of light.
=== Wave-particle duality of matter ===
The contradictory evidence from electrons arrived in the opposite order. Many experiments by J. J. Thomson,: I:361 Robert Millikan,: I:89 and Charles Wilson: I:4 among others had shown that free electrons had particle properties, for instance, the measurement of their mass by Thomson in 1897. In 1924, Louis de Broglie introduced his theory of electron waves in his PhD thesis Recherches sur la théorie des quanta. He suggested that an electron around a nucleus could be thought of as being a standing wave and that electrons and all matter could be considered as waves. He merged the idea of thinking about them as particles, and of thinking of them as waves. He proposed that particles are bundles of waves (wave packets) that move with a group velocity and have an effective mass. Both of these depend upon the energy, which in turn connects to the wavevector and the relativistic formulation of Albert Einstein a few years before.
Following de Broglie's proposal of wave–particle duality of electrons, in 1925 to 1926, Erwin Schrödinger developed the wave equation of motion for electrons. This rapidly became part of what was called by Schrödinger undulatory mechanics, now called the Schrödinger equation and also "wave mechanics".
In 1926, Max Born gave a talk in an Oxford meeting about using the electron diffraction experiments to confirm the wave–particle duality of electrons. In his talk, Born cited experimental data from Clinton Davisson in 1923. It happened that Davisson also attended that talk. Davisson returned to his lab in the US to switch his experimental focus to test the wave property of electrons.
In 1927, the wave nature of electrons was empirically confirmed by two experiments. The Davisson–Germer experiment at Bell Labs measured electrons scattered from Ni metal surfaces. George Paget Thomson and Alexander Reid at Cambridge University scattered electrons through thin nickel films and observed concentric diffraction rings. Alexander Reid, who was Thomson's graduate student, performed the first experiments, but he died soon after in a motorcycle accident and is rarely mentioned. These experiments were rapidly followed by the first non-relativistic diffraction model for electrons by Hans Bethe based upon the Schrödinger equation, which is very close to how electron diffraction is now described. Significantly, Davisson and Germer noticed that their results could not be interpreted using a Bragg's law approach as the positions were systematically different; the approach of Bethe, which includes the refraction due to the average potential, yielded more accurate results. Davisson and Thomson were awarded the Nobel Prize in 1937 for experimental verification of wave property of electrons by diffraction experiments. Similar crystal diffraction experiments were carried out by Otto Stern in the 1930s using beams of helium atoms and hydrogen molecules. These experiments further verified that wave behavior is not limited to electrons and is a general property of matter on a microscopic scale.
== Classical waves and particles ==
Before proceeding further, it is critical to introduce some definitions of waves and particles both in a classical sense and in quantum mechanics. Waves and particles are two very different models for physical systems, each with an exceptionally large range of application. Classical waves obey the wave equation; they have continuous values at many points in space that vary with time; their spatial extent can vary with time due to diffraction, and they display wave interference. Physical systems exhibiting wave behavior and described by the mathematics of wave equations include water waves, seismic waves, sound waves, radio waves, and more.
Classical particles obey classical mechanics; they have some center of mass and extent; they follow trajectories characterized by positions and velocities that vary over time; in the absence of forces their trajectories are straight lines. Stars, planets, spacecraft, tennis balls, bullets, sand grains: particle models work across a huge scale. Unlike waves, particles do not exhibit interference.
Some experiments on quantum systems show wave-like interference and diffraction; some experiments show particle-like collisions.
Quantum systems obey wave equations that predict particle probability distributions. These particles are associated with discrete values called quanta for properties such as spin, electric charge and magnetic moment. These particles arrive one at a time, randomly, but build up a pattern. The probability that experiments will measure particles at a point in space is the square of a complex-number valued wave. Experiments can be designed to exhibit diffraction and interference of the probability amplitude. Thus, statistically, large numbers of these random particle appearances can display wave-like properties. Similar equations govern collective excitations called quasiparticles.
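A minimal sketch of this statistical picture follows; it is illustrative only, with an arbitrary two-slit geometry and wavelength, and shows single detections drawn at random from the squared magnitude of a sum of two complex amplitudes gradually building up an interference pattern.

<syntaxhighlight lang="python">
# Illustrative sketch: single detections drawn from the squared magnitude of a
# sum of two complex amplitudes (an arbitrary two-slit geometry), showing how
# random individual arrivals build up a wave-like interference pattern.
import cmath
import random

WAVELENGTH = 1.0   # arbitrary units
SLIT_SEP = 5.0     # separation of the two slits
SCREEN_D = 100.0   # slit-to-screen distance

def probability(x: float) -> float:
    """Un-normalized detection probability at screen position x (Born rule)."""
    amp = 0j
    for slit in (-SLIT_SEP / 2, SLIT_SEP / 2):
        path = ((x - slit) ** 2 + SCREEN_D ** 2) ** 0.5
        amp += cmath.exp(2j * cmath.pi * path / WAVELENGTH)  # add the two amplitudes
    return abs(amp) ** 2

counts = [0] * 40
for _ in range(20000):                       # detect one "electron" at a time
    while True:                              # rejection sampling from probability()
        x = random.uniform(-20.0, 20.0)
        if random.uniform(0.0, 4.0) < probability(x):
            break
    counts[min(int(x + 20.0), len(counts) - 1)] += 1

for i, c in enumerate(counts):               # crude text histogram: bright/dark bands
    print(f"{i - 20:+4d} {'#' * (c // 20)}")
</syntaxhighlight>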
== Electrons behaving as waves and particles ==
The electron double slit experiment is a textbook demonstration of wave-particle duality. A modern version of the experiment is shown schematically in the figure below.
Electrons from the source hit a wall with two thin slits. A mask behind the slits can be set to expose one slit or both slits. The results for high electron intensity are shown on the right, first for each slit individually, then with both slits open. With either slit open there is a smooth intensity variation due to diffraction. When both slits are open the intensity oscillates, characteristic of wave interference.
Having observed wave behavior, now change the experiment, lowering the intensity of the electron source until only one or two electrons per second are detected, appearing as individual particles, dots in the video. As shown in the movie clip below, the dots on the detector seem at first to be random. After some time a pattern emerges, eventually forming an alternating sequence of light and dark bands.
The experiment shows wave interference building up one particle at a time—quantum mechanical electrons display both wave and particle behavior. Similar results have been shown for atoms and even large molecules.
== Observing photons as particles ==
While electrons were thought to be particles until their wave properties were discovered, for photons it was the opposite. In 1887, Heinrich Hertz observed that when light with sufficient frequency hits a metallic surface, the surface emits cathode rays, what are now called electrons.: 399 In 1902, Philipp Lenard discovered that the maximum possible energy of an ejected electron is unrelated to the intensity of the light. This observation is at odds with classical electromagnetism, which predicts that the electron's energy should be proportional to the intensity of the incident radiation.: 24 In 1905, Albert Einstein suggested that the energy of the light must occur in a finite number of energy quanta. He postulated that electrons can receive energy from an electromagnetic field only in discrete units (quanta or photons): an amount of energy E that was related to the frequency f of the light by
{\displaystyle E=hf}
where h is the Planck constant (6.626×10−34 J⋅s). Only photons of a high enough frequency (above a certain threshold value which, when multiplied by the Planck constant, is the work function) could knock an electron free. For example, photons of blue light had sufficient energy to free an electron from the metal he used, but photons of red light did not. One photon of light above the threshold frequency could release only one electron; the higher the frequency of a photon, the higher the kinetic energy of the emitted electron, but no amount of light below the threshold frequency could release an electron. Despite confirmation by various experimental observations, the photon theory (as it came to be called later) remained controversial until Arthur Compton performed a series of experiments from 1922 to 1924 demonstrating the momentum of light.: 211
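A small numerical sketch of this relation follows; the work function (about 2.28 eV, a value commonly quoted for sodium) is an assumed example rather than a figure from the text, and the constants are rounded.

<syntaxhighlight lang="python">
# Sketch of Einstein's relation E = h*f applied to the photoelectric threshold.
# The work function below (about 2.28 eV, a typical value quoted for sodium) is
# an assumed example, not a figure taken from the text above.

H_PLANCK = 6.62607015e-34   # J*s
C_LIGHT = 2.99792458e8      # m/s
EV = 1.602176634e-19        # joules per electronvolt
WORK_FUNCTION = 2.28 * EV   # assumed example metal

def max_kinetic_energy(wavelength_nm: float) -> float:
    """Maximum photoelectron kinetic energy in joules, or 0 if below threshold."""
    photon_energy = H_PLANCK * C_LIGHT / (wavelength_nm * 1e-9)  # E = h*f = h*c/lambda
    return max(photon_energy - WORK_FUNCTION, 0.0)

for colour, lam in [("blue", 450.0), ("red", 700.0)]:
    ke = max_kinetic_energy(lam)
    verdict = "ejects an electron" if ke > 0 else "no emission"
    print(f"{colour} light ({lam:.0f} nm): {verdict}, KE_max = {ke / EV:.2f} eV")
</syntaxhighlight>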
Both discrete (quantized) energies and also momentum are, classically, particle attributes. There are many other examples where photons display particle-type properties, for instance in solar sails, where sunlight could propel a space vehicle, and laser cooling, where the momentum is used to slow down (cool) atoms. These are different aspects of wave-particle duality.
== Which slit experiments ==
In a "which way" experiment, particle detectors are placed at the slits to determine which slit the electron traveled through. When these detectors are inserted, quantum mechanics predicts that the interference pattern disappears because the detected part of the electron wave has changed (loss of coherence). Many similar proposals have been made and many have been converted into experiments and tried out. Every single one shows the same result: as soon as electron trajectories are detected, interference disappears.
A simple example of these "which way" experiments uses a Mach–Zehnder interferometer, a device based on lasers and mirrors sketched below.
A laser beam along the input port splits at a half-silvered mirror. Part of the beam continues straight, passes through a glass phase shifter, then reflects downward. The other part of the beam reflects from the first mirror then turns at another mirror. The two beams meet at a second half-silvered beam splitter.
Each output port has a camera to record the results. The two beams show interference characteristic of wave propagation. If the laser intensity is turned sufficiently low, individual dots appear on the cameras, building up the pattern as in the electron example.
The first beam-splitter mirror acts like double slits, but in the interferometer case we can remove the second beam splitter. Then the beam heading down ends up in output port 1: any photons on this path get counted in that port. The beam going across the top ends up on output port 2. In either case the counts will track the photon trajectories. However, as soon as the second beam splitter is removed the interference pattern disappears.
== See also ==
Basic concepts of quantum mechanics – Non-mathematical introduction
Complementarity (physics) – Quantum physics concept
Einstein's thought experiments
Interpretations of quantum mechanics
Wheeler's delayed choice experiment – Quantum physics thought experiment
Uncertainty principle
Matter wave
Corpuscular theory of light
== References ==
== External links ==
R. Nave. "Wave–Particle Duality". HyperPhysics. Georgia State University, Department of Physics and Astronomy. Retrieved December 12, 2005.
"Wave–particle duality". PhysicsQuest. American Physical Society. Retrieved August 31, 2023.
Mack, Katie. "Quantum 101 – Quantum Science Explained". Perimeter Institute for Theoretical Physics. Retrieved August 31, 2023.
Newton's law of universal gravitation describes gravity as a force by stating that every particle attracts every other particle in the universe with a force that is proportional to the product of their masses and inversely proportional to the square of the distance between their centers of mass. Separated objects attract and are attracted as if all their mass were concentrated at their centers. The publication of the law has become known as the "first great unification", as it marked the unification of the previously described phenomena of gravity on Earth with known astronomical behaviors.
This is a general physical law derived from empirical observations by what Isaac Newton called inductive reasoning. It is a part of classical mechanics and was formulated in Newton's work Philosophiæ Naturalis Principia Mathematica (Latin for 'Mathematical Principles of Natural Philosophy' (the Principia)), first published on 5 July 1687.
The equation for universal gravitation thus takes the form:
{\displaystyle F=G{\frac {m_{1}m_{2}}{r^{2}}},}
where F is the gravitational force acting between two objects, m1 and m2 are the masses of the objects, r is the distance between the centers of their masses, and G is the gravitational constant.
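As a quick worked example of the formula, the sketch below plugs rounded textbook values for the Earth and the Moon into F = Gm1m2/r²; the specific numbers are illustrative assumptions, not figures from this article.

<syntaxhighlight lang="python">
# Worked example of F = G*m1*m2 / r**2 with rounded Earth-Moon values (assumed
# illustrative figures, not data from this article).

G = 6.67430e-11           # gravitational constant, m^3 kg^-1 s^-2
M_EARTH = 5.972e24        # kg
M_MOON = 7.342e22         # kg
R_EARTH_MOON = 3.844e8    # mean centre-to-centre distance, m

def gravitational_force(m1: float, m2: float, r: float) -> float:
    """Magnitude of the mutual attraction between two point masses, in newtons."""
    return G * m1 * m2 / r ** 2

print(f"Earth-Moon attraction ≈ {gravitational_force(M_EARTH, M_MOON, R_EARTH_MOON):.2e} N")
# About 2e20 N; by Newton's third law the same magnitude acts on both bodies.
</syntaxhighlight>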
The first test of Newton's law of gravitation between masses in the laboratory was the Cavendish experiment conducted by the British scientist Henry Cavendish in 1798. It took place 111 years after the publication of Newton's Principia and approximately 71 years after his death.
Newton's law of gravitation resembles Coulomb's law of electrical forces, which is used to calculate the magnitude of the electrical force arising between two charged bodies. Both are inverse-square laws, where force is inversely proportional to the square of the distance between the bodies. Coulomb's law has charge in place of mass and a different constant.
Newton's law was later superseded by Albert Einstein's theory of general relativity, but the universality of the gravitational constant is intact and the law still continues to be used as an excellent approximation of the effects of gravity in most applications. Relativity is required only when there is a need for extreme accuracy, or when dealing with very strong gravitational fields, such as those found near extremely massive and dense objects, or at small distances (such as Mercury's orbit around the Sun).
== History ==
Before Newton's law of gravity, there were many theories explaining gravity. Philosophers made observations about things falling down, and developed theories about why they do, as early as Aristotle, who thought that rocks fall to the ground because seeking the ground was an essential part of their nature.
Around 1600, the scientific method began to take root. René Descartes started over with a more fundamental view, developing ideas of matter and action independent of theology. Galileo Galilei wrote about experimental measurements of falling and rolling objects. Johannes Kepler's laws of planetary motion summarized Tycho Brahe's astronomical observations.: 132
Around 1666 Isaac Newton developed the idea that Kepler's laws must also apply to the orbit of the Moon around the Earth and then to all objects on Earth. The analysis required assuming that the gravitation force acted as if all of the mass of the Earth were concentrated at its center, an unproven conjecture at that time. His calculation of the Moon's orbital period was within 16% of the known value. By 1680, new values for the diameter of the Earth improved his orbit time to within 1.6%, but more importantly Newton had found a proof of his earlier conjecture.: 201
In 1687 Newton published his Principia which combined his laws of motion with new mathematical analysis to explain Kepler's empirical results.: 134 His explanation was in the form of a law of universal gravitation: any two bodies are attracted by a force proportional to the product of their masses and inversely proportional to the square of their separation.: 28 Newton's original formula was:
{\displaystyle {\rm {Force\,of\,gravity}}\propto {\frac {\rm {mass\,of\,object\,1\,\times \,mass\,of\,object\,2}}{\rm {distance\,from\,centers^{2}}}}}
where the symbol ∝ means "is proportional to". To make this into an equal-sided formula or equation, there needed to be a multiplying factor or constant that would give the correct force of gravity no matter the value of the masses or distance between them (the gravitational constant). Newton would need an accurate measure of this constant to prove his inverse-square law. When Newton presented Book 1 of the unpublished text in April 1686 to the Royal Society, Robert Hooke made a claim that Newton had obtained the inverse square law from him, ultimately a frivolous accusation.: 204
=== Newton's "causes hitherto unknown" ===
While Newton was able to formulate his law of gravity in his monumental work, he was deeply uncomfortable with the notion of "action at a distance" that his equations implied. In 1692, in his third letter to Bentley, he wrote: "That one body may act upon another at a distance through a vacuum without the mediation of anything else, by and through which their action and force may be conveyed from one another, is to me so great an absurdity that, I believe, no man who has in philosophic matters a competent faculty of thinking could ever fall into it.": 26
Newton's 1713 General Scholium in the second edition of Principia explains his model of gravity, translated in this case by Samuel Clarke:
I have explained the Phænomena of the Heavens and the Sea, by the Force of Gravity; but the Cause of Gravity I have not yet assigned. It is a Force arising from some Cause, which reaches to the very Centers of the Sun and Planets, without any diminution of its Force: And it acts, not proportionally to the Surfaces of the Particles it acts upon, as Mechanical Causes use to do; but proportionally to the Quantity of Solid Matter: And its Action reaches every way to immense Distances, decreasing always in a duplicate ratio of the Distances. But the Cause of these Properties of Gravity, I have not yet found deducible from Phænomena: And Hypotheses I make not.: 383
The last sentence is Newton's famous and highly debated Latin phrase Hypotheses non fingo. In other translations it comes out "I feign no hypotheses".
== Modern form ==
In modern language, the law states that every point mass attracts every other point mass by a force acting along the line intersecting the two points, with magnitude F = Gm1m2/r² as given above.
Assuming SI units, F is measured in newtons (N), m1 and m2 in kilograms (kg), r in meters (m), and the constant G is 6.67430(15)×10−11 m3⋅kg−1⋅s−2. The value of the constant G was first accurately determined from the results of the Cavendish experiment conducted by the British scientist Henry Cavendish in 1798, although Cavendish did not himself calculate a numerical value for G. This experiment was also the first test of Newton's theory of gravitation between masses in the laboratory. It took place 111 years after the publication of Newton's Principia and 71 years after Newton's death, so none of Newton's calculations could use the value of G; instead he could only calculate a force relative to another force.
== Bodies with spatial extent ==
If the bodies in question have spatial extent (as opposed to being point masses), then the gravitational force between them is calculated by summing the contributions of the notional point masses that constitute the bodies. In the limit, as the component point masses become "infinitely small", this entails integrating the force (in vector form, see below) over the extents of the two bodies.
In this way, it can be shown that an object with a spherically symmetric distribution of mass exerts the same gravitational attraction on external bodies as if all the object's mass were concentrated at a point at its center. (This is not generally true for non-spherically symmetrical bodies.)
For points inside a spherically symmetric distribution of matter, Newton's shell theorem can be used to find the gravitational force. The theorem tells us how different parts of the mass distribution affect the gravitational force measured at a point located a distance r0 from the center of the mass distribution:
The portion of the mass that is located at radii r < r0 causes the same force at the radius r0 as if all of the mass enclosed within a sphere of radius r0 was concentrated at the center of the mass distribution (as noted above).
The portion of the mass that is located at radii r > r0 exerts no net gravitational force at the radius r0 from the center. That is, the individual gravitational forces exerted on a point at radius r0 by the elements of the mass outside the radius r0 cancel each other.
As a consequence, for example, within a shell of uniform thickness and density there is no net gravitational acceleration anywhere within the hollow sphere.
== Vector form ==
Newton's law of universal gravitation can be written as a vector equation to account for the direction of the gravitational force as well as its magnitude. In this formula, quantities in bold represent vectors.
{\displaystyle \mathbf {F} _{21}=-G{m_{1}m_{2} \over {|\mathbf {r} _{21}|}^{2}}{\hat {\mathbf {r} }}_{21}=-G{m_{1}m_{2} \over {|\mathbf {r} _{21}|}^{3}}\mathbf {r} _{21}}
where
F21 is the force applied on body 2 exerted by body 1,
G is the gravitational constant,
m1 and m2 are respectively the masses of bodies 1 and 2,
r21 = r2 − r1 is the displacement vector between bodies 1 and 2, and
{\displaystyle {\hat {\mathbf {r} }}_{21}\ {\stackrel {\mathrm {def} }{=}}\ {\frac {\mathbf {r_{2}-r_{1}} }{|\mathbf {r_{2}-r_{1}} |}}}
is the unit vector from body 1 to body 2.
It can be seen that the vector form of the equation is the same as the scalar form given earlier, except that F is now a vector quantity, and the right hand side is multiplied by the appropriate unit vector. Also, it can be seen that F12 = −F21.
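A short sketch of the vector form follows; the masses, coordinates, and the helper-function name are arbitrary illustrative choices made for this example.

<syntaxhighlight lang="python">
# Sketch of the vector form: the force on body 2 exerted by body 1 points back
# along r21 (attraction) with magnitude G*m1*m2/|r21|^2. The masses, positions
# and function name are made up for the example.
import math

G = 6.67430e-11  # m^3 kg^-1 s^-2

def force_on_2_from_1(m1, m2, r1, r2):
    """Return the force vector F21 acting on body 2, exerted by body 1."""
    r21 = [a - b for a, b in zip(r2, r1)]        # displacement r2 - r1
    dist = math.sqrt(sum(c * c for c in r21))
    scale = -G * m1 * m2 / dist ** 3             # minus sign makes the force attractive
    return [scale * c for c in r21]

# Two 1000 kg masses 10 m apart along the x axis.
f21 = force_on_2_from_1(1000.0, 1000.0, (0.0, 0.0, 0.0), (10.0, 0.0, 0.0))
f12 = force_on_2_from_1(1000.0, 1000.0, (10.0, 0.0, 0.0), (0.0, 0.0, 0.0))
print(f21)  # ≈ [-6.674e-07, 0.0, 0.0]: body 2 is pulled toward body 1
print(f12)  # equal and opposite, consistent with F12 = -F21
</syntaxhighlight>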
== Gravity field ==
The gravitational field is a vector field that describes the gravitational force that would be applied on an object in any given point in space, per unit mass. It is actually equal to the gravitational acceleration at that point.
It is a generalisation of the vector form, which becomes particularly useful if more than two objects are involved (such as a rocket between the Earth and the Moon). For two objects (e.g. object 2 is a rocket, object 1 the Earth), we simply write r instead of r12 and m instead of m2 and define the gravitational field g(r) as:
{\displaystyle \mathbf {g} (\mathbf {r} )=-G{m_{1} \over {{\vert \mathbf {r} \vert }^{2}}}\,\mathbf {\hat {r}} }
so that we can write:
{\displaystyle \mathbf {F} (\mathbf {r} )=m\mathbf {g} (\mathbf {r} ).}
This formulation is dependent on the objects causing the field. The field has units of acceleration; in SI, this is m/s2.
Gravitational fields are also conservative; that is, the work done by gravity from one position to another is path-independent. This has the consequence that there exists a gravitational potential field V(r) such that
{\displaystyle \mathbf {g} (\mathbf {r} )=-\nabla V(\mathbf {r} ).}
If m1 is a point mass or the mass of a sphere with homogeneous mass distribution, the force field g(r) outside the sphere is isotropic, i.e., depends only on the distance r from the center of the sphere. In that case
{\displaystyle V(r)=-G{\frac {m_{1}}{r}}.}
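As a quick numerical sanity check on the relation g = −∇V, the snippet below compares a finite-difference derivative of V(r) = −Gm1/r with the inverse-square field at one radius; the Earth-like mass and the chosen radius are assumptions for illustration only.

<syntaxhighlight lang="python">
# Numerical sanity check (illustrative only) that the radial field component
# equals -dV/dr for V(r) = -G*m1/r. The Earth-like source mass and the chosen
# radius are assumptions for the example.

G = 6.67430e-11
M1 = 5.972e24            # kg

def potential(r: float) -> float:
    return -G * M1 / r

def field_magnitude(r: float) -> float:
    return G * M1 / r ** 2

r = 7.0e6                # 7000 km from the centre, roughly low-Earth-orbit radius
h = 1.0                  # finite-difference step, m
g_radial = -(potential(r + h) - potential(r - h)) / (2 * h)   # -dV/dr
print(g_radial)              # ≈ -8.1 m/s^2 (negative: points toward the mass)
print(-field_magnitude(r))   # same value from the inverse-square law
</syntaxhighlight>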
As per Gauss's law, the field in a symmetric body can be found by the mathematical equation:
{\displaystyle \oiint _{\partial V}\mathbf {g} \cdot d\mathbf {A} =-4\pi GM_{\text{enc}},}
where {\displaystyle \partial V} is a closed surface and {\displaystyle M_{\text{enc}}} is the mass enclosed by the surface.
Hence, for a hollow sphere of radius R and total mass M,
{\displaystyle |\mathbf {g(r)} |={\begin{cases}0,&{\text{if }}r<R\\\\{\dfrac {GM}{r^{2}}},&{\text{if }}r\geq R\end{cases}}}
For a uniform solid sphere of radius R and total mass M,
{\displaystyle |\mathbf {g(r)} |={\begin{cases}{\dfrac {GMr}{R^{3}}},&{\text{if }}r<R\\\\{\dfrac {GM}{r^{2}}},&{\text{if }}r\geq R\end{cases}}}
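The piecewise behaviour above can be tabulated in a few lines; the Earth-like mass and radius below are assumed values, and a real planet is of course not uniform, so this is only a sketch of the shell-theorem result.

<syntaxhighlight lang="python">
# Sketch of the shell-theorem result for a uniform solid sphere: the field grows
# linearly with r inside and falls off as 1/r^2 outside. Earth-like numbers are
# assumed purely for illustration (the real Earth is not uniform).

G = 6.67430e-11
M = 5.972e24    # kg
R = 6.371e6     # m

def g_magnitude(r: float) -> float:
    if r < R:
        return G * M * r / R ** 3   # interior: only the mass within radius r contributes
    return G * M / r ** 2           # exterior: as if all the mass sat at the centre

for r in (0.0, R / 2, R, 2 * R):
    print(f"r = {r / R:3.1f} R : g = {g_magnitude(r):5.2f} m/s^2")
</syntaxhighlight>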
== Limitations ==
Newton's description of gravity is sufficiently accurate for many practical purposes and is therefore widely used. Deviations from it are small when the dimensionless quantities {\displaystyle \phi /c^{2}} and {\displaystyle (v/c)^{2}} are both much less than one, where {\displaystyle \phi } is the gravitational potential, {\displaystyle v} is the velocity of the objects being studied, and {\displaystyle c} is the speed of light in vacuum. For example, Newtonian gravity provides an accurate description of the Earth/Sun system, since
{\displaystyle {\frac {\phi }{c^{2}}}={\frac {GM_{\mathrm {sun} }}{r_{\mathrm {orbit} }c^{2}}}\sim 10^{-8},\quad \left({\frac {v_{\mathrm {Earth} }}{c}}\right)^{2}=\left({\frac {2\pi r_{\mathrm {orbit} }}{(1\ \mathrm {yr} )c}}\right)^{2}\sim 10^{-8},}
where {\displaystyle r_{\text{orbit}}} is the radius of the Earth's orbit around the Sun.
In situations where either dimensionless parameter is large, then general relativity must be used to describe the system. General relativity reduces to Newtonian gravity in the limit of small potential and low velocities, so Newton's law of gravitation is often said to be the low-gravity limit of general relativity.
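The two order-of-magnitude estimates quoted above can be reproduced with rounded orbital values; the numbers below are standard approximate figures assumed purely for illustration.

<syntaxhighlight lang="python">
# Reproducing the two order-of-magnitude estimates above with rounded orbital
# values (assumed standard figures, not data from this article).
import math

G = 6.67430e-11
M_SUN = 1.989e30            # kg
R_ORBIT = 1.496e11          # m, mean Earth-Sun distance
C = 2.99792458e8            # m/s
YEAR = 365.25 * 24 * 3600.0 # s

phi_over_c2 = G * M_SUN / (R_ORBIT * C ** 2)
v_over_c_sq = (2 * math.pi * R_ORBIT / (YEAR * C)) ** 2

print(f"phi/c^2 ≈ {phi_over_c2:.1e}")   # ~1e-8
print(f"(v/c)^2 ≈ {v_over_c_sq:.1e}")   # ~1e-8
</syntaxhighlight>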
=== Observations conflicting with Newton's formula ===
Newton's theory does not fully explain the precession of the perihelion of the orbits of the planets, especially that of Mercury, which was detected long after Newton's lifetime. There is a 43 arcsecond per century discrepancy between the Newtonian calculation, which arises only from the gravitational attractions from the other planets, and the observed precession, measured with advanced telescopes during the 19th century.
The predicted angular deflection of light rays by gravity (treated as particles travelling at the expected speed) that is calculated by using Newton's theory is only one-half of the deflection that is observed by astronomers. Calculations using general relativity are in much closer agreement with the astronomical observations.
In spiral galaxies, the orbiting of stars around their centers seems to strongly disobey both Newton's law of universal gravitation and general relativity. Astrophysicists, however, explain this marked phenomenon by assuming the presence of large amounts of dark matter.
=== Einstein's solution ===
The first two conflicts with observations above were explained by Einstein's theory of general relativity, in which gravitation is a manifestation of curved spacetime instead of being due to a force propagated between bodies. In Einstein's theory, energy and momentum distort spacetime in their vicinity, and other particles move in trajectories determined by the geometry of spacetime. This allowed a description of the motions of light and mass that was consistent with all available observations. In general relativity, the gravitational force is a fictitious force resulting from the curvature of spacetime, because the gravitational acceleration of a body in free fall is due to its world line being a geodesic of spacetime.
== Extensions ==
In recent years, quests for non-inverse square terms in the law of gravity have been carried out by neutron interferometry.
== Solutions ==
The two-body problem has been completely solved, as has the restricted three-body problem.
The n-body problem is an ancient, classical problem of predicting the individual motions of a group of celestial objects interacting with each other gravitationally. Solving this problem – from the time of the Greeks and on – has been motivated by the desire to understand the motions of the Sun, planets and the visible stars. The classical problem can be informally stated as: given the quasi-steady orbital properties (instantaneous position, velocity and time) of a group of celestial bodies, predict their interactive forces; and consequently, predict their true orbital motions for all future times.
In the 20th century, understanding the dynamics of globular cluster star systems became an important n-body problem too. The n-body problem in general relativity is considerably more difficult to solve.
== See also ==
Bentley's paradox – Cosmological paradox involving gravity
Gauss's law for gravity – Restatement of Newton's law of universal gravitation
Jordan and Einstein frames – different conventions for the metric tensor, in a theory of a dilaton coupled to gravity
Kepler orbit – Celestial orbit whose trajectory is a conic section in the orbital plane
Newton's cannonball – Thought experiment about gravity
Newton's laws of motion – Laws in physics about force and motion
Social gravity – Social theory
Static forces and virtual-particle exchange – Physical interaction in post-classical physics
== Notes ==
== References ==
== External links ==
Media related to Newton's law of universal gravitation at Wikimedia Commons
Feather and Hammer Drop on Moon on YouTube
Newton's Law of Universal Gravitation Javascript calculator
Folk science (also known as "folk knowledge" or "folk classification"; not to be confused with Folk classification, a method of classifying sedimentary rocks named after Robert L. Folk) describes ways of understanding and predicting the natural and social world without the use of formalized, rigorous methodologies (see Scientific method). One could label all understanding of nature predating the Greeks as "folk science". Folk science is often positioned in contrast to mechanistic or "clockwork" understandings of the world, where the function of each part and the relationship of all parts to each other is known in detail.
It is unclear how folk science develops in humans. However, even children as young as 8 months old have been shown to understand some root concepts of folk biology. Children's understanding does shift as they age, with the system of inferences they use developing as they grow.
Folk science is often accepted as "common wisdom" in a given culture, and people often don't realize that their explanations and understandings rely on folk science. While this is common in children, even adults tend to believe they have a more complete understanding of mechanisms than they really do. Because folk science is something people, even children, do naturally, scientists are not exempt. Folk science makes appearances in the theories of professional scientists. Anthropological studies of scientists show that their theories often stem from models with gaps, deductions, and analogies. While these simplifications and gaps are not part of the scientific method, they do often work, and scientific advances continue to use folk scientific methods. In some cases, researchers might look deliberately to folk methods to augment or improve their own. Several notable examples of folk science intuition (such as the world being flat, or the Sun revolving around the Earth) clarify why it is important to continue to use the scientific method to gather data to confirm or deny folk scientific theories.
Formal sciences, due to their thorough permeation of society, can ultimately influence folk sciences. An example would be the concept of genetics, which is familiar to most adults in the 21st century, but at the level of a layperson. This leads to different inferences and folk scientific conclusions than those that would have been reached by a population without that knowledge. However, some kinds of folk science exist in all cultures. Folk biology, for example, is similar in all human societies. These similarities include what groups plants and animals are grouped into, and the hierarchies of these groups (such as “oaks” being a group of plants which is within the “tree” group). It also includes the ability to make inferences about some organisms based on other, similarly categorized, organisms.
== Some examples of folk science ==
Folk biology
Folk history
Folk linguistics
Folk psychology
Folk taxonomy
Informal mathematics
Naïve physics
Physiognomy
Weather lore
== See also ==
Ethnobiology
Pseudoscience
== References ==
Timeline of electromagnetism and classical optics lists, within the history of electromagnetism, the associated theories, technology, and events.
== Early developments ==
28th century BC – Ancient Egyptian texts describe electric fish. They refer to them as the "Thunderer of the Nile", and described them as the "protectors" of all other fish.
6th century BC – Greek philosopher Thales of Miletus observes that rubbing fur on various substances, such as amber, would cause an attraction between the two, which is now known to be caused by static electricity. He noted that rubbing the amber buttons could attract light objects such as hair and that if the amber was rubbed sufficiently a spark would jump.
424 BC – Aristophanes' "lens" is a glass globe filled with water. (Seneca says that it can be used to read letters no matter how small or dim.)
4th century BC – Mo Di first mentions the camera obscura, a pin-hole camera.
3rd century BC – Euclid is the first to write about reflection and refraction and notes that light travels in straight lines.
3rd century BC – The Baghdad Battery is dated from this period. It resembles a galvanic cell and is believed by some to have been used for electroplating, although there is no common consensus on the purpose of these devices nor whether they were, indeed, even electrical in nature.
1st century AD – Pliny in his Natural History records the story of a shepherd Magnes who discovered the magnetic properties of some iron stones, "it is said, made this discovery, when, upon taking his herds to pasture, he found that the nails of his shoes and the iron ferrel of his staff adhered to the ground".
130 AD – Claudius Ptolemy (in his work Optics) wrote about the properties of light including: reflection, refraction, and color and tabulated angles of refraction for several media
8th century AD – Electric fish are reported by Arabic naturalists and physicians.
== Middle Ages ==
1021 – Ibn al-Haytham (Alhazen) writes the Book of Optics, studying vision.
1088 – Shen Kuo first recognizes magnetic declination.
1187 – Alexander Neckham is first in Europe to describe the magnetic compass and its use in navigation.
1269 – Pierre de Maricourt describes magnetic poles and remarks on the nonexistence of isolated magnetic poles
1282 – Al-Ashraf Umar II discusses the properties of magnets and dry compasses in relation to finding qibla.
1305 – Theodoric of Freiberg uses crystalline spheres and flasks filled with water to study the reflection and refraction in raindrops that leads to primary and secondary rainbows
14th century AD – Possibly the earliest and nearest approach to the discovery of the identity of lightning, and electricity from any other source, is to be attributed to the Arabs, who before the 15th century had the Arabic word for lightning (raad) applied to the electric ray.
1550 – Gerolamo Cardano writes about electricity in De Subtilitate distinguishing, perhaps for the first time, between electrical and magnetic forces.
== 17th century ==
1600 – William Gilbert publishes De Magnete, Magneticisque Corporibus, et de Magno Magnete Tellure ("On the Magnet and Magnetic bodies, and on that Great Magnet the Earth"), Europe's then current standard on electricity and magnetism. He experimented with and noted the different character of electrical and magnetic forces. In addition to known ancient Greeks' observations of the electrical properties of rubbed amber, he experimented with a needle balanced on a pivot, and found that the needle was non-directionally affected by many materials such as alum, arsenic, hard resin, jet, glass, gum-mastic, mica, rock-salt, sealing wax, slags, sulfur, and precious stones such as amethyst, beryl, diamond, opal, and sapphire. He noted that electrical charge could be stored by covering the body with a non-conducting substance such as silk. He described the method of artificially magnetizing iron. His terrella (little earth), a sphere cut from a lodestone on a metal lathe, modeled the earth as a lodestone (magnetic iron ore) and demonstrated that every lodestone has fixed poles, and how to find them. He considered that gravity was a magnetic force and noted that this mutual force increased with the size or amount of lodestone and attracted iron objects. He experimented with such physical models in an attempt to explain problems in navigation due to varying properties of the magnetic compass with respect to their location on the earth, such as magnetic declination and magnetic inclination. His experiments explained the dipping of the needle by the magnetic attraction of the earth, and were used to predict where the vertical dip would be found. Such magnetic inclination was described as early as the 11th century by Shen Kuo in his Meng Xi Bi Tan and further investigated in 1581 by retired mariner and compass maker Robert Norman, as described in his pamphlet, The Newe Attractive. The gilbert, a unit of magnetomotive force or magnetic scalar potential, was named in his honor.
1604 – Johannes Kepler describes how the eye focuses light
1604 – Johannes Kepler specifies the laws of the rectilinear propagation of light
1608 – first telescopes appear in the Netherlands
1611 – Marko Dominis discusses the rainbow in De Radiis Visus et Lucis
1611 – Johannes Kepler discovers total internal reflection, a small-angle refraction law, and thin lens optics,
c1620 – the first compound microscopes appear in Europe.
1621 – Willebrord van Roijen Snell states his Snell's law of refraction
1630 – Cabaeus finds that there are two types of electric charges
1637 – René Descartes quantitatively derives the angles at which primary and secondary rainbows are seen with respect to the angle of the Sun's elevation
1646 – Sir Thomas Browne first uses the word electricity in his work Pseudodoxia Epidemica.
1657 – Pierre de Fermat introduces the principle of least time into optics
1660 – Otto von Guericke invents an early electrostatic generator.
1663 – Otto von Guericke (brewer and engineer who applied the barometer to weather prediction and invented the air pump, with which he demonstrated the properties of atmospheric pressure associated with a vacuum) constructs a primitive electrostatic generating (or friction) machine via the triboelectric effect, utilizing a continuously rotating sulfur globe that could be rubbed by hand or a piece of cloth. Isaac Newton suggested the use of a glass globe instead of a sulfur one.
1665 – Francesco Maria Grimaldi highlights the phenomenon of diffraction
1673 – Ignace Pardies provides a wave explanation for refraction of light
1675 – Robert Boyle discovers that electric attraction and repulsion can act across a vacuum and do not depend upon the air as a medium. Adds resin to the known list of "electrics".
1675 – Isaac Newton delivers his theory of light
1676 – Ole Rømer proves that speed of light is finite, by observing Jupiter's moons
1678 – Christiaan Huygens states his principle of wavefront sources and demonstrates the refraction and diffraction of light rays.
== 18th century ==
1704 – Isaac Newton publishes Opticks, a corpuscular theory of light and colour
1705 – Francis Hauksbee improves von Guericke's electrostatic generator by using a glass globe and generates the first sparks by approaching his finger to the rubbed globe.
1728 – James Bradley discovers the aberration of starlight and uses it to determine that the speed of light is about 283,000 km/s
1729 – Stephen Gray and the Reverend Granville Wheler experiment to discover that electrical "virtue", produced by rubbing a glass tube, could be transmitted over an extended distance (nearly 900 ft (about 270 m)) through thin iron wire using silk threads as insulators, to deflect leaves of brass. This has been described as the beginning of electrical communication. This was also the first distinction between the roles of conductors and insulators (names applied by John Desaguliers, mathematician and Royal Society member, who stated that Gray "has made greater variety of electrical experiments than all the philosophers of this and the last age".) Georges-Louis LeSage built a static electricity telegraph in 1774, based upon the same principles discovered by Gray.
1732 – C. F. du Fay shows that all objects, except metals, animals, and liquids, can be electrified by rubbing them and that metals, animals and liquids could be electrified by means of an electrostatic generator
1734 – Charles François de Cisternay DuFay (inspired by Gray's work to perform electrical experiments) dispels the effluvia theory by his paper in Volume 38 of the Philosophical Transactions of the Royal Society, describing his discovery of the distinction between two kinds of electricity: "resinous", produced by rubbing bodies such as amber, copal, or gum-lac with silk or paper, and "vitreous", by rubbing bodies as glass, rock crystal, or precious stones with hair or wool. He also posited the principle of mutual attraction for unlike forms and the repelling of like forms and that "from this principle one may with ease deduce the explanation of a great number of other phenomena". The terms resinous and vitreous were later replaced with the terms "positive" and "negative" by William Watson and Benjamin Franklin.
1737 – C. F. du Fay and Francis Hauksbee the younger independently discover two kinds of frictional electricity: one generated from rubbing glass, the other from rubbing resin (later identified as positive and negative electrical charges).
1740 – Jean le Rond d'Alembert, in Mémoire sur la réfraction des corps solides, explains the process of refraction.
1745 – Pieter van Musschenbroek of Leiden (Leyden) independently discovers the Leyden (Leiden) jar, a primitive capacitor or "condenser" (term coined by Volta in 1782, derived from the Italian condensatore), with which the transient electrical energy generated by current friction machines could now be stored. He and his student Andreas Cunaeus used a glass jar filled with water into which a brass rod had been placed. He charged the jar by touching a wire leading from the electrical machine with one hand while holding the outside of the jar with the other. The energy could be discharged by completing an external circuit between the brass rod and another conductor, originally his hand, placed in contact with the outside of the jar. He also found that if the jar were placed on a piece of metal on a table, a shock would be received by touching this piece of metal with one hand and touching the wire connected to the electrical machine with the other.
1745 – Ewald Georg von Kleist independently invents the capacitor: a glass jar coated inside and out with metal. The inner coating was connected to a rod that passed through the lid and ended in a metal sphere. By having this thin layer of glass insulation (a dielectric) between two large, closely spaced plates, von Kleist found the energy density could be increased dramatically compared with the situation with no insulator. Daniel Gralath improved the design and was also the first to combine several jars to form a battery strong enough to kill birds and small animals upon discharge.
1746 – Leonhard Euler develops the wave theory of light refraction and dispersion
1747 – William Watson, while experimenting with a Leyden jar, observes that a discharge of static electricity causes electric current to flow and develops the concept of an electrical potential (voltage).
1752 – Benjamin Franklin establishes the link between lightning and electricity by flying a kite into a thunderstorm, transferring some of the charge into a Leyden jar and showing that its properties were the same as charge produced by an electrical machine. He is credited with utilizing the concepts of positive and negative charge in the explanation of then-known electrical phenomena. He theorized that there was an electrical fluid (which he proposed could be the luminiferous ether, used by others before and after him to explain the wave theory of light) that was part of all material and all intervening space. The charge of any object would be neutral if the concentration of this fluid were the same both inside and outside of the body, positive if the object contained an excess of this fluid, and negative if there were a deficit. In 1749 he had documented the similar properties of lightning and electricity, such as that both an electric spark and a lightning flash produced light and sound, could kill animals, cause fires, melt metal, destroy or reverse the polarity of magnetism, flowed through conductors and could be concentrated at sharp points. He was later able to apply the property of concentration at sharp points in his invention of the lightning rod, from which he intentionally did not profit. He also investigated the Leyden jar, proving that the charge was stored on the glass and not in the water, as others had assumed.
1753 – C. M. (of Scotland, possibly Charles Morrison of Greenock or Charles Marshall of Aberdeen) proposes, in the 17 February edition of the Scots Magazine, an electrostatic telegraph system with 26 insulated wires, each corresponding to a letter of the alphabet and each connected to electrostatic machines. The receiving charged end was to electrostatically attract a disc of paper marked with the corresponding letter.
1767 – Joseph Priestley proposes an electrical inverse-square law
1774 – Georges-Louis LeSage builds an electrostatic telegraph system with 26 insulated wires conducting Leyden-jar charges to pith-ball electroscopes, each corresponding to a letter of the alphabet. Its range was only between rooms of his home.
1784 – Henry Cavendish defines the inductive capacity of dielectrics (insulators) and measures the specific inductive capacity of various substances by comparison with an air condenser.
1785 – Charles Coulomb introduces the inverse-square law of electrostatics
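In modern SI notation (a later restatement, not Coulomb's own), this inverse-square law is commonly written as
$$F = \frac{1}{4\pi\varepsilon_{0}}\,\frac{q_{1}q_{2}}{r^{2}},$$
where q1 and q2 are the charges, r their separation, and ε0 the vacuum permittivity; the force is repulsive for like charges and attractive for unlike charges.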
1786 – Luigi Galvani discovers "animal electricity" and postulates that animal bodies are storehouses of electricity. His work leads to Alessandro Volta's invention of the voltaic cell and the electric battery.
1791 – Luigi Galvani discovers galvanic electricity and bioelectricity through experiments following an observation that touching exposed muscles in frogs' legs with a scalpel which had been close to a static electrical machine caused them to jump. He called this "animal electricity". Years of experimentation in the 1780s eventually led him to the construction of an arc of two different metals (copper and zinc for example) by connecting the two metal pieces and then connecting their open ends across the nerve of a frog leg, producing the same muscular contractions (by completing a circuit) as originally accidentally observed. The use of different metals to produce an electrical spark is the basis that led Alessandro Volta in 1799 to his invention of his voltaic pile, which eventually became the galvanic battery.
1799 – Alessandro Volta, following Galvani's discovery of galvanic electricity, creates a voltaic cell producing an electric current by the chemical action of several pairs of alternating copper (or silver) and zinc discs "piled" and separated by cloth or cardboard soaked in brine (salt water) or acid to increase conductivity. In 1800 he demonstrates the production of light from a glowing wire conducting electricity. This was followed in 1801 by his construction of the first electric battery, by utilizing multiple voltaic cells. Prior to his major discoveries, in a letter of praise to the Royal Society in 1793, Volta reported Luigi Galvani's experiments of the 1780s as the "most beautiful and important discoveries", regarding them as the foundation of future discoveries. Volta's inventions brought revolutionary change by providing inexpensive, controlled electric current, in contrast to the existing frictional machines and Leyden jars. The electric battery became standard equipment in every experimental laboratory and heralded an age of practical applications of electricity. The unit volt is named for his contributions.
1800 – William Herschel discovers infrared radiation from the Sun.
1800 – William Nicholson, Anthony Carlisle and Johann Ritter use electricity to decompose water into hydrogen and oxygen, thereby discovering the process of electrolysis, which led to the discovery of many other elements.
1800 – Alessandro Volta invents the voltaic pile, or "battery", specifically to disprove Galvani's animal electricity theory.
== 19th century ==
=== 1801–1850 ===
1801 – Johann Ritter discovers ultraviolet radiation from the Sun
1801 – Thomas Young demonstrates the wave nature of light and the principle of interference
1802 – Gian Domenico Romagnosi, Italian legal scholar, discovers that electricity and magnetism are related by noting that a nearby voltaic pile deflects a magnetic needle. He published his account in an Italian newspaper, but this was overlooked by the scientific community.
1803 – Thomas Young develops the Double-slit experiment and demonstrates the effect of interference.
1807 – Humphry Davy employs a voltaic pile to decompose potash and soda, showing that they are compounds of the previously unknown metals potassium and sodium. These experiments were the beginning of electrochemistry as a tool for isolating new elements.
1808 – Étienne-Louis Malus discovers polarization by reflection
1809 – Étienne-Louis Malus publishes the law of Malus which predicts the light intensity transmitted by two polarizing sheets
1809 – Humphry Davy first publicly demonstrates the electric arc light.
1811 – François Jean Dominique Arago discovers that some quartz crystals continuously rotate the electric vector of light
1814 – Joseph von Fraunhofer discovers and studies the dark absorption lines in the spectrum of the Sun now known as Fraunhofer lines
1816 – David Brewster discovers stress birefringence
1818 – Siméon Poisson predicts the Poisson-Arago bright spot at the center of the shadow of a circular opaque obstacle
1818 – François Jean Dominique Arago verifies the existence of the Poisson-Arago bright spot
1820 – Hans Christian Ørsted, Danish physicist and chemist, develops an experiment in which he notices a compass needle is deflected from magnetic north when an electric current from the battery he was using was switched on and off, convincing him that magnetic fields radiate from all sides of a live wire just as light and heat do, confirming a direct relationship between electricity and magnetism. He also observes that the movement of the compass-needle to one side or the other depends upon the direction of the current. Following intensive investigations, he published his findings, proving that a changing electric current produces a magnetic field as it flows through a wire. The oersted unit of magnetic induction is named for his contributions.
1820 – André-Marie Ampère, professor of mathematics at the École Polytechnique, demonstrates that parallel current-carrying wires experience magnetic force in a meeting of the French Academy of Sciences, exactly one week after Ørsted's announcement of his discovery that a magnetic needle is acted on by a voltaic current. He shows that a coil of wire carrying a current behaves like an ordinary magnet and suggests that electromagnetism might be used in telegraphy. He mathematically develops Ampère's law describing the magnetic force between two electric currents. His mathematical theory explains known electromagnetic phenomena and predicts new ones. His laws of electrodynamics include the facts that parallel conductors carrying current in the same direction attract and those carrying currents in opposite directions repel one another. One of the first to develop electrical measuring techniques, he built an instrument utilizing a free-moving needle to measure the flow of electricity, contributing to the development of the galvanometer. In 1821, he proposed a telegraphy system utilizing one wire per "galvanometer" to indicate each letter, and reported experimenting successfully with such a system. However, in 1824, Peter Barlow reported its maximum distance was only 200 feet, and so was impractical. In 1826 he publishes the Memoir on the Mathematical Theory of Electrodynamic Phenomena, Uniquely Deduced from Experience containing a mathematical derivation of the electrodynamic force law. Following Faraday's discovery of electromagnetic induction in 1831, Ampère agreed that Faraday deserved full credit for the discovery.
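A modern restatement (not Ampère's original notation) of the force between parallel currents: two long straight wires carrying currents I1 and I2 and separated by a distance d experience a force per unit length
$$\frac{F}{L} = \frac{\mu_{0} I_{1} I_{2}}{2\pi d},$$
attractive when the currents flow in the same direction and repulsive when they are opposite.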
1820 – Johann Salomo Christoph Schweigger, German chemist, physicist, and professor, builds the first sensitive galvanometer, wrapping a coil of wire around a graduated compass, an acceptable instrument for actual measurement as well as detection of small amounts of electric current, naming it after Luigi Galvani.
1821 – André-Marie Ampère announces his theory of electrodynamics, predicting the force that one current exerts upon another.
1821 – Thomas Johann Seebeck discovers the thermoelectric effect.
1821 – Augustin-Jean Fresnel derives a mathematical demonstration that polarization can be explained only if light is entirely transverse, with no longitudinal vibration whatsoever.
1825 – Augustin Fresnel phenomenologically explains optical activity by introducing circular birefringence
1825 – William Sturgeon, founder of the first English-language journal devoted to electricity, Annals of Electricity, finds that an iron core inside a helical coil of wire connected to a battery greatly increases the resulting magnetic field, thus making possible more powerful electromagnets utilizing a ferromagnetic core. Sturgeon also bent the iron core into a U-shape to bring the poles closer together, thus concentrating the magnetic field lines. These discoveries followed Ampère's discovery that electricity passing through a coiled wire produced a magnetic force, and Dominique François Jean Arago's finding that an iron bar is magnetized by putting it inside a coil of current-carrying wire, though Arago had not observed the increased strength of the resulting field while the bar was being magnetized.
1826 – Georg Simon Ohm states his Ohm's law of electrical resistance in the journals of Schweigger and Poggendorff, and also published in his landmark pamphlet Die galvanische Kette mathematisch bearbeitet in 1827. The unit ohm (Ω) of electrical resistance has been named in his honor.
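In today's symbols (Ohm originally expressed the relation in terms of his experimental apparatus), the law reads
$$V = IR,$$
so the current I through a conductor is proportional to the applied voltage V and inversely proportional to its resistance R.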
1829 & 1830 – Francesco Zantedeschi publishes papers on the production of electric currents in closed circuits by the approach and withdrawal of a magnet, thereby anticipating Michael Faraday's classical experiments of 1831.
1831 – Michael Faraday began experiments leading to his discovery of the law of electromagnetic induction, though the discovery may have been anticipated by the work of Francesco Zantedeschi. His breakthrough came when he wrapped two insulated coils of wire around a massive iron ring, bolted to a chair, and found that upon passing a current through one coil, a momentary electric current was induced in the other coil. He then found that if he moved a magnet through a loop of wire, or vice versa, an electric current also flowed in the wire. He then used this principle to construct the electric dynamo, the first electric power generator. He proposed that electromagnetic forces extended into the empty space around the conductor, but did not complete that work. Faraday's concept of lines of flux emanating from charged bodies and magnets provided a way to visualize electric and magnetic fields. That mental model was crucial to the successful development of electromechanical devices which were to dominate the 19th century. His demonstrations that a changing magnetic field produces an electric field, mathematically modeled by Faraday's law of induction, would subsequently become one of Maxwell's equations. These consequently evolved into the generalization of field theory.
1831 – Macedonio Melloni uses a thermopile to detect infrared radiation
1832 – Baron Pavel L'vovitch Schilling (Paul Schilling) creates the first electromagnetic telegraph, consisting of a single-needle system in which a code was used to indicate the characters. Only months later, Göttingen professors Carl Friedrich Gauss and Wilhelm Weber constructed their own telegraph, which was in working use two years before Schilling could put his into practice. Schilling demonstrated the long-distance transmission of signals between two different rooms of his apartment and was the first to put into practice a binary system of signal transmission.
1833 – Heinrich Lenz states Lenz's law: if an increasing (or decreasing) magnetic flux induces an electromotive force (EMF), the resulting current will oppose a further increase (or decrease) in magnetic flux, i.e., that an induced current in a closed conducting loop will appear in such a direction that it opposes the change that produced it. Lenz's law is one consequence of the principle of conservation of energy. If a magnet moves towards a closed loop, then the induced current in the loop creates a field that exerts a force opposing the motion of the magnet. Lenz's law can be derived from Faraday's law of induction by noting the negative sign on the right side of the equation. He also independently discovered Joule's law in 1842; to honor his efforts, Russian physicists refer to it as the "Joule–Lenz law".
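The sign referred to above appears explicitly when Faraday's law is written in its modern form (a later formalization, not Lenz's notation):
$$\mathcal{E} = -\frac{d\Phi_{B}}{dt},$$
where the minus sign expresses Lenz's law: the induced electromotive force opposes the change in magnetic flux ΦB that produces it.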
1833 – Michael Faraday announces his law of electrochemical equivalents
1834 – Heinrich Lenz determines the direction of the induced electromotive force (emf) and current resulting from electromagnetic induction. Lenz's law provides a physical interpretation of the choice of sign in Faraday's law of induction (1831), indicating that the induced emf and the change in flux have opposite signs.
1834 – Jean-Charles Peltier discovers the Peltier effect: heating by an electric current at the junction of two different metals.
1835 – Joseph Henry invents the electric relay, which is an electrical switch by which the change of a weak current through the windings of an electromagnet will attract an armature to open or close the switch. Because this can control (by opening or closing) another, much higher-power, circuit, it is in a broad sense a form of electrical amplifier. This made a practical electric telegraph possible. He was the first to coil insulated wire tightly around an iron core in order to make an extremely powerful electromagnet, improving on William Sturgeon's design, which used loosely coiled, uninsulated wire. He also discovered the property of self inductance independently of Michael Faraday.
1836 – William Fothergill Cooke invents a mechanical telegraph. In 1837, together with Charles Wheatstone, he invents the Cooke and Wheatstone needle telegraph. In 1838 the Cooke and Wheatstone telegraph becomes the first commercial telegraph in the world when it is installed on the Great Western Railway.
1837 – Samuel Morse develops an alternative electrical telegraph design capable of transmitting long distances over poor quality wire. He and his assistant Alfred Vail develop the Morse code signaling alphabet. In 1838 Morse successfully tested the device at the Speedwell Ironworks near Morristown, New Jersey, and publicly demonstrated it to a scientific committee at the Franklin Institute in Philadelphia, Pennsylvania. The first electric telegram using this device was sent by Morse on 24 May 1844 from Baltimore to Washington, D.C., bearing the message "What hath God wrought?"
1838 – Michael Faraday investigates electrical discharge through rarefied gases, observing a dark region near the cathode (the Faraday dark space); the luminous discharges he studied were later identified with cathode rays.
1839 – Alexandre Edmond Becquerel observes the photovoltaic effect: an electrode in a conductive solution produces a current when exposed to light.
1840 – James Prescott Joule formulates Joule's Law (sometimes called the Joule-Lenz law) quantifying the amount of heat produced in a circuit as proportional to the product of the time duration, the resistance, and the square of the current passing through it.
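In modern symbols the proportionality reads
$$Q = I^{2}Rt,$$
where Q is the heat generated, I the current, R the resistance and t the duration; the corresponding dissipated power is P = I²R.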
1845 – Michael Faraday discovers that light propagation in a material can be influenced by external magnetic fields (Faraday effect)
1849 – Hippolyte Fizeau and Jean-Bernard Foucault measure the speed of light to be about 298,000 km/s
=== 1851–1900 ===
1852 – George Gabriel Stokes defines the Stokes parameters of polarization
1852 – Edward Frankland develops the theory of chemical valence
1854 – Gustav Robert Kirchhoff, physicist and one of the founders of spectroscopy, publishes Kirchhoff's Laws on the conservation of electric charge and energy, which are used to determine currents in each branch of a circuit.
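In their usual modern form (a later notation), the two circuit laws state
$$\sum_{k} I_{k} = 0 \ \text{(at every node)},\qquad \sum_{k} V_{k} = 0 \ \text{(around every closed loop)},$$
expressing the conservation of electric charge and of energy, respectively.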
1855 – James Clerk Maxwell submits On Faraday's Lines of Force for publication containing a mathematical statement of Ampère's circuital law relating the curl of a magnetic field to the electrical current at a point.
1861 – the first transcontinental telegraph system spans North America by connecting an existing network in the eastern United States to a small network in California by a link between Omaha and Carson City via Salt Lake City. The slower Pony Express system ceased operation a month later.
1864 – James Clerk Maxwell publishes his papers on a dynamical theory of the electromagnetic field
1865 – James Clerk Maxwell publishes his landmark paper A Dynamical Theory of the Electromagnetic Field, in which Maxwell's equations demonstrated that electric and magnetic forces are two complementary aspects of electromagnetism. He shows that the associated complementary electric and magnetic fields of electromagnetism travel through space, in the form of waves, at a constant velocity of 3.0×10^8 m/s. He also proposes that light is a form of electromagnetic radiation and that waves of oscillating electric and magnetic fields travel through empty space at a speed that could be predicted from simple electrical experiments. Using available data, he obtains a velocity of 310,740,000 m/s and states "This velocity is so nearly that of light, that it seems we have strong reason to conclude that light itself (including radiant heat, and other radiations if any) is an electromagnetic disturbance in the form of waves propagated through the electromagnetic field according to electromagnetic laws."
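In modern terms (Maxwell himself worked with the electrostatic and electromagnetic systems of units), the predicted speed follows from the electric and magnetic constants alone:
$$c = \frac{1}{\sqrt{\mu_{0}\varepsilon_{0}}} \approx 2.998\times 10^{8}\ \text{m/s}.$$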
1866 – the first successful transatlantic telegraph system was completed. Earlier transatlantic submarine cables, installed in 1857 and 1858, failed after operating for only a few days or weeks.
1869 – William Crookes invents the Crookes tube.
1871 – Lord Rayleigh discusses the blue sky law and sunsets (Rayleigh scattering)
1873 – The British Association establishes the units volt, ampere, and ohm.
1873 – Willoughby Smith discovers the photoconductivity of selenium, a light sensitivity observed in a solid rather than in a conductive solution.
1873 – J. C. Maxwell publishes A Treatise on Electricity and Magnetism, which states that light is an electromagnetic phenomenon.
1874 – German scientist Karl Ferdinand Braun discovers the "unilateral conduction" of crystals. Braun patents the first solid state diode, a crystal rectifier, in 1899.
1875 – John Kerr discovers the electrically induced birefringence of some liquids
1878 – Thomas Edison, following work on a "multiplex telegraph" system and the phonograph, invents an improved incandescent light bulb. This was not the first electric light bulb but the first commercially practical incandescent light. In 1879 he produces a high-resistance lamp in a very high vacuum; the lamp lasts hundreds of hours. While the earlier inventors had produced electric lighting in lab conditions, Edison concentrated on commercial application and was able to sell the concept to homes and businesses by mass-producing relatively long-lasting light bulbs and creating a complete system for the generation and distribution of electricity.
1879 – Jožef Stefan discovers the Stefan–Boltzmann radiation law of a black body and uses it to calculate the first sensible value of the temperature of the Sun's surface to be 5700 K
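In modern notation the law states that the power radiated per unit area of a black body grows as the fourth power of its absolute temperature,
$$j^{\star} = \sigma T^{4},\qquad \sigma \approx 5.67\times 10^{-8}\ \mathrm{W\,m^{-2}\,K^{-4}},$$
so a measured radiant flux can be inverted to yield an effective surface temperature, which is how an estimate of roughly 5700 K for the Sun could be obtained.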
1880 – Edison discovers thermionic emission or the Edison effect.
1882 – Edison switches on the world's first electrical power distribution system, providing 110 volts direct current (DC) to 59 customers.
1884 – Oliver Heaviside reformulates Maxwell's original mathematical treatment of electromagnetic theory from twenty equations in twenty unknowns into four simple equations in four unknowns (the modern vector form of Maxwell's equations).
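In SI units (Heaviside himself used a different system of units), these four equations are today usually written as
$$\nabla\cdot\mathbf{E} = \frac{\rho}{\varepsilon_{0}},\qquad \nabla\cdot\mathbf{B} = 0,\qquad \nabla\times\mathbf{E} = -\frac{\partial\mathbf{B}}{\partial t},\qquad \nabla\times\mathbf{B} = \mu_{0}\mathbf{J} + \mu_{0}\varepsilon_{0}\frac{\partial\mathbf{E}}{\partial t}.$$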
1886 – Oliver Heaviside coins the term inductance.
1887 – Heinrich Hertz invents a device for the production and reception of electromagnetic (EM) radio waves. His receiver consists of a coil with a spark gap.
1888 – Introduction of the induction motor, an electric motor that harnesses a rotating magnetic field produced by alternating current, independently invented by Galileo Ferraris and Nikola Tesla.
1888 – Heinrich Hertz demonstrates the existence of electromagnetic waves by building an apparatus that produced and detected UHF radio waves (or microwaves in the UHF region). He also found that radio waves could be transmitted through different types of materials and were reflected by others, the key to radar. His experiments explain reflection, refraction, polarization, interference, and velocity of electromagnetic waves.
1893 – Victor Schumann discovers the vacuum ultraviolet spectrum.
1895 – Wilhelm Conrad Röntgen discovers X-rays
1895 – Jagadis Chandra Bose gives his first public demonstration of electromagnetic waves
1896 – Arnold Sommerfeld solves the half-plane diffraction problem
1897 – J. J. Thomson discovers the electron.
1899 – Pyotr Lebedev measures the pressure of light on a solid body.
1900 – The Liénard–Wiechert potentials are introduced as time-dependent (retarded) electrodynamic potentials
1900 – Max Planck resolves the ultraviolet catastrophe by suggesting that black-body radiation consists of discrete packets, or quanta, of energy. The amount of energy in each packet is proportional to the frequency of the electromagnetic waves. The constant of proportionality is now called the Planck constant in his honor.
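The proportionality described above is the Planck relation,
$$E = h\nu,\qquad h \approx 6.626\times 10^{-34}\ \mathrm{J\,s},$$
where ν is the frequency of the radiation and h is the constant now named after him.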
== 20th century ==
1904 – John Ambrose Fleming invents the thermionic diode, the first electronic vacuum tube, which had practical use in early radio receivers.
1905 – Albert Einstein proposes the Theory of Special Relativity, in which he rejects the existence of the aether as unnecessary for explaining the propagation of electromagnetic waves. Instead, Einstein asserts as a postulate that the speed of light is constant in all inertial frames of reference, and goes on to demonstrate a number of revolutionary (and highly counter-intuitive) consequences, including time dilation, length contraction, the relativity of simultaneity, the dependence of mass on velocity, and the equivalence of mass and energy.
1905 – Einstein explains the photoelectric effect by extending Planck's idea of light quanta, or photons, to the absorption and emission of photoelectrons. Einstein would later receive the Nobel Prize in Physics for this discovery, which launched the quantum revolution in physics.
1911 – Superconductivity is discovered by Heike Kamerlingh Onnes, who was studying the resistivity of solid mercury at cryogenic temperatures using the recently discovered liquid helium as a refrigerant. At the temperature of 4.2 K, he observed that the resistivity abruptly disappeared. For this discovery, he was awarded the Nobel Prize in Physics in 1913.
1919 – Albert A. Michelson makes the first interferometric measurements of stellar diameters at Mount Wilson Observatory (see history of astronomical interferometry)
1924 – Louis de Broglie postulates the wave nature of electrons and suggests that all matter has wave properties.
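De Broglie's hypothesis assigns to a particle of momentum p the wavelength
$$\lambda = \frac{h}{p},$$
where h is the Planck constant; for photons this reduces to the already known relation between the momentum and wavelength of light.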
1946 – Martin Ryle and Vonberg build the first two-element astronomical radio interferometer (see history of astronomical interferometry)
1953 – Charles H. Townes, James P. Gordon, and Herbert J. Zeiger produce the first maser
1956 – R. Hanbury-Brown and R.Q. Twiss complete the correlation interferometer
1959 – Sheldon Glashow, Abdus Salam, and John Clive Ward merge electromagnetism and the weak interaction into the electroweak interaction of the Standard Model
1960 – Theodore Maiman produces the first working laser
1966 – Jefimenko introduces time-dependent (retarded) generalizations of Coulomb's law and the Biot–Savart law
1999 – M. Henny and others demonstrate the fermionic Hanbury Brown and Twiss experiment
== See also ==
History of electromagnetic theory
History of optics
History of special relativity
History of superconductivity
Timeline of luminiferous aether
== References ==
== Further reading ==
Pliny the Elder, The Natural History, from the Perseus Digital Library
The Discovery of the Electron from the American Institute of Physics
Enterprise and electrolysis... from the Royal Society of Chemistry (chemsoc)
Pure Science-History, Worldwide School
== External links ==
The Work of Jagadis Chandra Bose: 100 Years of MM-Wave Research
Jagadis Chandra Bose and His Pioneering Research on Microwaves | Wikipedia/Timeline_of_electromagnetic_theory |
What is now often called Lorentz ether theory (LET) has its roots in Hendrik Lorentz's "theory of electrons", which marked the end of the development of the classical aether theories at the end of the 19th and at the beginning of the 20th century.
Lorentz's initial theory was created between 1892 and 1895 and was based on removing assumptions about aether motion. It explained the failure of the negative aether drift experiments to first order in v/c by introducing an auxiliary variable called "local time" for connecting systems at rest and in motion in the aether. In addition, the negative result of the Michelson–Morley experiment led to the introduction of the hypothesis of length contraction in 1892. However, other experiments also produced negative results and (guided by Henri Poincaré's principle of relativity) Lorentz tried in 1899 and 1904 to expand his theory to all orders in v/c by introducing the Lorentz transformation. In addition, he assumed that non-electromagnetic forces (if they exist) transform like electric forces. However, Lorentz's expressions for charge density and current were incorrect, so his theory did not fully exclude the possibility of detecting the aether. Eventually, it was Henri Poincaré who in 1905 corrected the errors in Lorentz's paper and actually incorporated non-electromagnetic forces (including gravitation) within the theory, which he called "The New Mechanics". Many aspects of Lorentz's theory were incorporated into special relativity (SR) with the works of Albert Einstein and Hermann Minkowski.
Today LET is often treated as some sort of "Lorentzian" or "neo-Lorentzian" interpretation of special relativity. The introduction of length contraction and time dilation for all phenomena in a "preferred" frame of reference, which plays the role of Lorentz's immobile aether, leads to the complete Lorentz transformation (see the Robertson–Mansouri–Sexl test theory as an example), so Lorentz covariance doesn't provide any experimentally verifiable distinctions between LET and SR. The absolute simultaneity in the Mansouri–Sexl test theory formulation of LET implies that a one-way speed of light experiment could in principle distinguish between LET and SR, but it is now widely held that it is impossible to perform such a test. In the absence of any way to experimentally distinguish between LET and SR, SR is widely preferred over LET, due to the superfluous assumption of an undetectable aether in LET, and the validity of the relativity principle in LET seeming ad hoc or coincidental.
== Historical development ==
=== Basic concept ===
The Lorentz ether theory, which was developed mainly between 1892 and 1906 by Lorentz and Poincaré, was based on the aether theory of Augustin-Jean Fresnel, Maxwell's equations and the electron theory of Rudolf Clausius. Lorentz's 1895 paper rejected the aether drift theories, and refused to express assumptions about the nature of the aether. It said:
That we cannot speak about an absolute rest of the aether, is self-evident; this expression would not even make sense. When I say for the sake of brevity, that the aether would be at rest, then this only means that one part of this medium does not move against the other one and that all perceptible motions are relative motions of the celestial bodies in relation to the aether.
As Max Born later said, it was natural (though not logically necessary) for scientists of that time to identify the rest frame of the Lorentz aether with the absolute space of Isaac Newton. The condition of this aether can be described by the electric field E and the magnetic field H, where these fields represent the "states" of the aether (with no further specification), related to the charges of the electrons. Thus an abstract electromagnetic aether replaces the older mechanistic aether models. Contrary to Clausius, who accepted that the electrons operate by actions at a distance, the electromagnetic field of the aether appears as a mediator between the electrons, and changes in this field can propagate not faster than the speed of light. Lorentz theoretically explained the Zeeman effect on the basis of his theory, for which he received the Nobel Prize in Physics in 1902. Joseph Larmor found a similar theory simultaneously, but his concept was based on a mechanical aether. A fundamental concept of Lorentz's theory in 1895 was the "theorem of corresponding states" for terms of order v/c. This theorem states that a moving observer with respect to the aether can use the same electrodynamic equations as an observer in the stationary aether system, thus they are making the same observations.
=== Length contraction ===
A big challenge for the Lorentz ether theory was the Michelson–Morley experiment in 1887. According to the theories of Fresnel and Lorentz, a relative motion to an immobile aether had to be determined by this experiment; however, the result was negative. Michelson himself thought that the result confirmed the aether drag hypothesis, in which the aether is fully dragged by matter. However, other experiments like the Fizeau experiment and the effect of aberration disproved that model.
A possible solution came in sight, when in 1889 Oliver Heaviside derived from Maxwell's equations that the magnetic vector potential field around a moving body is altered by a factor of $\sqrt{1-v^{2}/c^{2}}$. Based on that result, and to bring the hypothesis of an immobile aether into accordance with the Michelson–Morley experiment, George FitzGerald in 1889 (qualitatively) and, independently of him, Lorentz in 1892 (already quantitatively), suggested that not only the electrostatic fields, but also the molecular forces, are affected in such a way that the dimension of a body in the line of motion is less by the value $v^{2}/(2c^{2})$ than the dimension perpendicularly to the line of motion. However, an observer co-moving with the earth would not notice this contraction because all other instruments contract at the same ratio. In 1895 Lorentz proposed three possible explanations for this relative contraction:
The body contracts in the line of motion and preserves its dimension perpendicularly to it.
The dimension of the body remains the same in the line of motion, but it expands perpendicularly to it.
The body contracts in the line of motion and expands at the same time perpendicularly to it.
Although the possible connection between electrostatic and intermolecular forces was used by Lorentz as a plausibility argument, the contraction hypothesis was soon considered as purely ad hoc. It is also important that this contraction would only affect the space between the electrons but not the electrons themselves; therefore the name "intermolecular hypothesis" was sometimes used for this effect. The so-called length contraction without expansion perpendicularly to the line of motion, and by the precise value $l=l_{0}\cdot\sqrt{1-v^{2}/c^{2}}$ (where $l_{0}$ is the length at rest in the aether), was given by Larmor in 1897 and by Lorentz in 1904. In the same year, Lorentz also argued that electrons themselves are also affected by this contraction. For further development of this concept, see the section § Lorentz transformation.
=== Local time ===
An important part of the theorem of corresponding states in 1892 and 1895 was the local time $t^{\prime}=t-vx/c^{2}$, where t is the time coordinate for an observer resting in the aether, and t′ is the time coordinate for an observer moving in the aether. (Woldemar Voigt had previously used the same expression for local time in 1887 in connection with the Doppler effect and an incompressible medium.) With the help of this concept Lorentz could explain the aberration of light, the Doppler effect and the Fizeau experiment (i.e. measurements of the Fresnel drag coefficient) by Hippolyte Fizeau in moving and also resting liquids. While for Lorentz length contraction was a real physical effect, he considered the time transformation only as a heuristic working hypothesis and a mathematical stipulation to simplify the calculation from the resting to a "fictitious" moving system. Contrary to Lorentz, Poincaré saw more than a mathematical trick in the definition of local time, which he called Lorentz's "most ingenious idea". In The Measure of Time he wrote in 1898:
We do not have a direct intuition for simultaneity, just as little as for the equality of two periods. If we believe to have this intuition, it is an illusion. We helped ourselves with certain rules, which we usually use without giving us account over it [...] We choose these rules therefore, not because they are true, but because they are the most convenient, and we could summarize them while saying: „The simultaneity of two events, or the order of their succession, the equality of two durations, are to be so defined that the enunciation of the natural laws may be as simple as possible. In other words, all these rules, all these definitions are only the fruit of an unconscious opportunism.“
In 1900 Poincaré interpreted local time as the result of a synchronization procedure based on light signals. He assumed that two observers, A and B, who are moving in the aether, synchronize their clocks by optical signals. Since they treat themselves as being at rest, they only have to account for the transmission time of the signals and then compare their observations to examine whether their clocks are synchronous. However, from the point of view of an observer at rest in the aether the clocks are not synchronous and indicate the local time $t^{\prime}=t-vx/c^{2}$. But because the moving observers don't know anything about their movement, they don't recognize this. In 1904, he illustrated the same procedure in the following way: A sends a signal at time 0 to B, which arrives at time t. B also sends a signal at time 0 to A, which arrives at time t. If in both cases t has the same value, the clocks are synchronous, but only in the system in which the clocks are at rest in the aether. So, according to Darrigol, Poincaré understood local time as a physical effect just like length contraction – in contrast to Lorentz, who did not use the same interpretation before 1906. However, contrary to Einstein, who later used a similar synchronization procedure which was called Einstein synchronisation, Darrigol says that Poincaré had the opinion that clocks resting in the aether are showing the true time.
However, at the beginning it was unknown that local time includes what is now known as time dilation. This effect was first noticed by Larmor (1897), who wrote that "individual electrons describe corresponding parts of their orbits in times shorter for the [aether] system in the ratio $\varepsilon^{-1/2}$ or $\left(1-(1/2)v^{2}/c^{2}\right)$". And in 1899 also Lorentz noted for the frequency of oscillating electrons "that in S the time of vibrations be $k\varepsilon$ times as great as in S0", where S0 is the aether frame, S the mathematical-fictitious frame of the moving observer, $k$ is $\sqrt{1-v^{2}/c^{2}}$, and $\varepsilon$ is an undetermined factor.
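In modern notation (not Larmor's or Lorentz's), the complete effect that these factors anticipate is the time dilation of special relativity, relating a proper time interval Δt0 measured by a moving clock to the corresponding coordinate time Δt in the rest frame:
$$\Delta t = \frac{\Delta t_{0}}{\sqrt{1-v^{2}/c^{2}}}.$$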
=== Lorentz transformation ===
While local time could explain the negative aether drift experiments to first order in v/c, it was necessary – due to other unsuccessful aether drift experiments like the Trouton–Noble experiment – to modify the hypothesis to include second-order effects. The mathematical tool for that is the so-called Lorentz transformation. Voigt in 1887 had already derived a similar set of equations (although with a different scale factor). Afterwards, Larmor in 1897 and Lorentz in 1899 derived equations in a form algebraically equivalent to those which are used up to this day, although Lorentz used an undetermined factor l in his transformation. In his paper Electromagnetic phenomena in a system moving with any velocity smaller than that of light (1904) Lorentz attempted to create such a theory, according to which all forces between the molecules are affected by the Lorentz transformation (in which Lorentz set the factor l to unity) in the same manner as electrostatic forces. In other words, Lorentz attempted to create a theory in which the relative motion of earth and aether is (nearly or fully) undetectable. Therefore, he generalized the contraction hypothesis and argued that not only the forces between the electrons, but also the electrons themselves are contracted in the line of motion. However, Max Abraham (1904) quickly noted a defect of that theory: within a purely electromagnetic theory the contracted electron configuration is unstable and one has to introduce non-electromagnetic forces to stabilize the electrons – Abraham himself questioned the possibility of including such forces within the theory of Lorentz.
So it was Poincaré, on 5 June 1905, who introduced the so-called "Poincaré stresses" to solve that problem. Those stresses were interpreted by him as an external, non-electromagnetic pressure, which stabilizes the electrons and also served as an explanation for length contraction. Although he argued that Lorentz succeeded in creating a theory which complies with the postulate of relativity, he showed that Lorentz's equations of electrodynamics were not fully Lorentz covariant. So by pointing out the group characteristics of the transformation, Poincaré demonstrated the Lorentz covariance of the Maxwell–Lorentz equations and corrected Lorentz's transformation formulae for charge density and current density. He went on to sketch a model of gravitation (including gravitational waves) which might be compatible with the transformations. It was Poincaré who, for the first time, used the term "Lorentz transformation", and he gave them the form which is used up to this day. (Here $\ell$ is an arbitrary function of $\varepsilon$, which must be set to unity to conserve the group characteristics; he also set the speed of light to unity.)

$$x^{\prime}=k\ell\left(x+\varepsilon t\right),\qquad y^{\prime}=\ell y,\qquad z^{\prime}=\ell z,\qquad t^{\prime}=k\ell\left(t+\varepsilon x\right),\qquad k=\frac{1}{\sqrt{1-\varepsilon^{2}}}$$
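For comparison, in the modern convention for a frame moving with velocity v along the x-axis, the transformation is usually written as
$$x^{\prime} = \gamma\,(x - vt),\qquad y^{\prime} = y,\qquad z^{\prime} = z,\qquad t^{\prime} = \gamma\left(t - \frac{vx}{c^{2}}\right),\qquad \gamma = \frac{1}{\sqrt{1-v^{2}/c^{2}}},$$
which coincides with Poincaré's form above for ℓ = 1, ε = −v, and units in which c = 1.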
A substantially extended work (the so-called "Palermo paper") was submitted by Poincaré on 23 July 1905, but was published in January 1906 because the journal appeared only twice a year. He spoke literally of "the postulate of relativity"; he showed that the transformations are a consequence of the principle of least action; he demonstrated in more detail the group characteristics of the transformation, which he called the Lorentz group; and he showed that the combination $x^{2}+y^{2}+z^{2}-c^{2}t^{2}$ is invariant. While elaborating his gravitational theory, he noticed that the Lorentz transformation is merely a rotation in four-dimensional space about the origin, obtained by introducing $ct\sqrt{-1}$ as a fourth, imaginary coordinate, and he used an early form of four-vectors. However, Poincaré later said the translation of physics into the language of four-dimensional geometry would entail too much effort for limited profit, and therefore he refused to work out the consequences of this notion. This was later done, however, by Minkowski; see "The shift to relativity".
=== Electromagnetic mass ===
J. J. Thomson (1881) and others noticed that electromagnetic energy contributes to the mass of charged bodies by the amount $m=(4/3)E/c^{2}$, which was called electromagnetic or "apparent mass". Another derivation of some sort of electromagnetic mass was conducted by Poincaré (1900). By using the momentum of electromagnetic fields, he concluded that these fields contribute a mass of $E_{em}/c^{2}$ to all bodies, which is necessary to save the center of mass theorem.
As noted by Thomson and others, this mass increases also with velocity. Thus in 1899, Lorentz calculated that the ratio of the electron's mass in the moving frame and that of the aether frame is $k^{3}\varepsilon$ parallel to the direction of motion, and $k\varepsilon$ perpendicular to the direction of motion, where $k=\sqrt{1-v^{2}/c^{2}}$ and $\varepsilon$ is an undetermined factor. And in 1904, he set $\varepsilon =1$, arriving at the expressions for the masses in different directions (longitudinal and transverse):
$$m_{L}=\frac{m_{0}}{\left(\sqrt{1-\frac{v^{2}}{c^{2}}}\right)^{3}},\qquad m_{T}=\frac{m_{0}}{\sqrt{1-\frac{v^{2}}{c^{2}}}},$$
where $m_{0}=\frac{4}{3}\frac{E_{em}}{c^{2}}$.
Many scientists now believed that the entire mass and all forms of forces were electromagnetic in nature. This idea had to be given up, however, in the course of the development of relativistic mechanics. Abraham (1904) argued (as described in the preceding section § Lorentz transformation) that non-electrical binding forces were necessary within Lorentz's electron model. But Abraham also noted that different results occurred, depending on whether the em-mass is calculated from the energy or from the momentum. To solve those problems, Poincaré in 1905 and 1906 introduced some sort of pressure of non-electrical nature, which contributes the amount $-(1/3)E/c^{2}$ to the energy of the bodies, and therefore explains the 4/3-factor in the expression for the electromagnetic mass–energy relation. However, while Poincaré's expression for the energy of the electrons was correct, he erroneously stated that only the em-energy contributes to the mass of the bodies.
The concept of electromagnetic mass is no longer considered the cause of mass per se, because the entire mass (not only the electromagnetic part) is proportional to energy, and can be converted into different forms of energy, which is explained by Einstein's mass–energy equivalence.
=== Gravitation ===
==== Lorentz's theories ====
In 1900 Lorentz tried to explain gravity on the basis of the Maxwell equations. He first considered a Le Sage type model and argued that there possibly exists a universal radiation field, consisting of very penetrating em-radiation, and exerting a uniform pressure on every body. Lorentz showed that an attractive force between charged particles would indeed arise, if it is assumed that the incident energy is entirely absorbed. This was the same fundamental problem which had afflicted the other Le Sage models, because the radiation must vanish somehow and any absorption must lead to an enormous heating. Therefore, Lorentz abandoned this model.
In the same paper, he assumed, like Ottaviano Fabrizio Mossotti and Johann Karl Friedrich Zöllner, that the attraction of oppositely charged particles is stronger than the repulsion of equally charged particles. The resulting net force is exactly what is known as universal gravitation, in which the speed of gravity is that of light. This leads to a conflict with Isaac Newton's law of gravitation, for which Pierre Simon Laplace had shown that a finite speed of gravity leads to some sort of aberration and therefore makes the orbits unstable. However, Lorentz showed that the theory is not affected by Laplace's critique, because due to the structure of the Maxwell equations only effects of order v²/c² arise. But Lorentz calculated that the value for the perihelion advance of Mercury was much too low. He wrote:
The special form of these terms may perhaps be modified. Yet, what has been said is sufficient to show that gravitation may be attributed to actions which are propagated with no greater velocity than that of light.
In 1908 Poincaré examined the gravitational theory of Lorentz and classified it as compatible with the relativity principle, but (like Lorentz) he criticized the inaccurate indication of the perihelion advance of Mercury. Contrary to Poincaré, Lorentz in 1914 considered his own theory as incompatible with the relativity principle and rejected it.
==== Lorentz-invariant gravitational law ====
Poincaré argued in 1904 that a propagation speed of gravity which is greater than c is contradicting the concept of local time and the relativity principle. He wrote:
What would happen if we could communicate by signals other than those of light, the velocity of propagation of which differed from that of light? If, after having regulated our watches by the optimal method, we wished to verify the result by means of these new signals, we should observe discrepancies due to the common translatory motion of the two stations. And are such signals inconceivable, if we take the view of Laplace, that universal gravitation is transmitted with a velocity a million times as great as that of light?
However, in 1905 and 1906 Poincaré pointed out the possibility of a gravitational theory, in which changes propagate with the speed of light and which is Lorentz covariant. He pointed out that in such a theory the gravitational force not only depends on the masses and their mutual distance, but also on their velocities and their position due to the finite propagation time of interaction. On that occasion Poincaré introduced four-vectors. Following Poincaré, also Minkowski (1908) and Arnold Sommerfeld (1910) tried to establish a Lorentz-invariant gravitational law. However, these attempts were superseded because of Einstein's theory of general relativity, see "The shift to relativity".
The non-existence of a generalization of the Lorentz ether to gravity was a major reason for the preference for the spacetime interpretation. A viable generalization to gravity was proposed only in 2012 by Schmelzer. The preferred frame is defined by the harmonic coordinate condition. The gravitational field is defined by the density, velocity and stress tensor of the Lorentz ether, so that the harmonic conditions become continuity and Euler equations. The Einstein Equivalence Principle is derived. The Strong Equivalence Principle is violated, but is recovered in a limit, which gives the Einstein equations of general relativity in harmonic coordinates.
== Principles and conventions ==
=== Constancy of the speed of light ===
Already in his philosophical writing on time measurements (1898), Poincaré wrote that astronomers like Ole Rømer, in determining the speed of light, simply assume that light has a constant speed, and that this speed is the same in all directions. Without this postulate it would not be possible to infer the speed of light from astronomical observations, as Rømer did based on observations of the moons of Jupiter. Poincaré went on to note that Rømer also had to assume that Jupiter's moons obey Newton's laws, including the law of gravitation, whereas it would be possible to reconcile a different speed of light with the same observations if we assumed some different (probably more complicated) laws of motion. According to Poincaré, this illustrates that we adopt for the speed of light a value that makes the laws of mechanics as simple as possible. (This is an example of Poincaré's conventionalist philosophy.) Poincaré also noted that the propagation speed of light can be (and in practice often is) used to define simultaneity between spatially separate events. However, in that paper he did not go on to discuss the consequences of applying these "conventions" to multiple relatively moving systems of reference. This next step was done by Poincaré in 1900, when he recognized that synchronization by light signals in earth's reference frame leads to Lorentz's local time. (See the section on "local time" above). And in 1904 Poincaré wrote:
From all these results, if they were to be confirmed, would issue a wholly new mechanics which would be characterized above all by this fact, that there could be no velocity greater than that of light, any more than a temperature below that of absolute zero. For an observer, participating himself in a motion of translation of which he has no suspicion, no apparent velocity could surpass that of light, and this would be a contradiction, unless one recalls the fact that this observer does not use the same sort of timepiece as that used by a stationary observer, but rather a watch giving the “local time.[..] Perhaps, too, we shall have to construct an entirely new mechanics that we only succeed in catching a glimpse of, where, inertia increasing with the velocity, the velocity of light would become an impassable limit. The ordinary mechanics, more simple, would remain a first approximation, since it would be true for velocities not too great, so that the old dynamics would still be found under the new. We should not have to regret having believed in the principles, and even, since velocities too great for the old formulas would always be only exceptional, the surest way in practise would be still to act as if we continued to believe in them. They are so useful, it would be necessary to keep a place for them. To determine to exclude them altogether would be to deprive oneself of a precious weapon. I hasten to say in conclusion that we are not yet there, and as yet nothing proves that the principles will not come forth from out the fray victorious and intact.”
=== Principle of relativity ===
In 1895 Poincaré argued that experiments like that of Michelson–Morley show that it seems to be impossible to detect the absolute motion of matter or the relative motion of matter in relation to the aether. And although most physicists had other views, Poincaré in 1900 stood by his opinion and alternately used the expressions "principle of relative motion" and "relativity of space". He criticized Lorentz by saying that it would be better to create a more fundamental theory, which explains the absence of any aether drift, than to create one hypothesis after the other. In 1902 he used for the first time the expression "principle of relativity". In 1904 he appreciated the work of the mathematicians, who saved what he now called the "principle of relativity" with the help of hypotheses like local time, but he confessed that this venture was possible only by an accumulation of hypotheses. And he defined the principle in this way (according to Miller based on Lorentz's theorem of corresponding states): "The principle of relativity, according to which the laws of physical phenomena must be the same for a stationary observer as for one carried along in a uniform motion of translation, so that we have no means, and can have none, of determining whether or not we are being carried along in such a motion."
Referring to the critique of Poincaré from 1900, Lorentz wrote in his famous paper in 1904, where he extended his theorem of corresponding states: "Surely, the course of inventing special hypotheses for each new experimental result is somewhat artificial. It would be more satisfactory, if it were possible to show, by means of certain fundamental assumptions, and without neglecting terms of one order of magnitude or another, that many electromagnetic actions are entirely independent of the motion of the system."
One of the first assessments of Lorentz's paper was by Paul Langevin in May 1905. According to him, this extension of the electron theories of Lorentz and Larmor led to "the physical impossibility to demonstrate the translational motion of the earth". However, Poincaré noticed in 1905 that Lorentz's theory of 1904 was not perfectly "Lorentz invariant" in a few equations such as Lorentz's expression for current density (Lorentz admitted in 1921 that these were defects). As this required just minor modifications of Lorentz's work, Poincaré also asserted that Lorentz had succeeded in harmonizing his theory with the principle of relativity: "It appears that this impossibility of demonstrating the absolute motion of the earth is a general law of nature. [...] Lorentz tried to complete and modify his hypothesis in order to harmonize it with the postulate of complete impossibility of determining absolute motion. It is what he has succeeded in doing in his article entitled Electromagnetic phenomena in a system moving with any velocity smaller than that of light [Lorentz, 1904b]."
In his Palermo paper (1906), Poincaré called this "the postulate of relativity", and although he stated that it was possible this principle might be disproved at some point (and in fact he mentioned at the paper's end that the discovery of magneto-cathode rays by Paul Ulrich Villard (1904) seems to threaten it), he believed it was interesting to consider the consequences if we were to assume the postulate of relativity was valid without restriction. This would imply that all forces of nature (not just electromagnetism) must be invariant under the Lorentz transformation. In 1921 Lorentz credited Poincaré for establishing the principle and postulate of relativity and wrote: "I have not established the principle of relativity as rigorously and universally true. Poincaré, on the other hand, has obtained a perfect invariance of the electro-magnetic equations, and he has formulated 'the postulate of relativity', terms which he was the first to employ."
=== Aether ===
Poincaré wrote in the sense of his conventionalist philosophy in 1889: "Whether the aether exists or not matters little – let us leave that to the metaphysicians; what is essential for us is, that everything happens as if it existed, and that this hypothesis is found to be suitable for the explanation of phenomena. After all, have we any other reason for believing in the existence of material objects? That, too, is only a convenient hypothesis; only, it will never cease to be so, while some day, no doubt, the aether will be thrown aside as useless."
He also denied the existence of absolute space and time by saying in 1901: "1. There is no absolute space, and we only conceive of relative motion; and yet in most cases mechanical facts are enunciated as if there is an absolute space to which they can be referred. 2. There is no absolute time. When we say that two periods are equal, the statement has no meaning, and can only acquire a meaning by a convention. 3. Not only have we no direct intuition of the equality of two periods, but we have not even direct intuition of the simultaneity of two events occurring in two different places. I have explained this in an article entitled "Mesure du Temps" [1898]. 4. Finally, is not our Euclidean geometry in itself only a kind of convention of language?"
However, Poincaré himself never abandoned the aether hypothesis and stated in 1900: "Does our aether actually exist? We know the origin of our belief in the aether. If light takes several years to reach us from a distant star, it is no longer on the star, nor is it on the earth. It must be somewhere, and supported, so to speak, by some material agency." And referring to the Fizeau experiment, he even wrote: "The aether is all but in our grasp." He also said the aether is necessary to harmonize Lorentz's theory with Newton's third law. Even in 1912, in a paper called "The Quantum Theory", Poincaré used the word "aether" ten times, and described light as "luminous vibrations of the aether".
And although he admitted the relative and conventional character of space and time, he believed that the classical convention is more "convenient" and continued to distinguish between "true" time in the aether and "apparent" time in moving systems. Addressing the question of whether a new convention of space and time is needed, he wrote in 1912: "Shall we be obliged to modify our conclusions? Certainly not; we had adopted a convention because it seemed convenient and we had said that nothing could constrain us to abandon it. Today some physicists want to adopt a new convention. It is not that they are constrained to do so; they consider this new convention more convenient; that is all. And those who are not of this opinion can legitimately retain the old one in order not to disturb their old habits. I believe, just between us, that this is what they shall do for a long time to come."
Lorentz, too, argued throughout his life that among all frames of reference one is to be preferred: the frame in which the aether is at rest. Clocks in this frame show the "real" time, and simultaneity is not relative. However, if the correctness of the relativity principle is accepted, it is impossible to find this frame by experiment.
== The shift to relativity ==
=== Special relativity ===
In 1905, Albert Einstein published his paper on what is now called special relativity. In this paper, by examining the fundamental meanings of the space and time coordinates used in physical theories, Einstein showed that the "effective" coordinates given by the Lorentz transformation were in fact the inertial coordinates of relatively moving frames of reference. From this followed all of the physically observable consequences of LET, along with others, all without the need to postulate an unobservable entity (the aether). Einstein identified two fundamental principles, each founded on experience, from which all of Lorentz's electrodynamics follows: the principle of relativity, according to which the laws of physics take the same form in every inertial frame of reference, and the principle of the constancy of the velocity of light, according to which light propagates in empty space with the speed c regardless of the state of motion of the emitting body.
Taken together (along with a few other tacit assumptions such as isotropy and homogeneity of space), these two postulates lead uniquely to the mathematics of special relativity. Lorentz and Poincaré had also adopted these same principles, as necessary to achieve their final results, but didn't recognize that they were also sufficient, and hence that they obviated all the other assumptions underlying Lorentz's initial derivations (many of which later turned out to be incorrect). Therefore, special relativity very quickly gained wide acceptance among physicists, and the 19th century concept of a luminiferous aether was no longer considered useful.
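For orientation, and not as part of the historical account above, the transformation that these two postulates jointly single out, for two inertial frames in standard configuration with relative velocity v along the shared x-axis, is the familiar Lorentz transformation:

```latex
x' = \gamma\,(x - v t), \qquad y' = y, \qquad z' = z, \qquad
t' = \gamma\left(t - \frac{v x}{c^{2}}\right),
\qquad \text{where } \gamma = \frac{1}{\sqrt{1 - v^{2}/c^{2}}}.
```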
Poincaré (1905–1906) and Hermann Minkowski (1907–1908) showed that special relativity had a very natural interpretation in terms of a unified four-dimensional "spacetime" in which absolute intervals are seen to be given by an extension of the Pythagorean theorem. The utility and naturalness of the spacetime representation contributed to the rapid acceptance of special relativity, and to the corresponding loss of interest in Lorentz's aether theory.
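In this four-dimensional picture, the "absolute interval" mentioned above is the quantity on which all inertial observers agree; written out (a standard modern formula, added here for clarity rather than quoted from Poincaré or Minkowski):

```latex
s^{2} = c^{2}\,\Delta t^{2} - \Delta x^{2} - \Delta y^{2} - \Delta z^{2},
```

which is left unchanged by every Lorentz transformation, just as the Pythagorean distance is left unchanged by rotations.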
In 1909 and 1912 Einstein explained:
...it is impossible to base a theory of the transformation laws of space and time on the principle of relativity alone. As we know, this is connected with the relativity of the concepts of "simultaneity" and "shape of moving bodies." To fill this gap, I introduced the principle of the constancy of the velocity of light, which I borrowed from H. A. Lorentz’s theory of the stationary luminiferous aether, and which, like the principle of relativity, contains a physical assumption that seemed to be justified only by the relevant experiments (experiments by Fizeau, Rowland, etc.)
In 1907 Einstein criticized the "ad hoc" character of Lorentz's contraction hypothesis in his theory of electrons, because according to him it was an artificial assumption to make the Michelson–Morley experiment conform to Lorentz's stationary aether and the relativity principle. Einstein argued that Lorentz's "local time" can simply be called "time", and he stated that the immobile aether as the theoretical foundation of electrodynamics was unsatisfactory. He wrote in 1920:
As to the mechanical nature of the Lorentzian aether, it may be said of it, in a somewhat playful spirit, that immobility is the only mechanical property of which it has not been deprived by H. A. Lorentz. It may be added that the whole change in the conception of the aether which the special theory of relativity brought about, consisted in taking away from the aether its last mechanical quality, namely, its immobility. [...] More careful reflection teaches us, however, that the special theory of relativity does not compel us to deny aether. We may assume the existence of an aether; only we must give up ascribing a definite state of motion to it, i.e. we must by abstraction take from it the last mechanical characteristic which Lorentz had still left it.
Minkowski argued that Lorentz's introduction of the contraction hypothesis "sounds rather fantastical", since it is not the product of resistance in the aether but a "gift from above". He said that this hypothesis is "completely equivalent with the new concept of space and time", though it becomes much more comprehensible in the framework of the new spacetime geometry. However, Lorentz disagreed that it was "ad-hoc" and he argued in 1913 that there is little difference between his theory and the negation of a preferred reference frame, as in the theory of Einstein and Minkowski, so that it is a matter of taste which theory one prefers.
=== Mass–energy equivalence ===
It was derived by Einstein (1905) as a consequence of the relativity principle, that inertia of energy is actually represented by E/c², but in contrast to Poincaré's 1900-paper, Einstein recognized that matter itself loses or gains mass during the emission or absorption. So the mass of any form of matter is equal to a certain amount of energy, which can be converted into and re-converted from other forms of energy. This is the mass–energy equivalence, represented by E = mc². So Einstein didn't have to introduce "fictitious" masses and also avoided the perpetual motion problem, because according to Darrigol, Poincaré's radiation paradox can simply be solved by applying Einstein's equivalence. If the light source loses mass during the emission by E/c², the contradiction in the momentum law vanishes without the need of any compensating effect in the aether.
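As a rough numerical illustration (the figures are ours, chosen only to show the size of the effect):

```latex
\Delta m = \frac{E}{c^{2}}, \qquad c \approx 3\times 10^{8}\ \mathrm{m/s},
\qquad\Rightarrow\qquad
E = 9\times 10^{16}\ \mathrm{J}
\;\longleftrightarrow\;
\Delta m = \frac{9\times 10^{16}\ \mathrm{J}}{(3\times 10^{8}\ \mathrm{m/s})^{2}} = 1\ \mathrm{kg}.
```

Because c² is so large, the mass lost by an ordinary light source is utterly negligible, which is why the effect went unnoticed before the theory demanded it.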
Similar to Poincaré, Einstein concluded in 1906 that the inertia of (electromagnetic) energy is a necessary condition for the center of mass theorem to hold in systems in which electromagnetic fields and matter act on each other. Based on the mass–energy equivalence, he showed that emission and absorption of electromagnetic radiation, and therefore the transport of inertia, solve all these problems. On that occasion, Einstein referred to Poincaré's 1900-paper and wrote:
Although the simple formal views, which must be accomplished for the proof of this statement, are already mainly contained in a work by H. Poincaré [Lorentz-Festschrift, p. 252, 1900], for the sake of clarity I won't rely on that work.
Also Poincaré's rejection of the reaction principle due to the violation of the mass conservation law can be avoided through Einstein's E = mc², because mass conservation appears as a special case of the energy conservation law.
=== General relativity ===
The attempts of Lorentz and Poincaré (and other attempts like those of Abraham and Gunnar Nordström) to formulate a theory of gravitation were superseded by Einstein's theory of general relativity. This theory is based on principles like the equivalence principle, the general principle of relativity, the principle of general covariance, geodesic motion, local Lorentz covariance (the laws of special relativity apply locally for all inertial observers), and that spacetime curvature is created by stress-energy within the spacetime.
In 1920, Einstein compared Lorentz's aether with the "gravitational aether" of general relativity. He said that immobility is the only mechanical property of which the aether has not been deprived by Lorentz, but, contrary to the luminiferous and Lorentz's aether, the aether of general relativity has no mechanical property, not even immobility:
The aether of the general theory of relativity is a medium which is itself devoid of all mechanical and kinematical qualities, but which helps to determine mechanical (and electromagnetic) events. What is fundamentally new in the aether of the general theory of relativity, as opposed to the aether of Lorentz, consists in this, that the state of the former is at every place determined by connections with the matter and the state of the aether in neighbouring places, which are amenable to law in the form of differential equations; whereas the state of the Lorentzian aether in the absence of electromagnetic fields is conditioned by nothing outside itself, and is everywhere the same. The aether of the general theory of relativity is transmuted conceptually into the aether of Lorentz if we substitute constants for the functions of space which describe the former, disregarding the causes which condition its state. Thus we may also say, I think, that the aether of the general theory of relativity is the outcome of the Lorentzian aether, through relativization.
=== Priority ===
Some claim that Poincaré and Lorentz, not Einstein, are the true founders of special relativity. For more details, see the article on the relativity priority dispute.
== Later activity ==
Viewed as a theory of elementary particles, Lorentz's electron/ether theory was superseded during the first few decades of the 20th century, first by quantum mechanics and then by quantum field theory. As a general theory of dynamics, Lorentz and Poincaré had already (by about 1905) found it necessary to invoke the principle of relativity itself in order to make the theory match all the available empirical data. By this point, most vestiges of a substantial aether had been eliminated from Lorentz's "aether" theory, and it became both empirically and deductively equivalent to special relativity. The main difference was the metaphysical postulate of a unique absolute rest frame, which was empirically undetectable and played no role in the physical predictions of the theory, as Lorentz wrote in 1909, 1910 (published 1913), 1913 (published 1914), or in 1912 (published 1922).
As a result, the term "Lorentz aether theory" is sometimes used today to refer to a neo-Lorentzian interpretation of special relativity. The prefix "neo" is used in recognition of the fact that the interpretation must now be applied to physical entities and processes (such as the standard model of quantum field theory) that were unknown in Lorentz's day.
Subsequent to the advent of special relativity, only a small number of individuals have advocated the Lorentzian approach to physics. Many of these, such as Herbert E. Ives (who, along with G. R. Stilwell, performed the first experimental confirmation of time dilation) have been motivated by the belief that special relativity is logically inconsistent, and so some other conceptual framework is needed to reconcile the relativistic phenomena. For example, Ives wrote "The 'principle' of the constancy of the velocity of light is not merely 'ununderstandable', it is not supported by 'objective matters of fact'; it is untenable...". However, the logical consistency of special relativity (as well as its empirical success) is well established, so the views of such individuals are considered unfounded within the mainstream scientific community.
John Stewart Bell advocated teaching special relativity first from the viewpoint of a single Lorentz inertial frame, then showing that Poincaré invariance of the laws of physics such as Maxwell's equations is equivalent to the frame-changing arguments often used in teaching special relativity. Because a single Lorentz inertial frame is one of a preferred class of frames, he called this approach Lorentzian in spirit.
Some test theories of special relativity also use a Lorentzian framework. For instance, the Robertson–Mansouri–Sexl test theory introduces a preferred aether frame and includes parameters indicating different combinations of length and time changes. If time dilation and length contraction of bodies moving in the aether have their exact relativistic values, the complete Lorentz transformation can be derived and the aether is hidden from any observation, which makes it kinematically indistinguishable from the predictions of special relativity. Using this model, the Michelson–Morley experiment, Kennedy–Thorndike experiment, and Ives–Stilwell experiment put sharp constraints on violations of Lorentz invariance.
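One commonly quoted form of this parametrization (given here only for orientation; sign and synchronization conventions differ between presentations) writes the transformation from the preferred aether frame (T, X, Y, Z) to a frame moving along X with velocity v as

```latex
t = a(v)\,T + \varepsilon\,x, \qquad
x = b(v)\,(X - v T), \qquad
y = d(v)\,Y, \qquad
z = d(v)\,Z,
```

where ε encodes the clock-synchronization convention. Special relativity corresponds to a(v) = 1/b(v) = √(1 − v²/c²) and d(v) = 1; the experiments named above constrain how far a, b, and d can deviate from these values.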
== References ==
For a more complete list with sources of many other authors, see History of special relativity#References.
=== Works of Lorentz, Poincaré, Einstein, Minkowski (group A) ===
=== Secondary sources (group B) ===
=== Other notes and comments (group C) ===
== External links ==
Mathpages: Corresponding States, The End of My Latin, Who Invented Relativity?, Poincaré Contemplates Copernicus, Whittaker and the Aether, Another Derivation of Mass-Energy Equivalence | Wikipedia/Lorentz_aether_theory |
Metaphysics (Greek: των μετὰ τὰ φυσικά, "those after the physics"; Latin: Metaphysica) is one of the principal works of Aristotle, in which he develops the doctrine that he calls First Philosophy. The work is a compilation of various texts treating abstract subjects, notably substance theory, different kinds of causation, form and matter, the existence of mathematical objects and the cosmos, which together constitute much of the branch of philosophy later known as metaphysics.
== Date, style and composition ==
Many of Aristotle's works are extremely compressed, and many scholars believe that in their current form, they are likely lecture notes. Subsequent to the arrangement of Aristotle's works by Andronicus of Rhodes in the first century BC, a number of his treatises were referred to as the writings "after ("meta") the Physics", the origin of the current title for the collection Metaphysics. Some have interpreted the expression "meta" to imply that the subject of the work goes "beyond" that of Aristotle's Physics or that it is metatheoretical in relation to the Physics. But others believe that "meta" referred simply to the work's place in the canonical arrangement of Aristotle's writings, which is at least as old as Andronicus of Rhodes or even Hermippus of Smyrna. In other surviving works of Aristotle, the metaphysical treatises are referred to as "the [writings] concerning first philosophy"; which was the term Aristotle used for metaphysics.
It is notoriously difficult to specify the date at which Aristotle wrote these treatises as a whole or even individually, especially because the Metaphysics is, in Jonathan Barnes' words, "a farrago, a hotch-potch", and more generally because of the difficulty of dating any of Aristotle's writings. The order in which the books were written is not known; their arrangement is due to later editors. In the manuscripts, books are referred to by Greek letters. For many scholars, it is customary to refer to the books by their letter names. Book 1 is called Alpha (Α); 2, little alpha (α); 3, Beta (Β); 4, Gamma (Γ); 5, Delta (Δ); 6, Epsilon (Ε); 7, Zeta (Ζ); 8, Eta (Η); 9, Theta (Θ); 10, Iota (Ι); 11, Kappa (Κ); 12, Lambda (Λ); 13, Mu (Μ); 14, Nu (Ν).
== Outline ==
=== Books I–VI: Alpha, little Alpha, Beta, Gamma, Delta and Epsilon ===
Book I or Alpha begins by discussing the nature of knowledge and compares knowledge gained from the senses and from memory, arguing that knowledge is acquired from memory through experience. It then defines "wisdom" (sophia) as a knowledge of the first principles (arche) or causes of things. Because those who are wise understand the first principles and causes, they know the why of things, unlike those who only know that things are a certain way based on their memory and sensations. The wise are able to teach because they know the why of things, and so they are better fitted to command, rather than to obey. He then surveys the first principles and causes of previous philosophers, starting with the material monists of the Ionian school and continuing up until Plato.
Book II or "little alpha": Book II addresses a possible objection to the account of how we understand first principles and thus acquire wisdom, that attempting to discover the first principle would lead to an infinite series of causes. It argues in response that the idea of an infinite causal series is absurd, and argues that only things that are created or destroyed require a cause, and that thus there must be a primary cause that is eternal, an idea he develops later in Book Lambda.
Book III or Beta lists the main problems or puzzles (aporia) of philosophy.
Book IV or Gamma: Chapters 2 and 3 argue for its status as a subject in its own right. The rest is a defense of (a) what we now call the principle of contradiction, the principle that it is not possible for the same proposition to be (the case) and not to be (the case), and (b) what we now call the principle of excluded middle: tertium non datur — there cannot be an intermediary between contradictory statements.
Book V or Delta ("philosophical lexicon") is a list of definitions of about thirty key terms such as cause, nature, one, and many.
Book VI or Epsilon has two main concerns. The first concern is the hierarchy of the sciences: productive, practical or theoretical. Aristotle considers theoretical sciences superior because they study beings for their own sake—for example, Physics studies beings that can be moved—and do not have a target (τέλος telos, "end, goal"; τέλειος, "complete, perfect") beyond themselves. He argues that the study of being qua being, or First Philosophy, is superior to all the other theoretical sciences because it is concerned with the ultimate causes of all reality, not just the secondary causes of a part of reality. The second concern of Epsilon is the study of "accidents" (κατὰ συμβεβηκός), those attributes that do not depend on art (τέχνη) or exist by necessity, which Aristotle believes do not deserve to be studied as a science.
=== Books VII–IX: Zeta, Eta, and Theta ===
Books Zeta, Eta, and Theta are generally considered the core of the Metaphysics.
Book Zeta (VII) begins by stating that "being" has several senses, and that the purpose of philosophy is to understand the primary kind of being, called substance (ousia), and to determine what substances there are, a concept that Aristotle develops in the Categories.
Zeta goes on to consider four candidates for substance: (i) the 'essence' or 'what it is to be' of a thing (ii) the universal, (iii) the genus to which a substance belongs and (iv) the material substrate that underlies all the properties of a thing.
He dismisses the idea that matter can be substance, for if we eliminate everything that is a property from what can have the property, such as matter and the shape, we are left with something that has no properties at all. Such 'ultimate matter' cannot be substance. Separability and 'this-ness' are fundamental to our concept of substance.
Aristotle then describes his theory that essence is the criterion of substantiality. The essence of something is what is included in a secundum se ('according to itself') account of a thing, i.e. which tells what a thing is by its very nature. You are not musical by your very nature. But you are a human by your very nature. Your essence is what is mentioned in the definition of you.
Aristotle then considers, and dismisses, the idea that substance is the universal or the genus, criticizing the Platonic theory of Ideas.
Aristotle argues that if genus and species are individual things, then different species of the same genus contain the genus as individual thing, which leads to absurdities. Moreover, individuals are incapable of definition.
Finally, he concludes book Zeta by arguing that substance is really a cause.
Book Eta consists of a summary of what has been said so far (i.e., in Book Zeta) about substance, and adds a few further details regarding difference and unity.
Book Theta sets out to define potentiality and actuality. Chapters 1–5 discuss potentiality, the potential of something to change: potentiality is "a principle of change in another thing or in the thing itself qua other." In chapter 6 Aristotle turns to actuality. We can only know actuality through observation or "analogy;" thus "as that which builds is to that which is capable of building, so is that which is awake to that which is asleep...or that which is separated from matter to matter itself". Actuality is the completed state of something that had the potential to be completed. The relationship between actuality and potentiality can be thought of as the relationship between form and matter, but with the added aspect of time. Actuality and potentiality are distinctions that occur over time (diachronic), whereas form and matter are distinctions that can be made at fixed points in time (synchronic).
=== Books X–XIV: Iota, Kappa, Lambda, Mu, and Nu ===
Book X or Iota: Discussion of unity, one and many, sameness and difference.
Book XI or Kappa: Briefer versions of other chapters and of parts of the Physics.
Book XII or Lambda: Further remarks on beings in general, first principles, and God or gods. This book includes Aristotle's famous description of the unmoved mover, "the most divine of things observed by us", as "the thinking of thinking".
Books XIII and XIV, or Mu and Nu: Philosophy of mathematics, in particular how numbers exist.
== Legacy ==
The Metaphysics is considered to be one of the greatest philosophical works. Its influence on the Greeks, the Muslim philosophers, Maimonides thence the scholastic philosophers and even writers such as Dante was immense.
In the 3rd century, Alexander of Aphrodisias wrote a commentary on the first five books of the Metaphysics, and a commentary transmitted under his name exists for the final nine, but modern scholars doubt that this part was written by him. Themistius wrote an epitome of the work, of which book 12 survives in a Hebrew translation. The Neoplatonists Syrianus and Asclepius of Tralles also wrote commentaries on the work, where they attempted to synthesize Aristotle's doctrines with Neoplatonic cosmology.
Aristotle's works gained a reputation for complexity that is never more evident than with the Metaphysics. Avicenna said that he had read the Metaphysics of Aristotle forty times, but did not understand it until he also read al-Farabi's Purposes of the Metaphysics of Aristotle:
I read the Metaphysics [of Aristotle], but I could not comprehend its contents, and its author's object remained obscure to me, even when I had gone back and read it forty times and had got to the point where I had memorized it. In spite of this I could not understand it nor its object, and I despaired of myself and said, "This is a book which there is no way of understanding." But one day in the afternoon when I was at the booksellers' quarter a salesman approached with a book in his hand which he was calling out for sale. (...) So I bought it and, lo and behold, it was Abu Nasr al-Farabi's book on the objects of the Metaphysics. I returned home and was quick to read it, and in no time the objects of that book became clear to me because I had got to the point of having memorized it by heart.
The flourishing of Arabic Aristotelian scholarship reached its peak with the work of Ibn Rushd (Latinized: Averroes), whose extensive writings on Aristotle's work led to his later designation as "The Commentator" by future generations of scholars. Maimonides wrote the Guide to the Perplexed in the 12th century, to demonstrate the compatibility of Aristotelian science with Biblical revelation.
The Fourth Crusade (1202–1204) facilitated the discovery and delivery of many original Greek manuscripts to Western Europe. William of Moerbeke's translations of the work formed the basis of the commentaries on the Metaphysics by Albert the Great, Thomas Aquinas and Duns Scotus. They were also used by modern scholars for Greek editions, as William had access to Greek manuscripts that are now lost. Werner Jaeger lists William's translation in his edition of the Greek text in the Scriptorum Classicorum Bibliotheca Oxoniensis (Oxford 1962).
== Textual criticism ==
In the 19th century, with the rise of textual criticism, the Metaphysics was examined anew. Critics, noting the wide variety of topics and the seemingly illogical order of the books, concluded that it was actually a collection of shorter works thrown together haphazardly. In the 20th century, two general editions were produced, by W. D. Ross (1924) and by W. Jaeger (1957). Based on a careful study of the content and of the cross-references within them, W. D. Ross concluded that books A, B, Γ, E, Z, H, Θ, M, N, and I "form a more or less continuous work", while the remaining books α, Δ, Κ and Λ were inserted into their present locations by later editors. However, Ross cautions that books A, B, Γ, E, Z, H, Θ, M, N, and I — with or without the insertion of the others — do not constitute "a complete work". Werner Jaeger further maintained that the different books were taken from different periods of Aristotle's life. Everyman's Library, for their 1000th volume, published the Metaphysics in a rearranged order that was intended to make the work easier for readers.
Editing the Metaphysics has become an open issue in works and studies of the new millennium. New critical editions have been produced of books Gamma, Alpha, and Lambda. Differences from the more-familiar 20th Century critical editions of Ross and Jaeger mainly depend on the stemma codicum of Aristotle's Metaphysics, of which different versions have been proposed since 1970.
== Editions and translations ==
Greek text with commentary: Aristotle's Metaphysics. W. D. Ross. 2 Vols. Oxford: Clarendon Press, 1924. Reprinted in 1953 with corrections.
Greek text: Aristotelis Metaphysica. Ed. Werner Jaeger. Oxford Classical Texts. Oxford University Press, 1957. ISBN 978-0-19-814513-4.
Greek text with English: Metaphysics. Trans. Hugh Tredennick. 2 vols. Loeb Classical Library 271, 287. Harvard U. Press, 1933–35. ISBN 0-674-99299-7, ISBN 0-674-99317-9.
Aristotle's Metaphysics. Trans. Hippocrates George Apostle. Bloomington: Indiana U. Press, 1966.
Aristotle - Metaphysics. Translated by Hope, Richard. Ann Arbor: U. Michigan P. 1960 [1952].
Aristotle's Metaphysics. Translated by Sachs, Joe (2 ed.). Santa Fe, NM: Green Lion P. 2002. ISBN 1-888009-03-9.
Aristotle. The Metaphysics. Penguin Classics. Translated by Lawson-Tancred, Hugh. London: Penguin. 2004 [1998]. ISBN 978-0-140-44619-7.
=== Ancient and medieval commentaries ===
Commentary on Aristotle's Metaphysics (in Greek, Latin, and English). Vol. 3. Translated by Aquinas, Thomas; Rowan, John P. William of Moerbeke (1st ed.). Chicago: Henry Regnery Company (Library of Living Catholic Thought). 1961. OCLC 312731. (rpt. Notre Dame, Ind.: Dumb Ox, 1995).
== Notes ==
== Citations ==
== References ==
Wolfgang Class: Aristotle's Metaphysics, A Philological Commentary:
Volume I: Textual Criticism, ISBN 978-3-9815841-2-7, Saldenburg 2014;
Volume II: The Composition of the Metaphysics, ISBN 978-3-9815841-3-4, Saldenburg 2015;
Volume III: Sources and Parallels, ISBN 978-3-9815841-6-5, Saldenburg 2017;
Volume IV: Reception and Criticism, ISBN 978-3-9820267-0-1, Saldenburg 2018.
Copleston, Frederick, S.J. A History of Philosophy: Volume I Greece and Rome (Parts I and II) New York: Image Books, 1962.
Aristotle's Metaphysics. Translated by Lawson-Tancred, Hugh. Penguin. 1998. ISBN 0140446192.
== Further reading ==
Ackrill, J. L., 1963, Aristotle: Categories and De Interpretatione, Oxford: Clarendon Press.
Alexandrou, S., 2014, Aristotle's Metaphysics Lambda: Annotated Critical Edition, Leiden: Brill.
Anagnostopoulos, Georgios (ed.), 2009, A Companion to Aristotle, Chichester: Wiley-Blackwell.
Elders, L., 1972, Aristotle's Theology: A Commentary on Book Λ of the Metaphysics, Assen: Van Gorcum.
Gerson, Lloyd P. (ed.) and Joseph Owens, 2007, Aristotle's Gradations of Being in Metaphysics E-Z, South Bend: St Augustine's Press.
Gill, Mary Louise, 1989, Aristotle on Substance: The Paradox of Unity, Princeton: Princeton University Press.
== External links ==
Available bundled with Organon and other works – can be downloaded as .epub, .mobi and other formats.
English translation and original Greek at Perseus. Translation by Hugh Tredennick from the Loeb Classical Library.
English translation by W. D. Ross at MIT's Internet Classics Archive.
Averroes' commentary on the Metaphysics, in Latin, together with the 'old' (Arabic) and new translation based on William of Moerbeke at Gallica.
Aristotle: Metaphysics entry by Joe Sachs in the Internet Encyclopedia of Philosophy
Cohen, S. Marc. "Aristotle's Metaphysics". In Zalta, Edward N. (ed.). Stanford Encyclopedia of Philosophy.
A good summary of scholarly comments at: Theory and History of Ontology
Metaphysics public domain audiobook at LibriVox | Wikipedia/Aristotelian_metaphysics |
Deductive reasoning is the process of drawing valid inferences. An inference is valid if its conclusion follows logically from its premises, meaning that it is impossible for the premises to be true and the conclusion to be false. For example, the inference from the premises "all men are mortal" and "Socrates is a man" to the conclusion "Socrates is mortal" is deductively valid. An argument is sound if it is valid and all its premises are true. One approach defines deduction in terms of the intentions of the author: the author has to intend for the premises to offer deductive support to the conclusion. On this definition, it is possible to distinguish valid from invalid deductive reasoning: it is invalid if the author's belief about the deductive support is false, but even invalid deductive reasoning is a form of deductive reasoning.
Deductive logic studies under what conditions an argument is valid. According to the semantic approach, an argument is valid if there is no possible interpretation of the argument whereby its premises are true and its conclusion is false. The syntactic approach, by contrast, focuses on rules of inference, that is, schemas of drawing a conclusion from a set of premises based only on their logical form. There are various rules of inference, such as modus ponens and modus tollens. Invalid deductive arguments, which do not follow a rule of inference, are called formal fallacies. Rules of inference are definitory rules and contrast with strategic rules, which specify what inferences one needs to draw in order to arrive at an intended conclusion.
Deductive reasoning contrasts with non-deductive or ampliative reasoning. For ampliative arguments, such as inductive or abductive arguments, the premises offer weaker support to their conclusion: they indicate that it is most likely, but they do not guarantee its truth. They make up for this drawback with their ability to provide genuinely new information (that is, information not already found in the premises), unlike deductive arguments.
Cognitive psychology investigates the mental processes responsible for deductive reasoning. One of its topics concerns the factors determining whether people draw valid or invalid deductive inferences. One such factor is the form of the argument: for example, people draw valid inferences more successfully for arguments of the form modus ponens than of the form modus tollens. Another factor is the content of the arguments: people are more likely to believe that an argument is valid if the claim made in its conclusion is plausible. A general finding is that people tend to perform better for realistic and concrete cases than for abstract cases. Psychological theories of deductive reasoning aim to explain these findings by providing an account of the underlying psychological processes. Mental logic theories hold that deductive reasoning is a language-like process that happens through the manipulation of representations using rules of inference. Mental model theories, on the other hand, claim that deductive reasoning involves models of possible states of the world without the medium of language or rules of inference. According to dual-process theories of reasoning, there are two qualitatively different cognitive systems responsible for reasoning.
The problem of deduction is relevant to various fields and issues. Epistemology tries to understand how justification is transferred from the belief in the premises to the belief in the conclusion in the process of deductive reasoning. Probability logic studies how the probability of the premises of an inference affects the probability of its conclusion. The controversial thesis of deductivism denies that there are other correct forms of inference besides deduction. Natural deduction is a type of proof system based on simple and self-evident rules of inference. In philosophy, the geometrical method is a way of philosophizing that starts from a small set of self-evident axioms and tries to build a comprehensive logical system using deductive reasoning.
== Definition ==
Deductive reasoning is the psychological process of drawing deductive inferences. An inference is a set of premises together with a conclusion. This psychological process starts from the premises and reasons to a conclusion based on and supported by these premises. If the reasoning was done correctly, it results in a valid deduction: the truth of the premises ensures the truth of the conclusion. For example, in the syllogistic argument "all frogs are amphibians; no cats are amphibians; therefore, no cats are frogs" the conclusion is true because its two premises are true. But even arguments with wrong premises can be deductively valid if they obey this principle, as in "all frogs are mammals; no cats are mammals; therefore, no cats are frogs". If the premises of a valid argument are true, then it is called a sound argument.
The relation between the premises and the conclusion of a deductive argument is usually referred to as "logical consequence". According to Alfred Tarski, logical consequence has three essential features: it is necessary, formal, and knowable a priori. It is necessary in the sense that the premises of valid deductive arguments necessitate the conclusion: it is impossible for the premises to be true and the conclusion to be false, independent of any other circumstances. Logical consequence is formal in the sense that it depends only on the form or the syntax of the premises and the conclusion. This means that the validity of a particular argument does not depend on the specific contents of this argument. If it is valid, then any argument with the same logical form is also valid, no matter how different it is on the level of its contents. Logical consequence is knowable a priori in the sense that no empirical knowledge of the world is necessary to determine whether a deduction is valid. So it is not necessary to engage in any form of empirical investigation. Some logicians define deduction in terms of possible worlds: a deductive inference is valid if and only if there is no possible world in which its conclusion is false while its premises are true. This means that there are no counterexamples: the conclusion is true in all such cases, not just in most cases.
It has been argued against this and similar definitions that they fail to distinguish between valid and invalid deductive reasoning, i.e. they leave it open whether there are invalid deductive inferences and how to define them. Some authors define deductive reasoning in psychological terms in order to avoid this problem. According to Mark Vorobey, whether an argument is deductive depends on the psychological state of the person making the argument: "An argument is deductive if, and only if, the author of the argument believes that the truth of the premises necessitates (guarantees) the truth of the conclusion". A similar formulation holds that the speaker claims or intends that the premises offer deductive support for their conclusion. This is sometimes categorized as a speaker-determined definition of deduction since it depends also on the speaker whether the argument in question is deductive or not. For speakerless definitions, on the other hand, only the argument itself matters independent of the speaker. One advantage of this type of formulation is that it makes it possible to distinguish between good or valid and bad or invalid deductive arguments: the argument is good if the author's belief concerning the relation between the premises and the conclusion is true, otherwise it is bad. One consequence of this approach is that deductive arguments cannot be identified by the law of inference they use. For example, an argument of the form modus ponens may be non-deductive if the author's beliefs are sufficiently confused. That brings with it an important drawback of this definition: it is difficult to apply to concrete cases since the intentions of the author are usually not explicitly stated.
Deductive reasoning is studied in logic, psychology, and the cognitive sciences. Some theorists emphasize in their definition the difference between these fields. On this view, psychology studies deductive reasoning as an empirical mental process, i.e. what happens when humans engage in reasoning. But the descriptive question of how actual reasoning happens is different from the normative question of how it should happen or what constitutes correct deductive reasoning, which is studied by logic. This is sometimes expressed by stating that, strictly speaking, logic does not study deductive reasoning but the deductive relation between premises and a conclusion known as logical consequence. But this distinction is not always precisely observed in the academic literature. One important aspect of this difference is that logic is not interested in whether the conclusion of an argument is sensible. So from the premise "the printer has ink" one may draw the unhelpful conclusion "the printer has ink and the printer has ink and the printer has ink", which has little relevance from a psychological point of view. Instead, actual reasoners usually try to remove redundant or irrelevant information and make the relevant information more explicit. The psychological study of deductive reasoning is also concerned with how good people are at drawing deductive inferences and with the factors determining their performance. Deductive inferences are found both in natural language and in formal logical systems, such as propositional logic.
== Conceptions of deduction ==
Deductive arguments differ from non-deductive arguments in that the truth of their premises ensures the truth of their conclusion. There are two important conceptions of what this exactly means. They are referred to as the syntactic and the semantic approach. According to the syntactic approach, whether an argument is deductively valid depends only on its form, syntax, or structure. Two arguments have the same form if they use the same logical vocabulary in the same arrangement, even if their contents differ. For example, the arguments "if it rains then the street will be wet; it rains; therefore, the street will be wet" and "if the meat is not cooled then it will spoil; the meat is not cooled; therefore, it will spoil" have the same logical form: they follow the modus ponens. Their form can be expressed more abstractly as "if A then B; A; therefore B" in order to make the common syntax explicit. There are various other valid logical forms or rules of inference, like modus tollens or the disjunction elimination. The syntactic approach then holds that an argument is deductively valid if and only if its conclusion can be deduced from its premises using a valid rule of inference. One difficulty for the syntactic approach is that it is usually necessary to express the argument in a formal language in order to assess whether it is valid. This often brings with it the difficulty of translating the natural language argument into a formal language, a process that comes with various problems of its own. Another difficulty is due to the fact that the syntactic approach depends on the distinction between formal and non-formal features. While there is a wide agreement concerning the paradigmatic cases, there are also various controversial cases where it is not clear how this distinction is to be drawn.
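As a minimal sketch of the syntactic viewpoint (ours, not the article's; the tuple representation of formulas and the function name are illustrative assumptions), a rule of inference can be implemented as a pattern that inspects only the shape of the premises, never their content:

```python
# Minimal sketch of the syntactic view: a rule of inference is a schema that
# looks only at the *form* of the premises, never at their content.
# Formulas are nested tuples, e.g. ("->", "A", "B") stands for "if A then B".
# (This encoding and the function name are illustrative assumptions.)

def modus_ponens(premise1, premise2):
    """Apply modus ponens: from (A -> B) and A, conclude B; otherwise return None."""
    if isinstance(premise1, tuple) and len(premise1) == 3 and premise1[0] == "->":
        antecedent, consequent = premise1[1], premise1[2]
        if premise2 == antecedent:      # the second premise must be the antecedent
            return consequent           # the conclusion is the consequent
    return None

# The rule fires for any content with the right shape ...
print(modus_ponens(("->", "it rains", "the street is wet"), "it rains"))
# -> 'the street is wet'

# ... and refuses arguments of the wrong shape, e.g. affirming the consequent:
print(modus_ponens(("->", "it rains", "the street is wet"), "the street is wet"))
# -> None
```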
The semantic approach suggests an alternative definition of deductive validity. It is based on the idea that the sentences constituting the premises and conclusions have to be interpreted in order to determine whether the argument is valid. This means that one ascribes semantic values to the expressions used in the sentences, such as the reference to an object for singular terms or to a truth-value for atomic sentences. The semantic approach is also referred to as the model-theoretic approach since the branch of mathematics known as model theory is often used to interpret these sentences. Usually, many different interpretations are possible, such as whether a singular term refers to one object or to another. According to the semantic approach, an argument is deductively valid if and only if there is no possible interpretation where its premises are true and its conclusion is false. Some objections to the semantic approach are based on the claim that the semantics of a language cannot be expressed in the same language, i.e. that a richer metalanguage is necessary. This would imply that the semantic approach cannot provide a universal account of deduction for language as an all-encompassing medium.
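The semantic definition can likewise be sketched mechanically for propositional arguments (again an illustrative sketch under our own representation choices: formulas are encoded as Python functions from an interpretation, a dictionary of truth values, to a Boolean): enumerate all interpretations and look for one that makes every premise true and the conclusion false.

```python
from itertools import product

# Semantic (model-theoretic) validity check for propositional arguments.
# A "formula" is any function from an assignment (dict of atom -> bool) to a bool;
# `atoms` lists the atomic sentences involved. These names are illustrative.

def is_valid(premises, conclusion, atoms):
    """Valid iff no interpretation makes all premises true and the conclusion false."""
    for values in product([True, False], repeat=len(atoms)):
        interpretation = dict(zip(atoms, values))
        if all(p(interpretation) for p in premises) and not conclusion(interpretation):
            return False      # found a counterexample interpretation
    return True               # no counterexample exists

# Modus ponens ("if P then Q; P; therefore Q") comes out valid ...
print(is_valid([lambda i: (not i["P"]) or i["Q"], lambda i: i["P"]],
               lambda i: i["Q"], ["P", "Q"]))      # True

# ... while affirming the consequent ("if P then Q; Q; therefore P") does not.
print(is_valid([lambda i: (not i["P"]) or i["Q"], lambda i: i["Q"]],
               lambda i: i["P"], ["P", "Q"]))      # False (counterexample: P=False, Q=True)
```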
== Rules of inference ==
Deductive reasoning usually happens by applying rules of inference. A rule of inference is a way or schema of drawing a conclusion from a set of premises. This happens usually based only on the logical form of the premises. A rule of inference is valid if, when applied to true premises, the conclusion cannot be false. A particular argument is valid if it follows a valid rule of inference. Deductive arguments that do not follow a valid rule of inference are called formal fallacies: the truth of their premises does not ensure the truth of their conclusion.
In some cases, whether a rule of inference is valid depends on the logical system one is using. The dominant logical system is classical logic and the rules of inference listed here are all valid in classical logic. But so-called deviant logics provide a different account of which inferences are valid. For example, the rule of inference known as double negation elimination, i.e. that if a proposition is not not true then it is also true, is accepted in classical logic but rejected in intuitionistic logic.
=== Prominent rules of inference ===
==== Modus ponens ====
Modus ponens (also known as "affirming the antecedent" or "the law of detachment") is the primary deductive rule of inference. It applies to arguments that have as first premise a conditional statement (P → Q) and as second premise the antecedent (P) of the conditional statement. It obtains the consequent (Q) of the conditional statement as its conclusion. The argument form is listed below:
P → Q (First premise is a conditional statement)
P (Second premise is the antecedent)
Q (Conclusion deduced is the consequent)
In this form of deductive reasoning, the consequent (Q) obtains as the conclusion from the premises of a conditional statement (P → Q) and its antecedent (P). However, the antecedent (P) cannot be similarly obtained as the conclusion from the premises of the conditional statement (P → Q) and the consequent (Q). Such an argument commits the logical fallacy of affirming the consequent.
The following is an example of an argument using modus ponens:
If it is raining, then there are clouds in the sky.
It is raining.
Thus, there are clouds in the sky.
==== Modus tollens ====
Modus tollens (also known as "the law of contrapositive") is a deductive rule of inference. It validates an argument that has as premises a conditional statement (P → Q) and the negation of the consequent (¬Q), and as conclusion the negation of the antecedent (¬P). In contrast to modus ponens, reasoning with modus tollens goes in the opposite direction to that of the conditional. The general expression for modus tollens is the following:
P → Q (First premise is a conditional statement)
¬Q (Second premise is the negation of the consequent)
¬P (Conclusion deduced is the negation of the antecedent)
The following is an example of an argument using modus tollens:
If it is raining, then there are clouds in the sky.
There are no clouds in the sky.
Thus, it is not raining.
==== Hypothetical syllogism ====
A hypothetical syllogism is an inference that takes two conditional statements and forms a conclusion by combining the hypothesis of one statement with the conclusion of another. Here is the general form:
P → Q
Q → R
Therefore, P → R.
Because the two premises share a subformula that does not occur in the conclusion, this resembles syllogisms in term logic, although it differs in that this shared subformula is a proposition, whereas in Aristotelian logic the common element is a term and not a proposition.
The following is an example of an argument using a hypothetical syllogism:
If there had been a thunderstorm, it would have rained.
If it had rained, things would have gotten wet.
Thus, if there had been a thunderstorm, things would have gotten wet.
=== Fallacies ===
Various formal fallacies have been described. They are invalid forms of deductive reasoning. An additional aspect of them is that they appear to be valid on some occasions or on the first impression. They may thereby seduce people into accepting and committing them. One type of formal fallacy is affirming the consequent, as in "if John is a bachelor, then he is male; John is male; therefore, John is a bachelor". This is similar to the valid rule of inference named modus ponens, but the second premise and the conclusion are switched around, which is why it is invalid. A similar formal fallacy is denying the antecedent, as in "if Othello is a bachelor, then he is male; Othello is not a bachelor; therefore, Othello is not male". This is similar to the valid rule of inference called modus tollens, the difference being that the second premise and the conclusion are switched around. Other formal fallacies include affirming a disjunct, denying a conjunct, and the fallacy of the undistributed middle. All of them have in common that the truth of their premises does not ensure the truth of their conclusion. But it may still happen by coincidence that both the premises and the conclusion of formal fallacies are true.
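A small truth-table printout (our illustration, not the article's) makes the last point concrete for affirming the consequent: one of the four interpretations makes the premises and the conclusion all true, so a fallacious argument can happen to have a true conclusion, while another interpretation makes the premises true and the conclusion false, which is what makes the form invalid.

```python
from itertools import product

# Truth table for "affirming the consequent": premises (P -> Q) and Q, conclusion P.
for P, Q in product([True, False], repeat=2):
    premises_true = ((not P) or Q) and Q        # both premises hold in this row
    row = f"P={P!s:5} Q={Q!s:5} premises true: {premises_true!s:5} conclusion (P): {P}"
    if premises_true and not P:
        row += "   <- counterexample: premises true, conclusion false"
    print(row)
```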
=== Definitory and strategic rules ===
Rules of inferences are definitory rules: they determine whether an argument is deductively valid or not. But reasoners are usually not just interested in making any kind of valid argument. Instead, they often have a specific point or conclusion that they wish to prove or refute. So given a set of premises, they are faced with the problem of choosing the relevant rules of inference for their deduction to arrive at their intended conclusion. This issue belongs to the field of strategic rules: the question of which inferences need to be drawn to support one's conclusion. The distinction between definitory and strategic rules is not exclusive to logic: it is also found in various games. In chess, for example, the definitory rules state that bishops may only move diagonally while the strategic rules recommend that one should control the center and protect one's king if one intends to win. In this sense, definitory rules determine whether one plays chess or something else whereas strategic rules determine whether one is a good or a bad chess player. The same applies to deductive reasoning: to be an effective reasoner involves mastering both definitory and strategic rules.
== Validity and soundness ==
Deductive arguments are evaluated in terms of their validity and soundness.
An argument is valid if it is impossible for its premises to be true while its conclusion is false. In other words, the conclusion must be true if the premises are true. An argument can be “valid” even if one or more of its premises are false.
An argument is sound if it is valid and the premises are true.
It is possible to have a deductive argument that is logically valid but is not sound. Fallacious arguments often take that form.
The following is an example of an argument that is “valid”, but not “sound”:
Everyone who eats carrots is a quarterback.
John eats carrots.
Therefore, John is a quarterback.
The example's first premise is false – there are people who eat carrots who are not quarterbacks – but the conclusion would necessarily be true, if the premises were true. In other words, it is impossible for the premises to be true and the conclusion false. Therefore, the argument is "valid", but not "sound". False generalizations – such as "Everyone who eats carrots is a quarterback" – are often used to make unsound arguments. The fact that there are some people who eat carrots but are not quarterbacks shows that the first premise is false, and hence that the argument, though valid, is unsound.
In this example, the first statement uses categorical reasoning, saying that all carrot-eaters are definitely quarterbacks. This theory of deductive reasoning – also known as term logic – was developed by Aristotle, but was superseded by propositional (sentential) logic and predicate logic.
Deductive reasoning can be contrasted with inductive reasoning, in regards to validity and soundness. In cases of inductive reasoning, even though the premises are true and the argument is “valid”, it is possible for the conclusion to be false (determined to be false with a counterexample or other means).
== Difference from ampliative reasoning ==
Deductive reasoning is usually contrasted with non-deductive or ampliative reasoning. The hallmark of valid deductive inferences is that it is impossible for their premises to be true and their conclusion to be false. In this way, the premises provide the strongest possible support to their conclusion. The premises of ampliative inferences also support their conclusion. But this support is weaker: they are not necessarily truth-preserving. So even for correct ampliative arguments, it is possible that their premises are true and their conclusion is false. Two important forms of ampliative reasoning are inductive and abductive reasoning. Sometimes the term "inductive reasoning" is used in a very wide sense to cover all forms of ampliative reasoning. However, in a more strict usage, inductive reasoning is just one form of ampliative reasoning. In the narrow sense, inductive inferences are forms of statistical generalization. They are usually based on many individual observations that all show a certain pattern. These observations are then used to form a conclusion either about a yet unobserved entity or about a general law. For abductive inferences, the premises support the conclusion because the conclusion is the best explanation of why the premises are true.
The support ampliative arguments provide for their conclusion comes in degrees: some ampliative arguments are stronger than others. This is often explained in terms of probability: the premises make it more likely that the conclusion is true. Strong ampliative arguments make their conclusion very likely, but not absolutely certain. An example of ampliative reasoning is the inference from the premise "every raven in a random sample of 3200 ravens is black" to the conclusion "all ravens are black": the extensive random sample makes the conclusion very likely, but it does not exclude that there are rare exceptions. In this sense, ampliative reasoning is defeasible: it may become necessary to retract an earlier conclusion upon receiving new related information. Ampliative reasoning is very common in everyday discourse and the sciences.
An important drawback of deductive reasoning is that it does not lead to genuinely new information. This means that the conclusion only repeats information already found in the premises. Ampliative reasoning, on the other hand, goes beyond the premises by arriving at genuinely new information. One difficulty for this characterization is that it makes deductive reasoning appear useless: if deduction is uninformative, it is not clear why people would engage in it and study it. It has been suggested that this problem can be solved by distinguishing between surface and depth information. On this view, deductive reasoning is uninformative on the depth level, in contrast to ampliative reasoning. But it may still be valuable on the surface level by presenting the information in the premises in a new and sometimes surprising way.
A popular misconception of the relation between deduction and induction identifies their difference on the level of particular and general claims. On this view, deductive inferences start from general premises and draw particular conclusions, while inductive inferences start from particular premises and draw general conclusions. This idea is often motivated by seeing deduction and induction as two inverse processes that complement each other: deduction is top-down while induction is bottom-up. But this is a misconception that does not reflect how valid deduction is defined in the field of logic: a deduction is valid if it is impossible for its premises to be true while its conclusion is false, independent of whether the premises or the conclusion are particular or general. Because of this, some deductive inferences have a general conclusion and some also have particular premises.
== In various fields ==
=== Cognitive psychology ===
Cognitive psychology studies the psychological processes responsible for deductive reasoning. It is concerned, among other things, with how good people are at drawing valid deductive inferences. This includes the study of the factors affecting their performance, their tendency to commit fallacies, and the underlying biases involved. A notable finding in this field is that the type of deductive inference has a significant impact on whether the correct conclusion is drawn. In a meta-analysis of 65 studies, for example, 97% of the subjects evaluated modus ponens inferences correctly, while the success rate for modus tollens was only 72%. On the other hand, even some fallacies like affirming the consequent or denying the antecedent were regarded as valid arguments by the majority of the subjects. An important factor for these mistakes is whether the conclusion seems initially plausible: the more believable the conclusion is, the higher the chance that a subject will mistake a fallacy for a valid argument.
An important bias is the matching bias, which is often illustrated using the Wason selection task. In an often-cited experiment by Peter Wason, 4 cards are presented to the participant. In one case, the visible sides show the symbols D, K, 3, and 7 on the different cards. The participant is told that every card has a letter on one side and a number on the other side, and that "[e]very card which has a D on one side has a 3 on the other side". Their task is to identify which cards need to be turned around in order to confirm or refute this conditional claim. The correct answer, only given by about 10%, is the cards D and 7. Many select card 3 instead, even though the conditional claim does not involve any requirements on what symbols can be found on the opposite side of card 3. But this result can be drastically changed if different symbols are used: the visible sides show "drinking a beer", "drinking a coke", "16 years of age", and "22 years of age" and the participants are asked to evaluate the claim "[i]f a person is drinking beer, then the person must be over 19 years of age". In this case, 74% of the participants identified correctly that the cards "drinking a beer" and "16 years of age" have to be turned around. These findings suggest that the deductive reasoning ability is heavily influenced by the content of the involved claims and not just by the abstract logical form of the task: the more realistic and concrete the cases are, the better the subjects tend to perform.
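The card-selection logic itself is easy to state mechanically. The following sketch (ours, with illustrative names) marks a card as needing to be turned exactly when its hidden side could falsify the conditional "if a card has a D on one side, then it has a 3 on the other side".

```python
# Sketch of the logic behind the Wason selection task (our own illustration).
# Rule: letter side == "D"  ->  number side == 3.
# A visible face must be turned over exactly when the hidden face could falsify the rule.

def must_turn(visible_face):
    if isinstance(visible_face, str):       # a letter is showing
        # Only a "D" can be falsified: the hidden number might fail to be 3.
        return visible_face == "D"
    else:                                   # a number is showing
        # Only a number other than 3 can falsify the rule: the hidden letter might be "D".
        return visible_face != 3

cards = ["D", "K", 3, 7]
print([card for card in cards if must_turn(card)])   # -> ['D', 7]
```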
Another bias is called the "negative conclusion bias", which happens when one of the premises has the form of a negative material conditional, as in "If the card does not have an A on the left, then it has a 3 on the right. The card does not have a 3 on the right. Therefore, the card has an A on the left". The increased tendency to misjudge the validity of this type of argument is not present for positive material conditionals, as in "If the card has an A on the left, then it has a 3 on the right. The card does not have a 3 on the right. Therefore, the card does not have an A on the left".
==== Psychological theories of deductive reasoning ====
Various psychological theories of deductive reasoning have been proposed. These theories aim to explain how deductive reasoning works in relation to the underlying psychological processes responsible. They are often used to explain the empirical findings, such as why human reasoners are more susceptible to some types of fallacies than to others.
An important distinction is between mental logic theories, sometimes also referred to as rule theories, and mental model theories. Mental logic theories see deductive reasoning as a language-like process that happens through the manipulation of representations. This is done by applying syntactic rules of inference in a way very similar to how systems of natural deduction transform their premises to arrive at a conclusion. On this view, some deductions are simpler than others since they involve fewer inferential steps. This idea can be used, for example, to explain why humans have more difficulties with some deductions, like the modus tollens, than with others, like the modus ponens: because the more error-prone forms do not have a native rule of inference but need to be calculated by combining several inferential steps with other rules of inference. In such cases, the additional cognitive labor makes the inferences more open to error.
Mental model theories, on the other hand, hold that deductive reasoning involves models or mental representations of possible states of the world without the medium of language or rules of inference. In order to assess whether a deductive inference is valid, the reasoner mentally constructs models that are compatible with the premises of the inference. The conclusion is then tested by looking at these models and trying to find a counterexample in which the conclusion is false. The inference is valid if no such counterexample can be found. In order to reduce cognitive labor, only such models are represented in which the premises are true. Because of this, the evaluation of some forms of inference only requires the construction of very few models while for others, many different models are necessary. In the latter case, the additional cognitive labor required makes deductive reasoning more error-prone, thereby explaining the increased rate of error observed. This theory can also explain why some errors depend on the content rather than the form of the argument. For example, when the conclusion of an argument is very plausible, the subjects may lack the motivation to search for counterexamples among the constructed models.
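The counterexample-search idea at the heart of this account can be illustrated in a purely logical (not psychological) way: a sketch that enumerates truth assignments, the formal analogue of constructing all models, and checks whether any of them makes the premises true and the conclusion false. The helper names below are invented for the illustration.

```python
from itertools import product

# Minimal sketch: an inference is valid iff no assignment (model) makes
# all premises true while the conclusion is false.
def is_valid(premises, conclusion, variables):
    for values in product([True, False], repeat=len(variables)):
        model = dict(zip(variables, values))
        if all(p(model) for p in premises) and not conclusion(model):
            return False  # counterexample found
    return True

# Example: modus tollens  (p -> q, not q  |=  not p)
premises = [lambda m: (not m["p"]) or m["q"],  # p -> q
            lambda m: not m["q"]]              # not q
conclusion = lambda m: not m["p"]              # not p
print(is_valid(premises, conclusion, ["p", "q"]))  # True
```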
Both mental logic theories and mental model theories assume that there is one general-purpose reasoning mechanism that applies to all forms of deductive reasoning. But there are also alternative accounts that posit various different special-purpose reasoning mechanisms for different contents and contexts. In this sense, it has been claimed that humans possess a special mechanism for permissions and obligations, specifically for detecting cheating in social exchanges. This can be used to explain why humans are often more successful in drawing valid inferences if the contents involve human behavior in relation to social norms. Another example is the so-called dual-process theory. This theory posits that there are two distinct cognitive systems responsible for reasoning. Their interrelation can be used to explain commonly observed biases in deductive reasoning. System 1 is the older system in terms of evolution. It is based on associative learning and happens fast and automatically without demanding many cognitive resources. System 2, on the other hand, is of more recent evolutionary origin. It is slow and cognitively demanding, but also more flexible and under deliberate control. The dual-process theory posits that system 1 is the default system guiding most of our everyday reasoning in a pragmatic way. But for particularly difficult problems on the logical level, system 2 is employed. System 2 is mostly responsible for deductive reasoning.
==== Intelligence ====
The ability of deductive reasoning is an important aspect of intelligence and many tests of intelligence include problems that call for deductive inferences. Because of this relation to intelligence, deduction is highly relevant to psychology and the cognitive sciences. But the subject of deductive reasoning is also pertinent to the computer sciences, for example, in the creation of artificial intelligence.
=== Epistemology ===
Deductive reasoning plays an important role in epistemology. Epistemology is concerned with the question of justification, i.e. to point out which beliefs are justified and why. Deductive inferences are able to transfer the justification of the premises onto the conclusion. So while logic is interested in the truth-preserving nature of deduction, epistemology is interested in the justification-preserving nature of deduction. There are different theories trying to explain why deductive reasoning is justification-preserving. According to reliabilism, this is the case because deductions are truth-preserving: they are reliable processes that ensure a true conclusion given the premises are true. Some theorists hold that the thinker has to have explicit awareness of the truth-preserving nature of the inference for the justification to be transferred from the premises to the conclusion. One consequence of such a view is that, for young children, this deductive transference does not take place since they lack this specific awareness.
=== Probability logic ===
Probability logic is interested in how the probability of the premises of an argument affects the probability of its conclusion. It differs from classical logic, which assumes that propositions are either true or false but does not take into consideration the probability or certainty that a proposition is true or false.
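One commonly cited result in this area, often attributed to Ernest Adams, gives a precise sense to this dependence: for a deductively valid argument, the uncertainty of the conclusion cannot exceed the sum of the uncertainties of the premises, where the uncertainty of a proposition X is u(X) = 1 − Pr(X). In symbols, for premises P₁, ..., Pₙ and conclusion C: u(C) ≤ u(P₁) + ... + u(Pₙ). For example, if each of two premises has probability 0.9, a valid deduction guarantees the conclusion a probability of at least 0.8.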
== History ==
Aristotle, a Greek philosopher, started documenting deductive reasoning in the 4th century BC. René Descartes, in his book Discourse on Method, refined the idea for the Scientific Revolution. Developing four rules to follow for proving an idea deductively, Descartes laid the foundation for the deductive portion of the scientific method. Descartes' background in geometry and mathematics influenced his ideas on truth and reasoning, leading him to develop a system of general reasoning now used for most mathematical reasoning. Similar to postulates, Descartes believed that ideas could be self-evident and that reasoning alone must prove that observations are reliable. These ideas also laid the foundations of rationalism.
== Related concepts and theories ==
=== Deductivism ===
Deductivism is a philosophical position that gives primacy to deductive reasoning or arguments over their non-deductive counterparts. It is often understood as the evaluative claim that only deductive inferences are good or correct inferences. This theory would have wide-reaching consequences for various fields since it implies that the rules of deduction are "the only acceptable standard of evidence". This way, the rationality or correctness of the different forms of inductive reasoning is denied. Some forms of deductivism express this in terms of degrees of reasonableness or probability. Inductive inferences are usually seen as providing a certain degree of support for their conclusion: they make it more likely that their conclusion is true. Deductivism states that such inferences are not rational: the premises either ensure their conclusion, as in deductive reasoning, or they do not provide any support at all.
One motivation for deductivism is the problem of induction introduced by David Hume. It consists in the challenge of explaining how or whether inductive inferences based on past experiences support conclusions about future events. For example, a chicken comes to expect, based on all its past experiences, that the person entering its coop is going to feed it, until one day the person "at last wrings its neck instead". According to Karl Popper's falsificationism, deductive reasoning alone is sufficient. This is due to its truth-preserving nature: a theory can be falsified if one of its deductive consequences is false. So while inductive reasoning does not offer positive evidence for a theory, the theory still remains a viable competitor until falsified by empirical observation. In this sense, deduction alone is sufficient for discriminating between competing hypotheses about what is the case. Hypothetico-deductivism is a closely related scientific method, according to which science progresses by formulating hypotheses and then aims to falsify them by trying to make observations that run counter to their deductive consequences.
=== Natural deduction ===
The term "natural deduction" refers to a class of proof systems based on self-evident rules of inference. The first systems of natural deduction were developed by Gerhard Gentzen and Stanislaw Jaskowski in the 1930s. The core motivation was to give a simple presentation of deductive reasoning that closely mirrors how reasoning actually takes place. In this sense, natural deduction stands in contrast to other less intuitive proof systems, such as Hilbert-style deductive systems, which employ axiom schemes to express logical truths. Natural deduction, on the other hand, avoids axioms schemes by including many different rules of inference that can be used to formulate proofs. These rules of inference express how logical constants behave. They are often divided into introduction rules and elimination rules. Introduction rules specify under which conditions a logical constant may be introduced into a new sentence of the proof. For example, the introduction rule for the logical constant "
∧
{\displaystyle \land }
" (and) is "
A
,
B
(
A
∧
B
)
{\displaystyle {\frac {A,B}{(A\land B)}}}
". It expresses that, given the premises "
A
{\displaystyle A}
" and "
B
{\displaystyle B}
" individually, one may draw the conclusion "
A
∧
B
{\displaystyle A\land B}
" and thereby include it in one's proof. This way, the symbol "
∧
{\displaystyle \land }
" is introduced into the proof. The removal of this symbol is governed by other rules of inference, such as the elimination rule "
(
A
∧
B
)
A
{\displaystyle {\frac {(A\land B)}{A}}}
", which states that one may deduce the sentence "
A
{\displaystyle A}
" from the premise "
(
A
∧
B
)
{\displaystyle (A\land B)}
". Similar introduction and elimination rules are given for other logical constants, such as the propositional operator "
¬
{\displaystyle \lnot }
", the propositional connectives "
∨
{\displaystyle \lor }
" and "
→
{\displaystyle \rightarrow }
", and the quantifiers "
∃
{\displaystyle \exists }
" and "
∀
{\displaystyle \forall }
".
The focus on rules of inference instead of axiom schemes is an important feature of natural deduction. But there is no general agreement on how natural deduction is to be defined. Some theorists hold that all proof systems with this feature are forms of natural deduction. This would include various forms of sequent calculi or tableau calculi. But other theorists use the term in a narrower sense, for example, to refer to the proof systems developed by Gentzen and Jaskowski. Because of its simplicity, natural deduction is often used for teaching logic to students.
=== Geometrical method ===
The geometrical method is a method of philosophy based on deductive reasoning. It starts from a small set of self-evident axioms and tries to build a comprehensive logical system based only on deductive inferences from these first axioms. It was initially formulated by Baruch Spinoza and came to prominence in various rationalist philosophical systems in the modern era. It gets its name from the forms of mathematical demonstration found in traditional geometry, which are usually based on axioms, definitions, and inferred theorems. An important motivation of the geometrical method is to repudiate philosophical skepticism by grounding one's philosophical system on absolutely certain axioms. Deductive reasoning is central to this endeavor because of its necessarily truth-preserving nature. This way, the certainty initially invested only in the axioms is transferred to all parts of the philosophical system.
One recurrent criticism of philosophical systems built using the geometrical method is that their initial axioms are not as self-evident or certain as their defenders proclaim. This problem lies beyond the deductive reasoning itself, which only ensures that the conclusion is true if the premises are true, but not that the premises themselves are true. For example, Spinoza's philosophical system has been criticized this way based on objections raised against the causal axiom, i.e. that "the knowledge of an effect depends on and involves knowledge of its cause". A different criticism targets not the premises but the reasoning itself, which may at times implicitly assume premises that are themselves not self-evident.
== See also ==
== Notes and references ==
== Further reading ==
Vincent F. Hendricks, Thought 2 Talk: A Crash Course in Reflection and Expression, New York: Automatic Press / VIP, 2005, ISBN 87-991013-7-8
Philip Johnson-Laird, Ruth M. J. Byrne, Deduction, Psychology Press 1991, ISBN 978-0-86377-149-1
Zarefsky, David, Argumentation: The Study of Effective Reasoning Parts I and II, The Teaching Company 2002
Bullemore, Thomas. The Pragmatic Problem of Induction.
== External links ==
Deductive reasoning at PhilPapers
Deductive reasoning at the Indiana Philosophy Ontology Project
"Deductive reasoning". Internet Encyclopedia of Philosophy. | Wikipedia/Deductive_inference |
In the philosophy of science, the special sciences are all sciences other than fundamental physics, including, for example, chemistry, biology, and neuroscience. The distinction reflects a view that "all events which fall under the laws of any science are physical events and hence fall under the laws of physics".
In this view, all sciences except fundamental physics are special sciences. However, the legitimacy of this view, and the status of other sciences and their relation to physics, are unresolved matters. Jerry Fodor, a key writer on this subject, refers to "many philosophers" who hold this position, but argues against them for the strong autonomy of the special sciences, concluding that they are not even in principle reducible to physics. As such, Fodor has often been credited with having helped turn the tide against reductionist physicalism.
== See also ==
Emergence – Unpredictable phenomenon in complex systems
Emergentism – Philosophical belief in emergence
Multiple realizability – Thesis in the philosophy of mind
Reductionism – Philosophical view explaining systems in terms of smaller parts
Supervenience – Relation between sets of properties or facts
The central science – Term often associated with chemistry
Unity of science – Theory in the philosophy of science
== References ==
Philosophy of language refers to the philosophical study of the nature of language. It investigates the relationship between language, language users, and the world. Investigations may include inquiry into the nature of meaning, intentionality, reference, the constitution of sentences, concepts, learning, and thought.
Gottlob Frege and Bertrand Russell were pivotal figures in analytic philosophy's "linguistic turn". These writers were followed by Ludwig Wittgenstein (Tractatus Logico-Philosophicus), the Vienna Circle, logical positivists, and Willard Van Orman Quine.
== History ==
=== Ancient philosophy ===
In the West, inquiry into language stretches back to the 5th century BC with philosophers such as Socrates, Plato, Aristotle, and the Stoics. Linguistic speculation predated systematic descriptions of grammar which emerged c. the 5th century BC in India and c. the 3rd century BC in Greece.
In the dialogue Cratylus, Plato considered the question of whether the names of things were determined by convention or by nature. He criticized conventionalism because it led to the bizarre consequence that anything can be conventionally denominated by any name. Hence, it cannot account for the correct or incorrect application of a name. He claimed that there was a natural correctness to names. To do this, he pointed out that compound words and phrases have a range of correctness. He also argued that primitive names had a natural correctness, because each phoneme represented basic ideas or sentiments. For example, for Plato the letter l and its sound represented the idea of softness. However, by the end of Cratylus, he had admitted that some social conventions were also involved, and that there were faults in the idea that phonemes had individual meanings. Plato is often considered a proponent of extreme realism.
Aristotle interested himself with issues of logic, categories, and the creation of meaning. He separated all things into categories of species and genus. He thought that the meaning of a predicate was established through an abstraction of the similarities between various individual things. This theory later came to be called nominalism. However, since Aristotle took these similarities to be constituted by a real commonality of form, he is more often considered a proponent of moderate realism.
The Stoics made important contributions to the analysis of grammar, distinguishing five parts of speech: nouns, verbs, appellatives (names or epithets), conjunctions and articles. They also developed a sophisticated doctrine of the lektón associated with each sign of a language, but distinct from both the sign itself and the thing to which it refers. This lektón was the meaning or sense of every term. The complete lektón of a sentence is what we would now call its proposition. Only propositions were considered truth-bearing—meaning they could be considered true or false—while sentences were simply their vehicles of expression. Different lektá could also express things besides propositions, such as commands, questions and exclamations.
=== Medieval philosophy ===
Medieval philosophers were greatly interested in the subtleties of language and its usage. For many scholastics, this interest was provoked by the necessity of translating Greek texts into Latin. There were several noteworthy philosophers of language in the medieval period. According to Peter J. King (though this has been disputed), Peter Abelard anticipated the modern theories of reference. Also, William of Ockham's Summa Logicae brought forward one of the first serious proposals for codifying a mental language.
The scholastics of the high medieval period, such as Ockham and John Duns Scotus, considered logic to be a scientia sermocinalis (science of language). The result of their studies was the elaboration of linguistic-philosophical notions whose complexity and subtlety has only recently come to be appreciated. Many of the most interesting problems of modern philosophy of language were anticipated by medieval thinkers. The phenomena of vagueness and ambiguity were analyzed intensely, and this led to an increasing interest in problems related to the use of syncategorematic words, such as and, or, not, if, and every. The study of categorematic words (or terms) and their properties was also developed greatly. One of the major developments of the scholastics in this area was the doctrine of the suppositio. The suppositio of a term is the interpretation that is given of it in a specific context. It can be proper or improper (as when it is used in metaphor, metonym, and other figures of speech). A proper suppositio, in turn, can be either formal or material according to whether it refers to its usual non-linguistic referent (as in "Charles is a man"), or to itself as a linguistic entity (as in "'Charles' has seven letters"). Such a classification scheme is the precursor of modern distinctions between use and mention, and between language and metalanguage.
There is a tradition called speculative grammar which existed from the 11th to the 13th century. Leading scholars included Martin of Dacia and Thomas of Erfurt (see Modistae).
=== Modern philosophy ===
Linguists of the Renaissance and Baroque periods such as Johannes Goropius Becanus, Athanasius Kircher and John Wilkins were infatuated with the idea of a philosophical language reversing the confusion of tongues, influenced by the gradual discovery of Chinese characters and Egyptian hieroglyphs (Hieroglyphica). This thought parallels the idea that there might be a universal language of music.
European scholarship began to absorb the Indian linguistic tradition only from the mid-18th century, pioneered by Jean François Pons and Henry Thomas Colebrooke (the editio princeps of Varadarāja, a 17th-century Sanskrit grammarian, dating to 1849).
In the early 19th century, the Danish philosopher Søren Kierkegaard insisted that language should play a larger role in Western philosophy. He argued that philosophy has not sufficiently focused on the role language plays in cognition and that future philosophy ought to proceed with a conscious focus on language:
If the claim of philosophers to be unbiased were all it pretends to be, it would also have to take account of language and its whole significance in relation to speculative philosophy ... Language is partly something originally given, partly that which develops freely. And just as the individual can never reach the point at which he becomes absolutely independent ... so too with language.
=== Contemporary philosophy ===
The phrase "linguistic turn" was used to describe the noteworthy emphasis that contemporary philosophers put upon language.
Language began to play a central role in Western philosophy in the early 20th century. One of the central figures involved in this development was the German philosopher Gottlob Frege, whose work on philosophical logic and the philosophy of language in the late 19th century influenced the work of 20th-century analytic philosophers Bertrand Russell and Ludwig Wittgenstein. The philosophy of language became so pervasive that for a time, in analytic philosophy circles, philosophy as a whole was understood to be a matter of philosophy of language.
In continental philosophy, the foundational work in the field was Ferdinand de Saussure's Cours de linguistique générale, published posthumously in 1916.
== Major topics and subfields ==
=== Meaning ===
The topic that has received the most attention in the philosophy of language has been the nature of meaning, to explain what "meaning" is, and what we mean when we talk about meaning. Within this area, issues include: the nature of synonymy, the origins of meaning itself, our apprehension of meaning, and the nature of composition (the question of how meaningful units of language are composed of smaller meaningful parts, and how the meaning of the whole is derived from the meaning of its parts).
There have been several distinctive explanations of what a linguistic "meaning" is. Each has been associated with its own body of literature.
The ideational theory of meaning, most commonly associated with the British empiricist John Locke, claims that meanings are mental representations provoked by signs. Although this view of meaning has been beset by a number of problems from the beginning (see the main article for details), interest in it has been renewed by some contemporary theorists under the guise of semantic internalism.
The truth-conditional theory of meaning holds meaning to be the conditions under which an expression may be true or false. This tradition goes back at least to Frege and is associated with a rich body of modern work, spearheaded by philosophers like Alfred Tarski and Donald Davidson. (See also Wittgenstein's picture theory of language.)
The use theory of meaning, most commonly associated with the later Wittgenstein, helped inaugurate the idea of "meaning as use", and a communitarian view of language. Wittgenstein was interested in the way in which the communities use language, and how far it can be taken. It is also associated with P. F. Strawson, John Searle, Robert Brandom, and others.
The inferentialist theory of meaning, the view that the meaning of an expression is derived from the inferential relations that it has with other expressions. This view is thought to be descended from the use theory of meaning, and has been most notably defended by Wilfrid Sellars and Robert Brandom.
The direct reference theory of meaning, the view that the meaning of a word or expression is what it points out in the world. While views of this kind have been widely criticized regarding the use of language in general, John Stuart Mill defended a form of this view, and Saul Kripke and Ruth Barcan Marcus have both defended the application of direct reference theory to proper names.
The semantic externalist theory of meaning, according to which meaning is not a purely psychological phenomenon, because it is determined, at least in part, by features of one's environment. There are two broad subspecies of externalism: social and environmental. The first is most closely associated with Tyler Burge and the second with Hilary Putnam, Saul Kripke and others.
The verificationist theory of meaning is generally associated with the early 20th century movement of logical positivism. The traditional formulation of such a theory is that the meaning of a sentence is its method of verification or falsification. In this form, the thesis was abandoned after the acceptance by most philosophers of the Duhem–Quine thesis of confirmation holism after the publication of Quine's "Two Dogmas of Empiricism". However, Michael Dummett has advocated a modified form of verificationism since the 1970s. In this version, the comprehension (and hence meaning) of a sentence consists in the hearer's ability to recognize the demonstration (mathematical, empirical or other) of the truth of the sentence.
Pragmatic theories of meaning include any theory in which the meaning (or understanding) of a sentence is determined by the consequences of its application. Dummett attributes such a theory of meaning to Charles Sanders Peirce and other early 20th century American pragmatists.
Psychological theories of meaning, which focus on the intentions of a speaker in determining the meaning of an utterance. One notable proponent of such a view was Paul Grice, whose views also account for non-linguistic meaning (i.e., meaning as conveyed by body language, meanings as consequences, etc.).
=== Reference ===
Investigations into how language interacts with the world are called theories of reference. Gottlob Frege was an advocate of a mediated reference theory. Frege divided the semantic content of every expression, including sentences, into two components: sense and reference. The sense of a sentence is the thought that it expresses. Such a thought is abstract, universal and objective. The sense of any sub-sentential expression consists in its contribution to the thought that its embedding sentence expresses. Senses determine reference and are also the modes of presentation of the objects to which expressions refer. Referents are the objects in the world that words pick out. The senses of sentences are thoughts, while their referents are truth values (true or false). The referents of sentences embedded in propositional attitude ascriptions and other opaque contexts are their usual senses.
Bertrand Russell, in his later writings and for reasons related to his theory of acquaintance in epistemology, held that the only directly referential expressions are what he called "logically proper names". Logically proper names are such terms as I, now, here and other indexicals. He viewed proper names of the sort described above as "abbreviated definite descriptions" (see Theory of descriptions). Hence Joseph R. Biden may be an abbreviation for "a past President of the United States and husband of Jill Biden". Definite descriptions are denoting phrases (see "On Denoting") which are analyzed by Russell into existentially quantified logical constructions. Such phrases denote in the sense that there is an object that satisfies the description. However, such objects are not to be considered meaningful on their own, but have meaning only in the proposition expressed by the sentences of which they are a part. Hence, they are not directly referential in the same way as logically proper names, for Russell.
On Frege's account, any referring expression has a sense as well as a referent. Such a "mediated reference" view has certain theoretical advantages over Mill's view. For example, co-referential names, such as Samuel Clemens and Mark Twain, cause problems for a directly referential view because it is possible for someone to hear "Mark Twain is Samuel Clemens" and be surprised – thus, their cognitive content seems different.
Despite the differences between the views of Frege and Russell, they are generally lumped together as descriptivists about proper names. Such descriptivism was criticized in Saul Kripke's Naming and Necessity.
Kripke put forth what has come to be known as "the modal argument" (or "argument from rigidity"). Consider the name Aristotle and the descriptions "the greatest student of Plato", "the founder of logic" and "the teacher of Alexander". Aristotle obviously satisfies all of the descriptions (and many of the others we commonly associate with him), but it is not necessarily true that if Aristotle existed then Aristotle was any one, or all, of these descriptions. Aristotle may well have existed without doing any single one of the things for which he is known to posterity. He may have existed and not have become known to posterity at all or he may have died in infancy. Suppose that Aristotle is associated by Mary with the description "the last great philosopher of antiquity" and (the actual) Aristotle died in infancy. Then Mary's description would seem to refer to Plato. But this is deeply counterintuitive. Hence, names are rigid designators, according to Kripke. That is, they refer to the same individual in every possible world in which that individual exists. In the same work, Kripke articulated several other arguments against "Frege–Russell" descriptivism (see also Kripke's causal theory of reference).
The whole philosophical enterprise of studying reference has been critiqued by linguist Noam Chomsky in various works.
=== Composition and parts ===
It has long been known that there are different parts of speech. One part of the common sentence is the lexical word, a class comprising nouns, verbs, and adjectives. A major question in the field – perhaps the single most important question for formalist and structuralist thinkers – is how the meaning of a sentence emerges from its parts.
Many aspects of the problem of the composition of sentences are addressed in the field of linguistics of syntax. Philosophical semantics tends to focus on the principle of compositionality to explain the relationship between meaningful parts and whole sentences. The principle of compositionality asserts that a sentence can be understood on the basis of the meaning of the parts of the sentence (i.e., words, morphemes) along with an understanding of its structure (i.e., syntax, logic). Further, syntactic propositions are arranged into discourse or narrative structures, which also encode meanings through pragmatics like temporal relations and pronominals.
It is possible to use the concept of functions to describe more than just how lexical meanings work: they can also be used to describe the meaning of a sentence. In the sentence "The horse is red", the sentence as a whole can be considered the product of a propositional function applied to "the horse". A propositional function is an operation of language that takes an entity (in this case, the horse) as an input and outputs a semantic fact (i.e., the proposition that is represented by "The horse is red"). In other words, a propositional function is like an algorithm. The meaning of "red" in this case is whatever takes the entity "the horse" and turns it into the statement, "The horse is red."
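A minimal sketch may help make the analogy with algorithms concrete; the representation of "worlds" as dictionaries and the function names below are assumptions made for the example only.

```python
# Hypothetical sketch: a propositional function maps an entity to a proposition,
# here modeled as a callable that can be evaluated for truth in a "world".
def is_red(entity):
    def proposition(world):
        return entity in world.get("red_things", set())
    return proposition

world = {"red_things": {"the horse"}}
horse_is_red = is_red("the horse")   # the proposition expressed by "The horse is red"
print(horse_is_red(world))           # True in this world
```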
Linguists have developed at least two general methods of understanding the relationship between the parts of a linguistic string and how it is put together: syntactic and semantic trees. Syntactic trees draw upon the words of a sentence with the grammar of the sentence in mind; semantic trees focus upon the role of the meaning of the words and how those meanings combine to provide insight onto the genesis of semantic facts.
=== Mind and language ===
==== Innateness and learning ====
Some of the major issues at the intersection of philosophy of language and philosophy of mind are also dealt with in modern psycholinguistics. Some important questions concern how much of language is innate, whether language acquisition is a special faculty in the mind, and what the connection is between thought and language.
There are three general perspectives on the issue of language learning. The first is the behaviorist perspective, which dictates that not only is the solid bulk of language learned, but it is learned via conditioning. The second is the hypothesis testing perspective, which understands the child's learning of syntactic rules and meanings to involve the postulation and testing of hypotheses, through the use of the general faculty of intelligence. The final candidate for explanation is the innatist perspective, which states that at least some of the syntactic settings are innate and hardwired, based on certain modules of the mind.
There are varying notions of the structure of the brain when it comes to language. Connectionist models emphasize the idea that a person's lexicon and their thoughts operate in a kind of distributed, associative network. Nativist models assert that there are specialized devices in the brain that are dedicated to language acquisition. Computation models emphasize the notion of a representational language of thought and the logic-like, computational processing that the mind performs over them. Emergentist models focus on the notion that natural faculties are a complex system that emerge from simpler biological parts. Reductionist models attempt to explain higher-level mental processes in terms of the basic low-level neurophysiological activity.
=== Communication ===
Firstly, this field of study seeks to better understand what speakers and listeners do with language in communication, and how it is used socially. Specific interests include the topics of language learning, language creation, and speech acts.
Secondly, the question of how language relates to the minds of both the speaker and the interpreter is investigated. Of specific interest is the grounds for successful translation of words and concepts into their equivalents in another language.
==== Language and thought ====
An important problem which touches both philosophy of language and philosophy of mind is to what extent language influences thought and vice versa. There have been a number of different perspectives on this issue, each offering a number of insights and suggestions.
Linguists Sapir and Whorf suggested that language limited the extent to which members of a "linguistic community" can think about certain subjects (a hypothesis paralleled in George Orwell's novel Nineteen Eighty-Four). In other words, language was analytically prior to thought. Philosopher Michael Dummett is also a proponent of the "language-first" viewpoint.
The stark opposite to the Sapir–Whorf position is the notion that thought (or, more broadly, mental content) has priority over language. The "knowledge-first" position can be found, for instance, in the work of Paul Grice. Further, this view is closely associated with Jerry Fodor and his language of thought hypothesis. According to his argument, spoken and written language derive their intentionality and meaning from an internal language encoded in the mind. The main argument in favor of such a view is that the structure of thoughts and the structure of language seem to share a compositional, systematic character. Another argument is that it is difficult to explain how signs and symbols on paper can represent anything meaningful unless some sort of meaning is infused into them by the contents of the mind. One of the main arguments against is that such levels of language can lead to an infinite regress. In any case, many philosophers of mind and language, such as Ruth Millikan, Fred Dretske and Fodor, have recently turned their attention to explaining the meanings of mental contents and states directly.
Another tradition of philosophers has attempted to show that language and thought are coextensive – that there is no way of explaining one without the other. Donald Davidson, in his essay "Thought and Talk", argued that the notion of belief could only arise as a product of public linguistic interaction. Daniel Dennett holds a similar interpretationist view of propositional attitudes. To an extent, the theoretical underpinnings to cognitive semantics (including the notion of semantic framing) suggest the influence of language upon thought. However, the same tradition views meaning and grammar as a function of conceptualization, making it difficult to assess in any straightforward way.
Some thinkers, like the ancient sophist Gorgias, have questioned whether or not language was capable of capturing thought at all.
...speech can never exactly represent perceptibles, since it is different from them, and perceptibles are apprehended each by the one kind of organ, speech by another. Hence, since the objects of sight cannot be presented to any other organ but sight, and the different sense-organs cannot give their information to one another, similarly speech cannot give any information about perceptibles. Therefore, if anything exists and is comprehended, it is incommunicable.
There are studies suggesting that languages shape how people understand causality. Some of them were performed by Lera Boroditsky. For example, English speakers tend to say things like "John broke the vase" even for accidents. However, Spanish or Japanese speakers would be more likely to say "the vase broke itself". In studies conducted by Caitlin Fausey at Stanford University, speakers of English, Spanish and Japanese watched videos of two people popping balloons, breaking eggs and spilling drinks either intentionally or accidentally. Later everyone was asked whether they could remember who did what. Spanish and Japanese speakers did not remember the agents of accidental events as well as did English speakers.
Russian speakers, who make an extra distinction between light and dark blue in their language, are better able to visually discriminate shades of blue. The Piraha, a tribe in Brazil, whose language has only terms like few and many instead of numerals, are not able to keep track of exact quantities.
In one study German and Spanish speakers were asked to describe objects having opposite gender assignment in those two languages. The descriptions they gave differed in a way predicted by grammatical gender. For example, when asked to describe a "key"—a word that is masculine in German and feminine in Spanish—the German speakers were more likely to use words like "hard", "heavy", "jagged", "metal", "serrated" and "useful" whereas Spanish speakers were more likely to say "golden", "intricate", "little", "lovely", "shiny" and "tiny". To describe a "bridge", which is feminine in German and masculine in Spanish, the German speakers said "beautiful", "elegant", "fragile", "peaceful", "pretty" and "slender", and the Spanish speakers said "big", "dangerous", "long", "strong", "sturdy" and "towering". This was the case even though all testing was done in English, a language without grammatical gender.
In a series of studies conducted by Gary Lupyan, people were asked to look at a series of images of imaginary aliens. Whether each alien was friendly or hostile was determined by certain subtle features but participants were not told what these were. They had to guess whether each alien was friendly or hostile, and after each response they were told if they were correct or not, helping them learn the subtle cues that distinguished friend from foe. A quarter of the participants were told in advance that the friendly aliens were called "leebish" and the hostile ones "grecious", while another quarter were told the opposite. For the rest, the aliens remained nameless. It was found that participants who were given names for the aliens learned to categorize the aliens far more quickly, reaching 80 per cent accuracy in less than half the time taken by those not told the names. By the end of the test, those told the names could correctly categorize 88 per cent of aliens, compared to just 80 per cent for the rest. It was concluded that naming objects helps us categorize and memorize them.
In another series of experiments, a group of people was asked to view furniture from an IKEA catalog. Half the time they were asked to label the object – whether it was a chair or lamp, for example – while the rest of the time they had to say whether or not they liked it. It was found that when asked to label items, people were later less likely to recall the specific details of products, such as whether a chair had arms or not. It was concluded that labeling objects helps our minds build a prototype of the typical object in the group at the expense of individual features.
=== Social interaction and language ===
A common claim is that language is governed by social conventions. Questions inevitably arise on surrounding topics. One question regards what a convention exactly is and how it is studied, and a second regards the extent to which conventions even matter in the study of language. David Kellogg Lewis proposed a worthy reply to the first question by expounding the view that a convention is a "rationally self-perpetuating regularity in behavior". However, this view seems to compete to some extent with the Gricean view of speaker's meaning, requiring either one (or both) to be weakened if both are to be taken as true.
Some have questioned whether or not conventions are relevant to the study of meaning at all. Noam Chomsky proposed that the study of language could be done in terms of the I-Language, or internal language of persons. If this is so, then it undermines the pursuit of explanations in terms of conventions, and relegates such explanations to the domain of metasemantics. Metasemantics is a term used by philosopher of language Robert Stainton to describe all those fields that attempt to explain how semantic facts arise. One fruitful source of research involves investigation into the social conditions that give rise to, or are associated with, meanings and languages. Etymology (the study of the origins of words) and stylistics (philosophical argumentation over what makes "good grammar", relative to a particular language) are two other examples of fields that are taken to be metasemantic.
Many separate (but related) fields have investigated the topic of linguistic convention within their own research paradigms. The presumptions that prop up each theoretical view are of interest to the philosopher of language. For instance, one of the major fields of sociology, symbolic interactionism, is based on the insight that human social organization is based almost entirely on the use of meanings. In consequence, any explanation of a social structure (like an institution) would need to account for the shared meanings which create and sustain the structure.
Rhetoric is the study of the particular words that people use to achieve the proper emotional and rational effect in the listener, be it to persuade, provoke, endear, or teach. Some relevant applications of the field include the examination of propaganda and didacticism, the examination of the purposes of swearing and pejoratives (especially how it influences the behaviors of others, and defines relationships), or the effects of gendered language. It can also be used to study linguistic transparency (or speaking in an accessible manner), as well as performative utterances and the various tasks that language can perform (called "speech acts"). It also has applications to the study and interpretation of law, and helps give insight to the logical concept of the domain of discourse.
Literary theory is a discipline that some literary theorists claim overlaps with the philosophy of language. It emphasizes the methods that readers and critics use in understanding a text. This field, an outgrowth of the study of how to properly interpret messages, is closely tied to the ancient discipline of hermeneutics.
=== Truth ===
Finally, philosophers of language investigate how language and meaning relate to truth and the reality being referred to. They tend to be less interested in which sentences are actually true, and more in what kinds of meanings can be true or false. A truth-oriented philosopher of language might wonder whether or not a meaningless sentence can be true or false, or whether or not sentences can express propositions about things that do not exist, rather than the way sentences are used.
== Problems in the philosophy of language ==
=== Nature of language ===
In the philosophical tradition stemming from the Ancient Greeks, such as Plato and Aristotle, language is seen as a tool for making statements about reality by means of predication; e.g. "Man is a rational animal", where Man is the subject and is a rational animal is the predicate, which expresses a property of the subject. Such structures also constitute the syntactic basis of syllogism, which remained the standard model of formal logic until the early 20th century, when it was replaced with predicate logic. In linguistics and philosophy of language, the classical model survived in the Middle Ages, and the link between Aristotelian philosophy of science and linguistics was elaborated by Thomas of Erfurt's Modistae grammar (c. 1305), which gives an example of the analysis of the transitive sentence: "Plato strikes Socrates", where Socrates is the object and part of the predicate.
The social and evolutionary aspects of language were discussed during the classical and mediaeval periods. Plato's dialogue Cratylus investigates the iconicity of words, arguing that words are made by "wordsmiths" and selected by those who need the words, and that the study of language is external to the philosophical objective of studying ideas. Age-of-Enlightenment thinkers accommodated the classical model with a Christian worldview, arguing that God created Man social and rational, and, out of these properties, Man created his own cultural habits including language. In this tradition, the logic of the subject-predicate structure forms a general, or 'universal' grammar, which governs thinking and underpins all languages. Variation between languages was investigated in the Port-Royal Grammar of Arnauld and Lancelot, among others, who described it as accidental and separate from the logical requirements of thought and language.
The classical view was overturned in the early 19th century by the advocates of German romanticism. Humboldt and his contemporaries questioned the existence of a universal inner form of thought. They argued that, since thinking is verbal, language must be the prerequisite for thought. Therefore, every nation has its own unique way of thinking, a worldview, which has evolved with the linguistic history of the nation. Diversity became emphasized with a focus on the uncontrollable sociohistorical construction of language. Influential romantic accounts include Grimm's sound laws of linguistic evolution, Schleicher's "Darwinian" species-language analogy, the Völkerpsychologie accounts of language by Steinthal and Wundt, and Saussure's semiology, a dyadic model of semiotics, i.e., language as a sign system with its own inner logic, separated from physical reality.
In the early 20th century, logical grammar was defended by Frege and Husserl. Husserl's 'pure logical grammar' draws from 17th-century rational universal grammar, proposing a formal semantics that links the structures of physical reality (e.g., "This paper is white") with the structures of the mind, meaning, and the surface form of natural languages. Husserl's treatise was, however, rejected in general linguistics. Instead, linguists opted for Chomsky's theory of universal grammar as an innate biological structure that generates syntax in a formalistic fashion, i.e., irrespective of meaning.
Many philosophers continue to hold the view that language is a logically based tool of expressing the structures of reality by means of predicate-argument structure. Proponents include, with different nuances, Russell, Wittgenstein, Sellars, Davidson, Putnam, and Searle. Attempts to revive logical formal semantics as a basis of linguistics followed, e.g., the Montague grammar. Despite resistance from linguists including Chomsky and Lakoff, formal semantics was established in the late twentieth century. However, its influence has been mostly limited to computational linguistics, with little impact on general linguistics.
The incompatibility of Chomsky's innate grammar with genetics and neuropsychology gave rise to new psychologically and biologically oriented theories of language in the 1980s, and these have gained influence in linguistics and cognitive science in the 21st century. Examples include Lakoff's conceptual metaphor, which argues that language arises automatically from visual and other sensory input, and different models inspired by Dawkins's memetics, a neo-Darwinian model of linguistic units as the units of natural selection. These include cognitive grammar, construction grammar, and usage-based linguistics.
=== Problem of universals and composition ===
One debate that has captured the interest of many philosophers is the debate over the meaning of universals. It might be asked, for example, what it is that the word rocks represents when people say it. Two different answers have emerged to this question. Some have said that the expression stands for some real, abstract universal out in the world called "rocks". Others have said that the word stands for some collection of particular, individual rocks that merely share a common name. The former position has been called philosophical realism, and the latter nominalism.
The issue here can be explicated in examination of the proposition "Socrates is a man".
From the realist's perspective, the connection between S and M is a connection between two abstract entities. There is an entity, "man", and an entity, "Socrates". These two things connect in some way or overlap.
From a nominalist's perspective, the connection between S and M is the connection between a particular entity (Socrates) and a vast collection of particular things (men). To say that Socrates is a man is to say that Socrates is a part of the class of "men". Another perspective is to consider "man" to be a property of the entity, "Socrates".
There is a third way, between nominalism and (extreme) realism, usually called "moderate realism" and attributed to Aristotle and Thomas Aquinas. Moderate realists hold that "man" refers to a real essence or form that is really present and identical in Socrates and all other men, but "man" does not exist as a separate and distinct entity. This is a realist position, because "man" is real, insofar as it really exists in all men; but it is a moderate realism, because "man" is not an entity separate from the men it informs.
=== Formal versus informal approaches ===
Another of the questions that has divided philosophers of language is the extent to which formal logic can be used as an effective tool in the analysis and understanding of natural languages. While most philosophers, including Gottlob Frege, Alfred Tarski and Rudolf Carnap, have been more or less skeptical about formalizing natural languages, many of them developed formal languages for use in the sciences or formalized parts of natural language for investigation. Some of the most prominent members of this tradition of formal semantics include Tarski, Carnap, Richard Montague and Donald Davidson.
On the other side of the divide, and especially prominent in the 1950s and '60s, were the so-called "ordinary language philosophers". Philosophers such as P. F. Strawson, John Langshaw Austin and Gilbert Ryle stressed the importance of studying natural language without regard to the truth-conditions of sentences and the references of terms. They did not believe that the social and practical dimensions of linguistic meaning could be captured by any attempts at formalization using the tools of logic. Logic is one thing and language is something entirely different. What is important is not expressions themselves but what people use them to do in communication.
Hence, Austin developed a theory of speech acts, which described the kinds of things which can be done with a sentence (assertion, command, inquiry, exclamation) in different contexts of use on different occasions. Strawson argued that the truth-table semantics of the logical connectives (e.g., "∧", "∨" and "→") do not capture the meanings of their natural language counterparts ("and", "or" and "if-then"). While the "ordinary language" movement basically died out in the 1970s, its influence was crucial to the development of the fields of speech-act theory and the study of pragmatics. Many of its ideas have been absorbed by theorists such as Kent Bach, Robert Brandom, Paul Horwich and Stephen Neale. In recent work, the division between semantics and pragmatics has become a lively topic of discussion at the interface of philosophy and linguistics, for instance in work by Sperber and Wilson, Carston and Levinson.
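To see concretely what the classical truth tables say, the sketch below prints the tables for ∧, ∨ and →; the rows in which a false antecedent makes "p → q" true are one standard illustration of the gap between the material conditional and the English "if-then" that ordinary language philosophers pointed to. The code is only an illustration of the classical tables, not of Strawson's own analysis.

```python
from itertools import product

# Classical truth tables for the connectives discussed above.
def conj(p, q): return p and q          # ∧
def disj(p, q): return p or q           # ∨
def cond(p, q): return (not p) or q     # → (material conditional)

print("p     q     p∧q   p∨q   p→q")
for p, q in product([True, False], repeat=2):
    print(f"{p!s:5} {q!s:5} {conj(p,q)!s:5} {disj(p,q)!s:5} {cond(p,q)!s:5}")
```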
While keeping these traditions in mind, the question of whether or not there is any grounds for conflict between the formal and informal approaches is far from being decided. Some theorists, like Paul Grice, have been skeptical of any claims that there is a substantial conflict between logic and natural language.
=== Game theoretical approach ===
Game theory has been suggested as a tool to study the evolution of language. Some researchers who have developed game theoretical approaches to the philosophy of language are David K. Lewis, Schuhmacher, and Rubinstein.
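A minimal sketch of the kind of model used in this tradition is a Lewis-style signaling game, in which a sender observes a state, sends a signal, and a receiver chooses an act, with both rewarded when the act matches the state. The particular states, signals and mappings below are illustrative assumptions, not taken from any specific paper.

```python
import random

# Minimal Lewis-style signaling game: sender and receiver share a payoff of 1
# when the receiver's act matches the state the sender observed.
states = signals = acts = [0, 1]

# One possible "signaling system": the sender maps states to signals and the
# receiver maps signals back to acts; here both use the identity mapping.
sender   = {0: 0, 1: 1}
receiver = {0: 0, 1: 1}

def play_round():
    state = random.choice(states)
    act = receiver[sender[state]]
    return 1 if act == state else 0   # shared payoff

payoff = sum(play_round() for _ in range(1000)) / 1000
print(payoff)  # 1.0 for this convention; less coordinated mappings score lower
```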
=== Translation and interpretation ===
Translation and interpretation are two other problems that philosophers of language have attempted to confront. In the 1950s, W.V. Quine argued for the indeterminacy of meaning and reference based on the principle of radical translation. In Word and Object, Quine asks readers to imagine a situation in which they are confronted with a previously undocumented group of indigenous people where they must attempt to make sense of the utterances and gestures that its members make. This is the situation of radical translation.
He claimed that, in such a situation, it is impossible in principle to be absolutely certain of the meaning or reference that a speaker of the indigenous people's language attaches to an utterance. For example, if a speaker sees a rabbit and says "gavagai", is she referring to the whole rabbit, to the rabbit's tail, or to a temporal part of the rabbit? All that can be done is to examine the utterance as a part of the overall linguistic behaviour of the individual, and then use these observations to interpret the meaning of all other utterances. From this basis, one can form a manual of translation. But, since reference is indeterminate, there will be many such manuals, no one of which is more correct than the others. For Quine, as for Wittgenstein and Austin, meaning is not something that is associated with a single word or sentence, but is rather something that, if it can be attributed at all, can only be attributed to a whole language. The resulting view is called semantic holism.
Inspired by Quine's discussion, Donald Davidson extended the idea of radical translation to the interpretation of utterances and behavior within a single linguistic community. He dubbed this notion radical interpretation. He suggested that the meaning that any individual ascribed to a sentence could only be determined by attributing meanings to many, perhaps all, of the individual's assertions, as well as their mental states and attitudes.
=== Vagueness ===
One issue that has troubled philosophers of language and logic is the problem of the vagueness of words. The specific instances of vagueness that most interest philosophers of language are those where the existence of "borderline cases" makes it seemingly impossible to say whether a predicate is true or false. Classic examples are "is tall" or "is bald", where it cannot be said that some borderline case (some given person) is tall or not-tall. In consequence, vagueness gives rise to the paradox of the heap. Many theorists have attempted to solve the paradox by way of n-valued logics, such as fuzzy logic, which have radically departed from classical two-valued logics.
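To see how a many-valued treatment handles borderline cases, the sketch below grades "is tall" on a scale from 0 to 1 instead of forcing a yes/no verdict. The particular cut-off heights are invented for the example and carry no theoretical weight.

```python
def is_tall(height_cm: float) -> float:
    """Degree (0.0-1.0) to which 'is tall' applies.

    Heights at or below 160 cm count as clearly not tall (0.0),
    heights at or above 190 cm as clearly tall (1.0), and the
    borderline region in between is graded linearly.  The cut-offs
    are illustrative assumptions only.
    """
    if height_cm <= 160:
        return 0.0
    if height_cm >= 190:
        return 1.0
    return (height_cm - 160) / 30

for h in (155, 170, 175, 180, 195):
    print(f"{h} cm -> tall to degree {is_tall(h):.2f}")

# The sorites pressure in miniature: no single centimetre removes
# tallness outright, yet many small steps take the degree from 1.0
# down to 0.0 without ever crossing a sharp boundary.
```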
== Further reading ==
Atherton, Catherine. 1993. The Stoics on Ambiguity. Cambridge, UK: Cambridge University Press.
Denyer, Nicholas. 1991. Language, Thought and Falsehood in Ancient Greek Philosophy. London: Routledge.
Kneale, W., and M. Kneale. 1962. The Development of Logic. Oxford: Clarendon.
Modrak, Deborah K. W. 2001. Aristotle's Theory of Language and Meaning. Cambridge, UK: Cambridge University Press.
Sedley, David. 2003. Plato's Cratylus. Cambridge, UK: Cambridge University Press.
== See also ==
Analytic philosophy
Discourse
Interpersonal communication
Linguistics
Semiotics
Theory of language
== External links ==
Philosophy of language at the Indiana Philosophy Ontology Project
Philosophy of language at PhilPapers
"Philosophy of Language". Internet Encyclopedia of Philosophy.
Magee, Bryan (March 14, 2008). "John Searle on the Philosophy of Language, Part 1". Searle, John (interviewee). flame0430's channel. Archived from the original on 2021-11-11. One of five parts. There are also 16 lectures by Searle, beginning with "Searle: Philosophy of Language, lecture 1". SocioPhilosophy's channel. October 25, 2011. Archived from the original on 2021-11-11.
Sprachlogik: short articles in the philosophies of logic and language.
Glossary of linguistic terms.
What is I-language? Archived 2011-07-06 at the Wayback Machine – Chapter 1 of I-language: An Introduction to Linguistics as Cognitive Science.
The London Philosophy Study Guide Archived 2009-09-23 at the Wayback Machine offers many suggestions on what to read, depending on the student's familiarity with the subject: Philosophy of Language.
Carnap, R., (1956). Meaning and Necessity: a Study in Semantics and Modal Logic. University of Chicago Press.
Collins, John. (2001). Truth Conditions Without Interpretation. [1].
Devitt, Michael and Hanley, Richard, eds. (2006) The Blackwell Guide to the Philosophy of Language. Oxford: Blackwell.
Eco, Umberto. Semiotics and the Philosophy of Language. Indiana University Press, 1986, ISBN 0253203988, ISBN 9780253203984.
Greenberg, Mark and Harman, Gilbert. (2005). Conceptual Role Semantics. [2].
Hale, B. and Crispin Wright, Ed. (1999). Blackwell Companions To Philosophy. Malden, Massachusetts, Blackwell Publishers.
Isac, Daniela; Charles Reiss (2013). I-language: An Introduction to Linguistics as Cognitive Science, 2nd edition. Oxford University Press. ISBN 978-0-19-953420-3.
Lepore, Ernest and Barry C. Smith (eds). (2006). The Oxford Handbook of Philosophy of Language. Oxford University Press.
Lycan, W. G. (2008). Philosophy of Language: A Contemporary Introduction. New York, Routledge.
Miller, James. (1999). PEN-L message, Bad writing.
Searle, John (2007). Philosophy of Language: an interview with John Searle.
Stainton, Robert J. (1996). Philosophical Perspectives on Language. Peterborough, Ont., Broadview Press.
Tarski, Alfred. (1944). "The Semantical Conception of Truth".
Turri, John. (2016). Knowledge and the Norm of Assertion: An Essay in Philosophical Science. Open Book Publishers. doi:10.11647/OBP.0083. ISBN 978-1-78374-183-0.
Watson, Gerard (1982). "St. Augustine's Theory of Language". The Maynooth Review / Revieú Mhá Nuad. 6 (2). Maynooth: Faculty of Arts, Celtic Studies & Philosophy NUIM: 4–20. ISSN 0332-4869. JSTOR 20556950. Retrieved 3 March 2025.
== References == | Wikipedia/Theory_of_reference |
A causal theory of reference or historical chain theory of reference is a theory of how terms acquire specific referents based on evidence. Such theories have been used to describe many referring terms, particularly logical terms, proper names, and natural kind terms. In the case of names, for example, a causal theory of reference typically involves the following claims:
a name's referent is fixed by an original act of naming (also called a "dubbing" or, by Saul Kripke, an "initial baptism"), whereupon the name becomes a rigid designator of that object.
later uses of the name succeed in referring to the referent by being linked to that original act via a causal chain.
Weaker versions of the position (perhaps not properly called "causal theories") claim merely that, in many cases, events in the causal history of a speaker's use of the term, including when the term was first acquired, must be considered to correctly assign references to the speaker's words.
Causal theories of names became popular during the 1970s, under the influence of work by Saul Kripke and Keith Donnellan. Kripke and Hilary Putnam also defended an analogous causal account of natural kind terms.
== Kripke's causal account of names ==
In lectures later published as Naming and Necessity, Kripke provided a rough outline of his causal theory of reference for names. Although he refused to explicitly endorse such a theory, he indicated that such an approach was far more promising than the then-popular descriptive theory of names introduced by Russell, according to which names are in fact disguised definite descriptions. Kripke argued that in order to use a name successfully to refer to something, you do not have to be acquainted with a uniquely identifying description of that thing. Rather, your use of the name need only be caused (in an appropriate way) by the naming of that thing.
Such a causal process might proceed as follows: the parents of a newborn baby name it, pointing to the child and saying "we'll call her 'Jane'." Henceforth everyone calls her 'Jane'. With that act, the parents give the girl her name. The assembled family and friends now know that 'Jane' is a name which refers to Jane. This is referred to as Jane's dubbing, naming, or initial baptism.
However, not everyone who knows Jane and uses the name 'Jane' to refer to her was present at this naming. So how is it that when they use the name 'Jane', they are referring to Jane? The answer provided by causal theories is that there is a causal chain that passes from the original observers of Jane's naming to everyone else who uses her name. For example, maybe Jill was not at the naming, but Jill learns about Jane, and learns that her name is 'Jane', from Jane's mother, who was there. She then uses the name 'Jane' with the intention of referring to the child Jane's mother referred to. Jill can now use the name, and her use of it can in turn transmit the ability to refer to Jane to other speakers.
Philosophers such as Gareth Evans have insisted that the theory's account of the dubbing process needs to be broadened to include what are called 'multiple groundings'. After her initial baptism, uses of 'Jane' in the presence of Jane may, under the right circumstances, be considered to further ground the name ('Jane') in its referent (Jane). That is, if I am in direct contact with Jane, the reference for my utterance of the name 'Jane' may be fixed not simply by a causal chain through people who had encountered her earlier (when she was first named); it may also be indexically fixed to Jane at the moment of my utterance. Thus our modern day use of a name such as 'Christopher Columbus' can be thought of as referring to Columbus through a causal chain that terminates not simply in one instance of his naming, but rather in a series of grounding uses of the name that occurred throughout his life. Under certain circumstances of confusion, this can lead to the alteration of a name's referent (for one example of how this might happen, see Twin Earth thought experiment).
== Motivation ==
Causal theories of reference were born partially in response to the widespread acceptance of Russellian descriptive theories. Russell found that certain logical contradictions could be avoided if names were considered disguised definite descriptions (a similar view is often attributed to Gottlob Frege, mostly on the strength of a footnoted comment in "On Sense and Reference", although many Frege scholars consider this attribution misguided). On such an account, the name 'Aristotle' might be seen as meaning 'the student of Plato and teacher of Alexander the Great'. Later description theorists expanded upon this by suggesting that a name expressed not one particular description, but many (perhaps constituting all of one's essential knowledge of the individual named), or a weighted average of these descriptions.
Kripke found this account to be deeply flawed, for a number of reasons. Notably:
We can successfully refer to individuals for whom we have no uniquely identifying description. (For example, a speaker can talk about Phillie Sophik even if one only knows him as 'some poet'.)
We can successfully refer to individuals for whom the only identifying descriptions we have fail to refer as we believe them to. (Many speakers have no identifying beliefs about Christopher Columbus other than 'the first European in North America' or 'the first person to believe that the earth was round'. Both of these beliefs are incorrect. Nevertheless, when such a person says 'Christopher Columbus', we acknowledge that they are referring to Christopher Columbus, not to whatever individual satisfies one of those descriptions.)
We use names to speak hypothetically about what could have happened to a person. A name functions as a rigid designator, while a definite description does not. (One could say 'If Aristotle had died young, he would never have taught Alexander the Great.' But if 'the teacher of Alexander the Great' were a component of the meaning of 'Aristotle' then this would be nonsense.)
A causal theory avoids these difficulties. A name refers rigidly to the bearer to which it is causally connected, regardless of any particular facts about the bearer, and in all possible worlds where the bearer exists.
The same motivations apply to causal theories in regard to other sorts of terms. Putnam, for instance, attempted to establish that 'water' refers rigidly to the stuff that we do in fact call 'water', to the exclusion of any possible identical water-like substance for which we have no causal connection. These considerations motivate semantic externalism. Because speakers interact with a natural kind such as water regularly, and because there is generally no naming ceremony through which their names are formalized, the multiple groundings described above are even more essential to a causal account of such terms. A speaker whose environment changes may thus observe that the referents of his terms shift, as described in the Twin Earth and Swampman thought experiments.
== Variations ==
Variations of the causal theory include:
The causal-historical theory of reference is the original version of the causal theory. It was put forward by Keith Donnellan in 1972 and Saul Kripke in 1980. This view introduces the idea of reference-passing links in a causal-historical chain.
The descriptive-causal theory of reference (also causal-descriptive theory of reference), a view put forward by David Lewis in 1984, introduces the idea that a minimal descriptive apparatus needs to be added to the causal relations between speaker and object.
== Criticism of the theory ==
Gareth Evans argued that the causal theory, or at least certain common and over-simple variants of it, have the consequence that, however remote or obscure the causal connection between someone's use of a proper name and the object it originally referred to, they still refer to that object when they use the name. (Imagine a name briefly overheard in a train or café.) The theory effectively ignores context and makes reference into a magic trick. Evans describes it as a "photograph" theory of reference.
The links between different users of the name are particularly obscure. Each user must somehow pass the name on to the next, and must somehow "mean" the right individual as they do so (suppose "Socrates" is the name of a pet aardvark). Kripke himself notes the difficulty, and John Searle makes much of it.
Mark Sainsbury argued for a causal theory similar to Kripke's, except that the baptised object is eliminated. A "baptism" may be a baptism of nothing, he argues: a name can be intelligibly introduced even if it names nothing. The causal chain we associate with the use of proper names may begin merely with a "journalistic" source.
The causal theory has a difficult time explaining the phenomenon of reference change. Gareth Evans cites the example of Marco Polo unknowingly referring to the African island as "Madagascar" when the natives actually used the term to refer to a part of the mainland. Evans claims that Polo clearly intended to use the term as the natives did, but somehow changed the meaning of the term "Madagascar" to refer to the island as it is known today. Michael Devitt claims that repeated groundings in an object can account for reference change. However, such a response leaves open the problem of cognitive significance that originally intrigued Russell and Frege.
== See also ==
Brain in a vat
Mediated reference theory
== Notes ==
== Citations ==
== References ==
Evans, G. (1985). "The Causal Theory of Names". In Martinich, A. P., ed. The Philosophy of Language. Oxford University Press, 2012.
Evans, G. The Varieties of Reference, Oxford 1982.
Kripke, Saul. 1980. Naming and Necessity. Cambridge, Mass.: Harvard University Press.
McDowell, John. (1977) "On the Sense and Reference of a Proper Name."
Salmon, Nathan. (1981) Reference and Essence, Prometheus Books.
Machery, E.; Mallon, R.; Nichols, S.; Stich, S. P. (2004). "Semantics, Cross-cultural Style". Cognition. 92 (3): B1 – B12. CiteSeerX 10.1.1.174.5119. doi:10.1016/j.cognition.2003.10.003. PMID 15019555. S2CID 15074526.
Sainsbury, R.M. (2001). "Sense without Reference". In Newen, A.; Nortmann, U.; Stuhlmann Laisz, R. (eds.). Building on Frege. Stanford.{{cite book}}: CS1 maint: location missing publisher (link) | Wikipedia/Causal_theory_of_reference |
A paradigm shift is a fundamental change in the basic concepts and experimental practices of a scientific discipline. It is a concept in the philosophy of science that was introduced and brought into the common lexicon by the American physicist and philosopher Thomas Kuhn. Even though Kuhn restricted the use of the term to the natural sciences, the concept of a paradigm shift has also been used in numerous non-scientific contexts to describe a profound change in a fundamental model or perception of events.
Kuhn presented his notion of a paradigm shift in his influential book The Structure of Scientific Revolutions (1962).
Kuhn contrasts paradigm shifts, which characterize a Scientific Revolution, to the activity of normal science, which he describes as scientific work done within a prevailing framework or paradigm. Paradigm shifts arise when the dominant paradigm under which normal science operates is rendered incompatible with new phenomena, facilitating the adoption of a new theory or paradigm.
As one commentator summarizes:
Kuhn acknowledges having used the term "paradigm" in two different meanings. In the first one, "paradigm" designates what the members of a certain scientific community have in common, that is to say, the whole of techniques, patents and values shared by the members of the community. In the second sense, the paradigm is a single element of a whole, say for instance Newton's Principia, which, acting as a common model or an example... stands for the explicit rules and thus defines a coherent tradition of investigation. Thus the question is for Kuhn to investigate by means of the paradigm what makes possible the constitution of what he calls "normal science". That is to say, the science which can decide if a certain problem will be considered scientific or not. Normal science does not mean at all a science guided by a coherent system of rules, on the contrary, the rules can be derived from the paradigms, but the paradigms can guide the investigation also in the absence of rules. This is precisely the second meaning of the term "paradigm", which Kuhn considered the most new and profound, though it is in truth the oldest.
== History ==
The nature of scientific revolutions has been studied by modern philosophy since Immanuel Kant used the phrase in the preface to the second edition of his Critique of Pure Reason (1787). Kant used the phrase "revolution of the way of thinking" (Revolution der Denkart) to refer to Greek mathematics and Newtonian physics. In the 20th century, new developments in the basic concepts of mathematics, physics, and biology revitalized interest in the question among scholars.
=== Original usage ===
In his 1962 book The Structure of Scientific Revolutions, Kuhn explains the development of paradigm shifts in science into four stages:
Normal science – In this stage, which Kuhn sees as most prominent in science, a dominant paradigm is active. This paradigm is characterized by a set of theories and ideas that define what is possible and rational to do, giving scientists a clear set of tools to approach certain problems. Some examples of dominant paradigms that Kuhn gives are: Newtonian physics, caloric theory, and the theory of electromagnetism. Insofar as paradigms are useful, they expand both the scope and the tools with which scientists do research. Kuhn stresses that, rather than being monolithic, the paradigms that define normal science can be particular to different people. A chemist and a physicist might operate with different paradigms of what a helium atom is. Under normal science, scientists encounter anomalies that cannot be explained by the universally accepted paradigm within which scientific progress has theretofore been made.
Extraordinary research – When enough significant anomalies have accrued against a current paradigm, the scientific discipline is thrown into a state of crisis. To address the crisis, scientists push the boundaries of normal science in what Kuhn calls “extraordinary research”, which is characterized by its exploratory nature. Without the structures of the dominant paradigm to depend on, scientists engaging in extraordinary research must produce new theories, thought experiments, and experiments to explain the anomalies. Kuhn sees the practice of this stage – “the proliferation of competing articulations, the willingness to try anything, the expression of explicit discontent, the recourse to philosophy and to debate over fundamentals” – as even more important to science than paradigm shifts.
Adoption of a new paradigm – Eventually a new paradigm is formed, which gains its own new followers. For Kuhn, this stage entails both resistance to the new paradigm, and reasons for why individual scientists adopt it. According to Max Planck, "a new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die, and a new generation grows up that is familiar with it." Because scientists are committed to the dominant paradigm, and paradigm shifts involve gestalt-like changes, Kuhn stresses that paradigms are difficult to change. However, paradigms can gain influence by explaining or predicting phenomena much better than before (i.e., Bohr's model of the atom) or by being more subjectively pleasing. During this phase, proponents for competing paradigms address what Kuhn considers the core of a paradigm debate: whether a given paradigm will be a good guide for future problems – things that neither the proposed paradigm nor the dominant paradigm are capable of solving currently.
Aftermath of the scientific revolution – In the long run, the new paradigm becomes institutionalized as the dominant one. Textbooks are written, obscuring the revolutionary process.
== Features ==
=== Paradigm shifts and progress ===
A common misinterpretation of paradigms is the belief that the discovery of paradigm shifts and the dynamic nature of science (with its many opportunities for subjective judgments by scientists) are a case for relativism: the view that all kinds of belief systems are equal. Kuhn vehemently denies this interpretation and states that when a scientific paradigm is replaced by a new one, albeit through a complex social process, the new one is always better, not just different.
=== Incommensurability ===
These claims of relativism are, however, tied to another claim that Kuhn does at least somewhat endorse: that the language and theories of different paradigms cannot be translated into one another or rationally evaluated against one another—that they are incommensurable. This gave rise to much talk of different peoples and cultures having radically different worldviews or conceptual schemes—so different that whether or not one was better, they could not be understood by one another.
Donald Davidson famously argued against this idea of conceptual relativism, claiming that the notion that any languages or theories could be incommensurable with one another was itself incoherent. If this is correct, Kuhn's claims must be taken in a weaker sense than they often are.
Furthermore, the hold of the Kuhnian analysis on social science has long been tenuous, with the wide application of multi-paradigmatic approaches in order to understand complex human behaviour.
=== Gradualism vs. sudden change ===
Paradigm shifts tend to be most dramatic in sciences that appear to be stable and mature, as in physics at the end of the 19th century. At that time, physics seemed to be a discipline filling in the last few details of a largely worked-out system.
In The Structure of Scientific Revolutions, Kuhn wrote, "Successive transition from one paradigm to another via revolution is the usual developmental pattern of mature science" (p. 12). Kuhn's idea was itself revolutionary in its time as it caused a major change in the way that academics talk about science. Thus, it could be argued that it caused or was itself part of a "paradigm shift" in the history and sociology of science. However, Kuhn would not recognise such a paradigm shift. In the social sciences, people can still use earlier ideas to discuss the history of science.
Philosophers and historians of science, including Kuhn himself, ultimately accepted a modified version of Kuhn's model, which synthesizes his original view with the gradualist model that preceded it.
== Examples ==
=== Natural sciences ===
Some of the "classical cases" of Kuhnian paradigm shifts in science are:
1543 – The transition in cosmology from a Ptolemaic cosmology to a Copernican one.
1543 – The acceptance of the work of Andreas Vesalius, whose De humani corporis fabrica corrected the numerous errors in the previously held system of human anatomy created by Galen.
1687 – The transition in mechanics from Aristotelian mechanics to classical mechanics.
1783 – The acceptance of Lavoisier's theory of chemical reactions and combustion in place of phlogiston theory, known as the chemical revolution.
The transition in optics from geometrical optics to physical optics with Augustin-Jean Fresnel's wave theory.
1826 – The discovery of hyperbolic geometry.
1830 to 1833 – Geologist Charles Lyell published Principles of Geology, which not only put forth the concept of uniformitarianism, in direct contrast to catastrophism, the popular geological theory of the time, but also used geological evidence to argue that the Earth was far older than the previously accepted age of roughly 6,000 years.
1859 – The revolution in evolution from goal-directed change to Charles Darwin's natural selection.
1880 – The germ theory of disease began overtaking Galen's miasma theory.
1905 – The development of quantum mechanics, which replaced classical mechanics at microscopic scales.
1887 to 1905 – The transition from the luminiferous aether present in space to electromagnetic radiation in spacetime.
1919 – The transition between the worldview of Newtonian gravity and general relativity.
1920 – The emergence of the modern view of the Milky Way as just one of countless galaxies within an immeasurably vast universe following the results of the Smithsonian's Great Debate between astronomers Harlow Shapley and Heber Curtis.
1952 – Chemists Stanley Miller and Harold Urey perform an experiment which simulated the conditions on the early Earth that favored chemical reactions that synthesized more complex organic compounds from simpler inorganic precursors, kickstarting decades of research into the chemical origins of life.
1964 – The discovery of cosmic microwave background radiation leads to the Big Bang theory being accepted over the steady state theory in cosmology.
1965 – The acceptance of plate tectonics as the explanation for large-scale geologic changes.
1969 – Astronomer Victor Safronov, in his book Evolution of the protoplanetary cloud and formation of the Earth and the planets, developed the early version of the current accepted theory of planetary formation.
1974 – The November Revolution, with the discovery of the J/psi meson, and the acceptance of the existence of quarks and the Standard Model of particle physics.
1960 to 1985 – The acceptance of the ubiquity of nonlinear dynamical systems as promoted by chaos theory, instead of a Laplacian world-view of deterministic predictability.
=== Social sciences ===
In Kuhn's view, the existence of a single reigning paradigm is characteristic of the natural sciences, while philosophy and much of social science were characterized by a "tradition of claims, counterclaims, and debates over fundamentals." Others have applied Kuhn's concept of paradigm shift to the social sciences.
The movement known as the cognitive revolution moved away from behaviourist approaches to psychology and the acceptance of cognition as central to studying human behavior.
Anthropologist Franz Boas published The Mind of Primitive Man, which integrated his theories concerning the history and development of cultures and established a program that would dominate American anthropology in the following years. His research, along with that of his colleagues, combatted and debunked claims made by scholars at a time when scientific racism and eugenics were dominant in many universities and institutions dedicated to studying humans and society. Eventually anthropology would apply a holistic approach, utilizing four subcategories to study humans: archaeology, cultural, evolutionary, and linguistic anthropology.
At the turn of the 20th century, sociologists, along with other social scientists developed and adopted methodological antipositivism, which sought to uphold a subjective perspective when studying human activities pertaining to culture, society, and behavior. This was in stark contrast to positivism, which took its influence from the methodologies utilized within the natural sciences.
First proposed by Ferdinand de Saussure in 1879, the laryngeal theory in Indo-European linguistics postulated the existence of "laryngeal" consonants in the Proto-Indo-European language (PIE), a theory that was confirmed by the discovery of the Hittite language in the early 20th century. The theory has since been accepted by the vast majority of linguists, paving the way for the internal reconstruction of the syntax and grammatical rules of PIE; it is considered one of the most significant developments in linguistics since the initial discovery of the Indo-European language family.
The adoption of radiocarbon dating by archaeologists has been proposed as a paradigm shift because of how it greatly increased the time depth the archaeologists could reliably date objects from. Similarly the use of LIDAR for remote geospatial imaging of cultural landscapes, and the shift from processual to post-processual archaeology have both been claimed as paradigm shifts by archaeologists.
The Marginal Revolution, a development of economic theory in the late 19th century led by William Stanley Jevons in England, Carl Menger in Austria, and Léon Walras in Switzerland and France which explained economic behavior in terms of marginal utility.
=== Applied sciences ===
More recently, paradigm shifts are also recognisable in applied sciences:
In medicine, the transition from "clinical judgment" to evidence-based medicine.
In Artificial Intelligence, the transition from a knowledge-based to a data-driven paradigm has been discussed from 2010.
== Other uses ==
The term "paradigm shift" has found uses in other contexts, representing the notion of a major change in a certain thought pattern—a radical change in personal beliefs, complex systems or organizations, replacing the former way of thinking or organizing with a radically different way of thinking or organizing:
M. L. Handa, a professor of sociology in education at O.I.S.E. University of Toronto, Canada, developed the concept of a paradigm within the context of social sciences. He defines what he means by "paradigm" and introduces the idea of a "social paradigm". In addition, he identifies the basic component of any social paradigm. Like Kuhn, he addresses the issue of changing paradigms, the process popularly known as "paradigm shift". In this respect, he focuses on the social circumstances that precipitate such a shift. Relatedly, he addresses how that shift affects social institutions, including the institution of education.
The concept has been developed for technology and economics in the identification of new techno-economic paradigms as changes in technological systems that have a major influence on the behaviour of the entire economy (Carlota Perez; earlier work only on technological paradigms by Giovanni Dosi). This concept is linked to Joseph Schumpeter's idea of creative destruction. Examples include the move to mass production and the introduction of microelectronics.
Two photographs of the Earth from space, "Earthrise" (1968) and "The Blue Marble" (1972), are thought to have helped to usher in the environmentalist movement, which gained great prominence in the years immediately following distribution of those images.
Hans Küng applies Thomas Kuhn's theory of paradigm change to the entire history of Christian thought and theology. He identifies six historical "macromodels": 1) the apocalyptic paradigm of primitive Christianity, 2) the Hellenistic paradigm of the patristic period, 3) the medieval Roman Catholic paradigm, 4) the Protestant (Reformation) paradigm, 5) the modern Enlightenment paradigm, and 6) the emerging ecumenical paradigm. He also discusses five analogies between natural science and theology in relation to paradigm shifts. Küng addresses paradigm change in his books, Paradigm Change in Theology and Theology for the Third Millennium: An Ecumenical View.
In the later part of the 1990s, 'paradigm shift' emerged as a buzzword, popularized as marketing speak and appearing more frequently in print and publication. In his book Mind The Gaffe, author Larry Trask advises readers to refrain from using it, and to use caution when reading anything that contains the phrase. It is referred to in several articles and books as abused and overused to the point of becoming meaningless.
The concept of technological paradigms has been advanced, particularly by Giovanni Dosi.
== Criticism ==
In a 2015 retrospective on Kuhn, the philosopher Martin Cohen describes the notion of the paradigm shift as a kind of intellectual virus – spreading from hard science to social science and on to the arts and even everyday political rhetoric today. Cohen claims that Kuhn had only a very hazy idea of what it might mean and, in line with the Austrian philosopher of science Paul Feyerabend, accuses Kuhn of retreating from the more radical implications of his theory, which are that scientific facts are never really more than opinions whose popularity is transitory and far from conclusive. Cohen says scientific knowledge is less certain than it is usually portrayed, and that science and knowledge generally is not the 'very sensible and reassuringly solid sort of affair' that Kuhn describes, in which progress involves periodic paradigm shifts in which much of the old certainties are abandoned in order to open up new approaches to understanding that scientists would never have considered valid before. He argues that information cascades can distort rational, scientific debate. He has focused on health issues, including the example of highly mediatised 'pandemic' alarms, and why they have turned out eventually to be little more than scares.
== See also ==
== References ==
=== Citations ===
=== Sources ===
Kuhn, Thomas (1970). The Structure of Scientific Revolutions (2nd, enlarged ed.). University of Chicago Press. ISBN 978-0-226-45804-5.
== External links ==
The dictionary definition of paradigm shift at Wiktionary
MIT 6.933J – The Structure of Engineering Revolutions. From MIT OpenCourseWare, course materials (graduate level) for a course on the history of technology through a Kuhnian lens.
""Scientific Change"". Internet Encyclopedia of Philosophy. | Wikipedia/Theory_change |
Science and Hypothesis (French: La Science et l'Hypothèse) is a book by French mathematician Henri Poincaré, first published in 1902. Aimed at a non-specialist readership, it deals with mathematics, space, physics and nature. It puts forward the theses that absolute truth in science is unattainable, and that many commonly held beliefs of scientists are held as convenient conventions rather than because they are more valid than the alternatives.
In this book, Poincaré describes open scientific questions regarding the photo-electric effect, Brownian motion, and the relativity of physical laws in space.
Reading this book inspired Albert Einstein's subsequent Annus Mirabilis papers published in 1905.
A new translation was published in November 2017.
== References == | Wikipedia/Science_and_Hypothesis |
In the philosophy of language, the descriptivist theory of proper names (also descriptivist theory of reference) is the view that the meaning or semantic content of a proper name is identical to the descriptions associated with it by speakers, while their referents are determined to be the objects that satisfy these descriptions. Bertrand Russell and Gottlob Frege have both been associated with the descriptivist theory, which has been called the mediated reference theory or Frege–Russell view.
In the 1970s, this theory came under attack from causal theorists such as Saul Kripke, Hilary Putnam and others. However, it has seen something of a revival in recent years, especially under the form of what are called two-dimensional semantic theories. This latter trend is exemplified by the theories of David Chalmers, among others.
== The descriptive theory and its merits ==
A simple descriptivist theory of names can be thought of as follows: for every proper name p, there is some collection of descriptions D associated with p that constitutes the meaning of p. For example, the descriptivist may hold that the proper name Saul Kripke is synonymous with the collection of descriptions such as
the man who wrote Naming and Necessity
a person who was born on November 13, 1940, in Bay Shore, New York
the son of a leader of Beth El Synagogue in Omaha, Nebraska
etc ...
The descriptivist takes the meaning of the name Saul Kripke to be that collection of descriptions and takes the referent of the name to be the thing that satisfies all or most of those descriptions.
A simple descriptivist theory may further hold that the meaning of a sentence S that contains p is given by the collection of sentences produced by replacing each instance of p in S with one of the descriptions in D. So, a sentence such as "Saul Kripke stands next to a table" has the same meaning as the following collection of sentences:
The man who wrote Naming and Necessity stands next to a table.
A person who was born on November 13, 1940, in Bay Shore, New York, stands next to a table.
The son of a leader of Beth El Synagogue in Omaha, Nebraska, stands next to a table.
etc ...
A version of descriptivism was formulated by Frege in reaction to problems with his original theory of meaning or reference (Bedeutung), which entailed that sentences with empty proper names cannot have a meaning. Yet a sentence containing the name 'Odysseus' is intelligible, and therefore has a sense, even though there is no individual object (its reference) to which the name corresponds. Also, the sense of different names is different, even when their reference is the same. Frege said that if an identity statement such as "Hesperus is the same planet as Phosphorus" is to be informative, the proper names flanking the identity sign must have a different meaning or sense. But clearly, if the statement is true, they must have the same reference. The sense is a 'mode of presentation', which serves to illuminate only a single aspect of the referent. Scholars disagree as to whether Frege intended such modes of presentation to be descriptions. See the article Sense and reference.
Russell's approach is somewhat different. First of all, Russell makes an important distinction between what he calls "ordinary" proper names and "logically" proper names. Logically proper names are indexicals such as this and that, which directly refer (in a Millian sense) to sense-data or other objects of immediate acquaintance. For Russell, ordinary proper names are abbreviated definite descriptions. Here definite description refers again to the type of formulation "The…" which was used above to describe Santa Claus as "the benevolent, bearded…." According to Russell, the name "Aristotle" is just a sort of shorthand for a definite description such as "The last great philosopher of ancient Greece" or "The teacher of Alexander the Great" or some conjunction of two or more such descriptions. Now, according to Russell's theory of definite descriptions, such descriptions must in turn be reduced to a certain very specific logical form of existential generalization, as follows:
"The king of France is bald."
becomes
∃x(K(x) ∧ ∀y(K(y) → x = y) ∧ B(x))
This says that there is exactly one object x such that x is King of France and x is bald. Notice that this formulation is entirely general: it says that there is some x out in the world that satisfies the description, but does not specify which one thing x refers to. Indeed, for Russell, definite descriptions (and hence names) have no reference at all and their meanings (senses in the Fregean sense) are just the truth conditions of the logical forms illustrated above. This is made clearer by Russell's example involving Bismarck:
(G) "The Chancellor of Germany..."
In this case, Russell suggests that only Bismarck himself can be in a relation of acquaintance such that the man himself enters into the proposition expressed by the sentence. For anyone other than Bismarck, the only relation possible with such a proposition is through its descriptions. Bismarck might never have existed, and the sentence (G) would still be meaningful because of its general nature, described by the logical form underlying the sentence.
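Returning to the king-of-France example, the quantificational truth conditions given above can be checked mechanically over a toy domain. The sketch below is only an illustration; the domain and its predicates are invented for the example.

```python
# Russellian truth conditions for "The king of France is bald":
# true iff there is exactly one x such that K(x), and that x is B(x).
def the_K_is_B(domain, K, B):
    ks = [x for x in domain if K(x)]
    return len(ks) == 1 and B(ks[0])

# Invented toy domain: nobody in it is king of France, so the sentence
# comes out false on Russell's analysis (rather than lacking a truth
# value, as on a Strawsonian treatment of empty descriptions).
people = [
    {"name": "Louis", "king_of_france": False, "bald": True},
    {"name": "Otto",  "king_of_france": False, "bald": False},
]
print(the_K_is_B(people,
                 K=lambda p: p["king_of_france"],
                 B=lambda p: p["bald"]))  # False
```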
Notwithstanding these differences, however, descriptivism and the descriptive theory of proper names came to be associated with the views of both Frege and Russell, and both address the general problems (names without bearers, Frege's puzzles concerning identity and substitution in contexts of intentional attitude attributions) in a similar manner.
Another problem for Millianism arises from Frege's famous puzzles concerning the identity of co-referring terms. For example:
(V) "Hesperus is Phosphorus."
In this case, both terms ("Hesperus" and "Phosphorus") refer to the same entity: Venus. The Millian theory would predict that this sentence is trivial, since meaning is just reference and "Venus is Venus" is not very informative. Suppose, however, that someone did not know that Hesperus and Phosphorus both referred to Venus. Then it is at least arguable that the sentence (V) is an attempt to inform someone of just this fact.
Another problem for Millianism is that of statements such as "Fred believes that Cicero, but not Tully, was Roman."
== Kripke’s objections and the causal theory ==
In his book Naming and Necessity, Saul Kripke criticised the descriptivist theory. At the end of Lecture I (pp. 64–70) Kripke sets out what he believes to be the tenets of the descriptivist theory. Kripke formally states a number of theses as the core of the descriptivist theory, with these theses explaining the theory in terms of reference (rather than the sense or meaning). As he explains before stating the theory, "There are more theses if you take it in the stronger version as a theory of meaning" (p. 64).
As he states it, the descriptivist theory is "weaker," i.e., the claims it makes do not assert as much as a stronger theory would. This actually makes it harder to refute. The descriptivist theory of meaning would include these theses and definitions however, thus refuting these would suffice for refuting the descriptivist theory of meaning as well. Kripke formulates them as follows:
To every name or designating expression 'X', there corresponds a cluster of properties, namely the family of those properties φ such that [speaker] A believes 'φX'
One of the properties, or some conjointly, are believed by A to pick out some individual uniquely.
If most, or a weighted most, of the φ's are satisfied by one unique object y, then y is the referent of 'X'.
If the vote yields no unique object, 'X' does not refer.
The statement, 'If X exists, then X has most of the φ's [corresponding to X]' is known a priori by the speaker.
The statement, 'If X exists, then X has most of the φ's [corresponding to X]' expresses a necessary truth (in the idiolect of the speaker).
(1) States the properties or concepts related to any given proper name, where a name 'X' has a set of properties associated with it. The properties are those that a speaker, asked "Who is Barack Obama?", would offer in response: "The former President of the U.S., former Senator of Illinois, husband of Michelle Obama, etc." (1) does not stipulate that the set of properties φ is the meaning of X. (2) stipulates the epistemic position of the speaker. Note that (2) says "believed by A to pick out."
(3) Takes the properties in (1) and (2) and turns them into a mechanism of reference. Basically, if a unique object satisfies the properties associated with 'X' such that A believes that 'X has such-and-such properties', it picks out or refers to that object. (4) states what happens when no object satisfies the properties (Kripke talks in terms of taking a "vote" as to the unique referent).
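Theses (3) and (4) amount, in effect, to a reference-fixing procedure, and the "vote" can be written out as a short sketch. The candidates, descriptions and weights below are invented for the illustration (they anticipate the Gödel/Schmidt case discussed later), and the simple weighted-majority rule is just one way of cashing out "a weighted most".

```python
# Toy version of the cluster-descriptivist "vote" (theses 3 and 4):
# the referent of a name is whatever object uniquely satisfies a
# weighted majority of the descriptions the speaker associates with
# the name; if no unique winner emerges, the name does not refer.
def cluster_referent(descriptions, candidates):
    """descriptions: list of (predicate, weight); candidates: name -> object."""
    total = sum(weight for _, weight in descriptions)
    scores = {name: sum(w for pred, w in descriptions if pred(obj))
              for name, obj in candidates.items()}
    best = max(scores.values())
    winners = [name for name, score in scores.items() if score == best]
    if len(winners) == 1 and best > total / 2:
        return winners[0]
    return None  # thesis (4): the vote yields no unique object

# Invented example: if the speaker's weightiest description of 'Gödel'
# is in fact satisfied by Schmidt, the vote picks out Schmidt.
candidates = {
    "Goedel":  {"proved_incompleteness": False, "born_in_brno": True},
    "Schmidt": {"proved_incompleteness": True,  "born_in_brno": False},
}
descriptions = [
    (lambda o: o["proved_incompleteness"], 2.0),
    (lambda o: o["born_in_brno"], 1.0),
]
print(cluster_referent(descriptions, candidates))  # Schmidt
```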
(5) Follows from (1)–(3). If there is a set of properties that speaker A believes to be associated with X, then these properties must be already known by the speaker. In this sense they are a priori. To know what a bachelor is, an individual must know what an unmarried male is; likewise an individual must know who is 'The President of the U.S., former Senator of Illinois, husband of Michelle Obama, etc.' to know who Obama is.
(6) However is not a direct product of the theses. Kripke notes "(6) need not be a thesis of the theory if someone doesn't think that the cluster is part of the meaning of the name" (p. 65). However, when the descriptivist theory is taken as a theory of reference and meaning, (6) would be a thesis.
Taken as a theory of reference, the following would be true:
If someone fits the description 'the author who wrote, among other things, 1984 and Animal Farm' uniquely, then this someone is George Orwell. (Thesis 3)
'George Orwell wrote, among other things, 1984 and Animal Farm' is known a priori by the speaker. (Thesis 5)
The idea in the second sentence is that one can't refer to something without knowing what he or she is referring to.
Taken as a theory of reference and meaning, the following would be true:
The author who wrote, among other things, 1984 and Animal Farm, wrote 1984 and Animal Farm. (Thesis 6)
After breaking down the descriptivist theory, he begins to point out what's wrong with it. First, he offered up what has come to be known as "the modal argument" (or "argument from rigidity") against descriptivism. Consider the name "Aristotle" and the descriptions "the greatest student of Plato," "the founder of logic" and "the teacher of Alexander." Aristotle obviously satisfies all of the descriptions (and many of the others we commonly associate with him), but it is not a necessary truth that if Aristotle existed then Aristotle was any one, or all, of these descriptions, contrary to thesis (6). Aristotle might well have existed without doing any single one of the things he is known for. He might have existed and not have become known to posterity at all or he might have died in infancy.
Suppose that Aristotle is associated by Mary with the description
“the last great philosopher of antiquity” and (the actual) Aristotle died in infancy. Then Mary's description would seem to refer to Plato. But this is deeply counterintuitive. Hence, names are "rigid designators," according to Kripke. That is, they refer to the same individual in every possible world in which that individual exists.
This is the counterintuitive result of thesis (6). For descriptivists Aristotle means "the greatest student of Plato," "the founder of logic" and "the teacher of Alexander." So the sentence “the greatest student of Plato, etc., was the greatest student of Plato,” is equivalent to "Aristotle was the greatest student of Plato, etc." Of course a sentence like “x=x” is necessary, but this just isn't the case with proper names and their descriptions. Aristotle could have done something else, thus he is not necessarily identical to his description.
The second argument employed by Kripke has come to be called the "epistemic argument" (or "argument from unwanted necessity"). This is simply the observation that if the meaning of "Angela Merkel" is "the Chancellor of Germany," then "Angela is the Chancellor of Germany" should seem to the average person to be a priori, analytic, and trivial, as if falling out of the meaning of "Angela Merkel" just as "unmarried male" falls out of the meaning of "bachelor." If thesis (5) is to hold, the properties of Angela Merkel should be known a priori by the speaker. But this is not true. We had to go out into the world to see who the Chancellor of Germany is.
Kripke's third argument against descriptive theories consisted in pointing out that people may associate inadequate or inaccurate descriptions with proper names. Kripke uses Kurt Gödel as an example. The only thing most people know about Gödel is that he proved the incompleteness of arithmetic. Suppose he hadn't proved it, and really he stole it from his friend Schmidt. Thesis (3) says that if most of the properties associated with 'Gödel' are satisfied by one unique object, in this case Schmidt, then Schmidt is the referent of 'Gödel.' This means that every time someone (in the world where Gödel stole the incompleteness theorem from Schmidt) says 'Gödel' he or she is actually referring to Schmidt. This is far too counter-intuitive for the descriptivist theory to hold.
== Revival of descriptivism and two-dimensionalism ==
In recent years, there has been something of a revival in descriptivist theories, including descriptivist theories of proper names. Metalinguistic description theories have been developed and adopted by such contemporary theorists as Kent Bach and Jerrold Katz. According to Katz, "metalinguistic description theories explicate the sense of proper nouns--but not common nouns--in terms of a relation between the noun and the objects that bear its name." Differently from the traditional theory, such theories do not posit a need for sense to determine reference and the metalinguistic description mentions the name it is the sense of (hence it is "metalinguistic") while placing no conditions on being the bearer of a name. Katz's theory, to take this example, is based on the fundamental idea that sense should not have to be defined in terms of, nor determine, referential or extensional properties but that it should be defined in terms of, and determined by, all and only the intensional properties of names.
He illustrates the way a metalinguistic description theory can be successful against Kripkean counterexamples by citing, as one example, the case of "Jonah." Kripke's Jonah case is very powerful because in this case the only information that we have about the Biblical character Jonah is just what the Bible tells us. Unless we are fundamentalist literalists, it is not controversial that all of this is false. Since, under traditional descriptivism, these descriptions are what define the name Jonah, these descriptivists must say that Jonah did not exist. But this does not follow. Under Katz's version of descriptivism, by contrast, the sense of Jonah contains no information derived from the Biblical accounts but contains only the term "Jonah" itself in the phrase "the thing that is a bearer of 'Jonah'." Hence, it is not vulnerable to these kinds of counterexamples.
The most common and challenging criticism of metalinguistic description theories was put forth by Kripke himself: they seem to be an ad hoc explanation of a single linguistic phenomenon. Why should there be a metalinguistic theory for proper nouns (like names) but not for common nouns, count nouns, verbs, predicates, indexicals and other parts of speech?
Another recent approach is two-dimensional semantics. The motivations for this approach are rather different from those that inspired other forms of descriptivism, however. Two-dimensional approaches are usually motivated by a sense of dissatisfaction with the causal theorist explanation of how it is that a single proposition can be both necessary and a posteriori or contingent and a priori.
== See also ==
Onomastics
Causal theory of reference
Tag theory of names
Theory of descriptions
== Notes ==
== References ==
Russell, Bertrand. On Denoting. Mind. 1905.
Kripke, Saul. Naming and Necessity. Basil Blackwell. Boston. 1980.
Frege, Gottlob. On Sense and Reference. In P. Geach, M. Black, eds. Translations from the Philosophical Writings of Gottlob Frege. Oxford: Blackwell. 1952.
Soames, Scott. Reference and Description. 2005.
Katz, Jerrold. Names Without Bearers. 2005.
Chalmers, David. Two-Dimensional Semantics. in E. Lepore and B. Smith, eds. The Oxford Handbook of Philosophy of Language. Oxford University Press. 2005.
Cipriani, Enrico. The Descriptivist vs. Anti-Descriptivist Semantics Debate Between Syntax and Semantics. Philosophy Study, 2015, 5(8), pp. 421-30 | Wikipedia/Descriptivist_theory_of_names |
A causal theory of reference or historical chain theory of reference is a theory of how terms acquire specific referents based on evidence. Such theories have been used to describe many referring terms, particularly logical terms, proper names, and natural kind terms. In the case of names, for example, a causal theory of reference typically involves the following claims:
a name's referent is fixed by an original act of naming (also called a "dubbing" or, by Saul Kripke, an "initial baptism"), whereupon the name becomes a rigid designator of that object.
later uses of the name succeed in referring to the referent by being linked to that original act via a causal chain.
Weaker versions of the position (perhaps not properly called "causal theories") claim merely that, in many cases, events in the causal history of a speaker's use of the term, including when the term was first acquired, must be considered to correctly assign references to the speaker's words.
Causal theories of names became popular during the 1970s, under the influence of work by Saul Kripke and Keith Donnellan. Kripke and Hilary Putnam also defended an analogous causal account of natural kind terms.
== Kripke's causal account of names ==
In lectures later published as Naming and Necessity, Kripke provided a rough outline of his causal theory of reference for names. Although he refused to explicitly endorse such a theory, he indicated that such an approach was far more promising than the then-popular descriptive theory of names introduced by Russell, according to which names are in fact disguised definite descriptions. Kripke argued that in order to use a name successfully to refer to something, you do not have to be acquainted with a uniquely identifying description of that thing. Rather, your use of the name need only be caused (in an appropriate way) by the naming of that thing.
Such a causal process might proceed as follows: the parents of a newborn baby name it, pointing to the child and saying "we'll call her 'Jane'." Henceforth everyone calls her 'Jane'. With that act, the parents give the girl her name. The assembled family and friends now know that 'Jane' is a name which refers to Jane. This is referred to as Jane's dubbing, naming, or initial baptism.
However, not everyone who knows Jane and uses the name 'Jane' to refer to her was present at this naming. So how is it that when they use the name 'Jane', they are referring to Jane? The answer provided by causal theories is that there is a causal chain that passes from the original observers of Jane's naming to everyone else who uses her name. For example, maybe Jill was not at the naming, but Jill learns about Jane, and learns that her name is 'Jane', from Jane's mother, who was there. She then uses the name 'Jane' with the intention of referring to the child Jane's mother referred to. Jill can now use the name, and her use of it can in turn transmit the ability to refer to Jane to other speakers.
Philosophers such as Gareth Evans have insisted that the theory's account of the dubbing process needs to be broadened to include what are called 'multiple groundings'. After her initial baptism, uses of 'Jane' in the presence of Jane may, under the right circumstances, be considered to further ground the name ('Jane') in its referent (Jane). That is, if I am in direct contact with Jane, the reference for my utterance of the name 'Jane' may be fixed not simply by a causal chain through people who had encountered her earlier (when she was first named); it may also be indexically fixed to Jane at the moment of my utterance. Thus our modern day use of a name such as 'Christopher Columbus' can be thought of as referring to Columbus through a causal chain that terminates not simply in one instance of his naming, but rather in a series of grounding uses of the name that occurred throughout his life. Under certain circumstances of confusion, this can lead to the alteration of a name's referent (for one example of how this might happen, see Twin Earth thought experiment).
== Motivation ==
Causal theories of reference were born partially in response to the widespread acceptance of Russellian descriptive theories. Russell found that certain logical contradictions could be avoided if names were considered disguised definite descriptions (a similar view is often attributed to Gottlob Frege, mostly on the strength of a footnoted comment in "On Sense and Reference", although many Frege scholars consider this attribution misguided). On such an account, the name 'Aristotle' might be seen as meaning 'the student of Plato and teacher of Alexander the Great'. Later description theorists expanded upon this by suggesting that a name expressed not one particular description, but many (perhaps constituting all of one's essential knowledge of the individual named), or a weighted average of these descriptions.
Kripke found this account to be deeply flawed, for a number of reasons. Notably:
We can successfully refer to individuals for whom we have no uniquely identifying description. (For example, a speaker can talk about Phillie Sophik even if one only knows him as 'some poet'.)
We can successfully refer to individuals for whom the only identifying descriptions we have fail to refer as we believe them to. (Many speakers have no identifying beliefs about Christopher Columbus other than 'the first European in North America' or 'the first person to believe that the earth was round'. Both of these beliefs are incorrect. Nevertheless, when such a person says 'Christopher Columbus', we acknowledge that they are referring to Christopher Columbus, not to whatever individual satisfies one of those descriptions.)
We use names to speak hypothetically about what could have happened to a person. A name functions as a rigid designator, while a definite description does not. (One could say 'If Aristotle had died young, he would never have taught Alexander the Great.' But if 'the teacher of Alexander the Great' were a component of the meaning of 'Aristotle' then this would be nonsense.)
A causal theory avoids these difficulties. A name refers rigidly to the bearer to which it is causally connected, regardless of any particular facts about the bearer, and in all possible worlds where the bearer exists.
The same motivations apply to causal theories in regard to other sorts of terms. Putnam, for instance, attempted to establish that 'water' refers rigidly to the stuff that we do in fact call 'water', to the exclusion of any possible identical water-like substance for which we have no causal connection. These considerations motivate semantic externalism. Because speakers interact with a natural kind such as water regularly, and because there is generally no naming ceremony through which their names are formalized, the multiple groundings described above are even more essential to a causal account of such terms. A speaker whose environment changes may thus observe that the referents of his terms shift, as described in the Twin Earth and Swampman thought experiments.
== Variations ==
Variations of the causal theory include:
The causal-historical theory of reference is the original version of the causal theory. It was put forward by Keith Donnellan in 1972 and Saul Kripke in 1980. This view introduces the idea of reference-passing links in a causal-historical chain.
The descriptive-causal theory of reference (also causal-descriptive theory of reference), a view put forward by David Lewis in 1984, introduces the idea that a minimal descriptive apparatus needs to be added to the causal relations between speaker and object.
== Criticism of the theory ==
Gareth Evans argued that the causal theory, or at least certain common and over-simple variants of it, have the consequence that, however remote or obscure the causal connection between someone's use of a proper name and the object it originally referred to, they still refer to that object when they use the name. (Imagine a name briefly overheard in a train or café.) The theory effectively ignores context and makes reference into a magic trick. Evans describes it as a "photograph" theory of reference.
The links between different users of the name are particularly obscure. Each user must somehow pass the name on to the next, and must somehow "mean" the right individual as they do so (suppose "Socrates" is the name of a pet aardvark). Kripke himself notes the difficulty, and John Searle makes much of it.
Mark Sainsbury argued for a causal theory similar to Kripke's, except that the baptised object is eliminated. A "baptism" may be a baptism of nothing, he argues: a name can be intelligibly introduced even if it names nothing. The causal chain we associate with the use of proper names may begin merely with a "journalistic" source.
The causal theory has a difficult time explaining the phenomenon of reference change. Gareth Evans cites the example of Marco Polo, who unknowingly applied the name "Madagascar" to the African island when the natives actually used the term to refer to a part of the mainland. Evans claims that Polo clearly intended to use the term as the natives do, but somehow changed the meaning of the term "Madagascar" to refer to the island as it is known today. Michael Devitt claims that repeated groundings in an object can account for reference change. However, such a response leaves open the problem of cognitive significance that originally intrigued Russell and Frege.
== See also ==
Brain in a vat
Mediated reference theory
== Notes ==
== Citations ==
== References ==
Evans, G. (1985). "The Causal Theory of Names". In Martinich, A. P., ed. The Philosophy of Language. Oxford University Press, 2012.
Evans, G. The Varieties of Reference, Oxford 1982.
Kripke, Saul. 1980. Naming and Necessity. Cambridge, Mass.: Harvard University Press.
McDowell, John. (1977) "On the Sense and Reference of a Proper Name."
Salmon, Nathan. (1981) Reference and Essence, Prometheus Books.
Machery, E.; Mallon, R.; Nichols, S.; Stich, S. P. (2004). "Semantics, Cross-cultural Style". Cognition. 92 (3): B1 – B12. CiteSeerX 10.1.1.174.5119. doi:10.1016/j.cognition.2003.10.003. PMID 15019555. S2CID 15074526.
Sainsbury, R.M. (2001). "Sense without Reference". In Newen, A.; Nortmann, U.; Stuhlmann-Laisz, R. (eds.). Building on Frege. Stanford. | Wikipedia/Descriptive-causal_theory_of_reference
In mathematics, a continuous function is a function such that a small variation of the argument induces a small variation of the value of the function. This implies there are no abrupt changes in value, known as discontinuities. More precisely, a function is continuous if arbitrarily small changes in its value can be assured by restricting to sufficiently small changes of its argument. A discontinuous function is a function that is not continuous. Until the 19th century, mathematicians largely relied on intuitive notions of continuity and considered only continuous functions. The epsilon–delta definition of a limit was introduced to formalize the definition of continuity.
Continuity is one of the core concepts of calculus and mathematical analysis, where arguments and values of functions are real and complex numbers. The concept has been generalized to functions between metric spaces and between topological spaces. The latter are the most general continuous functions, and their definition is the basis of topology.
A stronger form of continuity is uniform continuity. In order theory, especially in domain theory, a related concept of continuity is Scott continuity.
As an example, the function H(t) denoting the height of a growing flower at time t would be considered continuous. In contrast, the function M(t) denoting the amount of money in a bank account at time t would be considered discontinuous since it "jumps" at each point in time when money is deposited or withdrawn.
== History ==
A form of the epsilon–delta definition of continuity was first given by Bernard Bolzano in 1817. Augustin-Louis Cauchy defined continuity of y = f(x) as follows: an infinitely small increment α of the independent variable x always produces an infinitely small change f(x + α) − f(x) of the dependent variable y (see e.g. Cours d'Analyse, p. 34). Cauchy defined infinitely small quantities in terms of variable quantities, and his definition of continuity closely parallels the infinitesimal definition used today (see microcontinuity). The formal definition and the distinction between pointwise continuity and uniform continuity were first given by Bolzano in the 1830s, but the work was not published until the 1930s. Like Bolzano, Karl Weierstrass denied continuity of a function at a point c unless it was defined at and on both sides of c, but Édouard Goursat allowed the function to be defined only at and on one side of c, and Camille Jordan allowed it even if the function was defined only at c. All three of those nonequivalent definitions of pointwise continuity are still in use. Eduard Heine provided the first published definition of uniform continuity in 1872, but based these ideas on lectures given by Peter Gustav Lejeune Dirichlet in 1854.
== Real functions ==
=== Definition ===
A real function, that is, a function from real numbers to real numbers, can be represented by a graph in the Cartesian plane; such a function is continuous if, roughly speaking, the graph is a single unbroken curve whose domain is the entire real line. A more mathematically rigorous definition is given below.
Continuity of real functions is usually defined in terms of limits. A function f with variable x is continuous at the real number c, if the limit of f(x), as x tends to c, is equal to f(c).
There are several different definitions of the (global) continuity of a function, which depend on the nature of its domain.
A function is continuous on an open interval if the interval is contained in the function's domain and the function is continuous at every point of the interval. A function that is continuous on the interval (−∞, +∞) (the whole real line) is often called simply a continuous function; one also says that such a function is continuous everywhere. For example, all polynomial functions are continuous everywhere.
A function is continuous on a semi-open or a closed interval if the interval is contained in the domain of the function, the function is continuous at every interior point of the interval, and the value of the function at each endpoint that belongs to the interval is the limit of the values of the function when the variable tends to the endpoint from the interior of the interval. For example, the function f(x) = √x is continuous on its whole domain, which is the closed interval [0, +∞).
Many commonly encountered functions are partial functions that have a domain formed by all real numbers, except some isolated points. Examples include the reciprocal function x ↦ 1/x and the tangent function x ↦ tan x.
When they are continuous on their domain, one says, in some contexts, that they are continuous, although they are not continuous everywhere. In other contexts, mainly when one is interested in their behavior near the exceptional points, one says they are discontinuous.
A partial function is discontinuous at a point if the point belongs to the topological closure of its domain, and either the point does not belong to the domain of the function or the function is not continuous at the point. For example, the functions x ↦ 1/x and x ↦ sin(1/x) are discontinuous at 0, and remain discontinuous whichever value is chosen for defining them at 0. A point where a function is discontinuous is called a discontinuity.
Using mathematical notation, several ways exist to define continuous functions in the three senses mentioned above.
Let f : D → ℝ be a function whose domain D is contained in the set ℝ of real numbers.
Some (but not all) possibilities for D are:
D is the whole real line; that is, D = ℝ
D is a closed interval of the form D = [a, b] = {x ∈ ℝ ∣ a ≤ x ≤ b}, where a and b are real numbers
D is an open interval of the form D = (a, b) = {x ∈ ℝ ∣ a < x < b}, where a and b are real numbers
In the case of an open interval, a and b do not belong to D, and the values f(a) and f(b) may not be defined; even if they are, they do not matter for continuity on D.
==== Definition in terms of limits of functions ====
The function f is continuous at some point c of its domain if the limit of f(x), as x approaches c through the domain of f, exists and is equal to f(c). In mathematical notation, this is written as lim_{x→c} f(x) = f(c). In detail this means three conditions: first, f has to be defined at c (guaranteed by the requirement that c is in the domain of f). Second, the limit of that equation has to exist. Third, the value of this limit must equal f(c). (Here, we have assumed that the domain of f does not have any isolated points.)
==== Definition in terms of neighborhoods ====
A neighborhood of a point c is a set that contains, at least, all points within some fixed distance of c. Intuitively, a function is continuous at a point c if the range of f over the neighborhood of c shrinks to a single point f(c) as the width of the neighborhood around c shrinks to zero. More precisely, a function f is continuous at a point c of its domain if, for any neighborhood N₁(f(c)) there is a neighborhood N₂(c) in its domain such that f(x) ∈ N₁(f(c)) whenever x ∈ N₂(c).
As neighborhoods are defined in any topological space, this definition of a continuous function applies not only for real functions but also when the domain and the codomain are topological spaces and is thus the most general definition. It follows that a function is automatically continuous at every isolated point of its domain. For example, every real-valued function on the integers is continuous.
==== Definition in terms of limits of sequences ====
One can instead require that for any sequence (xₙ)_{n∈ℕ} of points in the domain which converges to c, the corresponding sequence (f(xₙ))_{n∈ℕ} converges to f(c). In mathematical notation, ∀(xₙ)_{n∈ℕ} ⊂ D: lim_{n→∞} xₙ = c ⇒ lim_{n→∞} f(xₙ) = f(c).
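As a small hedged illustration of the sequential criterion (not part of the definition), the function f(x) = x² and the sequence xₙ = c + 1/n below are arbitrary example choices:

```python
# Numerical illustration (not a proof) of the sequential criterion at c = 2
# for f(x) = x**2; the function and the sequence x_n = c + 1/n are example choices.

def f(x):
    return x * x

c = 2.0
for n in (1, 10, 100, 1000, 10000):
    x_n = c + 1.0 / n        # a sequence of domain points converging to c
    print(n, x_n, f(x_n))    # f(x_n) approaches f(c) = 4.0 as n grows
```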
==== Weierstrass and Jordan definitions (epsilon–delta) of continuous functions ====
Explicitly including the definition of the limit of a function, we obtain a self-contained definition: Given a function f : D → ℝ as above and an element x₀ of the domain D, f is said to be continuous at the point x₀ when the following holds: For any positive real number ε > 0, however small, there exists some positive real number δ > 0 such that for all x in the domain of f with x₀ − δ < x < x₀ + δ, the value of f(x) satisfies f(x₀) − ε < f(x) < f(x₀) + ε.
Alternatively written, continuity of f : D → ℝ at x₀ ∈ D means that for every ε > 0, there exists a δ > 0 such that for all x ∈ D: |x − x₀| < δ implies |f(x) − f(x₀)| < ε.
More intuitively, we can say that if we want to get all the f(x) values to stay in some small neighborhood around f(x₀), we need to choose a small enough neighborhood for the x values around x₀. If we can do that no matter how small the f(x₀) neighborhood is, then f is continuous at x₀.
In modern terms, this is generalized by the definition of continuity of a function with respect to a basis for the topology, here the metric topology.
Weierstrass had required that the interval x₀ − δ < x < x₀ + δ be entirely within the domain D, but Jordan removed that restriction.
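As a concrete, hedged illustration of the ε–δ definition: for the example function f(x) = 3x + 1 (not taken from the article), the choice δ = ε/3 works at every point, and this can be spot-checked numerically:

```python
import random

# For f(x) = 3x + 1 the choice delta = epsilon / 3 witnesses continuity at any x0,
# since |x - x0| < delta implies |f(x) - f(x0)| = 3|x - x0| < epsilon.
# The point x0 and the sampling below are example choices (a check, not a proof).

def f(x):
    return 3 * x + 1

x0 = 1.7
for epsilon in (1.0, 0.1, 0.001):
    delta = epsilon / 3
    for _ in range(1000):
        # sample points safely inside (x0 - delta, x0 + delta)
        x = x0 + random.uniform(-0.999 * delta, 0.999 * delta)
        assert abs(f(x) - f(x0)) < epsilon
print("the sampled points all satisfy the epsilon-delta condition")
```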
==== Definition in terms of control of the remainder ====
In proofs and numerical analysis, we often need to know how fast limits are converging, or in other words, control of the remainder. We can formalize this to a definition of continuity.
A function C : [0, ∞) → [0, ∞] is called a control function if
C is non-decreasing
inf_{δ>0} C(δ) = 0
A function f : D → ℝ is C-continuous at x₀ if there exists a neighbourhood N(x₀) such that |f(x) − f(x₀)| ≤ C(|x − x₀|) for all x ∈ D ∩ N(x₀).
A function is continuous in x₀ if it is C-continuous for some control function C.
This approach leads naturally to refining the notion of continuity by restricting the set of admissible control functions. For a given set of control functions 𝒞 a function is 𝒞-continuous if it is C-continuous for some C ∈ 𝒞.
For example, the Lipschitz, the Hölder continuous functions of exponent α and the uniformly continuous functions below are defined by the set of control functions
𝒞_Lipschitz = {C : C(δ) = K|δ|, K > 0}
𝒞_Hölder−α = {C : C(δ) = K|δ|^α, K > 0}
𝒞_uniform cont. = {C : C(0) = 0}
respectively.
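A hedged numerical sketch of the control-function inequality: the function sin, the point x₀ = 0.5 and the Lipschitz control function C(δ) = δ (K = 1) below are example choices, not part of the article.

```python
import math

# Checking the control-function inequality |f(x) - f(x0)| <= C(|x - x0|)
# for f = sin with the Lipschitz control function C(delta) = K*delta, K = 1.
# The point x0 and the sample grid are example choices (a sketch, not a proof).

def C(delta):          # non-decreasing, and inf over delta > 0 equals 0
    return 1.0 * delta

x0 = 0.5
samples = [x0 + k * 0.01 for k in range(-200, 201)]
assert all(abs(math.sin(x) - math.sin(x0)) <= C(abs(x - x0)) for x in samples)
print("sin is C-continuous at x0 for this Lipschitz control function")
```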
==== Definition using oscillation ====
Continuity can also be defined in terms of oscillation: a function f is continuous at a point x₀ if and only if its oscillation at that point is zero; in symbols, ω_f(x₀) = 0.
A benefit of this definition is that it quantifies discontinuity: the oscillation gives how much the function is discontinuous at a point.
This definition is helpful in descriptive set theory to study the set of discontinuities and continuous points – the continuous points are the intersection of the sets where the oscillation is less than ε (hence a G_δ set) – and gives a rapid proof of one direction of the Lebesgue integrability condition.
The oscillation is equivalent to the ε–δ definition by a simple re-arrangement and by using a limit (lim sup, lim inf) to define oscillation: if (at a given point) for a given ε₀ there is no δ that satisfies the ε–δ definition, then the oscillation is at least ε₀, and conversely if for every ε there is a desired δ, the oscillation is 0. The oscillation definition can be naturally generalized to maps from a topological space to a metric space.
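As a hedged numerical sketch, the oscillation at a point can be approximated over shrinking windows; the step function below is an example choice, and its oscillation at 0 comes out as 1, matching its jump.

```python
# Approximating the oscillation of the step function H at 0 over shrinking windows.
# omega_H(0) = lim_{d -> 0} ( sup H - inf H over [0 - d, 0 + d] ); here it equals 1.
# The sample grid is a crude example; this is a numerical sketch, not a proof.

def H(x):
    return 1.0 if x >= 0 else 0.0

def oscillation(f, x0, d, n=1001):
    xs = [x0 - d + 2 * d * k / (n - 1) for k in range(n)]   # grid over [x0 - d, x0 + d]
    values = [f(x) for x in xs]
    return max(values) - min(values)

for d in (1.0, 0.1, 0.001):
    print(d, oscillation(H, 0.0, d))     # stays 1 as d shrinks, so omega_H(0) = 1
```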
==== Definition using the hyperreals ====
Cauchy defined the continuity of a function in the following intuitive terms: an infinitesimal change in the independent variable corresponds to an infinitesimal change of the dependent variable (see Cours d'analyse, page 34). Non-standard analysis is a way of making this mathematically rigorous. The real line is augmented by adding infinite and infinitesimal numbers to form the hyperreal numbers. In nonstandard analysis, continuity can be defined as follows: a real-valued function f is continuous at x if its natural extension to the hyperreals has the property that for every infinitesimal dx, the difference f(x + dx) − f(x) is infinitesimal (see microcontinuity). In other words, an infinitesimal increment of the independent variable always produces an infinitesimal change of the dependent variable, giving a modern expression to Augustin-Louis Cauchy's definition of continuity.
=== Rules for continuity ===
Proving the continuity of a function by a direct application of the definition is generally not an easy task. Fortunately, in practice, most functions are built from simpler functions, and their continuity can be deduced immediately from the way they are defined, by applying the following rules:
Every constant function is continuous
The identity function f(x) = x is continuous
Addition and multiplication: If the functions f and g are continuous on their respective domains D_f and D_g, then their sum f + g and their product f · g are continuous on the intersection D_f ∩ D_g, where f + g and f · g are defined by (f + g)(x) = f(x) + g(x) and (f · g)(x) = f(x) · g(x).
Reciprocal: If the function f is continuous on the domain D_f, then its reciprocal 1/f, defined by (1/f)(x) = 1/f(x), is continuous on the domain D_f ∖ f⁻¹(0), that is, the domain D_f from which the points x such that f(x) = 0 are removed.
Function composition: If the functions f and g are continuous on their respective domains D_f and D_g, then the composition g ∘ f, defined by (g ∘ f)(x) = g(f(x)), is continuous on D_f ∩ f⁻¹(D_g), that is, the part of D_f that is mapped by f inside D_g.
The sine and cosine functions (sin x and cos x) are continuous everywhere.
The exponential function e^x is continuous everywhere.
The natural logarithm ln x is continuous on the domain formed by all positive real numbers {x ∣ x > 0}.
These rules imply that every polynomial function is continuous everywhere and that a rational function is continuous everywhere where it is defined, if the numerator and the denominator have no common zeros. More generally, the quotient of two continuous functions is continuous outside the zeros of the denominator.
An example of a function for which the above rules are not sufficient is the sinc function, which is defined by sinc(0) = 1 and sinc(x) = (sin x)/x for x ≠ 0. The above rules show immediately that the function is continuous for x ≠ 0, but, for proving the continuity at 0, one has to prove lim_{x→0} (sin x)/x = 1. As this is true, one gets that the sinc function is continuous on all real numbers.
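A hedged numerical check (not a proof) that (sin x)/x approaches 1 as x approaches 0, consistent with defining sinc(0) = 1:

```python
import math

# Numerical check (not a proof) that sin(x)/x -> 1 as x -> 0,
# which is the limit needed for continuity of sinc at 0.
for x in (0.1, 0.01, 0.001, 1e-6):
    print(x, math.sin(x) / x)     # values approach 1
```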
=== Examples of discontinuous functions ===
An example of a discontinuous function is the Heaviside step function H, defined by H(x) = 1 if x ≥ 0 and H(x) = 0 if x < 0.
Pick for instance ε = 1/2. Then there is no δ-neighborhood around x = 0, i.e. no open interval (−δ, δ) with δ > 0, that will force all the H(x) values to be within the ε-neighborhood of H(0), i.e. within (1/2, 3/2). Intuitively, we can think of this type of discontinuity as a sudden jump in function values.
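A hedged numerical sketch of this failure: for ε = 1/2, every candidate δ (the values below are arbitrary examples) admits a point x in (−δ, δ) with H(x) outside the ε-neighborhood of H(0).

```python
# For the step function H and epsilon = 1/2, no delta works at x = 0:
# any interval (-delta, delta) contains negative points where H(x) = 0,
# and 0 is not within epsilon of H(0) = 1. The candidate deltas are examples.

def H(x):
    return 1.0 if x >= 0 else 0.0

epsilon = 0.5
for delta in (1.0, 0.1, 1e-3, 1e-9):
    x = -delta / 2                              # a point with |x - 0| < delta
    violates = abs(H(x) - H(0.0)) >= epsilon    # |0 - 1| = 1 >= 1/2
    print(delta, x, violates)                   # always True
```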
Similarly, the signum or sign function sgn(x) = 1 if x > 0, sgn(x) = 0 if x = 0, and sgn(x) = −1 if x < 0, is discontinuous at x = 0 but continuous everywhere else. Yet another example: the function f(x) = sin(x⁻²) if x ≠ 0 and f(x) = 0 if x = 0 is continuous everywhere apart from x = 0.
Besides plausible continuities and discontinuities like the above, there are also functions with behavior that is often called pathological, for example, Thomae's function,
f(x) = 1 if x = 0, f(x) = 1/q if x = p/q (in lowest terms) is a rational number, and f(x) = 0 if x is irrational,
which is continuous at all irrational numbers and discontinuous at all rational numbers. In a similar vein, Dirichlet's function, the indicator function for the set of rational numbers,
D(x) = 0 if x is irrational (x ∈ ℝ ∖ ℚ) and D(x) = 1 if x is rational (x ∈ ℚ),
is nowhere continuous.
=== Properties ===
==== A useful lemma ====
Let f(x) be a function that is continuous at a point x₀, and let y₀ be a value such that f(x₀) ≠ y₀. Then f(x) ≠ y₀ throughout some neighbourhood of x₀.
Proof: By the definition of continuity, take ε = |y₀ − f(x₀)| / 2 > 0; then there exists δ > 0 such that |f(x) − f(x₀)| < |y₀ − f(x₀)| / 2 whenever |x − x₀| < δ.
Suppose there is a point in the neighbourhood |x − x₀| < δ for which f(x) = y₀; then we have the contradiction |f(x₀) − y₀| < |f(x₀) − y₀| / 2.
==== Intermediate value theorem ====
The intermediate value theorem is an existence theorem, based on the real number property of completeness, and states:
If the real-valued function f is continuous on the closed interval [a, b], and k is some number between f(a) and f(b), then there is some number c ∈ [a, b] such that f(c) = k.
For example, if a child grows from 1 m to 1.5 m between the ages of two and six years, then, at some time between two and six years of age, the child's height must have been 1.25 m.
As a consequence, if f is continuous on [a, b] and f(a) and f(b) differ in sign, then, at some point c ∈ [a, b], f(c) must equal zero.
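The sign-change consequence underlies the bisection method for locating a zero; a minimal hedged sketch follows (the example function x³ − x − 2 and the tolerance are arbitrary choices, not from the article):

```python
# Minimal bisection sketch based on the intermediate value theorem:
# if f is continuous on [a, b] and f(a), f(b) have opposite signs,
# repeatedly halving the interval traps a zero of f.
# The example function and tolerance are arbitrary choices.

def bisect(f, a, b, tol=1e-10):
    fa, fb = f(a), f(b)
    assert fa * fb < 0, "f(a) and f(b) must differ in sign"
    while b - a > tol:
        m = (a + b) / 2
        if fa * f(m) <= 0:
            b = m            # a zero lies in [a, m]
        else:
            a, fa = m, f(m)  # a zero lies in [m, b]
    return (a + b) / 2

print(bisect(lambda x: x**3 - x - 2, 1.0, 2.0))   # approximately 1.52138
```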
==== Extreme value theorem ====
The extreme value theorem states that if a function f is defined on a closed interval [a, b] (or any closed and bounded set) and is continuous there, then the function attains its maximum, i.e. there exists c ∈ [a, b] with f(c) ≥ f(x) for all x ∈ [a, b]. The same is true of the minimum of f. These statements are not, in general, true if the function is defined on an open interval (a, b) (or any set that is not both closed and bounded), as, for example, the continuous function f(x) = 1/x, defined on the open interval (0, 1), does not attain a maximum, being unbounded above.
==== Relation to differentiability and integrability ====
Every differentiable function f : (a, b) → ℝ is continuous, as can be shown. The converse does not hold: for example, the absolute value function f(x) = |x|, equal to x if x ≥ 0 and to −x if x < 0, is everywhere continuous. However, it is not differentiable at x = 0 (but is so everywhere else). Weierstrass's function is also everywhere continuous but nowhere differentiable.
The derivative f′(x) of a differentiable function f(x) need not be continuous. If f′(x) is continuous, f(x) is said to be continuously differentiable. The set of such functions is denoted C¹((a, b)). More generally, the set of functions f : Ω → ℝ (from an open interval (or open subset of ℝ) Ω to the reals) such that f is n times differentiable and such that the n-th derivative of f is continuous is denoted Cⁿ(Ω). See differentiability class. In the field of computer graphics, properties related (but not identical) to C⁰, C¹, C² are sometimes called G⁰ (continuity of position), G¹ (continuity of tangency), and G² (continuity of curvature); see Smoothness of curves and surfaces.
Every continuous function f : [a, b] → ℝ is integrable (for example in the sense of the Riemann integral). The converse does not hold, as the (integrable but discontinuous) sign function shows.
==== Pointwise and uniform limits ====
Given a sequence f₁, f₂, … : I → ℝ of functions such that the limit f(x) := lim_{n→∞} fₙ(x) exists for all x ∈ I, the resulting function f(x) is referred to as the pointwise limit of the sequence of functions (fₙ)_{n∈ℕ}. The pointwise limit function need not be continuous, even if all functions fₙ are continuous. However, f is continuous if all functions fₙ are continuous and the sequence converges uniformly, by the uniform convergence theorem. This theorem can be used to show that the exponential functions, logarithms, square root function, and trigonometric functions are continuous.
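A standard hedged illustration: fₙ(x) = xⁿ on [0, 1] is continuous for every n, but the pointwise limit jumps at x = 1, so the convergence cannot be uniform; a small numerical sketch (the sample points are example choices):

```python
# f_n(x) = x**n is continuous on [0, 1] for every n, but the pointwise limit
# is 0 for x < 1 and 1 at x = 1, hence discontinuous; convergence is not uniform.
# The sample points below are example choices.

def f(n, x):
    return x ** n

for x in (0.5, 0.9, 0.99, 1.0):
    values = [f(n, x) for n in (1, 10, 100, 1000)]
    print(x, values)    # tends to 0 for x < 1 (ever more slowly), but stays 1 at x = 1
```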
=== Directional continuity ===
Discontinuous functions may be discontinuous in a restricted way, giving rise to the concept of directional continuity (or right and left continuous functions) and semi-continuity. Roughly speaking, a function is right-continuous if no jump occurs when the limit point is approached from the right. Formally, f is said to be right-continuous at the point c if the following holds: For any number
ε > 0, however small, there exists some number δ > 0 such that for all x in the domain with c < x < c + δ, the value of f(x) will satisfy |f(x) − f(c)| < ε. This is the same condition as for continuous functions, except it is required to hold for x strictly larger than c only. Requiring it instead for all x with c − δ < x < c yields the notion of left-continuous functions. A function is continuous if and only if it is both right-continuous and left-continuous.
=== Semicontinuity ===
A function f is lower semi-continuous if, roughly, any jumps that might occur only go down, but not up. That is, for any ε > 0, there exists some number δ > 0 such that for all x in the domain with |x − c| < δ, the value of f(x) satisfies f(x) ≥ f(c) − ε. The reverse condition is upper semi-continuity.
== Continuous functions between metric spaces ==
The concept of continuous real-valued functions can be generalized to functions between metric spaces. A metric space is a set X equipped with a function (called metric) d_X, that can be thought of as a measurement of the distance of any two elements in X. Formally, the metric is a function d_X : X × X → ℝ that satisfies a number of requirements, notably the triangle inequality. Given two metric spaces (X, d_X) and (Y, d_Y) and a function f : X → Y, then f is continuous at the point c ∈ X (with respect to the given metrics) if for any positive real number ε > 0, there exists a positive real number δ > 0 such that all x ∈ X satisfying d_X(x, c) < δ will also satisfy d_Y(f(x), f(c)) < ε.
As in the case of real functions above, this is equivalent to the condition that for every sequence (xₙ) in X with limit lim xₙ = c, we have lim f(xₙ) = f(c).
The latter condition can be weakened as follows: f is continuous at the point c if and only if for every convergent sequence (xₙ) in X with limit c, the sequence (f(xₙ)) is a Cauchy sequence, and c is in the domain of f.
The set of points at which a function between metric spaces is continuous is a G_δ set – this follows from the ε–δ definition of continuity.
This notion of continuity is applied, for example, in functional analysis. A key statement in this area says that a linear operator T : V → W between normed vector spaces V and W (which are vector spaces equipped with a compatible norm, denoted ‖x‖) is continuous if and only if it is bounded, that is, there is a constant K such that ‖T(x)‖ ≤ K‖x‖ for all x ∈ V.
=== Uniform, Hölder and Lipschitz continuity ===
The concept of continuity for functions between metric spaces can be strengthened in various ways by limiting the way δ depends on ε and c in the definition above. Intuitively, a function f as above is uniformly continuous if the δ does not depend on the point c. More precisely, it is required that for every real number ε > 0 there exists δ > 0 such that for every c, b ∈ X with d_X(b, c) < δ, we have that d_Y(f(b), f(c)) < ε.
Thus, any uniformly continuous function is continuous. The converse does not generally hold but holds when the domain space X is compact. Uniformly continuous maps can be defined in the more general situation of uniform spaces.
A function is Hölder continuous with exponent α (a real number) if there is a constant K such that for all b, c ∈ X, the inequality d_Y(f(b), f(c)) ≤ K · (d_X(b, c))^α holds. Any Hölder continuous function is uniformly continuous. The particular case α = 1 is referred to as Lipschitz continuity. That is, a function is Lipschitz continuous if there is a constant K such that the inequality d_Y(f(b), f(c)) ≤ K · d_X(b, c) holds for any b, c ∈ X.
The Lipschitz condition occurs, for example, in the Picard–Lindelöf theorem concerning the solutions of ordinary differential equations.
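A hedged numerical probe of the Hölder inequality for a specific example (the square-root function on the reals with the usual distance, α = 1/2 and K = 1; these choices are illustrative, not from the article):

```python
import math

# Numerically probing the Hoelder inequality d_Y(f(b), f(c)) <= K * d_X(b, c)**alpha
# for f = sqrt on (0, 4] with alpha = 1/2 and K = 1 (usual distance on the reals).
# The sample grid is an example choice; this is a sketch, not a proof.

points = [0.05 * k for k in range(1, 81)]        # grid on (0, 4]
worst = max(
    abs(math.sqrt(b) - math.sqrt(c)) / (abs(b - c) ** 0.5)
    for b in points for c in points if b != c
)
print("largest observed ratio:", worst)          # stays below K = 1
```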
== Continuous functions between topological spaces ==
Another, more abstract, notion of continuity is the continuity of functions between topological spaces in which there generally is no formal notion of distance, as there is in the case of metric spaces. A topological space is a set X together with a topology on X, which is a set of subsets of X satisfying a few requirements with respect to their unions and intersections that generalize the properties of the open balls in metric spaces while still allowing one to talk about the neighborhoods of a given point. The elements of a topology are called open subsets of X (with respect to the topology).
A function f : X → Y between two topological spaces X and Y is continuous if for every open set V ⊆ Y, the inverse image f⁻¹(V) = {x ∈ X | f(x) ∈ V} is an open subset of X. That is, f is a function between the sets X and Y (not on the elements of the topology T_X), but the continuity of f depends on the topologies used on X and Y.
This is equivalent to the condition that the preimages of the closed sets (which are the complements of the open subsets) in Y are closed in X.
An extreme example: if a set X is given the discrete topology (in which every subset is open), all functions f : X → T to any topological space T are continuous. On the other hand, if X is equipped with the indiscrete topology (in which the only open subsets are the empty set and X) and the space T is at least T₀, then the only continuous functions are the constant functions. Conversely, any function whose codomain is indiscrete is continuous.
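For finite spaces, the open-set definition can be checked mechanically; a hedged toy sketch (the Sierpiński-like spaces and the map below are arbitrary examples, not taken from the article):

```python
# Toy check of topological continuity: f : X -> Y is continuous iff the preimage
# of every open set of Y is open in X. The spaces, topologies and the map below
# are small arbitrary examples (Sierpinski-like), not taken from the article.

X = {1, 2}
tau_X = [set(), {1}, {1, 2}]          # a topology on X
Y = {"a", "b"}
tau_Y = [set(), {"a"}, {"a", "b"}]    # a topology on Y

f = {1: "a", 2: "b"}                  # the map under test, as a dict

def preimage(f, V):
    return {x for x, y in f.items() if y in V}

continuous = all(preimage(f, V) in tau_X for V in tau_Y)
print("f is continuous:", continuous)  # True: preimages are set(), {1}, {1, 2}
```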
=== Continuity at a point ===
The translation in the language of neighborhoods of the (ε, δ)-definition of continuity leads to the following definition of the continuity at a point: a function f : X → Y is continuous at a point x ∈ X if and only if for every neighborhood V of f(x) in Y, there is a neighborhood U of x such that f(U) ⊆ V.
This definition is equivalent to the same statement with neighborhoods restricted to open neighborhoods and can be restated in several ways by using preimages rather than images.
Also, as every set that contains a neighborhood is also a neighborhood, and
f⁻¹(V) is the largest subset U of X such that f(U) ⊆ V, this definition may be simplified into: the function f is continuous at the point x if and only if f⁻¹(V) is a neighborhood of x for every neighborhood V of f(x) in Y.
As an open set is a set that is a neighborhood of all its points, a function f : X → Y is continuous at every point of X if and only if it is a continuous function.
If X and Y are metric spaces, it is equivalent to consider the neighborhood system of open balls centered at x and f(x) instead of all neighborhoods. This gives back the above
ε–δ
definition of continuity in the context of metric spaces. In general topological spaces, there is no notion of nearness or distance. If, however, the target space is a Hausdorff space, it is still true that f is continuous at a if and only if the limit of f as x approaches a is f(a). At an isolated point, every function is continuous.
Given x ∈ X, a map f : X → Y is continuous at x if and only if whenever ℬ is a filter on X that converges to x in X, which is expressed by writing ℬ → x, then necessarily f(ℬ) → f(x) in Y.
If 𝒩(x) denotes the neighborhood filter at x, then f : X → Y is continuous at x if and only if f(𝒩(x)) → f(x) in Y. Moreover, this happens if and only if the prefilter f(𝒩(x)) is a filter base for the neighborhood filter of f(x) in Y.
=== Alternative definitions ===
Several equivalent definitions for a topological structure exist; thus, several equivalent ways exist to define a continuous function.
==== Sequences and nets ====
In several contexts, the topology of a space is conveniently specified in terms of limit points. This is often accomplished by specifying when a point is the limit of a sequence. Still, for some spaces that are too large in some sense, one specifies also when a point is the limit of more general sets of points indexed by a directed set, known as nets. A function is (Heine-)continuous only if it takes limits of sequences to limits of sequences. In the former case, preservation of limits is also sufficient; in the latter, a function may preserve all limits of sequences yet still fail to be continuous, and preservation of nets is a necessary and sufficient condition.
In detail, a function f : X → Y is sequentially continuous if whenever a sequence (xₙ) in X converges to a limit x, the sequence (f(xₙ)) converges to f(x).
Thus, sequentially continuous functions "preserve sequential limits." Every continuous function is sequentially continuous. If X is a first-countable space and countable choice holds, then the converse also holds: any function preserving sequential limits is continuous. In particular, if X is a metric space, sequential continuity and continuity are equivalent. For non-first-countable spaces, sequential continuity might be strictly weaker than continuity. (The spaces for which the two properties are equivalent are called sequential spaces.) This motivates the consideration of nets instead of sequences in general topological spaces. Continuous functions preserve the limits of nets, and this property characterizes continuous functions.
For instance, consider the case of real-valued functions of one real variable: a function f defined on a subset A of ℝ is continuous at a point a of A if and only if, for every sequence (xₙ) in A that converges to a, the sequence (f(xₙ)) converges to f(a).
==== Closure operator and interior operator definitions ====
In terms of the interior and closure operators, continuity admits the equivalent characterizations described below.
If we declare that a point x is close to a subset A ⊆ X if x ∈ cl_X A, then this terminology allows for a plain English description of continuity: f is continuous if and only if for every subset A ⊆ X, f maps points that are close to A to points that are close to f(A).
Similarly, f is continuous at a fixed given point x ∈ X if and only if whenever x is close to a subset A ⊆ X, then f(x) is close to f(A).
Instead of specifying topological spaces by their open subsets, any topology on X can alternatively be determined by a closure operator or by an interior operator.
Specifically, the map that sends a subset A of a topological space X to its topological closure cl_X A satisfies the Kuratowski closure axioms. Conversely, for any closure operator A ↦ cl A there exists a unique topology τ on X (specifically, τ := {X ∖ cl A : A ⊆ X}) such that for every subset A ⊆ X, cl A is equal to the topological closure cl_(X,τ) A of A in (X, τ).
If the sets X and Y are each associated with closure operators (both denoted by cl), then a map f : X → Y is continuous if and only if f(cl A) ⊆ cl(f(A)) for every subset A ⊆ X.
Similarly, the map that sends a subset A of X to its topological interior int_X A defines an interior operator. Conversely, any interior operator A ↦ int A induces a unique topology τ on X (specifically, τ := {int A : A ⊆ X}) such that for every A ⊆ X, int A is equal to the topological interior int_(X,τ) A of A in (X, τ).
If the sets X and Y are each associated with interior operators (both denoted by int), then a map f : X → Y is continuous if and only if f⁻¹(int B) ⊆ int(f⁻¹(B)) for every subset B ⊆ Y.
==== Filters and prefilters ====
Continuity can also be characterized in terms of filters. A function f : X → Y is continuous if and only if whenever a filter ℬ on X converges in X to a point x ∈ X, then the prefilter f(ℬ) converges in Y to f(x).
This characterization remains true if the word "filter" is replaced by "prefilter."
=== Properties ===
If f : X → Y and g : Y → Z are continuous, then so is the composition g ∘ f : X → Z.
If f : X → Y is continuous and
X is compact, then f(X) is compact.
X is connected, then f(X) is connected.
X is path-connected, then f(X) is path-connected.
X is Lindelöf, then f(X) is Lindelöf.
X is separable, then f(X) is separable.
The possible topologies on a fixed set X are partially ordered: a topology τ₁ is said to be coarser than another topology τ₂ (notation: τ₁ ⊆ τ₂) if every open subset with respect to τ₁ is also open with respect to τ₂.
Then, the identity map id_X : (X, τ₂) → (X, τ₁) is continuous if and only if τ₁ ⊆ τ₂ (see also comparison of topologies). More generally, a continuous function (X, τ_X) → (Y, τ_Y) stays continuous if the topology τ_Y is replaced by a coarser topology and/or τ_X is replaced by a finer topology.
=== Homeomorphisms ===
Symmetric to the concept of a continuous map is an open map, for which images of open sets are open. If an open map f has an inverse function, that inverse is continuous, and if a continuous map g has an inverse, that inverse is open. Given a bijective function f between two topological spaces, the inverse function
f⁻¹
need not be continuous. A bijective continuous function with a continuous inverse function is called a homeomorphism.
If a continuous bijection has as its domain a compact space and its codomain is Hausdorff, then it is a homeomorphism.
=== Defining topologies via continuous functions ===
Given a function f : X → S, where X is a topological space and S is a set (without a specified topology), the final topology on S is defined by letting the open sets of S be those subsets A of S for which f⁻¹(A)
is open in X. If S has an existing topology, f is continuous with respect to this topology if and only if the existing topology is coarser than the final topology on S. Thus, the final topology is the finest topology on S that makes f continuous. If f is surjective, this topology is canonically identified with the quotient topology under the equivalence relation defined by f.
Dually, for a function f from a set S to a topological space X, the initial topology on S is defined by designating as an open set every subset A of S such that
A = f⁻¹(U)
for some open subset U of X. If S has an existing topology, f is continuous with respect to this topology if and only if the existing topology is finer than the initial topology on S. Thus, the initial topology is the coarsest topology on S that makes f continuous. If f is injective, this topology is canonically identified with the subspace topology of S, viewed as a subset of X.
A topology on a set S is uniquely determined by the class of all continuous functions S → X into all topological spaces X. Dually, a similar idea can be applied to maps X → S.
== Related notions ==
If f : S → Y is a continuous function from some subset S of a topological space X, then a continuous extension of f to X is any continuous function F : X → Y such that F(s) = f(s) for every s ∈ S, a condition that is often written as f = F|_S. In words, it is any continuous function F : X → Y that restricts to f on S.
This notion is used, for example, in the Tietze extension theorem and the Hahn–Banach theorem. If f : S → Y is not continuous, then it could not possibly have a continuous extension. If Y is a Hausdorff space and S is a dense subset of X, then a continuous extension of f : S → Y to X, if one exists, will be unique. The Blumberg theorem states that if f : ℝ → ℝ is an arbitrary function, then there exists a dense subset D of ℝ such that the restriction f|_D : D → ℝ is continuous; in other words, every function ℝ → ℝ can be restricted to some dense subset on which it is continuous.
Various other mathematical domains use the concept of continuity in different but related meanings. For example, in order theory, an order-preserving function f : X → Y between particular types of partially ordered sets X and Y is continuous if for each directed subset A of X, we have sup f(A) = f(sup A). Here sup is the supremum with respect to the orderings in X and Y, respectively. This notion of continuity is the same as topological continuity when the partially ordered sets are given the Scott topology.
In category theory, a functor F : 𝒞 → 𝒟 between two categories is called continuous if it commutes with small limits. That is to say, lim←_{i∈I} F(Cᵢ) ≅ F(lim←_{i∈I} Cᵢ) for any small (that is, indexed by a set I, as opposed to a class) diagram of objects in 𝒞.
A continuity space is a generalization of metric spaces and posets, which uses the concept of quantales, and that can be used to unify the notions of metric spaces and domains.
In measure theory, a function f : E → ℝᵏ defined on a Lebesgue measurable set E ⊆ ℝⁿ is called approximately continuous at a point x₀ ∈ E if the approximate limit of f at x₀ exists and equals f(x₀). This generalizes the notion of continuity by replacing the ordinary limit with the approximate limit. A fundamental result known as the Stepanov–Denjoy theorem states that a function is measurable if and only if it is approximately continuous almost everywhere.
== See also ==
Direction-preserving function - an analog of a continuous function in discrete spaces.
== References ==
== Bibliography ==
Dugundji, James (1966). Topology. Boston: Allyn and Bacon. ISBN 978-0-697-06889-7. OCLC 395340485.
"Continuous function", Encyclopedia of Mathematics, EMS Press, 2001 [1994] | Wikipedia/Continuous_functions |
Verificationism, also known as the verification principle or the verifiability criterion of meaning, is a doctrine in philosophy which asserts that a statement is meaningful only if it is either empirically verifiable (can be confirmed through the senses) or a tautology (true by virtue of its own meaning or its own logical form). Verificationism rejects statements of metaphysics, theology, ethics and aesthetics as meaningless in conveying truth value or factual content, though they may be meaningful in influencing emotions or behavior.
Verificationism was a central thesis of logical positivism, a movement in analytic philosophy that emerged in the 1920s by philosophers who sought to unify philosophy and science under a common naturalistic theory of knowledge. The verifiability criterion underwent various revisions throughout the 1920s to 1950s. However, by the 1960s, it was deemed to be irreparably untenable. Its abandonment would eventually precipitate the collapse of the broader logical positivist movement.
== Origins ==
The roots of verificationism may be traced to at least the 19th century, in philosophical principles that aim to ground scientific theory in verifiable experience, such as C.S. Peirce's pragmatism and the work of conventionalist Pierre Duhem, who fostered instrumentalism. Verificationism, as principle, would be conceived in the 1920s by the logical positivists of the Vienna Circle, who sought an epistemology whereby philosophical discourse would be, in their perception, as authoritative and meaningful as empirical science. The movement established grounding in the empiricism of David Hume, Auguste Comte and Ernst Mach, and the positivism of the latter two, borrowing perspectives from Immanuel Kant and defining their exemplar of science in Einstein's general theory of relativity.
Ludwig Wittgenstein's Tractatus, published in 1921, established the theoretical foundations for the verifiability criterion of meaning. Building upon Gottlob Frege's work, the analytic–synthetic distinction was also reformulated, reducing logic and mathematics to semantical conventions. This would render logical truths (being unverifiable by the senses) tenable under verificationism, as tautologies.
== Revisions ==
Logical positivists within the Vienna Circle recognized quickly that the verifiability criterion was too stringent. Specifically, universal generalizations were noted to be empirically unverifiable, rendering vital domains of science and reason, including scientific hypothesis, meaningless under verificationism, absent revisions to its criterion of meaning.
Rudolf Carnap, Otto Neurath, Hans Hahn and Philipp Frank led a faction seeking to make the verifiability criterion more inclusive, beginning a movement they referred to as the "liberalization of empiricism". Moritz Schlick and Friedrich Waismann led a "conservative wing" that maintained a strict verificationism. Whereas Schlick sought to redefine universal generalizations as tautological rules, thereby to reconcile them with the existing criterion, Hahn argued that the criterion itself should be weakened to accommodate non-conclusive verification. Neurath, within the liberal wing, proposed the adoption of coherentism, though challenged by Schlick's foundationalism. However, his physicalism would eventually be adopted over Mach's phenomenalism by most members of the Vienna Circle.
With the publication of the Logical Syntax of Language in 1934, Carnap defined 'analytic' in a new way to account for Gödel's incompleteness theorem; Gödel himself ultimately "thought that Carnap's approach to mathematics could be refuted." This method allowed Carnap to distinguish between a derivability relation between premises that can be obtained in a finite number of steps and a semantic consequence relation that has on all valuations the same truth value for the premise as for the consequent. It follows that all sentences of pure mathematics individually, or their negations, are "a consequence of the null set of premises. This leaves Gödel's results completely intact as they concerned what is provable, that is, derivable from the null set of premises or from any one consistent axiomatization of mathematical truths."
In 1936, Carnap sought a switch from verification to confirmation. Carnap's confirmability criterion (confirmationism) would not require conclusive verification (thus accommodating universal generalizations) but would allow partial testability to establish degrees of confirmation on a probabilistic basis. Carnap never succeeded in finalising his thesis despite employing abundant logical and mathematical tools for this purpose. In all of Carnap's formulations, a universal law's degree of confirmation was zero.
In Language, Truth and Logic, published that year, A. J. Ayer distinguished between strong and weak verification. This system espoused conclusive verification, yet allowed for probabilistic inclusion where verifiability is inconclusive. He also distinguished theoretical from practical verifiability, proposing that statements that are verifiable in principle should be meaningful, even if unverifiable in practice.
== Criticisms ==
Philosopher Karl Popper, a graduate of the University of Vienna, though not a member within the ranks of the Vienna Circle, was among the foremost critics of verificationism. He identified three fundamental deficiencies in verifiability as a criterion of meaning:
Verificationism rejects universal generalizations, such as "all swans are white," as meaningless. Popper argues that while universal statements cannot be verified, they can be proven false, a foundation on which he was to propose his criterion of falsifiability.
Verificationism allows existential statements, such as “unicorns exist”, to be classified as scientifically meaningful, despite the absence of any definitive method to show that they are false (one could possibly find a unicorn somewhere not yet examined).
Verificationism is meaningless by virtue of its own criterion because it cannot be empirically verified. Thus the concept is self-defeating.
Popper regarded scientific hypotheses to never be completely verifiable, as well as not confirmable under Carnap's thesis. He also considered metaphysical, ethical and aesthetic statements often rich in meaning and important in the origination of scientific theories.
Other philosophers also voiced their own criticisms of verificationism:
The 1951 article "Two Dogmas of Empiricism", by Willard Van Orman Quine, argued that there is no suitable explanation of the concept of analyticity, in that the candidate explanations ultimately reduce to circular reasoning. This served to uproot the analytic/synthetic division pivotal to verificationism.
Carl Hempel (1950, 1951) demonstrated that the verifiability criterion was not justifiable in that it was too strong to accommodate key proceedings within science, such as general laws and limits in infinite sequences.
In 1958, Norwood Hanson explained that even direct observations are never truly neutral, in that they are laden with theory, i.e., influenced by a system of presuppositions that acts as an interpretative framework for those observations. This served to destabilize the foundations of empiricism by challenging the infallibility and objectivity of empirical observation.
Thomas Kuhn's landmark book of 1962, The Structure of Scientific Revolutions—which discussed paradigm shifts in fundamental physics—critically undermined confidence in scientific foundationalism, a theory commonly, if erroneously, attributed to verificationism.
== Falsifiability ==
In The Logic of Scientific Discovery (1959), Popper proposed falsifiability, or falsificationism. Though formulated in the context of what he perceived were intractable problems in both verifiability and confirmability, Popper intended falsifiability, not as a criterion of meaning like verificationism (as commonly misunderstood), but as a criterion to demarcate scientific statements from non-scientific statements.
Notably, the falsifiability criterion would allow for scientific hypotheses (expressed as universal generalizations) to be held as provisionally true until proven false by observation, whereas under verificationism, they would be disqualified immediately as meaningless.
In formulating his criterion, Popper was informed by the contrasting methodologies of Albert Einstein and Sigmund Freud. Appealing to the general theory of relativity and its predicted effects on gravitational lensing, it was evident to Popper that Einstein's theories carried significantly greater predictive risk than Freud's of being falsified by observation. Though Freud found ample confirmation of his theories in observations, Popper would note that this method of justification was vulnerable to confirmation bias, leading in some cases to contradictory outcomes. He would therefore conclude that predictive risk, or falsifiability, should serve as the criterion to demarcate the boundaries of science.
Though falsificationism has been criticized extensively by philosophers for methodological shortcomings in its intended demarcation of science, it would receive acclamatory adoption among scientists. Logical positivists too adopted the criterion, even as their movement ran its course, catapulting Popper, initially a contentious misfit, to carry the richest philosophy out of interwar Vienna.
== Legacy ==
In 1967, John Passmore, a leading historian of 20th-century philosophy, wrote, "Logical positivism is dead, or as dead as a philosophical movement ever becomes". Logical positivism's fall heralded postpositivism, where Popper's view of human knowledge as hypothetical, continually growing and open to change ascended and verificationism, in academic circles, became mostly maligned.
In a 1976 TV interview, A. J. Ayer, who had introduced logical positivism to the English-speaking world in the 1930s, was asked what he saw as its main defects, and answered that "nearly all of it was false". However, he soon said that he still held "the same general approach", referring to empiricism and reductionism, whereby mental phenomena resolve to the material or physical and philosophical questions largely resolve to ones of language and meaning. In 1977, Ayer had noted:
"The verification principle is seldom mentioned and when it is mentioned it is usually scorned; it continues, however, to be put to work. The attitude of many philosophers reminds me of the relationship between Pip and Magwitch in Dickens's Great Expectations. They have lived on the money, but are ashamed to acknowledge its source."
In the late 20th and early 21st centuries, the general concept of verification criteria—in forms that differed from those of the logical positivists—was defended by Bas van Fraassen, Michael Dummett, Crispin Wright, Christopher Peacocke, David Wiggins, Richard Rorty, and others.
== See also ==
Epistemic theories of truth
Newton's flaming laser sword
Semantic anti-realism (epistemology)
Triangulation (social science)
Validation
== References == | Wikipedia/Verifiability_theory_of_meaning |
In computer programming, the strategy pattern (also known as the policy pattern) is a behavioral software design pattern that enables selecting an algorithm at runtime. Instead of implementing a single algorithm directly, code receives runtime instructions as to which in a family of algorithms to use.
Strategy lets the algorithm vary independently from clients that use it. Strategy is one of the patterns included in the influential book Design Patterns by Gamma et al. that popularized the concept of using design patterns to describe how to design flexible and reusable object-oriented software. Deferring the decision about which algorithm to use until runtime allows the calling code to be more flexible and reusable.
For instance, a class that performs validation on incoming data may use the strategy pattern to select a validation algorithm depending on the type of data, the source of the data, user choice, or other discriminating factors. These factors are not known until runtime and may require radically different validation to be performed. The validation algorithms (strategies), encapsulated separately from the validating object, may be used by other validating objects in different areas of the system (or even different systems) without code duplication.
Typically, the strategy pattern stores a reference to code in a data structure and retrieves it. This can be achieved by mechanisms such as the native function pointer, the first-class function, classes or class instances in object-oriented programming languages, or accessing the language implementation's internal storage of code via reflection.
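As an illustration of storing a reference to code and retrieving it, the following Python sketch uses first-class functions as interchangeable strategies. It is a minimal sketch only; the names (validate_number, validate_nonempty_text, Validator) are invented for this example and do not come from any particular library.

# Minimal sketch: strategies held as first-class functions.
def validate_number(data):
    # Hypothetical strategy: accept only values that parse as numbers.
    try:
        float(data)
        return True
    except (TypeError, ValueError):
        return False

def validate_nonempty_text(data):
    # Hypothetical strategy: accept non-empty strings only.
    return isinstance(data, str) and len(data.strip()) > 0

class Validator:
    """Context object: delegates validation to whichever strategy it currently holds."""
    def __init__(self, strategy):
        self.strategy = strategy  # a reference to code, stored as data

    def validate(self, data):
        return self.strategy(data)

# The strategy can be selected, and replaced, at runtime.
v = Validator(validate_number)
print(v.validate("3.14"))              # True
v.strategy = validate_nonempty_text    # swap the algorithm without changing Validator
print(v.validate("   "))               # False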
== Structure ==
=== UML class and sequence diagram ===
In the above UML class diagram, the Context class does not implement an algorithm directly.
Instead, Context refers to the Strategy interface for performing an algorithm (strategy.algorithm()), which makes Context independent of how an algorithm is implemented.
The Strategy1 and Strategy2 classes implement the Strategy interface, that is, implement (encapsulate) an algorithm.
The UML sequence diagram
shows the runtime interactions: The Context object delegates an algorithm to different Strategy objects. First, Context calls algorithm() on a Strategy1 object,
which performs the algorithm and returns the result to Context.
Thereafter, Context changes its strategy and calls algorithm() on a Strategy2 object,
which performs the algorithm and returns the result to Context.
=== Class diagram ===
== Strategy and open/closed principle ==
According to the strategy pattern, the behaviors of a class should not be inherited. Instead, they should be encapsulated using interfaces. This is compatible with the open/closed principle (OCP), which proposes that classes should be open for extension but closed for modification.
As an example, consider a car class. Two possible functionalities for car are brake and accelerate. Since accelerate and brake behaviors change frequently between models, a common approach is to implement these behaviors in subclasses. This approach has significant drawbacks; accelerate and brake behaviors must be declared in each new car model. The work of managing these behaviors increases greatly as the number of models increases, and requires code to be duplicated across models. Additionally, it is not easy to determine the exact nature of the behavior for each model without investigating the code in each.
The strategy pattern uses composition instead of inheritance. In the strategy pattern, behaviors are defined as separate interfaces and specific classes that implement these interfaces. This allows better decoupling between the behavior and the class that uses the behavior. The behavior can be changed without breaking the classes that use it, and the classes can switch between behaviors by changing the specific implementation used without requiring any significant code changes. Behaviors can also be changed at runtime as well as at design-time. For instance, a car object's brake behavior can be changed from BrakeWithABS() to Brake() by changing the brakeBehavior member to:
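In a Java-style implementation this would typically be an assignment along the lines of brakeBehavior = new Brake();. A minimal Python sketch of the same idea, with class names mirroring those used in the text, might look like this:

class Brake:
    def apply(self):
        print("Applying standard brake")

class BrakeWithABS:
    def apply(self):
        print("Applying brake with ABS pulsing")

class Car:
    def __init__(self, brake_behavior):
        self.brake_behavior = brake_behavior  # the strategy, held by composition

    def brake(self):
        self.brake_behavior.apply()  # delegate to whichever strategy is currently set

car = Car(BrakeWithABS())
car.brake()                    # brake with ABS
car.brake_behavior = Brake()   # change the behavior at runtime, as described above
car.brake()                    # standard brake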
== See also ==
Dependency injection
Higher-order function
List of object-oriented programming terms
Mixin
Policy-based design
Type class
Entity–component–system
Composition over inheritance
== References ==
== External links ==
Strategy Pattern in UML (in Spanish)
Geary, David (April 26, 2002). "Strategy for success". Java Design Patterns. JavaWorld. Retrieved 2020-07-20.
Strategy Pattern for C article
Refactoring: Replace Type Code with State/Strategy
The Strategy Design Pattern at the Wayback Machine (archived 2017-04-15) Implementation of the Strategy pattern in JavaScript | Wikipedia/Strategy_design_pattern |
In hydrology, behavioral modeling is a modeling approach that focuses on the modeling of the behavior of hydrological systems.
The behavioral modeling approach makes the main assumption that every system, given its environment, has a most probable behavior. This most probable behavior can either be determined directly from the observable system characteristics and expert knowledge or, as is most frequently the case, has to be inferred from the available information and a likelihood function that encodes the probability of some assumed behaviors.
This modeling approach has been proposed by Sivapalan et al. (2006) in watershed hydrology.
== See also ==
Ecohydrology
Geomorphology
Biogeomorphology
Fluvial landforms of streams
== References ==
Sivapalan, M., et al. (2006), Behavioural modelling - A new approach for hydrologic prediction, paper presented at the workshop Preferential flow and transport processes in soil, November 4–9, 2006, Ascona, Switzerland. | Wikipedia/Behavioral_modeling_in_hydrology |
In artificial intelligence, hierarchical task network (HTN) planning is an approach to automated planning in which the dependency among actions can be given in the form of hierarchically structured networks.
Planning problems are specified in the hierarchical task network approach by
providing a set of tasks, which can be:
primitive (initial state) tasks, which roughly correspond to the actions of STRIPS;
compound tasks (intermediate state), which can be seen as composed of a set of simpler tasks;
goal tasks (goal state), which roughly corresponds to the goals of STRIPS, but are more general.
A solution to an HTN problem is then an executable sequence of primitive tasks that can be obtained from the initial task network by decomposing compound tasks into their set of simpler tasks, and by inserting ordering constraints.
A primitive task is an action that can be executed directly, provided that the state in which it is executed satisfies its precondition. A compound task is a complex task composed of a partially ordered set of further tasks, which can either be primitive or abstract. A goal task is a task of satisfying a condition. The difference between primitive and other tasks is that primitive actions can be executed directly. Compound and goal tasks both require a sequence of primitive actions to be performed; however, goal tasks are specified in terms of conditions that have to be made true, while compound tasks can only be specified in terms of other tasks via the task network outlined below.
Constraints among tasks are expressed in the form of networks, called (hierarchical) task networks. A task network is a set of tasks and constraints among them. Such a network can be used as the precondition for another compound or goal task to be feasible. This way, one can express that a given task is feasible only if a set of other actions (those mentioned in the network) are done, and they are done in such a way that the constraints among them (specified by the network) are satisfied. One particular formalism for representing hierarchical task networks that has been fairly widely used is TAEMS.
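To make the decomposition of compound tasks into primitive ones concrete, here is a small, illustrative Python sketch of HTN-style decomposition. The domain (a hypothetical "deliver" task) and every name in it are invented for this example; the sketch only handles a fixed left-to-right ordering and ignores the preconditions and other constraints that a real HTN planner would check.

# Toy HTN-style decomposition: compound tasks expand into primitive tasks.
PRIMITIVE = {"pick_up", "move", "drop_off"}   # primitive tasks: directly executable

# Methods map a compound task to one or more ordered lists of subtasks.
METHODS = {
    "deliver": [["pick_up", "transport", "drop_off"]],
    "transport": [["move"]],
}

def decompose(task):
    """Recursively expand a task into a totally ordered list of primitive tasks."""
    if task in PRIMITIVE:
        return [task]
    for subtasks in METHODS.get(task, []):     # try each method in turn
        plan = []
        for sub in subtasks:
            expansion = decompose(sub)
            if expansion is None:
                break                          # this method fails; try the next one
            plan.extend(expansion)
        else:
            return plan
    return None  # no method could decompose the task

print(decompose("deliver"))  # ['pick_up', 'move', 'drop_off']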
Some of the best-known domain-independent HTN-planning systems are:
NOAH, Nets of Action Hierarchies.
Nonlin, one of the first HTN planning systems.
SIPE-2
O-Plan, Open Planning Architecture
UMCP, the first provably sound and complete HTN planning system.
I-X/I-Plan
SHOP2, an HTN planner developed at the University of Maryland, College Park.
PANDA, a system designed for hybrid planning, an extension of HTN planning developed at Ulm University, Germany.
HTNPlan-P, preference-based HTN planning.
HTN planning is strictly more expressive than STRIPS, to the point of being undecidable in the general case. However, many syntactic restrictions of HTN planning are decidable, with known complexities ranging from NP-complete to 2-EXPSPACE-complete, and some HTN problems can be efficiently compiled into PDDL, a STRIPS-like language.
== See also ==
STRIPS
Hierarchical control system - a feedback control system well suited for HTN planning
== References == | Wikipedia/Hierarchical_task_network |
In artificial intelligence, model-based reasoning refers to an inference method used in expert systems based on a model of the physical world. With this approach, the main focus of application development is developing the model. Then at run time, an "engine" combines this model knowledge with observed data to derive conclusions such as a diagnosis or a prediction.
== Reasoning with declarative models ==
Robots, and dynamical systems in general, are controlled by software. The software is implemented as an ordinary computer program consisting of if-then statements, for-loops and subroutines. The task for the programmer is to find an algorithm that is able to control the robot so that it can carry out a task. In the history of robotics and optimal control, many paradigms have been developed. One of them is expert systems, which focus on restricted domains. Expert systems are the precursor to model-based systems.
The main reason why model-based reasoning has been researched since the 1990s is to create separate layers for the modeling and the control of a system. This makes it possible to solve more complex tasks and to reuse existing programs for different problems. The model layer is used to monitor a system and to evaluate whether its actions are correct, while the control layer determines the actions and brings the system into a goal state.
Typical techniques for implementing a model are declarative programming languages like Prolog and Golog. From a mathematical point of view, a declarative model has much in common with the situation calculus as a logical formalization for describing a system. From a more practical perspective, a declarative model means that the system is simulated with a game engine. A game engine takes a feature as an input value and determines the output signal. Sometimes, a game engine is described as a prediction engine for simulating the world.
In 1990, criticism of model-based reasoning was formulated. Pioneers of Nouvelle AI argued that symbolic models are separated from the underlying physical systems and that they fail to control robots. According to representatives of behavior-based robotics, a reactive architecture can overcome the issue. Such a system doesn't need a symbolic model; instead, the actions are connected directly to sensor signals, which are grounded in reality.
== Knowledge representation ==
In a model-based reasoning system knowledge can be represented using causal rules. For example, in a medical diagnosis system the knowledge base may contain the following rule:
∀ patients : Stroke(patient) → Confused(patient) ∧ Unequal(Pupils(patient))
In contrast in a diagnostic reasoning system knowledge would be represented through diagnostic rules such as:
∀ patients : Confused(patient) → Stroke(patient)
∀ patients : Unequal(Pupils(patient)) → Stroke(patient)
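The difference between the two rule directions can be sketched in code. The following Python fragment encodes the causal and diagnostic rules above as data and applies naive forward chaining; the helper function and the string predicate names are invented for this illustration.

# Minimal sketch of rule-based inference over the stroke example above.
# Rules are (premises, conclusion) pairs; the chaining is naive and illustrative only.
causal_rules = [
    ({"Stroke"}, "Confused"),
    ({"Stroke"}, "UnequalPupils"),
]
diagnostic_rules = [
    ({"Confused"}, "Stroke"),
    ({"UnequalPupils"}, "Stroke"),
]

def forward_chain(facts, rules):
    """Repeatedly apply rules until no new conclusions are produced."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

# Model-based (causal) direction: from an assumed disorder, predict the observations.
print(forward_chain({"Stroke"}, causal_rules))        # {'Stroke', 'Confused', 'UnequalPupils'}
# Diagnostic direction: from an observation, jump directly to a candidate diagnosis.
print(forward_chain({"Confused"}, diagnostic_rules))  # {'Confused', 'Stroke'}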
There are many other forms of models that may be used. Models might be quantitative (for instance, based on mathematical equations) or qualitative (for instance, based on cause/effect models). They may include representation of uncertainty. They might represent behavior over time. They might represent "normal" behavior, or might only represent abnormal behavior, as in the case of the examples above. Model types and their usage for model-based reasoning are discussed in the literature.
== See also ==
Diagnosis (artificial intelligence), determining if a system's behavior is correct
Behavior selection algorithm
Case-based reasoning, solving new problems based on solutions of past problems
== References ==
Russell, Stuart J.; Norvig, Peter (2003), Artificial Intelligence: A Modern Approach (2nd ed.), Upper Saddle River, New Jersey: Prentice Hall, p. 260, ISBN 0-13-790395-2
== External links ==
Model-based reasoning at Utrecht University
NASA Intelligent Systems Division | Wikipedia/Model-based_reasoning |
A modeling language is any artificial language that can be used to express data, information or knowledge or systems in a structure that is defined by a consistent set of rules. The rules are used for interpretation of the meaning of components in the structure of a programming language.
== Overview ==
A modeling language can be graphical or textual.
Graphical modeling languages use a diagram technique with named symbols that represent concepts and lines that connect the symbols and represent relationships and various other graphical notation to represent constraints.
Textual modeling languages may use standardized keywords accompanied by parameters or natural language terms and phrases to make computer-interpretable expressions.
An example of a graphical modeling language and a corresponding textual modeling language is EXPRESS.
Not all modeling languages are executable, and for those that are, the use of them doesn't necessarily mean that programmers are no longer required. On the contrary, executable modeling languages are intended to amplify the productivity of skilled programmers, so that they can address more challenging problems, such as parallel computing and distributed systems.
A large number of modeling languages appear in the literature.
== Types of modeling languages ==
=== Graphical types ===
Example of graphical modeling languages in the field of computer science, project management and systems engineering:
Behavior Trees are a formal, graphical modeling language used primarily in systems and software engineering. Commonly used to unambiguously represent the hundreds or even thousands of natural language requirements that are typically used to express the stakeholder needs for a large-scale software-integrated system.
Business Process Modeling Notation (BPMN, and the XML form BPML) is an example of a Process Modeling language.
C-K theory consists of a modeling language for design processes.
DRAKON is a general-purpose algorithmic modeling language for specifying software-intensive systems, a schematic representation of an algorithm or a stepwise process, and a family of programming languages.
EXPRESS and EXPRESS-G (ISO 10303-11) is an international standard general-purpose data modeling language.
Extended Enterprise Modeling Language (EEML) is commonly used for business process modeling across a number of layers.
Flowchart is a schematic representation of an algorithm or a stepwise process.
Fundamental Modeling Concepts (FMC) modeling language for software-intensive systems.
IDEF is a family of modeling languages, which include IDEF0 for functional modeling, IDEF1X for information modeling, IDEF3 for business process modeling, IDEF4 for Object-Oriented Design and IDEF5 for modeling ontologies.
Jackson Structured Programming (JSP) is a method for structured programming based on correspondences between data stream structure and program structure.
LePUS3 is an object-oriented visual Design Description Language and a formal specification language that is suitable primarily for modeling large object-oriented (Java, C++, C#) programs and design patterns.
Lifecycle Modeling Language is an open-standard language for systems engineering that supports the full system lifecycle: conceptual, utilization, support and retirement stages.
Object-Role Modeling (ORM) in the field of software engineering is a method for conceptual modeling, and can be used as a tool for information and rules analysis.
Petri nets use variations on exactly one diagramming technique and topology, namely the bipartite graph. The simplicity of its basic user interface easily enabled extensive tool support over the years, particularly in the areas of model checking, graphically oriented simulation, and software verification.
Southbeach Notation is a visual modeling language used to describe situations in terms of agents that are considered useful or harmful from the modeler's perspective. The notation shows how the agents interact with each other and whether this interaction improves or worsens the situation.
Specification and Description Language (SDL) is a specification language targeted at the unambiguous specification and description of the behavior of reactive and distributed systems.
SysML is a Domain-Specific Modeling language for systems engineering that is defined as a UML profile (customization).
Unified Modeling Language (UML) is a general-purpose modeling language that is an industry standard for specifying software-intensive systems. UML 2.0, the current version, supports thirteen different diagram techniques, and has widespread tool support.
FLINT — language which allows a high-level description of normative systems.
Service-oriented modeling framework (SOMF) is a holistic language for designing enterprise and application level architecture models in the space of enterprise architecture, virtualization, service-oriented architecture (SOA), cloud computing, and more.
Architecture description language (ADL) is a language used to describe and represent the systems architecture of a system.
Architecture Analysis & Design Language (AADL) is a modeling language that supports early and repeated analyses of a system's architecture with respect to performance-critical properties through an extendable notation, a tool framework, and precisely defined semantics.
Examples of graphical modeling languages in other fields of science.
EAST-ADL is a Domain-Specific Modeling language dedicated to automotive system design.
Energy Systems Language (ESL), a language that aims to model ecological energetics & global economics.
IEC 61499 defines Domain-Specific Modeling language dedicated to distribute industrial process measurement and control systems.
=== Textual types ===
Information models can also be expressed in formalized natural languages, such as Gellish. Gellish has natural language variants such as Gellish Formal English and Gellish Formal Dutch (Gellish Formeel Nederlands), etc. Gellish Formal English is an information representation language or semantic modeling language that is defined in the Gellish English Dictionary-Taxonomy, which has the form of a Taxonomy-Ontology (similarly for Dutch). Gellish Formal English is not only suitable to express knowledge, requirements and dictionaries, taxonomies and ontologies, but also information about individual things. All that information is expressed in one language and therefore it can all be integrated, independently of whether it is stored in central, distributed or federated databases. Information models in Gellish Formal English consist of collections of Gellish Formal English expressions that use natural language terms and formalized phrases. For example, a geographic information model might consist of a number of Gellish Formal English expressions, such as:
- the Eiffel tower <is located in> Paris
- Paris <is classified as a> city
whereas information requirements and knowledge can be expressed for example as follows:
- tower <shall be located in a> geographical area
- city <is a kind of> geographical area
Such Gellish Formal English expressions use names of concepts (such as "city") and phrases that represent relation types (such as ⟨is located in⟩ and ⟨is classified as a⟩) that should be selected from the Gellish English Dictionary-Taxonomy (or of your own domain dictionary). The Gellish English Dictionary-Taxonomy enables the creation of semantically rich information models, because the dictionary contains more than 600 standard relation types and contains definitions of more than 40000 concepts. An information model in Gellish can express facts or make statements, queries and answers.
=== More specific types ===
In the field of computer science recently more specific types of modeling languages have emerged.
==== Algebraic ====
Algebraic Modeling Languages (AML) are high-level programming languages for describing and solving high complexity problems for large scale mathematical computation (i.e. large scale optimization type problems). One particular advantage of AMLs like AIMMS, AMPL, GAMS, Gekko, Mosel, OPL, MiniZinc, and OptimJ is the similarity of its syntax to the mathematical notation of optimization problems. This allows for a very concise and readable definition of problems in the domain of optimization, which is supported by certain language elements like sets, indices, algebraic expressions, powerful sparse index and data handling variables, constraints with arbitrary names. The algebraic formulation of a model does not contain any hints how to process it.
==== Behavioral ====
Behavioral languages are designed to describe the observable behavior of complex systems consisting of components that execute concurrently. These languages focus on the description of key concepts such as: concurrency, nondeterminism, synchronization, and communication. The semantic foundations of behavioral languages are process calculus or process algebra.
==== Discipline-specific ====
A discipline-specific modeling (DspM) language is focused on deliverables affiliated with a specific software development life cycle stage. Therefore, such language offers a distinct vocabulary, syntax, and notation for each stage, such as discovery, analysis, design, architecture, contraction, etc. For example, for the analysis phase of a project, the modeler employs specific analysis notation to deliver an analysis proposition diagram. During the design phase, however, logical design notation is used to depict the relationship between software entities. In addition, the discipline-specific modeling language best practices does not preclude practitioners from combining the various notations in a single diagram.
==== Domain-specific ====
Domain-specific modeling (DSM) is a software engineering methodology for designing and developing systems, most often IT systems such as computer software. It involves the systematic use of a graphical domain-specific language (DSL) to represent the various facets of a system. DSM languages tend to support higher-level abstractions than General-purpose modeling languages, so they require less effort and fewer low-level details to specify a given system.
==== Framework-specific ====
A framework-specific modeling language (FSML) is a kind of domain-specific modeling language which is designed for an object-oriented application framework. FSMLs define framework-provided abstractions as FSML concepts and decompose the abstractions into features. The features represent implementation steps or choices.
An FSML concept can be configured by selecting features and providing values for them. Such a concept configuration represents how the concept should be implemented in the code. In other words, a concept configuration describes how the framework should be completed in order to create the implementation of the concept.
==== Information and knowledge modeling ====
Linked data and ontology engineering require 'host languages' to represent entities and the relations between them, constraints between the properties of entities and relations, and metadata attributes. JSON-LD and RDF are two major (and semantically almost equivalent) languages in this context, primarily because they support statement reification and contextualisation which are essential properties to support the higher-order logic needed to reason about models. Model transformation is a common example of such reasoning.
==== Object-oriented ====
Object modeling languages are modeling languages based on a standardized set of symbols and ways of arranging them to model (part of) an object oriented software design or system design.
Some organizations use them extensively in combination with a software development methodology to progress from initial specification to an implementation plan and to communicate that plan to an entire team of developers and stakeholders. Because a modeling language is visual and at a higher-level of abstraction than code, using models encourages the generation of a shared vision that may prevent problems of differing interpretation later in development. Often software modeling tools are used to construct these models, which may then be capable of automatic translation to code.
==== Virtual reality ====
Virtual Reality Modeling Language (VRML), before 1995 known as the Virtual Reality Markup Language is a standard file format for representing 3-dimensional (3D) interactive vector graphics, designed particularly with the World Wide Web in mind.
==== Others ====
Architecture Description Language
Face Modeling Language
Generative Modelling Language
Java Modeling Language
Promela
Rebeca Modeling Language
Service Modeling Language
Web Services Modeling Language
X3D
== Applications ==
Various kinds of modeling languages are applied in different disciplines, including computer science, information management, business process modeling, software engineering, and systems engineering. Modeling languages can be used to specify:
system requirements,
structures and
behaviors.
Modeling languages are intended to be used to precisely specify systems so that stakeholders (e.g., customers, operators, analysts, designers) can better understand the system being modeled.
The more mature modeling languages are precise, consistent and executable. Informal diagramming techniques applied with drawing tools are expected to produce useful pictorial representations of system requirements, structures and behaviors, which can be useful for communication, design, and problem solving but cannot be used programmatically. Executable modeling languages applied with proper tool support, however, are expected to automate system verification and validation, simulation and code generation from the same representations.
== Quality ==
A review of modelling languages is essential in order to determine which languages are appropriate for different modelling settings. By settings we include the stakeholders, the domain and the knowledge connected with them. Assessing language quality is a means that aims to achieve better models.
=== Framework for evaluation ===
Here language quality is stated in accordance with the SEQUAL framework for quality of models developed by Krogstie, Sindre and Lindland (2003), since this is a framework that connects the language quality to a framework for general model quality. Five areas are used in this framework to describe language quality and these are supposed to express both the conceptual as well as the visual notation of the language. We will not go into a thorough explanation of the underlying quality framework of models but concentrate on the areas used to explain the language quality framework.
==== Domain appropriateness ====
The framework states the ability to represent the domain as domain appropriateness. The term appropriateness can be a bit vague, but in this particular context it means the ability to express. Ideally, the language should only be able to express things that are in the domain, yet be powerful enough to include everything that is in the domain. This requirement might seem a bit strict, but the aim is to get a visually expressed model which includes everything relevant to the domain and excludes everything not appropriate for it. To achieve this, the language has to distinguish clearly which notations and syntax elements are advantageous to present.
==== Participant appropriateness ====
To evaluate the participant appropriateness we try to identify how well the language expresses the knowledge held by the stakeholders. This involves challenges since a stakeholder's knowledge is subjective. The knowledge of the stakeholder is both tacit and explicit. Both types of knowledge are of dynamic character. In this framework only the explicit type of knowledge is taken into account. The language should to a large extent express all the explicit knowledge of the stakeholders relevant to the domain.
==== Modeller appropriateness ====
The previous paragraph stated that the knowledge of the stakeholders should be presented in a good way. In addition, it is imperative that the language be able to express all possible explicit knowledge of the stakeholders. No knowledge should be left unexpressed due to shortcomings in the language.
==== Comprehensibility appropriateness ====
Comprehensibility appropriateness makes sure that the social actors understand the model, thanks to a consistent use of the language. To achieve this, the framework includes a set of criteria. The general point that these express is that the language should be flexible, easy to organize, and easy to distinguish in its different parts, internally as well as from other languages. In addition to this, the goal should be to keep the language as simple as possible and to give each symbol in it a unique representation.
This is also connected to the structure of the development requirements.
==== Tool appropriateness ====
To ensure that the domain actually modelled is usable for analysis and further processing, the language has to make it possible to reason in an automatic way. To achieve this, it has to include formal syntax and semantics. Another advantage of formalizing is the ability to discover errors at an early stage. The language best fitted for the technical actors is not always the same as the one best fitted for the social actors.
==== Organizational appropriateness ====
The language used is appropriate for the organizational context, e.g. that the language is standardized within the organization, or that it is supported by tools that are chosen as standard in the organization.
== See also ==
== References ==
== Further reading ==
John Krogstie (2003) "Evaluating UML using a generic quality framework" . SINTEF Telecom and Informatics and IDI, NTNU, Norway
Krogstie and Sølvsberg (2003). Information Systems Engineering: Conceptual Modeling in a Quality Perspective. Institute of computer and information sciences.
Anna Gunhild Nysetvold and John Krogstie (2005). "Assessing business processing modeling languages using a generic quality framework". Institute of computer and information sciences.
== External links ==
Fundamental Modeling Concepts
Software Modeling Languages Portal
BIP -- Incremental Component-based Construction of Real-time Systems
Gellish Formal English | Wikipedia/Behavioral_modeling_language |
A behavior tree is a mathematical model of plan execution used in computer science, robotics, control systems and video games. They describe switchings between a finite set of tasks in a modular fashion. Their strength comes from their ability to create very complex tasks composed of simple tasks, without worrying how the simple tasks are implemented. Behavior trees present some similarities to hierarchical state machines with the key difference that the main building block of a behavior is a task rather than a state. Their ease of human understanding makes behavior trees less error-prone and very popular in the game developer community. Behavior trees have been shown to generalize several other control architectures.
== Background ==
A behavior-based control structure was initially proposed by Rodney Brooks in his paper titled 'A robust layered control system for a mobile robot'. In the initial proposal, a list of behaviors could work as alternatives to one another; later, the approach was extended and generalized into a tree-like organization of behaviors, with extensive application in the game industry as a powerful tool to model the behavior of non-player characters (NPCs).
They have been extensively used in high-profile video games such as Halo, Bioshock, and Spore. Recent works propose behavior trees as a multi-mission control framework for UAVs, complex robots, robotic manipulation, and multi-robot systems.
Behavior trees have now reached the maturity to be treated in Game AI textbooks, as well as generic game environments such as Unity (game engine) and Unreal Engine (see links below).
Behavior trees became popular for their development paradigm: being able to create a complex behavior by only programming the NPC's actions and then designing a tree structure (usually through drag and drop) whose leaf nodes are actions and whose inner nodes determine the NPC's decision making. Behavior trees are visually intuitive and easy to design, test, and debug, and provide more modularity, scalability, and reusability than other behavior creation methods.
Over the years, the diverse implementations of behavior trees kept improving both in efficiency and capabilities to satisfy the demands of the industry, until they evolved into event-driven behavior trees. Event-driven behavior trees solved some scalability issues of classical behavior trees by changing how the tree internally handles its execution, and by introducing a new type of node that can react to events and abort running nodes. Nowadays, the concept of event-driven behavior tree is a standard and used in most of the implementations, even though they are still called "behavior trees" for simplicity.
== Key concepts ==
A behavior tree is graphically represented as a directed tree in which the nodes are classified as root, control flow nodes, or execution nodes (tasks). For each pair of connected nodes the outgoing node is called parent and the incoming node is called child. The root has no parents and exactly one child, the control flow nodes have one parent and at least one child, and the execution nodes have one parent and no children. Graphically, the children of a control flow node are placed below it, ordered from left to right.
The execution of a behavior tree starts from the root which sends ticks with a certain frequency to its child. A tick is an enabling signal that allows the execution of a child. When the execution of a node in the behavior tree is allowed, it returns to the parent a status running if its execution has not finished yet, success if it has achieved its goal, or failure otherwise.
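The tick-and-status mechanism can be sketched in a few lines of Python. The class names and the use of strings for the three status values are choices made only for this illustration; the fallback and sequence nodes shown after their pseudocode below build on this same base.

# Minimal behavior tree base: every node, when ticked, returns one of three statuses.
RUNNING, SUCCESS, FAILURE = "running", "success", "failure"

class Node:
    def tick(self):
        raise NotImplementedError

class Action(Node):
    """Leaf (execution) node wrapping a callable that returns a status."""
    def __init__(self, fn):
        self.fn = fn

    def tick(self):
        return self.fn()

class Root(Node):
    """The root has exactly one child and simply forwards ticks to it."""
    def __init__(self, child):
        self.child = child

    def tick(self):
        return self.child.tick()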
=== Control flow node ===
A control flow node is used to control the subtasks of which it is composed. A control flow node may be either a selector (fallback) node or a sequence node. They run each of their subtasks in turn. When a subtask is completed and returns its status (success or failure), the control flow node decides whether to execute the next subtask or not.
==== Selector (fallback) node ====
Fallback nodes are used to find and execute the first child that does not fail. A fallback node will return with a status code of success or running immediately when one of its children returns success or running (see Figure I and the pseudocode below). The children are ticked in order of importance, from left to right.
In pseudocode, the algorithm for a fallback composition is:
1 for i from 1 to n do
2 childstatus ← Tick(child(i))
3 if childstatus = running
4 return running
5 else if childstatus = success
6 return success
7 end
8 return failure
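An illustrative Python transcription of this pseudocode, reusing the Node base class and status constants from the sketch above, could read:

class Fallback(Node):
    """Ticks children left to right; returns on the first child that succeeds or is running."""
    def __init__(self, children):
        self.children = children

    def tick(self):
        for child in self.children:
            status = child.tick()
            if status == RUNNING:
                return RUNNING
            if status == SUCCESS:
                return SUCCESS
        return FAILURE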
==== Sequence node ====
Sequence nodes are used to find and execute the first child that has not yet succeeded. A sequence node will return with a status code of failure or running immediately when one of its children returns failure or running (see Figure II and the pseudocode below). The children are ticked in order, from left to right.
In pseudocode, the algorithm for a sequence composition is:
1 for i from 1 to n do
2 childstatus ← Tick(child(i))
3 if childstatus = running
4 return running
5 else if childstatus = failure
6 return failure
7 end
8 return success
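The sequence node is the mirror image; an illustrative Python transcription, again reusing the earlier base sketch, together with a tiny usage example, could read:

class Sequence(Node):
    """Ticks children left to right; returns on the first child that fails or is running."""
    def __init__(self, children):
        self.children = children

    def tick(self):
        for child in self.children:
            status = child.tick()
            if status == RUNNING:
                return RUNNING
            if status == FAILURE:
                return FAILURE
        return SUCCESS

# Example: perform two actions in order; the tree succeeds only if both succeed.
tree = Root(Sequence([Action(lambda: SUCCESS), Action(lambda: SUCCESS)]))
print(tree.tick())  # success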
== Mathematical state space definition ==
In order to apply control theory tools to the analysis of behavior trees, they can be defined as a three-tuple:
{\displaystyle T_{i}=\{f_{i},r_{i},\Delta t\},}
where {\displaystyle i\in \mathbb {N} } is the index of the tree, {\displaystyle f_{i}:\mathbb {R} ^{n}\rightarrow \mathbb {R} ^{n}} is a vector field representing the right hand side of an ordinary difference equation, {\displaystyle \Delta t} is a time step, and {\displaystyle r_{i}:\mathbb {R} ^{n}\rightarrow \{R_{i},S_{i},F_{i}\}} is the return status, which can be equal to either Running {\displaystyle R_{i}}, Success {\displaystyle S_{i}}, or Failure {\displaystyle F_{i}}.
Note: A task is a degenerate behavior tree with no parent and no child.
=== Behavior tree execution ===
The execution of a behavior tree is described by the following standard ordinary difference equations:
{\displaystyle x_{k+1}(t_{k+1})=f_{i}(x_{k}(t_{k}))}
{\displaystyle t_{k+1}=t_{k}+\Delta t}
where {\displaystyle k\in \mathbb {N} } represents the discrete time and {\displaystyle x\in \mathbb {R} ^{n}} is the state space of the system modelled by the behavior tree.
=== Sequence composition ===
Two behavior trees {\displaystyle T_{i}} and {\displaystyle T_{j}} can be composed into a more complex behavior tree {\displaystyle T_{0}} using a Sequence operator:
{\displaystyle T_{0}={\mbox{sequence}}(T_{i},T_{j}).}
Then the return status {\displaystyle r_{0}} and the vector field {\displaystyle f_{0}} associated with {\displaystyle T_{0}} are defined (for {\displaystyle {\mathcal {S}}_{1}}) as follows:
{\displaystyle r_{0}(x_{k})={\begin{cases}r_{j}(x_{k})&{\text{ if }}x_{k}\in {\mathcal {S}}_{1}\\r_{i}(x_{k})&{\text{ otherwise }}.\end{cases}}}
{\displaystyle f_{0}(x_{k})={\begin{cases}f_{j}(x_{k})&{\text{ if }}x_{k}\in {\mathcal {S}}_{1}\\f_{i}(x_{k})&{\text{ otherwise }}.\end{cases}}}
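Read operationally, these definitions say that the composed tree applies the dynamics and return status of {\displaystyle T_{j}} whenever the state lies in {\displaystyle {\mathcal {S}}_{1}} and those of {\displaystyle T_{i}} otherwise. The following self-contained Python sketch illustrates this state-space view with a one-dimensional state and invented dynamics; it assumes, for the purpose of the example only, that {\displaystyle {\mathcal {S}}_{1}} is the region in which the first tree reports success.

# Illustrative state-space view of behavior trees, following the definitions above.
# T_i moves the state toward 1.0; once x >= 1.0 (the assumed set S_1), T_j takes over.
RUNNING, SUCCESS, FAILURE = "running", "success", "failure"

def f_i(x):                  # dynamics of the first tree: approach 1.0
    return x + 0.5

def r_i(x):
    return SUCCESS if x >= 1.0 else RUNNING

def f_j(x):                  # dynamics of the second tree: approach 2.0
    return x + 0.5

def r_j(x):
    return SUCCESS if x >= 2.0 else RUNNING

def in_S1(x):                # success region of the first tree (assumed for this example)
    return x >= 1.0

def f_0(x):                  # sequence composition, as in the case definitions above
    return f_j(x) if in_S1(x) else f_i(x)

def r_0(x):
    return r_j(x) if in_S1(x) else r_i(x)

x, t, dt = 0.0, 0.0, 1.0
while r_0(x) == RUNNING:     # execute the composed tree until it stops running
    x, t = f_0(x), t + dt
print(x, t, r_0(x))          # 2.0 4.0 success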
== See also ==
Decision tree
Hybrid system
Subsumption architecture
== References ==
== External links ==
ROS behavior tree library
Unreal Engine 4 behavior tree documentation
Behavior trees for AI: How they work
Behavior Trees: Simple yet Powerful AI for your Robot Archived 2020-02-25 at the Wayback Machine
Video Lectures on Behavior Trees | Wikipedia/Behavior_tree_(artificial_intelligence,_robotics_and_control) |
Logology is the study of all things related to science and its practitioners—philosophical, biological, psychological, societal, historical, political, institutional, financial. The term "logology" is back-formed from the suffix "-logy", as in "geology", "anthropology", etc., in the sense of the "study of science".
The word "logology" provides grammatical variants not available with the earlier terms "science of science" and "sociology of science", such as "logologist", "logologize", "logological", and "logologically". The emerging field of metascience is a subfield of logology.
== Origins ==
The early 20th century brought calls, initially from sociologists, for the creation of a new, empirically based science that would study the scientific enterprise itself. The early proposals were put forward with some hesitancy and tentativeness. The new meta-science would be given a variety of names, including "science of knowledge", "science of science", "sociology of science", and "logology".
Florian Znaniecki, who is considered to be the founder of Polish academic sociology, and who in 1954 also served as the 44th president of the American Sociological Association, opened a 1923 article:
[T]hough theoretical reflection on knowledge—which arose as early as Heraclitus and the Eleatics—stretches... unbroken... through the history of human thought to the present day... we are now witnessing the creation of a new science of knowledge [author's emphasis] whose relation to the old inquiries may be compared with the relation of modern physics and chemistry to the 'natural philosophy' that preceded them, or of contemporary sociology to the 'political philosophy' of antiquity and the Renaissance. [T]here is beginning to take shape a concept of a single, general theory of knowledge... permitting of empirical study.... This theory... is coming to be distinguished clearly from epistemology, from normative logic, and from a strictly descriptive history of knowledge."
A dozen years later, Polish husband-and-wife sociologists Stanisław Ossowski and Maria Ossowska (the Ossowscy) took up the same subject in an article on "The Science of Science" whose 1935 English-language version first introduced the term "science of science" to the world. The article postulated that the new discipline would subsume such earlier ones as epistemology, the philosophy of science, the psychology of science, and the sociology of science. The science of science would also concern itself with questions of a practical character such as social and state policy in relation to science, such as the organization of institutions of higher learning, of research institutes, and of scientific expeditions, and the protection of scientific workers, etc. It would concern itself as well with historical questions: the history of the conception of science, of the scientist, of the various disciplines, and of learning in general.
In their 1935 paper, the Ossowscy mentioned the German philosopher Werner Schingnitz (1899–1953) who, in fragmentary 1931 remarks, had enumerated some possible types of research in the science of science and had proposed his own name for the new discipline: scientiology. The Ossowscy took issue with the name:
Those who wish to replace the expression 'science of science' by a one-word term [that] sound[s] international, in the belief that only after receiving such a name [will] a given group of [questions be] officially dubbed an autonomous discipline, [might] be reminded of the name 'mathesiology', proposed long ago for similar purposes [by the French mathematician and physicist André-Marie Ampère (1775–1836)]."
Yet, before long, in Poland, the unwieldy three-word term nauka o nauce, or science of science, was replaced by the more versatile one-word term naukoznawstwo, or logology, and its natural variants: naukoznawca or logologist, naukoznawczy or logological, and naukoznawczo or logologically. And just after World War II, only 11 years after the Ossowscy's landmark 1935 paper, the year 1946 saw the founding of the Polish Academy of Sciences' quarterly Zagadnienia Naukoznawstwa (Logology) – long before similar journals in many other countries.
The new discipline also took root elsewhere—in English-speaking countries, without the benefit of a one-word name.
== Science ==
=== The term ===
The word "science", from the Latin "scientia" (meaning "knowledge"), signifies somewhat different things in different languages. In English, "science", when unqualified, generally refers to the exact, natural, or hard sciences. The corresponding terms in other languages, for example French, German, and Polish, refer to a broader domain that includes not only the exact sciences (logic and mathematics) and the natural sciences (physics, chemistry, biology, Earth sciences, astronomy, etc.) but also the engineering sciences, social sciences (human geography, psychology, cultural anthropology, sociology, political science, economics, linguistics, archaeology, etc.), and humanities (philosophy, history, classics, literary theory, etc.).
University of Amsterdam humanities professor Rens Bod points out that science—defined as a set of methods that describes and interprets observed or inferred phenomena, past or present, aimed at testing hypotheses and building theories—applies to such humanities fields as philology, art history, musicology, philosophy, religious studies, historiography, and literary studies.
Bod gives a historic example of scientific textual analysis. In 1440 the Italian philologist Lorenzo Valla exposed the Latin document Donatio Constantini, or The Donation of Constantine – which was used by the Catholic Church to legitimize its claim to lands in the Western Roman Empire – as a forgery. Valla used historical, linguistic, and philological evidence, including counterfactual reasoning, to rebut the document. Valla found words and constructions in the document that could not have been used by anyone in the time of Emperor Constantine I, at the beginning of the fourth century C.E. For example, the late Latin word feudum, meaning fief, referred to the feudal system, which would not come into existence until the medieval era, in the seventh century C.E. Valla's methods were those of science, and inspired the later scientifically-minded work of Dutch humanist Erasmus of Rotterdam (1466–1536), Leiden University professor Joseph Justus Scaliger (1540–1609), and philosopher Baruch Spinoza (1632–1677). Here it is not the experimental method dominant in the exact and natural sciences, but the comparative method central to the humanities, that reigns supreme.
=== Knowability ===
Science's search for the truth about various aspects of reality entails the question of the very knowability of reality. Philosopher Thomas Nagel writes: "[In t]he pursuit of scientific knowledge through the interaction between theory and observation... we test theories against their observational consequences, but we also question or reinterpret our observations in light of theory. (The choice between geocentric and heliocentric theories at the time of the Copernican Revolution is a vivid example.) ...
How things seem is the starting point for all knowledge, and its development through further correction, extension, and elaboration is inevitably the result of more seemings—considered judgments about the plausibility and consequences of different theoretical hypotheses. The only way to pursue the truth is to consider what seems true, after careful reflection of a kind appropriate to the subject matter, in light of all the relevant data, principles, and circumstances."
The question of knowability is approached from a different perspective by physicist-astronomer Marcelo Gleiser: "What we observe is not nature itself but nature as discerned through data we collect from machines. In consequence, the scientific worldview depends on the information we can acquire through our instruments. And given that our tools are limited, our view of the world is necessarily myopic. We can see only so far into the nature of things, and our ever shifting scientific worldview reflects this fundamental limitation on how we perceive reality." Gleiser cites the condition of biology before and after the invention of the microscope or gene sequencing; of astronomy before and after the telescope; of particle physics before and after colliders or fast electronics. "[T]he theories we build and the worldviews we construct change as our tools of exploration transform. This trend is the trademark of science."
Writes Gleiser: "There is nothing defeatist in understanding the limitations of the scientific approach to knowledge.... What should change is a sense of scientific triumphalism—the belief that no question is beyond the reach of scientific discourse.
"There are clear unknowables in science—reasonable questions that, unless currently accepted laws of nature are violated, we cannot find answers to. One example is the multiverse: the conjecture that our universe is but one among a multitude of others, each potentially with a different set of laws of nature. Other universes lie outside our causal horizon, meaning that we cannot receive or send signals to them. Any evidence for their existence would be circumstantial: for example, scars in the radiation permeating space because of a past collision with a neighboring universe."
Gleiser gives three further examples of unknowables, involving the origins of the universe; of life; and of mind:
"Scientific accounts of the origin of the universe are incomplete because they must rely on a conceptual framework to even begin to work: energy conservation, relativity, quantum physics, for instance. Why does the universe operate under these laws and not others?
"Similarly, unless we can prove that only one or very few biochemical pathways exist from nonlife to life, we cannot know for sure how life originated on Earth.
"For consciousness, the problem is the jump from the material to the subjective—for example, from firing neurons to the experience of pain or the color red. Perhaps some kind of rudimentary consciousness could emerge in a sufficiently complex machine. But how could we tell? How do we establish—as opposed to conjecture—that something is conscious?" Paradoxically, writes Gleiser, it is through our consciousness that we make sense of the world, even if imperfectly. "Can we fully understand something of which we are a part?"
Among all the sciences (i.e., disciplines of learning, writ large) there seems to exist an inverse relation between precision and intuitiveness. The most intuitive of the disciplines, aptly termed the "humanities", relate to common human experience and, even at their most exact, are thrown back on the comparative method; less intuitive and more precise than the humanities are the social sciences; while, at the base of the inverted pyramid of the disciplines, physics (concerned with mattergy – the matter and energy comprising the universe) is, at its deepest, the most precise discipline and at the same time utterly non-intuitive.
=== Facts and theories ===
Theoretical physicist and mathematician Freeman Dyson explains that "[s]cience consists of facts and theories":
"Facts are supposed to be true or false. They are discovered by observers or experimenters. A scientist who claims to have discovered a fact that turns out to be wrong is judged harshly....
"Theories have an entirely different status. They are free creations of the human mind, intended to describe our understanding of nature. Since our understanding is incomplete, theories are provisional. Theories are tools of understanding, and a tool does not need to be precisely true in order to be useful. Theories are supposed to be more-or-less true... A scientist who invents a theory that turns out to be wrong is judged leniently."
Dyson cites a psychologist's description of how theories are born: "We can't live in a state of perpetual doubt, so we make up the best story possible and we live as if the story were true." Dyson writes: "The inventor of a brilliant idea cannot tell whether it is right or wrong." The passionate pursuit of wrong theories is a normal part of the development of science. Dyson cites, after Mario Livio, five famous scientists who made major contributions to the understanding of nature but also believed firmly in a theory that proved wrong.
Charles Darwin explained the evolution of life with his theory of natural selection of inherited variations, but he believed in a theory of blending inheritance that made the propagation of new variations impossible. He never read Gregor Mendel's studies that showed that the laws of inheritance would become simple when inheritance was considered as a random process. Though Darwin in 1866 did the same experiment that Mendel had, Darwin did not get comparable results because he failed to appreciate the statistical importance of using very large experimental samples. Eventually, Mendelian inheritance by random variation would, no thanks to Darwin, provide the raw material for Darwinian selection to work on.
William Thomson (Lord Kelvin) discovered basic laws of energy and heat, then used these laws to calculate an estimate of the age of the Earth that was too short by a factor of fifty. He based his calculation on the belief that the Earth's mantle was solid and could transfer heat from the interior to the surface only by conduction. It is now known that the mantle is partly fluid and transfers most of the heat by the far more efficient process of convection, which carries heat by a massive circulation of hot rock moving upward and cooler rock moving downward. Kelvin could see the eruptions of volcanoes bringing hot liquid from deep underground to the surface; but his skill in calculation blinded him to processes, such as volcanic eruptions, that could not be calculated.
Linus Pauling discovered the chemical structure of protein and proposed a completely wrong structure for DNA, which carries hereditary information from parent to offspring. Pauling guessed a wrong structure for DNA because he assumed that a pattern that worked for protein would also work for DNA. He overlooked the gross chemical differences between protein and DNA. Francis Crick and James Watson paid attention to the differences and found the correct structure for DNA that Pauling had missed a year earlier.
Astronomer Fred Hoyle discovered the process by which the heavier elements essential to life are created by nuclear reactions in the cores of massive stars. He then proposed a theory of the history of the universe known as steady-state cosmology, which has the universe existing forever without an initial Big Bang (as Hoyle derisively dubbed it). He held his belief in the steady state long after observations proved that the Big Bang had happened.
Albert Einstein discovered the theory of space, time, and gravitation known as general relativity, and then added a cosmological constant, later known as dark energy. Subsequently, Einstein withdrew his proposal of dark energy, believing it unnecessary. Long after his death, observations suggested that dark energy really exists, so that Einstein's addition to the theory may have been right; and his withdrawal, wrong.
To Mario Livio's five examples of scientists who blundered, Dyson adds a sixth: himself. Dyson had concluded, on theoretical principles, that what was to become known as the W-particle, a charged weak boson, could not exist. An experiment conducted at CERN, in Geneva, later proved him wrong. "With hindsight I could see several reasons why my stability argument would not apply to W-particles. [They] are too massive and too short-lived to be a constituent of anything that resembles ordinary matter."
=== Truth ===
Harvard University historian of science Naomi Oreskes points out that the truth of scientific findings can never be assumed to be finally, absolutely settled. The history of science offers many examples of matters that scientists once thought to be settled and which have proven not to be, such as the concepts of Earth being the center of the universe, the absolute nature of time and space, the stability of continents, and the cause of infectious disease.
Science, writes Oreskes, is not a fixed, immutable set of discoveries but "a process of learning and discovery [...]. Science can also be understood as an institution (or better, a set of institutions) that facilitates this work."
It is often asserted that scientific findings are true because scientists use "the scientific method". But, writes Oreskes, "we can never actually agree on what that method is. Some will say it is empiricism: observation and description of the world. Others will say it is the experimental method: the use of experience and experiment to test hypotheses. (This is cast sometimes as the hypothetico-deductive method, in which the experiment must be framed as a deduction from theory, and sometimes as falsification, where the point of observation and experiment is to refute theories, not to confirm them.) Recently a prominent scientist claimed the scientific method was to avoid fooling oneself into thinking something is true that is not, and vice versa."
In fact, writes Oreskes, the methods of science have varied between disciplines and across time. "Many scientific practices, particularly statistical tests of significance, have been developed with the idea of avoiding wishful thinking and self-deception, but that hardly constitutes 'the scientific method.'"
Science, writes Oreskes, "is not simple, and neither is the natural world; therein lies the challenge of science communication. [...] Our efforts to understand and characterize the natural world are just that: efforts. Because we're human, we often fall flat."
"Scientific theories", according to Oreskes, "are not perfect replicas of reality, but we have good reason to believe that they capture significant elements of it."
=== Empiricism ===
Steven Weinberg, 1979 Nobel laureate in physics, and a historian of science, writes that the core goal of science has always been the same: "to explain the world"; and in reviewing earlier periods of scientific thought, he concludes that only since Isaac Newton has that goal been pursued more or less correctly. He decries the "intellectual snobbery" that Plato and Aristotle showed in their disdain for science's practical applications, and he holds Francis Bacon and René Descartes to have been the "most overrated" among the forerunners of modern science (they tried to prescribe rules for conducting science, which "never works").
Weinberg draws parallels between past and present science, as when a scientific theory is "fine-tuned" (adjusted) to make certain quantities equal, without any understanding of why they should be equal. Such adjusting vitiated the celestial models of Plato's followers, in which different spheres carrying the planets and stars were assumed, with no good reason, to rotate in exact unison. But, Weinberg writes, a similar fine-tuning also besets current efforts to understand the "dark energy" that is speeding up the expansion of the universe.
Ancient science has been described as having gotten off to a good start, then faltered. The doctrine of atomism, propounded by the pre-Socratic philosophers Leucippus and Democritus, was naturalistic, accounting for the workings of the world by impersonal processes, not by divine volitions. Nevertheless, these pre-Socratics come up short for Weinberg as proto-scientists, in that they apparently never tried to justify their speculations or to test them against evidence.
Weinberg believes that science faltered early on due to Plato's suggestion that scientific truth could be attained by reason alone, disregarding empirical observation, and due to Aristotle's attempt to explain nature teleologically—in terms of ends and purposes. Plato's ideal of attaining knowledge of the world by unaided intellect was "a false goal inspired by mathematics"—one that for centuries "stood in the way of progress that could be based only on careful analysis of careful observation." And it "never was fruitful" to ask, as Aristotle did, "what is the purpose of this or that physical phenomenon."
A scientific field in which the Greek and Hellenistic world did make progress was astronomy. This was partly for practical reasons: the sky had long served as compass, clock, and calendar. Also, the regularity of the movements of heavenly bodies made them simpler to describe than earthly phenomena. But not too simple: though the sun, moon and "fixed stars" seemed regular in their celestial circuits, the "wandering stars"—the planets—were puzzling; they seemed to move at variable speeds, and even to reverse direction. Writes Weinberg: "Much of the story of the emergence of modern science deals with the effort, extending over two millennia, to explain the peculiar motions of the planets."
The challenge was to make sense of the apparently irregular wanderings of the planets on the assumption that all heavenly motion is actually circular and uniform in speed. Circular, because Plato held the circle to be the most perfect and symmetrical form; and therefore circular motion, at uniform speed, was most fitting for celestial bodies. Aristotle agreed with Plato. In Aristotle's cosmos, everything had a "natural" tendency to motion that fulfilled its inner potential. For the cosmos' sublunary part (the region below the Moon), the natural tendency was to move in a straight line: downward, for earthen things (such as rocks) and water; upward, for air and fiery things (such as sparks). But in the celestial realm things were not composed of earth, water, air, or fire, but of a "fifth element", or "quintessence," which was perfect and eternal. And its natural motion was uniformly circular. The stars, the Sun, the Moon, and the planets were carried in their orbits by a complicated arrangement of crystalline spheres, all centered around an immobile Earth.
The Platonic-Aristotelian conviction that celestial motions must be circular persisted stubbornly. It was fundamental to the astronomer Ptolemy's system, which improved on Aristotle's in conforming to the astronomical data by allowing the planets to move in combinations of circles called "epicycles".
It even survived the Copernican Revolution. Copernicus was conservative in his Platonic reverence for the circle as the heavenly pattern. According to Weinberg, Copernicus was motivated to dethrone the Earth in favor of the Sun as the immobile center of the cosmos largely by aesthetic considerations: he objected to the fact that Ptolemy, though faithful to Plato's requirement that heavenly motion be circular, had departed from Plato's other requirement that it be of uniform speed. By putting the sun at the center—actually, somewhat off-center—Copernicus sought to honor circularity while restoring uniformity. But to make his system fit the observations as well as Ptolemy's system, Copernicus had to introduce still more epicycles. That was a mistake that, writes Weinberg, illustrates a recurrent theme in the history of science: "A simple and beautiful theory that agrees pretty well with observation is often closer to the truth than a complicated ugly theory that agrees better with observation."
The planets, however, do not move in perfect circles but in ellipses. It was Johannes Kepler, about a century after Copernicus, who reluctantly (for he too had Platonic affinities) realized this. Thanks to his examination of the meticulous observations compiled by astronomer Tycho Brahe, Kepler "was the first to understand the nature of the departures from uniform circular motion that had puzzled astronomers since the time of Plato."
The replacement of circles by supposedly ugly ellipses overthrew Plato's notion of perfection as the celestial explanatory principle. It also destroyed Aristotle's model of the planets carried in their orbits by crystalline spheres; writes Weinberg, "there is no solid body whose rotation can produce an ellipse." Even if a planet were attached to an ellipsoid crystal, that crystal's rotation would still trace a circle. And if the planets were pursuing their elliptical motion through empty space, then what was holding them in their orbits?
Science had reached the threshold of explaining the world not geometrically, according to shape, but dynamically, according to force. It was Isaac Newton who finally crossed that threshold. He was the first to formulate, in his "laws of motion", the concept of force. He demonstrated that Kepler's ellipses were the very orbits the planets would take if they were attracted toward the Sun by a force that decreased as the square of the planet's distance from the Sun. And by comparing the Moon's motion in its orbit around the Earth to the motion of, perhaps, an apple as it falls to the ground, Newton deduced that the forces governing them were quantitatively the same. "This," writes Weinberg, "was the climactic step in the unification of the celestial and terrestrial in science."
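A brief worked check (added here for illustration, using modern round figures rather than Newton's own) shows the quantitative force of the Moon-and-apple comparison: if gravity weakens as the square of distance, then the Moon, about 60 Earth radii away, should fall toward the Earth with roughly 1/3600 of an apple's acceleration, and observation bears this out.

```latex
% Inverse-square prediction for the Moon, about 60 Earth radii away:
a_{\text{predicted}} \approx \frac{g}{60^{2}} \approx \frac{9.8\ \mathrm{m/s^{2}}}{3600} \approx 2.7 \times 10^{-3}\ \mathrm{m/s^{2}}

% Observed centripetal acceleration from the Moon's orbit (radius r, period T):
a_{\text{observed}} = \frac{4\pi^{2} r}{T^{2}} \approx \frac{4\pi^{2}\,(3.84 \times 10^{8}\ \mathrm{m})}{(2.36 \times 10^{6}\ \mathrm{s})^{2}} \approx 2.7 \times 10^{-3}\ \mathrm{m/s^{2}}
```

The near-agreement of the two values is the quantitative sense in which the forces governing the Moon and the apple proved to be the same.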
By formulating a unified explanation of the behavior of planets, comets, moons, tides, and apples, writes Weinberg, Newton "provided an irresistible model for what a physical theory should be"—a model that fit no preexisting metaphysical criterion. In contrast to Aristotle, who claimed to explain the falling of a rock by appeal to its inner striving, Newton was unconcerned with finding a deeper cause for gravity. He declared in a postscript to the second, 1713 edition of his Philosophiæ Naturalis Principia Mathematica: "I have not as yet been able to deduce from phenomena the reason for these properties of gravity, and I do not feign hypotheses. It is enough that gravity really exists and acts according to the laws that we have set forth." What mattered were his mathematically stated principles describing this force, and their ability to account for a vast range of phenomena.
About two centuries later, in 1915, a deeper explanation for Newton's law of gravitation was found in Albert Einstein's general theory of relativity: gravity could be explained as a manifestation of the curvature in spacetime resulting from the presence of matter and energy. Successful theories like Newton's, writes Weinberg, may work for reasons that their creators do not understand—reasons that deeper theories will later reveal. Scientific progress is not a matter of building theories on a foundation of reason, but of unifying a greater range of phenomena under simpler and more general principles.
==== Absence of evidence ====
Naomi Oreskes cautions against making "the classic error of conflating absence of evidence with evidence of absence [emphases added]." She cites two examples of this error that were perpetrated in 2016 and 2023.
In 2016 the Cochrane Library, a collection of databases in medicine and other healthcare specialties, published a report that was widely understood to indicate that flossing one's teeth confers no advantage to dental health. But the American Academy of Periodontology, dental professors, deans of dental schools, and clinical dentists all held that clinical practice shows differences in tooth and gum health between those who floss and those who don't.
Oreskes explains that "Cochrane Reviews base their findings on randomized controlled trials (RCTs), often called the 'gold standard' of scientific evidence." But many questions can't be answered well using this method, and some can't be answered at all. "Nutrition is a case in point. [Y]ou can't control what people eat, and when you ask... what they have eaten, many people lie. Flossing is similar. One survey concluded that one in four Americans who claimed to floss regularly was fibbing."
In 2023 Cochrane published a report determining that wearing surgical masks "probably makes little or no difference" in slowing the spread of respiratory illnesses such as COVID-19. Mass media reduced this to the claim that masks did not work. The Cochrane Library's editor-in-chief objected to such characterizations of the review; she said the report had not concluded that "masks don't work", but rather that the "results were inconclusive." The report had made clear that its conclusions were about the quality and quantity of the available evidence, which the authors felt were insufficient to prove that masking was effective. The report's authors were "uncertain whether wearing [surgical] masks or N95/P2 respirators helps to slow the spread of respiratory viruses." Still, they were also uncertain about that uncertainty [emphasis added], stating that their confidence in their conclusion was "low to moderate."
Subsequently the report's lead author confused the public by stating that mask-wearing "Makes no difference – none of it", and that Covid policies were "evidence-free": he thus perpetrated what Oreskes calls "the [...] error of conflating absence of evidence with evidence of absence." Studies have in fact shown that U.S. states with mask mandates saw a substantial decline in Covid spread within days of mandate orders being signed; in the period from 31 March to 22 May 2020, more than 200,000 cases were avoided.
Oreskes calls the Cochrane report's neglect of the epidemiological evidence – because it didn't meet Cochrane's rigid standard – "methodological fetishism": scientists "fixate on a preferred methodology and dismiss studies that don't follow it."
=== Artificial intelligence ===
The term "artificial intelligence" (AI) was coined in 1955 by John McCarthy when he and other computer scientists were planning a workshop and did not want to invite Norbert Wiener, the brilliant, pugnacious, and increasingly philosophical (rather than practical) author on feedback mechanisms who had coined the term "cybernetics". The new term artificial intelligence, writes Kenneth Cukier, "set in motion decades of semantic squabbles ('Can machines think?') and fueled anxieties over malicious robots... If McCarthy... had chosen a blander phrase—say, 'automation studies'—the concept might not have appealed as much to Hollywood [movie] producers and [to] journalists..." Similarly Naomi Oreskes has commented: "[M]achine 'intelligence'... isn't intelligence at all but something more like 'machine capability.'"
As machines have become increasingly capable, specific tasks considered to require "intelligence", such as optical character recognition, have often been removed from the definition of AI, a phenomenon known as the "AI effect". It has been quipped that "AI is whatever hasn't been done yet."
Since 1950, when Alan Turing proposed what has come to be called the "Turing test," there has been speculation whether machines such as computers can possess intelligence; and, if so, whether intelligent machines could become a threat to human intellectual and scientific ascendancy—or even an existential threat to humanity. John Searle points out common confusion about the correct interpretation of computation and information technology. "For example, one routinely reads that in exactly the same sense in which Garry Kasparov… beat Anatoly Karpov in chess, the computer called Deep Blue played and beat Kasparov.... [T]his claim is [obviously] suspect. In order for Kasparov to play and win, he has to be conscious that he is playing chess, and conscious of a thousand other things... Deep Blue is conscious of none of these things because it is not conscious of anything at all. Why is consciousness so important? You cannot literally play chess or do much of anything else cognitive if you are totally disassociated from consciousness."
Searle explains that, "in the literal, real, observer-independent sense in which humans compute, mechanical computers do not compute. They go through a set of transitions in electronic states that we can interpret computationally. The transitions in those electronic states are absolute or observer-independent, but the computation is observer-relative. The transitions in physical states are just electrical sequences unless some conscious agent can give them a computational interpretation.... There is no psychological reality at all to what is happening in the [computer]."
"[A] digital computer", writes Searle, "is a syntactical machine. It manipulates symbols and does nothing else. For this reason, the project of creating human intelligence by designing a computer program that will pass the Turing Test... is doomed from the start. The appropriately programmed computer has a syntax [rules for constructing or transforming the symbols and words of a language] but no semantics [comprehension of meaning].... Minds, on the other hand, have mental or semantic content."
Like Searle, Christof Koch, chief scientist and president of the Allen Institute for Brain Science, in Seattle, is doubtful about the possibility of "intelligent" machines attaining consciousness, because "[e]ven the most sophisticated brain simulations are unlikely to produce conscious feelings." According to Koch,
Whether machines can become sentient [is important] for ethical reasons. If computers experience life through their own senses, they cease to be purely a means to an end determined by their usefulness to... humans. Per GNW [the Global Neuronal Workspace theory], they turn from mere objects into subjects... with a point of view.... Once computers' cognitive abilities rival those of humanity, their impulse to push for legal and political rights will become irresistible – the right not to be deleted, not to have their memories wiped clean, not to suffer pain and degradation. The alternative, embodied by IIT [Integrated Information Theory], is that computers will remain only supersophisticated machinery, ghostlike empty shells, devoid of what we value most: the feeling of life itself.
Professor of psychology and neural science Gary Marcus points out a so far insuperable stumbling block to artificial intelligence: an incapacity for reliable disambiguation. "[V]irtually every sentence [that people generate] is ambiguous, often in multiple ways. Our brain is so good at comprehending language that we do not usually notice." A prominent example is known as the "pronoun disambiguation problem" ("PDP"): a machine has no way of determining to whom or what a pronoun in a sentence—such as "he", "she" or "it"—refers.
Marcus has described current large language models as "approximations to [...] language use rather than language understanding".
Computer scientist Pedro Domingos writes: "AIs are like autistic savants and will remain so for the foreseeable future.... AIs lack common sense and can easily make errors that a human never would... They are also liable to take our instructions too literally, giving us precisely what we asked for instead of what we actually wanted."
Kai-Fu Lee, a Beijing-based venture capitalist, artificial-intelligence (AI) expert with a Ph.D. in computer science from Carnegie Mellon University, and author of the 2018 book, AI Superpowers: China, Silicon Valley, and the New World Order, emphasized in a 2018 PBS Amanpour interview with Hari Sreenivasan that AI, with all its capabilities, will never be capable of creativity or empathy. Bill Gates, interviewed in 2025 by Walter Isaacson on Amanpour and Company, similarly said that artificial intelligence possesses no sentience and is incapable of human feeling or understanding.
Parallel views were expressed in a 23 May 2025 Firing Line interview by Fei-Fei Li, co-director of the Stanford Institute for Human-Centered Artificial Intelligence. She emphasized that "AI is a tool" that can help humanity in many ways but that it should not be subjected to hyperbole, either laudatory or alarmist (e.g., that it "will end humanity"); that, in order to avoid harmful applications ("Any technology can harm people"), it requires a "good regulatory framework"; and that AI has no "emotional intelligence" or creativity.
Paul Scharre writes in Foreign Affairs that "Today's AI technologies are powerful but unreliable." George Dyson, historian of computing, writes (in what might be called "Dyson's Law") that "Any system simple enough to be understandable will not be complicated enough to behave intelligently, while any system complicated enough to behave intelligently will be too complicated to understand." Computer scientist Alex Pentland writes: "Current AI machine-learning algorithms are, at their core, dead simple stupid. They work, but they work by brute force."
"Artificial intelligence" is synonymous with "machine intelligence." The more perfectly adapted an AI program is to a given task, the less applicable it will be to other specific tasks. An abstracted, AI general intelligence is a remote prospect, if feasible at all. Melanie Mitchell notes that an AI program called AlphaGo bested one of the world's best Go players, but that its "intelligence" is nontransferable: it cannot "think" about anything except Go. Mitchell writes: "We humans tend to overestimate AI advances and underestimate the complexity of our own intelligence." Writes Paul Taylor: "Perhaps there is a limit to what a computer can do without knowing that it is manipulating imperfect representations of an external reality."
Humankind may not be able to outsource, to machines, its creative efforts in the sciences, technology, and culture.
Gary Marcus cautions against being taken in by deceptive claims about artificial general intelligence capabilities that are put out in press releases by self-interested companies which tell the press and public "only what the companies want us to know." Marcus writes:
Although deep learning has advanced the ability of machines to recognize patterns in data, it has three major flaws. The patterns that it learns are, ironically, superficial not conceptual; the results it creates are hard to interpret; and the results are difficult to use in the context of other processes, such as memory and reasoning. As Harvard University computer scientist Les Valiant noted, "The central challenge [going forward] is to unify the formulation of... learning and reasoning."
James Gleick writes: "Agency is what distinguishes us from machines. For biological creatures, reason and purpose come from acting in the world and experiencing the consequences. Artificial intelligences – disembodied, strangers to blood, sweat, and tears – have no occasion for that."
=== Uncertainty ===
A central concern for science and scholarship is the reliability and reproducibility of their findings. Of all fields of study, none is capable of such precision as physics. But even there the results of studies, observations, and experiments cannot be considered absolutely certain and must be treated probabilistically; hence, statistically.
In 1925 British geneticist and statistician Ronald Fisher published Statistical Methods for Research Workers, which established him as the father of modern statistics. He proposed a statistical test that summarized the compatibility of data with a given proposed model and produced a "p value". He counselled pursuing results with p values below 0.05 and not wasting time on results above that. Thus arose the idea that a p value less than 0.05 constitutes "statistical significance" – a mathematical definition of "significant" results.
The use of p values, ever since, to determine the statistical significance of experimental results has contributed to an illusion of certainty and to reproducibility crises in many scientific fields, especially in experimental economics, biomedical research, and psychology.
Every statistical model relies on a set of assumptions about how data are collected and analyzed and about how researchers decide to present their results. These results almost always center on null-hypothesis significance testing, which produces a p value. Such testing does not address the truth head-on but obliquely: significance testing is meant to indicate only whether a given line of research is worth pursuing further. It does not say how likely the hypothesis is to be true, but instead addresses an alternative question: if the hypothesis were false, how unlikely would the data be? The importance of "statistical significance", reflected in the p value, can be exaggerated or overemphasized – something that readily occurs with small samples. That has caused replication crises.
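As an illustration (a minimal sketch added here, not drawn from the cited sources; the sample sizes, effect size, and random seed are arbitrary assumptions), the following Python fragment runs a conventional null-hypothesis significance test on two small simulated samples. The p value it yields answers only the oblique question described above: how unlikely such data would be if the null hypothesis were true.

```python
# Minimal sketch of null-hypothesis significance testing (illustrative only).
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
control = rng.normal(loc=0.0, scale=1.0, size=20)   # small sample, no effect
treated = rng.normal(loc=0.5, scale=1.0, size=20)   # small sample, modest true effect

t_stat, p_value = stats.ttest_ind(treated, control)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")

# Fisher's conventional threshold: p < 0.05 is labeled "statistically significant".
# The p value says how surprising the data would be under the null hypothesis;
# it does not say how likely the research hypothesis is to be true, and with
# samples this small the verdict can easily flip between repetitions.
print("significant at 0.05" if p_value < 0.05 else "not significant at 0.05")
```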
Some scientists have advocated "redefining statistical significance", shifting its threshold from 0.05 to 0.005 for claims of new discoveries. Others say such redefining does no good because the real problem is the very existence of a threshold.
Some scientists prefer to use Bayesian methods, a more direct statistical approach which takes initial beliefs, adds in new evidence, and updates the beliefs. Another alternative procedure is to use the surprisal, a mathematical quantity that converts p values into bits – as in computer bits – of information; from that perspective, 0.05 is a weak standard.
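For instance (a minimal sketch added here for illustration, not taken from the cited sources), the surprisal of a p value is simply its negative base-2 logarithm, so conventional thresholds translate into modest numbers of bits:

```python
# Surprisal: converting p values into bits of information, S = -log2(p).
import math

for p in (0.05, 0.005, 0.0001):
    bits = -math.log2(p)
    print(f"p = {p:g} -> {bits:.1f} bits")

# p = 0.05 corresponds to only about 4.3 bits, roughly the surprise of seeing
# four fair coin tosses all come up heads, which is why, in this perspective,
# it is a weak standard of evidence.
```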
When Ronald Fisher embraced the concept of "significance" in the early 20th century, it meant "signifying" but not "important". Statistical "significance" has since acquired an excessive connotation of confidence in the validity of experimental results. Statistician Andrew Gelman says, "The original sin is people wanting certainty when it's not appropriate." "Ultimately", writes Lydia Denworth, "a successful theory is one that stands up repeatedly to decades of scrutiny."
Increasingly, attention is being given to the principles of open science, such as publishing more detailed research protocols and requiring authors to follow prespecified analysis plans and to report when they deviate from them.
== Discovery ==
=== Discoveries and inventions ===
Fifty years before Florian Znaniecki published his 1923 paper proposing the creation of an empirical field devoted to the study of science, Aleksander Głowacki (better known by his pen name, Bolesław Prus) had made the same proposal. In an 1873 public lecture "On Discoveries and Inventions", Prus said:
Until now there has been no science that describes the means for making discoveries and inventions, and the generality of people, as well as many men of learning, believe that there never will be. This is an error. Someday a science of making discoveries and inventions will exist and will render services. It will arise not all at once; first only its general outline will appear, which subsequent researchers will correct and elaborate, and which still later researchers will apply to individual branches of knowledge.
Prus defines "discovery" as "the finding out of a thing that has existed and exists in nature, but which was previously unknown to people"; and "invention" as "the making of a thing that has not previously existed, and which nature itself cannot make."
He illustrates the concept of "discovery":
Until 400 years ago, people thought that the Earth comprised just three parts: Europe, Asia, and Africa; it was only in 1492 that the Genoese, Christopher Columbus, sailed out from Europe into the Atlantic Ocean and, proceeding ever westward, after [10 weeks] reached a part of the world that Europeans had never known. In that new land he found copper-colored people who went about naked, and he found plants and animals different from those in Europe; in short, he had discovered a new part of the world that others would later name "America." We say that Columbus had discovered America, because America had already long existed on Earth.
Prus illustrates the concept of "invention":
[As late as] 50 years ago, locomotives were unknown, and no one knew how to build one; it was only in 1828 that the English engineer Stephenson built the first locomotive and set it in motion. So we say that Stephenson invented the locomotive, because this machine had not previously existed and could not by itself have come into being in nature; it could only have been made by man.
According to Prus, "inventions and discoveries are natural phenomena and, as such, are subject to certain laws." Those are the laws of "gradualness", "dependence", and "combination".
1. The law of gradualness. No discovery or invention arises at once perfected, but is perfected gradually; likewise, no invention or discovery is the work of a single individual but of many individuals, each adding his little contribution.
2. The law of dependence. An invention or discovery is conditional on the prior existence of certain known discoveries and inventions. ...If the rings of Saturn can [only] be seen through telescopes, then the telescope had to have been invented before the rings could have been seen. [...]
3. The law of combination. Any new discovery or invention is a combination of earlier discoveries and inventions, or rests on them. When I study a new mineral, I inspect it, I smell it, I taste it ... I combine the mineral with a balance and with fire...in this way I learn ever more of its properties.
Each of Prus' three "laws" entails important corollaries. The law of gradualness implies the following:
a) Since every discovery and invention requires perfecting, let us not pride ourselves only on discovering or inventing something completely new, but let us also work to improve or get to know more exactly things that are already known and already exist. [...]
b) The same law of gradualness demonstrates the necessity of expert training. Who can perfect a watch, if not a watchmaker with a good comprehensive knowledge of his métier? Who can discover new characteristics of an animal, if not a naturalist?
From the law of dependence flow the following corollaries:
a) No invention or discovery, even one seemingly without value, should be dismissed, because that particular trifle may later prove very useful. There would seem to be no simpler invention than the needle, yet the clothing of millions of people, and the livelihoods of millions of seamstresses, depend on the needle's existence. Even today's beautiful sewing machine would not exist, had the needle not long ago been invented.
b) The law of dependence teaches us that what cannot be done today, might be done later. People give much thought to the construction of a flying machine that could carry many persons and parcels. The inventing of such a machine will depend, among other things, on inventing a material that is, say, as light as paper and as sturdy and fire-resistant as steel.
Finally, Prus' corollaries to his law of combination:
a) Anyone who wants to be a successful inventor, needs to know a great many things—in the most diverse fields. For if a new invention is a combination of earlier inventions, then the inventor's mind is the ground on which, for the first time, various seemingly unrelated things combine. Example: The steam engine combines the kettle for cooking Rumford's Soup, the pump, and the spinning wheel.
[...] What is the connection among zinc, copper, sulfuric acid, a magnet, a clock mechanism, and an urgent message? All these had to come together in the mind of the inventor of the telegraph... [...]
The greater the number of inventions that come into being, the more things a new inventor must know; the first, earliest and simplest inventions were made by completely uneducated people—but today's inventions, particularly scientific ones, are products of the most highly educated minds. [...]
b) A second corollary concerns societies that wish to have inventors. I said that a new invention is created by combining the most diverse objects; let us see where this takes us.
Suppose I want to make an invention, and someone tells me: Take 100 different objects and bring them into contact with one another, first two at a time, then three at a time, finally four at a time, and you will arrive at a new invention. Imagine that I take a burning candle, charcoal, water, paper, zinc, sugar, sulfuric acid, and so on, 100 objects in all, and combine them with one another, that is, bring into contact first two at a time: charcoal with flame, water with flame, sugar with flame, zinc with flame, sugar with water, etc. Each time, I shall see a phenomenon: thus, in fire, sugar will melt, charcoal will burn, zinc will heat up, and so on. Now I will bring into contact three objects at a time, for example, sugar, zinc and flame; charcoal, sugar and flame; sulfuric acid, zinc and water; etc., and again I shall experience phenomena. Finally I bring into contact four objects at a time, for example, sugar, zinc, charcoal, and sulfuric acid. Ostensibly this is a very simple method, because in this fashion I could make not merely one but a dozen inventions. But will such an effort not exceed my capability? It certainly will. A hundred objects, combined in twos, threes and fours, will make over 4 million combinations; so if I made 100 combinations a day, it would take me over 110 years to exhaust them all!
But if by myself I am not up to the task, a sizable group of people will be. If 1,000 of us came together to produce the combinations that I have described, then any one person would only have to carry out slightly more than 4,000 combinations. If each of us performed just 10 combinations a day, together we would finish them all in less than a year and a half: 1,000 people would make an invention which a single man would have to spend more than 110 years to make…
The conclusion is quite clear: a society that wants to win renown with its discoveries and inventions has to have a great many persons working in every branch of knowledge. One or a few men of learning and genius mean nothing today, or nearly nothing, because everything is now done by large numbers. I would like to offer the following simile: Inventions and discoveries are like a lottery; not every player wins, but from among the many players a few must win. The point is not that John or Paul, because they want to make an invention and because they work for it, shall make an invention; but where thousands want an invention and work for it, the invention must appear, as surely as an unsupported rock must fall to the ground.
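Prus's arithmetic in the passage above can be checked directly. The following minimal sketch (added here for illustration; not part of the lecture) counts the combinations of 100 objects taken two, three, and four at a time and recovers his estimates of the time required:

```python
# Checking Prus's arithmetic: combinations of 100 objects taken 2, 3, and 4 at a time.
from math import comb

total = comb(100, 2) + comb(100, 3) + comb(100, 4)
print(total)                       # 4,087,875 -> "over 4 million combinations"

# One person making 100 combinations a day:
print(total / 100 / 365.25)        # about 112 years -> "over 110 years"

# 1,000 people sharing the work, each making 10 combinations a day:
print(total / 1000 / 10 / 365.25)  # about 1.1 years -> "less than a year and a half"
```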
But, asks Prus, "What force drives [the] toilsome, often frustrated efforts [of the investigators]? What thread will clew these people through hitherto unexplored fields of study?"
[T]he answer is very simple: man is driven to efforts, including those of making discoveries and inventions, by needs; and the thread that guides him is observation: observation of the works of nature and of man.
I have said that the mainspring of all discoveries and inventions is needs. In fact, is there any work of man that does not satisfy some need? We build railroads because we need rapid transportation; we build clocks because we need to measure time; we build sewing machines because the speed of [unaided] human hands is insufficient. We abandon home and family and depart for distant lands because we are drawn by curiosity to see what lies elsewhere. We forsake the society of people and we spend long hours in exhausting contemplation because we are driven by a hunger for knowledge, by a desire to solve the challenges that are constantly thrown up by the world and by life!
Needs never cease; on the contrary, they are always growing. While the pauper thinks about a piece of bread for lunch, the rich man thinks about wine after lunch. The foot traveler dreams of a rudimentary wagon; the railroad passenger demands a heater. The infant is cramped in its cradle; the mature man is cramped in the world. In short, everyone has his needs, and everyone desires to satisfy them, and that desire is an inexhaustible source of new discoveries, new inventions, in short, of all progress.
But needs are general, such as the needs for food, sleep and clothing; and special, such as needs for a new steam engine, a new telescope, a new hammer, a new wrench. To understand the former needs, it suffices to be a human being; to understand the latter needs, one must be a specialist—an expert worker. Who knows better than a tailor what it is that tailors need, and who better than a tailor knows how to find the right way to satisfy the need?
Now consider how observation can lead man to new ideas; and to that end, as an example, let us imagine how, more or less, clay products came to be invented.
Suppose that somewhere there lived on clayey soil a primitive people who already knew fire. When rain fell on the ground, the clay turned doughy; and if, shortly after the rain, a fire was set on top of the clay, the clay under the fire became fired and hardened. If such an event occurred several times, the people might observe and thereafter remember that fired clay becomes hard like stone and does not soften in water. One of the primitives might also, when walking on wet clay, have impressed deep tracks into it; after the sun had dried the ground and rain had fallen again, the primitives might have observed that water remains in those hollows longer than on the surface. Inspecting the wet clay, the people might have observed that this material can be easily kneaded in one's fingers and accepts various forms.
Some ingenious persons might have started shaping clay into various animal forms [...] etc., including something shaped like a tortoise shell, which was in use at the time. Others, remembering that clay hardens in fire, might have fired the hollowed-out mass, thereby creating the first [clay] bowl.
After that, it was a relatively easy matter to perfect the new invention; someone else could discover clay more suitable for such manufactures; someone else could invent a glaze, and so on, with nature and observation at every step pointing out to man the way to invention. [...]
[This example] illustrates how people arrive at various ideas: by closely observing all things and wondering about all things.
Take another example. [S]ometimes, in a pane of glass, we find disks and bubbles, looking through which we see objects more distinctly than with the naked eye. Suppose that an alert person, spotting such a bubble in a pane, took out a piece of glass and showed it to others as a toy. Possibly among them there was a man with weak vision who found that, through the bubble in the pane, he saw better than with the naked eye. Closer investigation showed that bilaterally convex glass strengthens weak vision, and in this way eyeglasses were invented. People may first have cut glass for eyeglasses from glass panes, but in time others began grinding smooth pieces of glass into convex lenses and producing proper eyeglasses.
The art of grinding eyeglasses was known almost 600 years ago. A couple of hundred years later, the children of a certain eyeglass grinder, while playing with lenses, placed one in front of another and found that they could see better through two lenses than through one. They informed their father about this curious occurrence, and he began producing tubes with two magnifying lenses and selling them as a toy. Galileo, the great Italian scientist, on learning of this toy, used it for a different purpose and built the first telescope.
This example, too, shows us that observation leads man by the hand to inventions. This example again demonstrates the truth of gradualness in the development of inventions, but above all also the fact that education amplifies man's inventiveness. A simple lens-grinder formed two magnifying glasses into a toy—while Galileo, one of the most learned men of his time, made a telescope. As Galileo's mind was superior to the craftsman's mind, so the invention of the telescope was superior to the invention of a toy. [...]
The three laws [that have been discussed here] are immensely important and do not apply only to discoveries and inventions, but they pervade all of nature. An oak does not immediately become an oak but begins as an acorn, then becomes a seedling, later a little tree, and finally a mighty oak: we see here the law of gradualness. A seed that has been sown will not germinate until it finds sufficient heat, water, soil and air: here we see the law of dependence. Finally, no animal or plant, or even stone, is something homogeneous and simple but is composed of various organs: here we see the law of combination.
Prus holds that, over time, the multiplication of discoveries and inventions has improved the quality of people's lives and has expanded their knowledge. "This gradual advance of civilized societies, this constant growth in knowledge of the objects that exist in nature, this constant increase in the number of tools and useful materials, is termed progress, or the growth of civilization." Conversely, Prus warns, "societies and people that do not make inventions or know how to use them, lead miserable lives and ultimately perish."
=== Reproducibility ===
A fundamental feature of the scientific enterprise is reproducibility of results. "For decades", writes Shannon Palus, "it has been... an open secret that a [considerable part] of the literature in some fields is plain wrong." This effectively sabotages the scientific enterprise and costs the world many billions of dollars annually in wasted resources. Militating against reproducibility is scientists' reluctance to share techniques, for fear of forfeiting their advantage to other scientists. Also, scientific journals and tenure committees tend to prize impressive new results rather than gradual advances that systematically build on existing literature. Scientists who quietly fact-check others' work, or who spend extra time ensuring that their own protocols are easy for other researchers to understand, gain little for themselves.
With a view to improving reproducibility of scientific results, it has been suggested that research-funding agencies finance only projects that include a plan for making their work transparent. In 2016 the U.S. National Institutes of Health introduced new application instructions and review questions to encourage scientists to improve reproducibility. The NIH requests more information on how the study builds on previous work, and a list of variables that could affect the study, such as the sex of animal subjects—a previously overlooked factor that led many studies to describe phenomena found in male animals as universal.
Likewise, the questions that a funder can ask in advance could be asked by journals and reviewers. One solution is "registered reports", a preregistration of studies whereby a scientist submits, for publication, research analysis and design plans before actually doing the study. Peer reviewers then evaluate the methodology, and the journal promises to print the results, no matter what they are. In order to prevent over-reliance on preregistered studies—which could encourage safer, less venturesome research, thus over-correcting the problem—the preregistered-studies model could be operated in tandem with the traditional results-focused model, which may sometimes be more friendly to serendipitous discoveries.
The "replication crisis" is compounded by a finding, published in a study summarized in 2021 by historian of science Naomi Oreskes, that nonreplicable studies are cited oftener than replicable ones: in other words, that bad science seems to get more attention than good science. If a substantial proportion of science is unreplicable, it will not provide a valid basis for decision-making and may delay the use of science for developing new medicines and technologies. It may also undermine the public's trust, making it harder to get people vaccinated or act against climate change.
The study tracked papers – in psychology journals, economics journals, and in Science and Nature – with documented failures of replication. The unreplicable papers were cited more than average, even after news of their unreplicability had been published.
"These results," writes Oreskes, "parallel those of a 2018 study. An analysis of 126,000 rumor cascades on Twitter showed that false news spread faster and reached more people than verified true claims. [I]t was people, not [ro]bots, who were responsible for the disproportionate spread of falsehoods online."
=== Rediscovery ===
A 2016 Scientific American report highlights the role of rediscovery in science. Indiana University Bloomington researchers combed through 22 million scientific papers published over the previous century and found dozens of "Sleeping Beauties"—studies that lay dormant for years before getting noticed. The top finds, which languished longest and later received the most intense attention from scientists, came from the fields of chemistry, physics, and statistics. The dormant findings were wakened by scientists from other disciplines, such as medicine, in search of fresh insights, and by the ability to test once-theoretical postulations. Sleeping Beauties will likely become even more common in the future because of increasing accessibility of scientific literature. The Scientific American report lists the top 15 Sleeping Beauties: 7 in chemistry, 5 in physics, 2 in statistics, and 1 in metallurgy. Examples include:
Herbert Freundlich's "Concerning Adsorption in Solutions" (1906), the first mathematical model of adsorption (the adhesion of atoms or molecules to a surface). Today both environmental remediation and decontamination in industrial settings rely heavily on adsorption.
A. Einstein, B. Podolsky and N. Rosen, "Can Quantum-Mechanical Description of Physical Reality Be Considered Complete?" Physical Review, vol. 47 (May 15, 1935), pp. 777–780. This famous thought experiment in quantum physics—now known as the EPR paradox, after the authors' surname initials—was discussed theoretically when it first came out. It was not until the 1970s that physics had the experimental means to test quantum entanglement.
J[ohn] Turkevich, P. C. Stevenson, J. Hillier, "A Study of the Nucleation and Growth Processes in the Synthesis of Colloidal Gold", Discuss. Faraday Soc., 1951, 11, pp. 55–75, explains how to suspend gold nanoparticles in liquid. It owes its awakening to medicine, which now employs gold nanoparticles to detect tumors and deliver drugs.
William S. Hummers and Richard E. Offeman, "Preparation of Graphitic Oxide", Journal of the American Chemical Society, vol. 80, no. 6 (March 20, 1958), p. 1339, introduced Hummers' Method, a technique for making graphite oxide. Recent interest in graphene's potential has brought the 1958 paper to attention. Graphite oxide could serve as a reliable intermediate for the 2-D material.
=== Multiple discovery ===
Historians and sociologists have remarked on the occurrence, in science, of "multiple independent discovery". Sociologist Robert K. Merton defined such "multiples" as instances in which similar discoveries are made by scientists working independently of each other. "Sometimes the discoveries are simultaneous or almost so; sometimes a scientist will make a new discovery which, unknown to him, somebody else has made years before." Commonly cited examples of multiple independent discovery are the 17th-century independent formulation of calculus by Isaac Newton, Gottfried Wilhelm Leibniz, and others; the 18th-century independent discovery of oxygen by Carl Wilhelm Scheele, Joseph Priestley, Antoine Lavoisier, and others; and the 19th-century independent formulation of the theory of evolution of species by Charles Darwin and Alfred Russel Wallace.
Merton contrasted a "multiple" with a "singleton" — a discovery that has been made uniquely by a single scientist or group of scientists working together. He believed that it is multiple discoveries, rather than unique ones, that represent the common pattern in science.
Multiple discoveries in the history of science provide evidence for evolutionary models of science and technology, such as memetics (the study of self-replicating units of culture), evolutionary epistemology (which applies the concepts of biological evolution to study of the growth of human knowledge), and cultural selection theory (which studies sociological and cultural evolution in a Darwinian manner). A recombinant-DNA-inspired "paradigm of paradigms", describing a mechanism of "recombinant conceptualization", predicates that a new concept arises through the crossing of pre-existing concepts and facts. This is what is meant when one says that a scientist, scholar, or artist has been "influenced by" another — etymologically, that a concept of the latter's has "flowed into" the mind of the former.
The phenomenon of multiple independent discoveries and inventions can be viewed as a consequence of Bolesław Prus' three laws of gradualness, dependence, and combination (see "Discoveries and inventions", above). The first two laws may, in turn, be seen as corollaries to the third law, since the laws of gradualness and dependence imply the impossibility of certain scientific or technological advances pending the availability of certain theories, facts, or technologies that must be combined to produce a given scientific or technological advance.
=== Technology ===
Technology – the application of discoveries to practical matters – showed a remarkable acceleration in what economist Robert J. Gordon has identified as "the special century" from 1870 to 1970. By then, he writes, all the key technologies of modern life were in place: sanitation, electricity, mechanized agriculture, highways, air travel, telecommunications, and the like. The one signature technology of the 21st century has been the iPhone. Meanwhile, a long list of much-publicized potential major technologies remains in the prototype phase, including self-driving cars, flying cars, augmented-reality glasses, gene therapy, and nuclear fusion. An urgent goal for the 21st century, writes Gordon, is to undo some of the consequences of the last great technology boom by developing affordable zero- and negative-emissions technologies.
Technology is the sum of techniques, skills, methods, and processes used in the production of goods or services or in the accomplishment of objectives, such as scientific investigation. Paradoxically, technology, so conceived, has sometimes been noted to take primacy over the ends themselves – even to their detriment. Laura Grego and David Wright, writing in 2019 in Scientific American, observe that "Current U.S. missile defense plans are being driven largely by technology, politics and fear. Missile defenses will not allow us to escape our vulnerability to nuclear weapons. Instead large-scale developments will create barriers to taking real steps toward reducing nuclear risks—by blocking further cuts in nuclear arsenals and potentially spurring new deployments."
== Psychology of science ==
=== Habitus ===
Yale University physicist-astronomer Priyamvada Natarajan, writing of the virtually-simultaneous 1846 discovery of the planet Neptune by Urbain Le Verrier and John Couch Adams (after other astronomers, as early as Galileo Galilei in 1612, had unwittingly observed the planet), comments:
The episode is but one of many that proves science is not a dispassionate, neutral, and objective endeavor but rather one in which the violent clash of ideas and personal ambitions often combines with serendipity to propel new discoveries.
=== Nonconformance ===
A practical question concerns the traits that enable some individuals to achieve extraordinary results in their fields of work—and how such creativity can be fostered. Melissa Schilling, a student of innovation strategy, has identified some traits shared by eight major innovators in natural science or technology: Benjamin Franklin (1706–90), Thomas Edison (1847–1931), Nikola Tesla (1856–1943), Maria Skłodowska Curie (1867–1934), Dean Kamen (born 1951), Steve Jobs (1955–2011), Albert Einstein (1879–1955), and Elon Musk (born 1971).
Schilling chose innovators in natural science and technology rather than in other fields because she found much more consensus about important contributions to natural science and technology than, for example, to art or music. She further limited the set to individuals associated with multiple innovations. "When an individual is associated with only a single major invention, it is much harder to know whether the invention was caused by the inventor's personal characteristics or by simply being at the right place at the right time."
The eight individuals were all extremely intelligent, but "that is not enough to make someone a serial breakthrough innovator." Nearly all these innovators showed very high levels of social detachment, or separateness (a notable exception being Benjamin Franklin). "Their isolation meant that they were less exposed to dominant ideas and norms, and their sense of not belonging meant that even when exposed to dominant ideas and norms, they were often less inclined to adopt them." From an early age, they had all shown extreme faith in their ability to overcome obstacles—what psychology calls "self-efficacy".
"Most [of them, writes Schilling] were driven by idealism, a superordinate goal that was more important than their own comfort, reputation, or families. Nikola Tesla wanted to free mankind from labor through unlimited free energy and to achieve international peace through global communication. Elon Musk wants to solve the world's energy problems and colonize Mars. Benjamin Franklin was seeking greater social harmony and productivity through the ideals of egalitarianism, tolerance, industriousness, temperance, and charity. Marie Curie had been inspired by Polish Positivism's argument that Poland, which was under Tsarist Russian rule, could be preserved only through the pursuit of education and technological advance by all Poles—including women."
Most of the innovators also worked hard and tirelessly because they found work extremely rewarding. Some had an extremely high need for achievement. Many also appeared to find work autotelic—rewarding for its own sake. A surprisingly large portion of the breakthrough innovators have been autodidacts—self-taught persons—and excelled much more outside the classroom than inside.
"Almost all breakthrough innovation," writes Schilling, "starts with an unusual idea or with beliefs that break with conventional wisdom.... However, creative ideas alone are almost never enough. Many people have creative ideas, even brilliant ones. But usually we lack the time, knowledge, money, or motivation to act on those ideas." It is generally hard to get others' help in implementing original ideas because the ideas are often initially hard for others to understand and value. Thus each of Schilling's breakthrough innovators showed extraordinary effort and persistence. Even so, writes Schilling, "being at the right place at the right time still matter[ed]."
==== Lichenology ====
When Swiss botanist Simon Schwendener discovered in the 1860s that lichens were a symbiotic partnership between a fungus and an alga, his finding at first met with resistance from the scientific community. After his discovery that the fungus—which cannot make its own food—provides the lichen's structure, while the alga's contribution is its photosynthetic production of food, it was found that in some lichens a cyanobacterium provides the food—and a handful of lichen species contain both an alga and a cyanobacterium, along with the fungus.
A self-taught naturalist, Trevor Goward, has helped create a paradigm shift in the study of lichens and perhaps of all life-forms by doing something that people did in pre-scientific times: going out into nature and closely observing. His essays about lichens were largely ignored by most researchers because Goward has no scientific degrees and because some of his radical ideas are not supported by rigorous data.
When Goward told Toby Spribille, who at the time lacked a high-school education, about some of his lichenological ideas, Goward recalls, "He said I was delusional." Ultimately Spribille passed a high-school equivalency examination, obtained a Ph.D. in lichenology at the University of Graz in Austria, and became an assistant professor of the ecology and evolution of symbiosis at the University of Alberta. In July 2016 Spribille and his co-authors published a ground-breaking paper in Science revealing that many lichens contain a second fungus.
Spribille credits Goward with having "a huge influence on my thinking. [His essays] gave me license to think about lichens in [an unorthodox way] and freed me to see the patterns I worked out in Bryoria with my co-authors." Even so, "one of the most difficult things was allowing myself to have an open mind to the idea that 150 years of literature may have entirely missed the theoretical possibility that there would be more than one fungal partner in the lichen symbiosis." Spribille says that academia's emphasis on the canon of what others have established as important is inherently limiting.
=== Leadership ===
Contrary to previous studies indicating that higher intelligence makes for better leaders in various fields of endeavor, later research suggests that, at a certain point, a higher IQ can be viewed as harmful. Decades ago, psychologist Dean Simonton suggested that brilliant leaders' words may go over people's heads, their solutions could be more complicated to implement, and followers might find it harder to relate to them. At last, in the July 2017 Journal of Applied Psychology, he and two colleagues published the results of actual tests of the hypothesis.
Studied were 379 men and women business leaders in 30 countries, including the fields of banking, retail, and technology. The managers took IQ tests—an imperfect but robust predictor of performance in many areas—and each was rated on leadership style and effectiveness by an average of 8 co-workers. IQ correlated positively with ratings of leadership effectiveness, strategy formation, vision, and several other characteristics—up to a point. The ratings peaked at an IQ of about 120, which is higher than some 80% of office workers. Beyond that, the ratings declined. The researchers suggested that the ideal IQ could be higher or lower in various fields, depending on whether technical or social skills are more valued in a given work culture.
Psychologist Paul Sackett, not involved in the research, comments: "To me, the right interpretation of the work would be that it highlights a need to understand what high-IQ leaders do that leads to lower perceptions by followers. The wrong interpretation would be, 'Don't hire high-IQ leaders.'" The study's lead author, psychologist John Antonakis, suggests that leaders should use their intelligence to generate creative metaphors that will persuade and inspire others. "I think the only way a smart person can signal their intelligence appropriately and still connect with the people," says Antonakis, "is to speak in charismatic ways."
== Sociology of science ==
=== Specialization ===
Academic specialization produces great benefits for science and technology by focusing effort on discrete disciplines. But excessively narrow specialization can act as a roadblock to productive collaboration between traditional disciplines.
In 2017, in Manhattan, James Harris Simons, a noted mathematician and retired founder of one of the world's largest hedge funds, inaugurated the Flatiron Institute, a nonprofit enterprise whose goal is to apply his hedge fund's analytical strategies to projects dedicated to expanding knowledge and helping humanity. He has established computational divisions for research in astrophysics, biology, and quantum physics, and an interdisciplinary division for climate modelling that interfaces geology, oceanography, atmospheric science, biology, and climatology.
The latter, fourth Flatiron Institute division was inspired by a 2017 presentation to the institute's leadership by John Grotzinger, a "bio-geoscientist" from the California Institute of Technology, who explained the challenges of climate modelling. Grotzinger was a specialist in historical climate change—specifically, what had caused the great Permian extinction, during which virtually all species died. To properly assess this cataclysm, one had to understand both the rock record and the ocean's composition, but geologists did not interact much with physical oceanographers. Grotzinger's own best collaboration had resulted from a fortuitous lunch with an oceanographer. Climate modelling was an intrinsically difficult problem made worse by the information silos of academia. "If you had it all under one umbrella... it could result [much sooner] in a major breakthrough." Simons and his team found Grotzinger's presentation compelling, and the Flatiron Institute decided to establish its fourth and final computational division.
=== Mentoring ===
Sociologist Harriet Zuckerman, in her 1977 study of natural-science Nobel laureates in the United States, was struck by the fact that more than half (48) of the 92 laureates who did their prize-winning research in the U.S. by 1972 had worked either as students, postdoctorates, or junior collaborators under older Nobel laureates. Furthermore, those 48 future laureates had worked under a total of 71 laureate masters.
Social viscosity ensures that not every qualified novice scientist attains access to the most productive centers of scientific thought. Nevertheless, writes Zuckerman, "To some extent, students of promise can choose masters with whom to work and masters can choose among the cohorts of students who present themselves for study. This process of bilateral assortative selection is conspicuously at work among the ultra-elite of science. Actual and prospective members of that elite select their scientist parents and therewith their scientist ancestors just as later they select their scientist progeny and therewith their scientist descendants."
Zuckerman writes: "[T]he lines of elite apprentices to elite masters who had themselves been elite apprentices, and so on indefinitely, often reach far back into the history of science, long before 1900, when [Alfred] Nobel's will inaugurated what now amounts to the International Academy of Sciences. As an example of the many long historical chains of elite masters and apprentices, consider the German-born English laureate Hans Krebs (1953), who traces his scientific lineage [...] back through his master, the 1931 laureate Otto Warburg. Warburg had studied with Emil Fis[c]her [1852–1919], recipient of a prize in 1902 at the age of 50, three years before it was awarded [in 1905] to his teacher, Adolf von Baeyer [1835–1917], at age 70. This lineage of four Nobel masters and apprentices has its own pre-Nobelian antecedents. Von Baeyer had been the apprentice of F[riedrich] A[ugust] Kekulé [1829–1896], whose ideas of structural formulae revolutionized organic chemistry and who is perhaps best known for the often retold story about his having hit upon the ring structure of benzene in a dream (1865). Kekulé himself had been trained by the great organic chemist Justus von Liebig (1803–1873), who had studied at the Sorbonne with the master J[oseph] L[ouis] Gay-Lussac (1778–1850), himself once apprenticed to Claude Louis Berthollet (1748–1822). Among his many institutional and cognitive accomplishments, Berthollet helped found the École Polytechnique, served as science advisor to Napoleon in Egypt, and, more significant for our purposes here, worked with [Antoine] Lavoisier [1743–1794] to revise the standard system of chemical nomenclature."
=== Collaboration ===
Sociologist Michael P. Farrell has studied close creative groups and writes: "Most of the fragile insights that laid the foundation of a new vision emerged not when the whole group was together, and not when members worked alone, but when they collaborated and responded to one another in pairs." François Jacob, who, with Jacques Monod, pioneered the study of gene regulation, notes that by the mid-20th century, most research in molecular biology was conducted by twosomes. "Two are better than one for dreaming up theories and constructing models," writes Jacob. "For with two minds working on a problem, ideas fly thicker and faster. They are bounced from partner to partner.... And in the process, illusions are sooner nipped in the bud." As of 2018, in the previous 35 years, some half of Nobel Prizes in Physiology or Medicine had gone to scientific partnerships. James Somers describes a remarkable partnership between Google's top software engineers, Jeff Dean and Sanjay Ghemawat.
Twosome collaborations have also been prominent in creative endeavors outside the natural sciences and technology; examples are Claude Monet's and Pierre-Auguste Renoir's 1869 joint creation of Impressionism, Pablo Picasso's and Georges Braque's six-year collaborative creation of Cubism, and John Lennon's and Paul McCartney's collaborations on Beatles songs. "Everyone", writes James Somers, "falls into creative ruts, but two people rarely do so at the same time."
The same point was made by Francis Crick, half of the famous scientific duo of Crick and James Watson, who together discovered the structure of DNA, the genetic material. At the end of a PBS television documentary on James Watson, in a video clip Crick explains to Watson that their collaboration had been crucial to their discovery because, when one of them was wrong, the other would set him straight.
=== Politics ===
==== Big Science ====
What has been dubbed "Big Science" emerged from the United States' World War II Manhattan Project that produced the world's first nuclear weapons; and Big Science has since been associated with physics, which requires massive particle accelerators. In biology, Big Science debuted in 1990 with the Human Genome Project to sequence human DNA. In 2013 neuroscience became a Big Science domain when the U.S. announced a BRAIN Initiative and the European Union announced a Human Brain Project. Major new brain-research initiatives were also announced by Israel, Canada, Australia, New Zealand, Japan, and China.
Earlier successful Big Science projects had habituated politicians, mass media, and the public to view Big Science programs with sometimes uncritical favor.
The U.S.'s BRAIN Initiative was inspired by concern about the spread and cost of mental disorders and by excitement about new brain-manipulation technologies such as optogenetics. After some early false starts, the U.S. National Institute of Mental Health let the country's brain scientists define the BRAIN Initiative, and this led to an ambitious interdisciplinary program to develop new technological tools to better monitor, measure, and simulate the brain. Competition in research was ensured by the National Institute of Mental Health's peer-review process.
In the European Union, the European Commission's Human Brain Project got off to a rockier start because political and economic considerations obscured questions concerning the feasibility of the Project's initial scientific program, based principally on computer modeling of neural circuits. Four years earlier, in 2009, fearing that the European Union would fall further behind the U.S. in computer and other technologies, the European Union had begun creating a competition for Big Science projects, and the initial program for the Human Brain Project seemed a good fit for a European program that might take a lead in advanced and emerging technologies. Only in 2015, after over 800 European neuroscientists threatened to boycott the European-wide collaboration, were changes introduced into the Human Brain Project, supplanting many of the original political and economic considerations with scientific ones.
As of 2019, the European Union's Human Brain Project had not lived up to its extravagant promise.
=== Funding ===
==== Government funding ====
Nathan Myhrvold, former Microsoft chief technology officer and founder of Microsoft Research, argues that the funding of basic science cannot be left to the private sector—that "without government resources, basic science will grind to a halt." He notes that Albert Einstein's general theory of relativity, published in 1915, did not spring full-blown from his brain in a eureka moment; he worked at it for years—finally driven to complete it by a rivalry with mathematician David Hilbert. The history of almost any iconic scientific discovery or technological invention—the lightbulb, the transistor, DNA, even the Internet—shows that the famous names credited with the breakthrough "were only a few steps ahead of a pack of competitors." Some writers and elected officials have used this phenomenon of "parallel innovation" to argue against public financing of basic research: government, they assert, should leave it to companies to finance the research they need.
Myhrvold writes that such arguments are dangerously wrong: without government support, most basic scientific research will never happen. "This is most clearly true for the kind of pure research that has delivered... great intellectual benefits but no profits, such as the work that brought us the Higgs boson, or the understanding that a supermassive black hole sits at the center of the Milky Way, or the discovery of methane seas on the surface of Saturn's moon Titan. Company research laboratories used to do this kind of work: experimental evidence for the Big Bang was discovered at AT&T's Bell Labs, resulting in a Nobel Prize. Now those days are gone."
Even in applied fields such as materials science and computer science, writes Myhrvold, "companies now understand that basic research is a form of charity—so they avoid it." Bell Labs scientists created the transistor, but that invention earned billions for Intel and Microsoft. Xerox PARC engineers invented the modern graphical user interface, but Apple and Microsoft profited most. IBM researchers pioneered the use of giant magnetoresistance to boost hard-disk capacity but soon lost the disk-drive business to Seagate and Western Digital.
Company researchers now have to focus narrowly on innovations that can quickly bring revenue; otherwise the research budget could not be justified to the company's investors. "Those who believe profit-driven companies will altruistically pay for basic science that has wide-ranging benefits—but mostly to others and not for a generation—are naive.... If government were to leave it to the private sector to pay for basic research, most science would come to a screeching halt. What research survived would be done largely in secret, for fear of handing the next big thing to a rival."
Governmental investment is equally vital in the field of biological research. According to William A. Haseltine, a former Harvard Medical School professor and founder of that university's cancer and HIV / AIDS research departments, early efforts to control the COVID-19 pandemic were hampered by governments and industry everywhere having "pulled the plug on coronavirus research funding in 2006 after the first SARS [...] pandemic faded away and again in the years immediately following the MERS [outbreak, also caused by a coronavirus] when it seemed to be controllable. [...] The development of promising anti-SARS and MERS drugs, which might have been active against SARS–CoV-2 [in the Covid-19 pandemic] as well, was left unfinished for lack of money." Haseltine continues:
We learned from the HIV crisis that it was important to have research pipelines already established. [It was c]ancer research in the 1950s, 1960s and 1970s [that] built a foundation for HIV / Aids studies. [During those decades t]he government [had] responded to public concerns, sharply increasing federal funding of cancer research [...]. These efforts [had] culminated in Congress's approval of President Richard Nixon's National Cancer Act in 1971. This [had] built the science we needed to identify and understand HIV in the 1980s, although of course no one knew that payoff was coming.
In the 1980s the Reagan administration did not want to talk about AIDS or commit much funding to HIV research. [But o]nce the news broke that actor Rock Hudson was seriously ill with AIDS, [...] $320 million [were added to] the fiscal 1986 budget for AIDS research. [...] I helped [...] design this first congressionally funded AIDS research program with Anthony Fauci, the doctor now leading [the U.S.] fight against COVID-19. [...]
[The] tool set for virus and pharmaceutical research has improved enormously in the past 36 years since HIV was discovered. What used to take five or 10 years in the 1980s and 1990s in many cases now can be done in five or 10 months. We can rapidly identify and synthesize chemicals to predict which drugs will be effective. We can do cryoelectron microscopy to probe virus structures and simulate molecule-by-molecule interactions in a matter of weeks – something that used to take years. The lesson is to never let down our guard when it comes to funding antiviral research. We would have no hope of beating COVID-19 if it were not for the molecular biology gains we made during earlier virus battles. What we learn this time around will help us [...] during the next pandemic, but we must keep the money coming.
==== Private funding ====
A complementary perspective on the funding of scientific research is given by D.T. Max, writing about the Flatiron Institute, a computational center set up in 2017 in Manhattan to provide scientists with mathematical assistance. The Flatiron Institute was established by James Harris Simons, a mathematician who had used mathematical algorithms to make himself a Wall Street billionaire. The institute has three computational divisions dedicated respectively to astrophysics, biology, and quantum physics, and is working on a fourth division for climate modeling that will involve interfaces of geology, oceanography, atmospheric science, biology, and climatology.
The Flatiron Institute is part of a trend in the sciences toward privately funded research. In the United States, basic science has traditionally been financed by universities or the government, but private institutes are often faster and more focused. Since the 1990s, when Silicon Valley began producing billionaires, private institutes have sprung up across the U.S. In 1997 Larry Ellison launched the Ellison Medical Foundation to study the biology of aging. In 2003 Paul Allen founded the Allen Institute for Brain Science. In 2010 Eric Schmidt founded the Schmidt Ocean Institute.
These institutes have done much good, partly by providing alternatives to more rigid systems. But private foundations also have liabilities. Wealthy benefactors tend to direct their funding toward their personal enthusiasms. And foundations are not taxed; much of the money that supports them would otherwise have gone to the government.
==== Funding biases ====
John P.A. Ioannidis, of Stanford University Medical School, writes that "There is increasing evidence that some of the ways we conduct, evaluate, report and disseminate research are miserably ineffective. A series of papers in 2014 in The Lancet... estimated that 85 percent of investment in biomedical research is wasted. Many other disciplines have similar problems." Ioannidis identifies some science-funding biases that undermine the efficiency of the scientific enterprise, and proposes solutions:
Funding too few scientists: "[M]ajor success [in scientific research] is largely the result of luck, as well as hard work. The investigators currently enjoying huge funding are not necessarily genuine superstars; they may simply be the best connected." Solutions: "Use a lottery to decide which grant applications to fund (perhaps after they pass a basic review).... Shift... funds from senior people to younger researchers..."
No reward for transparency: "Many scientific protocols, analysis methods, computational processes and data are opaque. [M]any top findings cannot be reproduced. That is the case for two out of three top psychology papers, one out of three top papers in experimental economics and more than 75 percent of top papers identifying new cancer drug targets. [S]cientists are not rewarded for sharing their techniques." Solutions: "Create better infrastructure for enabling transparency, openness and sharing. Make transparency a prerequisite for funding. [P]referentially hire, promote or tenure... champions of transparency."
No encouragement for replication: Replication is indispensable to the scientific method. Yet, under pressure to produce new discoveries, researchers tend to have little incentive, and much counterincentive, to try replicating results of previous studies. Solutions: "Funding agencies must pay for replication studies. Scientists' advancement should be based not only on their discoveries but also on their replication track record."
No funding for young scientists: "Werner Heisenberg, Albert Einstein, Paul Dirac and Wolfgang Pauli made their top contributions in their mid-20s." But the average age of biomedical scientists receiving their first substantial grant is 46. The average age for a full professor in the U.S. is 55. Solutions: "A larger proportion of funding should be earmarked for young investigators. Universities should try to shift the aging distribution of their faculty by hiring more young investigators."
Biased funding sources: "Most funding for research and development in the U.S. comes not from the government but from private, for-profit sources, raising unavoidable conflicts of interest and pressure to deliver results favorable to the sponsor." Solutions: "Restrict or even ban funding that has overt conflicts of interest. Journals should not accept research with such conflicts. For less conspicuous conflicts, at a minimum ensure transparent and thorough disclosure."
Funding the wrong fields: "Well-funded fields attract more scientists to work for them, which increases their lobbying reach, fueling a vicious circle. Some entrenched fields absorb enormous funding even though they have clearly demonstrated limited yield or uncorrectable flaws." Solutions: "Independent, impartial assessment of output is necessary for lavishly funded fields. More funds should be earmarked for new fields and fields that are high risk. Researchers should be encouraged to switch fields, whereas currently they are incentivized to focus in one area."
Not spending enough: The U.S. military budget ($886 billion) is 24 times the budget of the National Institutes of Health ($37 billion). "Investment in science benefits society at large, yet attempts to convince the public often make matters worse when otherwise well-intentioned science leaders promise the impossible, such as promptly eliminating all cancer or Alzheimer's disease." Solutions: "We need to communicate how science funding is used by making the process of science clearer, including the number of scientists it takes to make major accomplishments.... We would also make a more convincing case for science if we could show that we do work hard on improving how we run it."
Rewarding big spenders: "Hiring, promotion and tenure decisions primarily rest on a researcher's ability to secure high levels of funding. But the expense of a project does not necessarily correlate with its importance. Such reward structures select mostly for politically savvy managers who know how to absorb money." Solutions: "We should reward scientists for high-quality work, reproducibility and social value rather than for securing funding. Excellent research can be done with little to no funding other than protected time. Institutions should provide this time and respect scientists who can do great work without wasting tons of money."
No funding for high-risk ideas: "The pressure that taxpayer money be 'well spent' leads government funders to back projects most likely to pay off with a positive result, even if riskier projects might lead to more important, but less assured, advances. Industry also avoids investing in high-risk projects... Innovation is extremely difficult, if not impossible, to predict..." Solutions: "Fund excellent scientists rather than projects and give them freedom to pursue research avenues as they see fit. Some institutions such as Howard Hughes Medical Institute already use this model with success." It must be communicated to the public and to policy-makers that science is a cumulative investment, that no one can know in advance which projects will succeed, and that success must be judged on the total agenda, not on a single experiment or result.
Lack of good data: "There is relatively limited evidence about which scientific practices work best. We need more research on research ('meta-research') to understand how to best perform, evaluate, review, disseminate and reward science." Solutions: "We should invest in studying how to get the best science and how to choose and reward the best scientists."
=== Diversity ===
Naomi Oreskes, professor of the history of science at Harvard University, writes about the desirability of diversity in the backgrounds of scientists.
The history of science is rife with [...] cases of misogyny, prejudice and bias. For centuries biologists promoted false theories of female inferiority, and scientific institutions typically barred women's participation. Historian of science [...] Margaret Rossiter has documented how, in the mid-19th century, female scientists created their own scientific societies to compensate for their male colleagues' refusal to acknowledge their work. Sharon Bertsch McGrayne filled an entire volume with the stories of women who should have been awarded the Nobel Prize for work that they did in collaboration with male colleagues – or, worse, that had been stolen by them. [...] Racial bias has been at least as pernicious as gender bias; it was scientists, after all, who codified the concept of race as a biological category that was not simply descriptive but also hierarchical.
[...] [C]ognitive science shows that humans are prone to bias, misperception, motivated reasoning and other intellectual pitfalls. Because reasoning is slow and difficult, we rely on heuristics – intellectual shortcuts that often work but sometimes fail spectacularly. (Believing that men are, in general, better than women in math is one tiring example.) [...]
[...] Science is a collective effort, and it works best when scientific communities are diverse. [H]eterogeneous communities are more likely than homogeneous ones to be able to identify blind spots and correct them. Science does not correct itself; scientists correct one another through critical interrogation. And that means being willing to interrogate not just claims about the external world but claims about [scientists'] own practices and processes as well.
=== Sexual bias ===
Claire Pomeroy, president of the Lasker Foundation, which is dedicated to advancing medical research, points out that women scientists continue to be subjected to discrimination in professional advancement.
Though the percentage of doctorates awarded to women in life sciences in the United States increased from 15 to 52 percent between 1969 and 2009, only a third of assistant professors and less than a fifth of full professors in biology-related fields in 2009 were women. Women make up only 15 percent of permanent department chairs in medical schools and barely 16 percent of medical-school deans.
The problem is a culture of unconscious bias that leaves many women feeling demoralized and marginalized. In one study, science faculty were given identical résumés in which the names and genders of two applicants were interchanged; both male and female faculty judged the male applicant to be more competent and offered him a higher salary.
Unconscious bias also appears as "microassaults" against women scientists: purportedly insignificant sexist jokes and insults that accumulate over the years and undermine confidence and ambition. Writes Claire Pomeroy: "Each time it is assumed that the only woman in the lab group will play the role of recording secretary, each time a research plan becomes finalized in the men's lavatory between conference sessions, each time a woman is not invited to go out for a beer after the plenary lecture to talk shop, the damage is reinforced."
"When I speak to groups of women scientists," writes Pomeroy, "I often ask them if they have ever been in a meeting where they made a recommendation, had it ignored, and then heard a man receive praise and support for making the same point a few minutes later. Each time the majority of women in the audience raise their hands. Microassaults are especially damaging when they come from a high-school science teacher, college mentor, university dean or a member of the scientific elite who has been awarded a prestigious prize—the very people who should be inspiring and supporting the next generation of scientists."
=== Sexual harassment ===
Sexual harassment is more prevalent in academia than in any other social sector except the military. A June 2018 report by the National Academies of Sciences, Engineering, and Medicine states that sexual harassment hurts individuals, diminishes the pool of scientific talent, and ultimately damages the integrity of science.
Paula Johnson, co-chair of the committee that drew up the report, describes some measures for preventing sexual harassment in science. One would be to replace trainees' individual mentoring with group mentoring, and to uncouple the mentoring relationship from the trainee's financial dependence on the mentor. Another way would be to prohibit the use of confidentiality agreements in connection with harassment cases.
A novel approach to the reporting of sexual harassment, dubbed Callisto, that has been adopted by some institutions of higher education, lets aggrieved persons record experiences of sexual harassment, date-stamped, without actually formally reporting them. This program lets people see if others have recorded experiences of harassment from the same individual, and share information anonymously.
=== Deterrent stereotypes ===
Psychologist Andrei Cimpian and philosophy professor Sarah-Jane Leslie have proposed a theory to explain why American women and African-Americans are often subtly deterred from seeking to enter certain academic fields by a misplaced emphasis on genius. Cimpian and Leslie had noticed that their respective fields are similar in their substance but hold different views on what is important for success. Much more than psychologists, philosophers value a certain kind of person: the "brilliant superstar" with an exceptional mind. Psychologists are more likely to believe that the leading lights in psychology grew to achieve their positions through hard work and experience. In 2015, women accounted for less than 30% of doctorates granted in philosophy; African-Americans made up only 1% of philosophy Ph.D.s. Psychology, on the other hand, has been successful in attracting women (72% of 2015 psychology Ph.D.s) and African-Americans (6% of psychology Ph.D.s).
An early insight into these disparities was provided to Cimpian and Leslie by the work of psychologist Carol Dweck. She and her colleagues had shown that a person's beliefs about ability matter a great deal for that person's ultimate success. A person who sees talent as a stable trait is motivated to "show off this aptitude" and to avoid making mistakes. By contrast, a person who adopts a "growth mindset" sees his or her current capacity as a work in progress: for such a person, mistakes are not an indictment but a valuable signal highlighting which of their skills are in need of work. Cimpian and Leslie and their collaborators tested the hypothesis that attitudes, about "genius" and about the unacceptability of making mistakes, within various academic fields may account for the relative attractiveness of those fields for American women and African-Americans. They did so by contacting academic professionals from a wide range of disciplines and asking them whether they thought that some form of exceptional intellectual talent was required for success in their field. The answers received from almost 2,000 academics in 30 fields matched the distribution of Ph.D.s in the way that Cimpian and Leslie had expected: fields that placed more value on brilliance also conferred fewer Ph.D.s on women and African-Americans. The proportion of women and African-American Ph.D.s in psychology, for example, was higher than the parallel proportions for philosophy, mathematics, or physics.
Further investigation showed that non-academics share similar ideas of which fields require brilliance. Exposure to these ideas at home or school could discourage young members of stereotyped groups from pursuing certain careers, such as those in the natural sciences or engineering. To explore this, Cimpian and Leslie asked hundreds of five-, six-, and seven-year-old boys and girls questions that measured whether they associated being "really, really smart" (i.e., "brilliant") with their sex. The results, published in January 2017 in Science, were consistent with scientific literature on the early acquisition of sex stereotypes. Five-year-old boys and girls showed no difference in their self-assessment; but by age six, girls were less likely to think that girls are "really, really smart." The authors next introduced another group of five-, six-, and seven-year-olds to unfamiliar gamelike activities that the authors described as being "for children who are really, really smart." Comparison of boys' and girls' interest in these activities at each age showed no sex difference at age five but significantly greater interest from boys at ages six and seven—exactly the ages when stereotypes emerge.
Cimpian and Leslie conclude that, "Given current societal stereotypes, messages that portray [genius or brilliance] as singularly necessary [for academic success] may needlessly discourage talented members of stereotyped groups."
=== Academic snobbery ===
Largely as a result of his growing popularity, astronomer and science popularizer Carl Sagan, creator of the 1980 PBS TV Cosmos series, came to be ridiculed by scientist peers and failed to receive tenure at Harvard University in the 1960s and membership in the National Academy of Sciences in the 1990s. The eponymous "Sagan effect" persists: as a group, scientists still discourage individual investigators from engaging with the public unless they are already well-established senior researchers.
The operation of the Sagan effect deprives society of the full range of expertise needed to make informed decisions about complex questions, including genetic engineering, climate change, and energy alternatives. Fewer scientific voices mean fewer arguments to counter antiscience or pseudoscientific discussion. The Sagan effect also creates the false impression that science is the domain of older white men (who dominate the senior ranks), thereby tending to discourage women and minorities from considering science careers.
A number of factors contribute to the Sagan effect's durability. At the height of the Scientific Revolution in the 17th century, many researchers emulated the example of Isaac Newton, who dedicated himself to physics and mathematics and never married. These scientists were viewed as pure seekers of truth who were not distracted by more mundane concerns. Similarly, today anything that takes scientists away from their research, such as having a hobby or taking part in public debates, can undermine their credibility as researchers.
Another, more prosaic factor in the Sagan effect's persistence may be professional jealousy.
However, there appear to be some signs that engaging with the rest of society is becoming less hazardous to a career in science. So many people have social-media accounts now that becoming a public figure is not as unusual for scientists as previously. Moreover, as traditional funding sources stagnate, going public sometimes leads to new, unconventional funding streams. A few institutions such as Emory University and the Massachusetts Institute of Technology may have begun to appreciate outreach as an area of academic activity, in addition to the traditional roles of research, teaching, and administration. Exceptional among federal funding agencies, the National Science Foundation now officially favors popularization.
=== Institutional snobbery ===
Like infectious diseases, ideas in academia are contagious. But why some ideas gain great currency while equally good ones remain in relative obscurity had been unclear. A team of computer scientists has used an epidemiological model to simulate how ideas move from one academic institution to another. The model-based findings, published in October 2018, show that ideas originating at prestigious institutions cause bigger "epidemics" than equally good ideas from less prominent places. The finding reveals a big weakness in how science is done. Many highly trained people with good ideas do not obtain posts at the most prestigious institutions; much good work published by workers at less prestigious places is overlooked by other scientists and scholars because they are not paying attention.
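The study's own model is not reproduced here, but the flavor of such a simulation can be conveyed with a toy sketch. The Python below is a hypothetical construction, not the model used in the 2018 paper: it spreads an "idea" through a set of institutions and lets the chance of transmission scale with the prestige of the institution passing it on, so that an identical idea seeded at a prestigious origin reaches, on average, more institutions than one seeded at an obscure origin.

```python
import random

def simulate_spread(n_institutions=200, n_links=5, seed_prestige=1.0,
                    base_rate=0.3, steps=40, trials=500, rng_seed=0):
    """Toy 'idea epidemic': each adopting institution tries to pass the idea
    to a few random peers; the chance of transmission scales with the
    prestige of the institution the idea currently sits at.  Purely
    illustrative -- not the model used in the 2018 study."""
    rng = random.Random(rng_seed)
    totals = 0
    for _ in range(trials):
        # Prestige falls off with rank; the seed's prestige is set by the caller.
        prestige = [1.0 / (i + 1) for i in range(n_institutions)]
        prestige[0] = seed_prestige
        adopted = {0}                      # institution 0 originates the idea
        frontier = {0}
        for _ in range(steps):
            new = set()
            for src in frontier:
                for _ in range(n_links):
                    dst = rng.randrange(n_institutions)
                    if dst not in adopted and rng.random() < base_rate * prestige[src]:
                        new.add(dst)
            adopted |= new
            frontier = new
        totals += len(adopted)
    return totals / trials

# Seeding the same idea at a high-prestige vs. a low-prestige origin:
print(simulate_spread(seed_prestige=1.0))   # prestigious origin spreads further
print(simulate_spread(seed_prestige=0.1))   # obscure origin usually fizzles
```

Run repeatedly, the high-prestige seeding consistently produces larger average "epidemics" even though the idea itself is identical, which is the qualitative point of the published finding.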
Naomi Oreskes remarks on another drawback to deprecating public universities in favor of Ivy League schools: "In 1970 most jobs did not require a college degree. Today nearly all well-paying ones do. With the rise of artificial intelligence and the continued outsourcing of low-skilled and de-skilled jobs overseas, that trend most likely will accelerate. Those who care about equity of opportunity should pay less attention to the lucky few who get into Harvard or other highly selective private schools and more to public education, because for most Americans, the road to opportunity runs through public schools."
=== Public relations ===
Resistance, among some of the public, to accepting vaccination and the reality of climate change may be traceable partly to several decades of partisan attacks on government, leading to distrust of government science and then of science generally.
Many scientists themselves have been loth to involve themselves in public policy debates for fear of losing credibility: they worry that if they participate in public debate on a contested question, they will be viewed as biased and discounted as partisan. However, studies show that most people want to hear from scientists on matters within their areas of expertise. Research also suggests that scientists can feel comfortable offering policy advice within their fields. "The ozone story", writes Naomi Oreskes, "is a case in point: no one knew better than ozone scientists about the cause of the dangerous hole and therefore what needed to be done to fix it."
Oreskes, however, identifies a factor that does "turn off" the public: scientists' frequent use of jargon – of expressions that tend to be misinterpreted by, or incomprehensible to, laypersons.
In climatological parlance, "positive feedback" refers to amplifying feedback loops, such as the ice-albedo feedback. ("Albedo", another piece of jargon, simply means "reflectivity".) The positive loop in question develops when global warming causes Arctic ice to melt, exposing water that is darker and reflects less of the sun's warming rays, leading to more warming, which leads to more melting... and so on. In climatology, such positive feedback is a bad thing; but for most laypersons, "it conjures reassuring images, such as receiving praise from your boss."
When astronomers say "metals," they mean any element heavier than helium, which includes oxygen and nitrogen, a usage that is massively confusing not just to laypersons but also to chemists. [To astronomers] [t]he Big Dipper isn't a constellation [...] it is an "asterism" [...] In AI, there is machine "intelligence," which isn't intelligence at all but something more like "machine capability." In ecology, there are "ecosystem services," which you might reasonably think refers to companies that clean up oil spills, but it is [actually] ecological jargon for all the good things that the natural world does for us. [T]hen there's [...] the theory of "communication accommodation," which means speaking so that the listener can understand.
=== Publish or perish ===
"[R]esearchers," writes Naomi Oreskes, "are often judged more by the quantity of their output than its quality. Universities [emphasize] metrics such as the numbers of published papers and citations when they make hiring, tenure and promotion decisions."
When – for a number of possible reasons – publication in legitimate peer-reviewed journals is not feasible, this often creates a perverse incentive to publish in "predatory journals", which do not uphold scientific standards. Some 8,000 such journals publish 420,000 papers annually – nearly a fifth of the scientific community's annual output of 2.5 million papers. The papers published in a predatory journal are listed in scientific databases alongside legitimate journals, making it hard to discern the difference.
One reason why some scientists publish in predatory journals is that prestigious scientific journals may charge scientists thousands of dollars for publishing, whereas a predatory journal typically charges less than $200. (Hence authors of papers in the predatory journals are disproportionately located in less wealthy countries and institutions.)
Publishing in predatory journals can be life-threatening when physicians and patients accept spurious claims about medical treatments; and invalid studies can wrongly influence public policy. More such predatory journals are appearing every year. In 2008 Jeffrey Beall, a University of Colorado librarian, developed a list of predatory journals which he updated for several years.
Naomi Oreskes argues that, "[t]o put an end to predatory practices, universities and other research institutions need to find ways to correct the incentives that lead scholars to prioritize publication quantity... Setting a maximum limit on the number of articles that hiring or funding committees can consider might help... as could placing less importance on the number of citations an author gets. After all, the purpose of science is not merely to produce papers. It is to produce papers that tell us something truthful and meaningful about the world."
==== Data fabrication ====
The perverse incentive to "publish or perish" is sometimes served by the fabrication of data. A classic example is the identical-twin research of Cyril Burt, the results of which – soon after Burt's death – were found to have been based on fabricated data.
Writes Gideon Lewis-Kraus:
"One of the confounding things about the social sciences is that observational evidence can produce only correlations. [For example, t]o what extent is dishonesty [which is the subject of a number of social-science studies] a matter of character, and to what extent a matter of situation? Research misconduct is sometimes explained away by incentives – the publishing requirements for the job market, or the acclaim that can lead to consulting fees and Davos appearances. [...] The differences between p-hacking and fraud is one of degree. And once it becomes customary within a field to inflate results, the field selects for researchers inclined to do so."
Joe Simmons, a behavioral-science professor, writes:
"[A] field cannot reward truth if it does not or cannot decipher it, so it rewards other things instead. Interestingness. Novelty. Speed. Impact. Fantasy. And it effectively punishes the opposite. Intuitive Findings. Incremental Progress. Care. Curiosity. Reality."
==== Accelerating science ====
Harvard University historian of science Naomi Oreskes writes that a theme at the 2024 World Economic Forum in Davos, Switzerland, was a "perceived need to 'accelerate breakthroughs in research and technology.'"
"[R]ecent years", however, writes Oreskes, "[have] seen important papers, written by prominent scientists and published in prestigious journals, retracted because of questionable data or methods." For example, the Davos meeting took place after the resignations – over questionably reliable academic papers – in 2023 of Stanford University president Marc Tessier-Lavigne and, in 2024, of Harvard University president Claudine Gay. "In one interesting case, Frances H. Arnold of the California Institute of Technology, who shared the 2018 Nobel Prize in Chemistry, voluntarily retracted a paper when her lab was unable to replicate her results – but after the paper had been published." Such incidents, suggests Oreskes, are likely to erode public trust in science and in experts generally.
Academics at leading universities in the United States and Europe are subject to perverse incentives to produce results – and lots of them – quickly. A study has put the number of papers published around 2023 by scientists and other scholars at over seven million annually, compared with less than a million in 1980. Another study found 265 authors – two-thirds in the medical and life sciences – who published on average a paper every five days.
"Good science [and scholarship take] time", writes Oreskes. "More than 50 years elapsed between the 1543 publication of Copernicus's magnum opus... and the broad scientific acceptance of the heliocentric model... Nearly a century passed between biochemist Friedrich Miescher's identification of the DNA molecule and suggestion that it might be involved in inheritance and the elucidation of its double-helix structure in the 1950s. And it took just about half a century for geologists and geophysicists to accept geophysicist Alfred Wegener's idea of continental drift."
== External links ==
American Masters: Decoding Watson - PBS documentary about James Watson, co-discoverer of the structure of DNA, including interviews with Watson, his family, and colleagues. 2019-01-02.
| Wikipedia/Logology_(study_of_science)
Renewable energy in Palestine is a small but significant component of the national energy mix, accounting for 1.4% of energy produced in 2012. Palestine has some of the highest rates of solar water heating in the region, and there are a number of solar power projects. A number of issues confront renewable energy development; a lack of national infrastructure and the limited regulatory framework of the Oslo Accords are both barriers to investment.
== Solar power ==
It has been estimated that solar sources have the potential to account for 13% of energy usage in the Palestinian Territories. Over half of all households in Palestine utilise solar water heaters, although only 3% of houses depend on them as their main source. A 710 kW photovoltaic plant was commissioned in September 2014 in the vicinity of Jericho; it is the largest plant in Palestine to date. Research has indicated that, although a very high percentage of Palestinian houses are connected to the central grid, powering remote villages with small-scale photovoltaic systems would be more economically feasible than extending the grid.
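The grid-versus-off-grid comparison in that research comes down to weighing the fixed cost of line extension against per-household system costs. The sketch below is a minimal illustration with made-up figures; the line cost per kilometre, system cost per household, village size and distance are all assumptions, not values from the cited research.

```python
def grid_extension_cost(distance_km, cost_per_km):
    """Capital cost of extending a medium-voltage line to the village."""
    return distance_km * cost_per_km

def offgrid_pv_cost(households, cost_per_system):
    """Capital cost of a small solar-home system for every household."""
    return households * cost_per_system

# Illustrative assumptions only (USD); real values vary widely by site.
distance_km = 12          # assumed distance from the nearest grid connection
cost_per_km = 25_000      # assumed line cost per kilometre
households = 40           # assumed size of the remote village
cost_per_system = 2_500   # assumed cost of one small PV system

grid = grid_extension_cost(distance_km, cost_per_km)
pv = offgrid_pv_cost(households, cost_per_system)
print(f"Grid extension: ${grid:,.0f}  |  Off-grid PV: ${pv:,.0f}")
# With these assumptions, PV ($100,000) undercuts the line ($300,000);
# closer or larger villages can tip the comparison the other way.
```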
Israeli authorities seized a solar/diesel hybrid electric system from the Palestinian village of Jubbet ad-Dib in July 2017. The system was funded by the Dutch government and installed by the joint Israeli-Palestinian organisation Comet-ME, leading the Dutch Foreign Ministry to lodge a complaint. The Coordinator of Government Activities in the Territories told reporters that the solar panels were erected "without the necessary permits, and that stop work orders had previously been sent to the village authorities," although a Haaretz report indicated that the confiscation orders were only delivered during the raid, meaning there was no chance to contest them in court. Residents of the village, located in Area C between a number of Israeli settlements, had been attempting to implement and gain approval for solar power projects since 2009.
== Wind power ==
It has been estimated that wind energy has the potential to account for 6.6% of energy usage in the Palestinian Territories.
== Biomass ==
About half of the Palestinian population - mainly in rural areas, refugee camps, and the Bedouin communities of the North and South Governorates - is exposed daily to harmful emissions and other health risks from biomass burning, which typically takes place in traditional stoves without adequate ventilation. The majority of individuals exposed to elevated concentrations of pollutants are women and young children.
== National policy ==
The Palestinian Energy Authority (PEA) published a 'General Renewable Energy Strategy' in 2012, aiming for 10% of total domestic energy production and 5% of total energy consumption to come from renewable sources by 2020.
== Barriers ==
There are a number of barriers to the development of renewable energy resources in Palestine, including regulatory issues resulting from the Israeli occupation; these contributed to the government's failure to achieve its target of 25 megawatts by 2015. However, renewable energy has large potential to reduce reliance on imported energy and to address a number of social issues.
== References ==
== External links ==
Media related to Renewable energy in Palestine at Wikimedia Commons | Wikipedia/Renewable_energy_in_Palestine |
Cryogenic energy storage (CES) is the use of low temperature (cryogenic) liquids such as liquid air or liquid nitrogen to store energy.
The technology is primarily used for the large-scale storage of electricity. Following grid-scale demonstrator plants, a 250 MWh commercial plant is now under construction in the UK, and a 400 MWh store is planned in the USA.
== Grid energy storage ==
=== Process ===
When electricity is cheaper (usually at night), it is used to cool air from the atmosphere to -195 °C, using the Claude cycle, until the air liquefies. The liquid air, which takes up one-thousandth of the volume of the gas, can be kept for a long time in a large vacuum flask at atmospheric pressure. At times of high demand for electricity, the liquid air is pumped at high pressure into a heat exchanger, which acts as a boiler. Air from the atmosphere at ambient temperature, or hot water from an industrial heat source, is used to heat the liquid and turn it back into a gas. The resulting massive increase in volume and pressure is used to drive a turbine and generate electricity.
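As a rough illustration of the storage densities involved, the sketch below estimates the liquid-air inventory needed for a plant of a given capacity. Both the recoverable energy per kilogram and the liquid-air density are assumed, order-of-magnitude figures, not data from any particular plant.

```python
def liquid_air_inventory(capacity_mwh, kwh_per_kg=0.10, density_kg_m3=875.0):
    """Back-of-envelope liquid-air inventory for a given storage capacity.

    kwh_per_kg    -- assumed net electricity recovered per kg of liquid air
    density_kg_m3 -- approximate density of liquid air at atmospheric pressure
    Both are illustrative assumptions, not measured plant data.
    """
    mass_kg = capacity_mwh * 1_000 / kwh_per_kg
    volume_m3 = mass_kg / density_kg_m3
    return mass_kg / 1_000, volume_m3   # tonnes, cubic metres

tonnes, volume = liquid_air_inventory(250)   # the 250 MWh plant mentioned above
print(f"~{tonnes:,.0f} t of liquid air in roughly {volume:,.0f} m³ of tank volume")
```

With these assumptions a 250 MWh store corresponds to roughly 2,500 tonnes of liquid air held in about 2,900 m³ of insulated tankage, which is why the technology needs no special geography.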
=== Efficiency ===
In isolation, the process is only 25% efficient. This is increased to around 50% when used with a low-grade cold store, such as a large gravel bed, to capture the cold generated by evaporating the cryogen. The cold is re-used during the next refrigeration cycle.
Efficiency is further increased when used in conjunction with a power plant or other source of low-grade heat that would otherwise be lost to the atmosphere. Highview Power claims an AC-to-AC round-trip efficiency of 70% by additionally using waste heat from the compressor and other low-grade process heat at 115 °C; the IMechE (Institution of Mechanical Engineers) considers efficiency estimates of this order realistic for a commercial-scale plant, although the claimed figure has not been checked or confirmed by independent professional institutions.
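In practical terms these efficiencies determine how much electricity must be bought during charging to deliver a given amount later. A minimal sketch using the 25%, 50% and 70% figures quoted above:

```python
def charging_energy_mwh(delivered_mwh, round_trip_efficiency):
    """Electricity that must be drawn from the grid during charging in order
    to deliver a given amount back later, for a given round-trip efficiency."""
    return delivered_mwh / round_trip_efficiency

for label, eff in [("stand-alone", 0.25), ("with cold store", 0.50),
                   ("with cold store + waste heat", 0.70)]:
    needed = charging_energy_mwh(250, eff)   # a 250 MWh discharge, as an example
    print(f"{label:<30} {eff:.0%} -> charge with {needed:,.0f} MWh")
```

For a 250 MWh discharge the charging energy falls from 1,000 MWh at 25% efficiency to about 357 MWh at the claimed 70%, which is why the cold store and waste-heat integration matter so much to the economics.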
=== Advantages ===
The system is based on proven technology, used safely in many industrial processes, and does not require any particularly rare elements or expensive components to manufacture. Dr Tim Fox, the head of Energy at the IMechE, says "It uses standard industrial components - which reduces commercial risk; it will last for decades and it can be fixed with a spanner."
=== Applications ===
==== Economics ====
The technology is only economic where there is large variation in the wholesale price of electricity over time. Typically this will be where it is difficult to vary generation in response to changing demand. The technology thus complements growing energy sources like wind and solar, and allows a greater penetration of such renewables into the energy mix. It is less useful where electricity is mostly provided by dispatchable generation, like coal or gas-fired thermal plants, or hydro-electricity.
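The economics therefore hinge on the spread between off-peak and peak prices relative to the plant's round-trip efficiency. The following minimal Python sketch is an illustration only (it is not from the source; the charging price is an assumed example): it estimates the discharge price needed just to recover the cost of the electricity used for charging, ignoring capital and operating costs.

```python
# Illustrative sketch: minimum discharge price for a storage cycle to break even,
# given a round-trip efficiency and an assumed charging price.

def breakeven_discharge_price(charge_price_per_mwh: float, round_trip_efficiency: float) -> float:
    """Price at which selling the stored energy just recovers the cost of charging."""
    return charge_price_per_mwh / round_trip_efficiency

if __name__ == "__main__":
    charge_price = 40.0  # assumed off-peak price, currency units per MWh
    # Efficiency values span the range quoted in this article:
    # standalone, with cold store, with low-grade waste heat.
    for eta in (0.25, 0.50, 0.70):
        print(f"efficiency {eta:.0%}: discharge price must exceed "
              f"{breakeven_discharge_price(charge_price, eta):.0f} per MWh")
```

The higher the round-trip efficiency, the smaller the price spread needed for the plant to be worth cycling.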
Cryogenic plants can also provide grid services, including grid balancing, voltage support, frequency response and synchronous inertia.
==== Locations ====
Unlike other grid-scale energy storage technologies which require specific geographies such as mountain reservoirs (pumped-storage hydropower) or underground salt caverns (compressed-air energy storage), a cryogenic energy storage plant can be located just about anywhere.
To achieve the greatest efficiencies, a cryogenic plant should be located near a source of low-grade heat which would otherwise be lost to the atmosphere. Often this would be a thermal power station that could be expected to be generating electricity at times of peak demand and the highest prices. Colocation with a source of unused cold, such as an LNG regasification facility, is also an advantage.
== Grid-scale demonstrators ==
=== United Kingdom ===
In April 2014, the UK government announced it had given £8 million to Viridor and Highview Power to fund the next stage of the demonstration. The resulting grid-scale demonstrator plant at the Pilsworth Landfill facility in Bury, Greater Manchester, UK, started operation in April 2018. The design was based on research by the Birmingham Centre for Cryogenic Energy Storage (BCCES), associated with the University of Birmingham. The plant can store up to 15 MWh and generate a peak supply of 5 MW (so when fully charged it lasts for three hours at maximum output), and it is designed for an operational life of 40 years.
=== United States ===
In 2019, the Washington State Department of Commerce's Clean Energy Fund announced it would provide a grant to help Tacoma Power partner with Praxair to build a 15 MW / 450 MWh liquid air energy storage plant. It will store up to 850,000 gallons of liquid nitrogen to help balance power loads.
== Commercial plants ==
=== United Kingdom ===
In October 2019, Highview Power announced that it planned to build a 50 MW / 250 MWh commercial plant in Carrington, Greater Manchester.
Construction began in November 2020, with commercial operation planned for 2022.
At 250 MWh, the plant would match the storage capacity of the world's largest existing lithium-ion battery, the Gateway Energy Storage facility in California. In November 2022 Highview Power stated that they were still trying to raise money "to construct a storage plant in Carrington that has a 30 megawatts capacity and can store 300 megawatt hours of electricity" with commissioning planned for "the end of 2024."
In 2024, Highview Power announced it had raised £300 million in investment from the UK Infrastructure Bank and Centrica and would begin immediate construction of a 50 MW / 300 MWh facility at Carrington. Commercial operation is planned to start in early 2026.
=== United States ===
In December 2019, Highview announced plans to build a 50 MW plant in northern Vermont, with the proposed facility able to store eight hours of energy, for a 400 MWh storage capacity.
=== Chile ===
In June 2021, Highview announced that it was developing a 50MW / 500MWh storage plant in the Atacama region of Chile.
== History ==
=== Transport ===
Both liquid air and liquid nitrogen have been used experimentally to power cars. A liquid air powered car called Liquid Air was built between 1899 and 1902, but it could not at the time compete in terms of efficiency with other engines.
More recently, a liquid nitrogen vehicle was built. Peter Dearman, a garage inventor in Hertfordshire, UK, who had initially developed a liquid air powered car, then put the technology to use as grid energy storage. The Dearman engine differs from former nitrogen engine designs in that the nitrogen is heated by combining it with the heat exchange fluid inside the cylinder of the engine.
=== Electricity storage pilots ===
In 2010, the technology was piloted at a UK power station.
A 300 kW, 2.5 MWh pilot cryogenic energy storage system, developed by researchers at the University of Leeds and Highview Power, uses liquid air (with the CO2 and water removed, as they would turn solid at the storage temperature) as the energy store and low-grade waste heat to boost the thermal re-expansion of the air. It operated at an 80 MW biomass power station in Slough, UK, from 2010 until 2014, when it was relocated to the University of Birmingham. Its efficiency is less than 15% because of the low-efficiency hardware components used, but based on operating experience with this system the engineers are targeting an efficiency of about 60 percent for the next generation of CES.
== See also ==
United States Department of Energy International Energy Storage Database
== References == | Wikipedia/Cryogenic_energy_storage |
Hot dry rock (HDR) is an extremely abundant source of geothermal energy that is difficult to access. A vast store of thermal energy is contained within hot – but essentially dry and impervious crystalline basement rocks found almost everywhere deep beneath Earth's surface. A method for the extraction of useful amounts of geothermal energy from HDR originated at the Los Alamos National Laboratory in 1970, and Laboratory researchers were awarded a US patent covering it.
This technology has been tested extensively, with multiple deep wells drilled in several field areas around the world - including the US, Japan, Australia, France, and the UK - and billions in research funding invested. It continues to be the focus, along with a related technique called Enhanced Geothermal System (EGS), of sizable government-led research studies involving costly deep drilling and rock studies. Thermal energy has been recovered in reasonably sustainable tests over periods of years, and in some cases electrical power generation was also achieved. However, no commercial projects are ongoing or likely, owing to the high cost and limited capacity of the engineered reservoirs, associated wells, and pumping systems. Commonly, tests have opened only one or a few fractures, so the reservoirs' heat-exchange surface areas are limited. For this technology to compete successfully with other energy sources, drilling costs would have to drop drastically, or new approaches would have to be established that create much more extensive, complex, and higher-flow-rate paths through actual fracture networks. The enthusiasm in the research community is justified by the vast extent of the energy supply and the low environmental impact of the method; however, significant breakthroughs will be required to make this a commercial energy resource.
== Overview ==
Although often confused with the relatively limited hydrothermal resource already commercialized to a large extent, HDR geothermal energy is very different. Whereas hydrothermal energy production can exploit hot fluids already in place in Earth's crust, an HDR system (consisting of the pressurized HDR reservoir, the boreholes drilled from the surface, and the surface injection pumps and associated plumbing) recovers Earth's heat from hot but dry regions via the closed-loop circulation of pressurized fluid. This fluid, injected from the surface under high pressure, opens pre-existing joints in the basement rock, creating a man-made reservoir which can be as much as a cubic kilometer in size. The fluid injected into the reservoir absorbs thermal energy from the high-temperature rock surfaces and then conveys the heat to the surface for practical use.
== History ==
The idea of deep hot dry rocks heat mining was described by Konstantin Tsiolkovsky (1898), Charles Parsons (1904), and Vladimir Obruchev (1920).
In 1963 in Paris, a geothermal heating system that used the heat of naturally fractured rock was built.
The Fenton Hill project was the first system for extracting HDR geothermal energy from an artificially formed reservoir; it was created in 1977.
== Technology ==
=== Planning and control ===
As the reservoir is formed by the pressure-dilation of the joints, the elastic response of the surrounding rock mass results in a region of tightly compressed, sealed rock at the periphery—making the HDR reservoir totally confined and contained. Such a reservoir is therefore fully engineered, in that the physical characteristics (size, depth at which it is created) as well as the operating parameters (injection and production pressures, production temperature, etc.) can be pre-planned and closely controlled. On the other hand, the tight compression and confined nature of the reservoir severely limit the amount of energy that can be extracted and the rate at which it can be extracted.
=== Drilling and pressurization ===
As described by Brown, an HDR geothermal energy system is developed, first, by using conventional drilling to access a region of deep, hot basement rock. Once it has been determined the selected region contains no open faults or joints (by far the most common situation), an isolated section of the first borehole is pressurized at a level high enough to open several sets of previously sealed joints in the rock mass. By continuous pumping (hydraulic stimulation), a very large region of stimulated rock is created (the HDR reservoir) which consists of an interconnected array of joint flow paths within the rock mass. The opening of these flow paths causes movement along the pressure-activated joints, generating seismic signals (microearthquakes). Analysis of these signals yields information about the location and dimensions of the reservoir being developed.
=== Production wells ===
Typically, an HDR reservoir forms in the shape of an ellipsoid, with its longest axis orthogonal to the least principal Earth stress. This pressure-stimulated region is then accessed by two production wells, drilled to intersect the HDR reservoir near the elongated ends of the stimulated region. In most cases, the initial borehole becomes the injection well for the three-well, pressurized water-circulating system.
=== Operation ===
In operation, fluid is injected at pressures high enough to hold open the interconnected network of joints against the Earth stresses, and to effectively circulate fluid through the HDR reservoir at a high rate. During routine energy production, the injection pressure is maintained just below the level that would cause further pressure-stimulation of the surrounding rock mass, in order to maximize energy production while limiting further reservoir growth. However, the limited reservoir size limits the energy available, and high-pressure operation adds significant cost to the piping and pumping systems.
=== Productivity ===
The volume of the newly created array of opened joints within the HDR reservoir is much less than 1% of the volume of the pressure-stimulated rock mass. As these joints continue to dilate under pressure and cooling, the overall flow impedance across the reservoir is reduced, leading to a higher thermal productivity. If the cooling produces cooling fractures in a way that exposes more rock, then it is possible that these reservoirs may improve over time. To date, however, reservoir energy growth has only been reported to come from new, expensive, high-pressure well-stimulation efforts.
== Feasibility studies ==
The feasibility of mining heat from the deep Earth was proven in two separate HDR reservoir flow demonstrations—each involving about one year of circulation—conducted by the Los Alamos National Laboratory between 1978 and 1995. These groundbreaking tests took place at the Laboratory's Fenton Hill HDR test site in the Jemez Mountains of north-central New Mexico, at depths of over 8,000 ft (2,400 m) and rock temperatures in excess of 180 °C. The results of these tests demonstrated conclusively the engineering viability of the revolutionary new HDR geothermal energy concept. The two separate reservoirs created at Fenton Hill are still the only truly confined HDR geothermal energy reservoirs flow-tested anywhere in the world. Although these tests demonstrated that HDR systems could be constructed, the flow rates and energy extraction rates did not justify the cost of the wells.
== Fenton Hill tests ==
=== Phase I ===
The first HDR reservoir tested at Fenton Hill, the Phase I reservoir, was created in June 1977 and then flow-tested for 75 days, from January to April 1978, at a thermal power level of 4 MW. The final water loss rate, at a surface injection pressure of 900 psi (6.2 MPa), was 2 US gallons per minute (7.6 L/min) (2% of the injection rate). This initial reservoir was shown to essentially consist of a single pressure-dilated, near-vertical joint, with a vanishingly small flow impedance of 0.5 psi/US gal/min (0.91 kPa/L/min).
The initial Phase I reservoir was enlarged in 1979 and further flow-tested for almost a year in 1980. Of greatest importance, this flow test confirmed that the enlarged reservoir was also confined, and exhibited a low water loss rate of 6 gpm. This reservoir consisted of the single near-vertical joint of the initial reservoir (which, as noted above, had been flow-tested for 75 days in early 1978) augmented by a set of newly pressure-stimulated near-vertical joints that were somewhat oblique to the strike of the original joint.
=== Phase II ===
A deeper and hotter HDR reservoir (Phase II) was created during a massive hydraulic fracturing (MHF) operation in late 1983. It was first flow-tested in the spring of 1985, by an initial closed-loop flow test (ICFT) that lasted a little over a month. Information garnered from the ICFT provided the basis for a subsequent long-term flow test (LTFT), carried out from 1992 to 1995.
The LTFT comprised several individual steady-state flow runs, interspersed with numerous additional experiments. In 1992–1993, two steady-state circulation periods were implemented, the first for 112 days and the second for 55 days. During both tests, water was routinely produced at a temperature of over 180 °C and a rate of 90–100 US gal/min (20–23 m3/h), resulting in continuous thermal energy production of approximately 4 MW. Over this time span, the reservoir pressure was maintained (even during shut-in periods) at a level of about 15 MPa.
Beginning in mid-1993, the reservoir was shut in for a period of nearly two years and the applied pressure was allowed to drop to essentially zero. In the spring of 1995, the system was re-pressurized and a third continuous circulation run of 66 days was conducted. Remarkably, the production parameters observed in the two earlier tests were rapidly re-established, and steady-state energy production resumed at the same level as before. Observations during both the shut-in and operational phases of all these flow-testing periods provided clear evidence that the rock at the boundary of this man-made reservoir had been compressed by the pressurization and resultant expansion of the reservoir region.
As a result of the LTFT, water loss was eliminated as a major concern in HDR operations. Over the period of the LTFT, water consumption fell to just 7% of the quantity of water injected; and data indicated it would have continued to decline under steady-state circulation conditions. Dissolved solids and gases in the produced fluid rapidly reached equilibrium values at low concentrations (about one-tenth the salinity of sea water), and the fluid remained geochemically benign throughout the test period. Routine operation of the automated surface plant showed that HDR energy systems could be run using the same economical staffing schedules that a number of unmanned commercial hydrothermal plants already employ.
=== Test results ===
The Fenton Hill tests clearly demonstrated advantages of a fully engineered HDR reservoir over naturally occurring hydrothermal resources, including EGS. With all the essential physical characteristics of the reservoir—including rock volume, fluid capacity, temperature, etc.—established during the engineered creation of the reservoir zone, and the entire reservoir volume enclosed by a hyperstressed periphery of sealed rock, any variations in operating conditions are totally determined by intentional changes made at the surface. In contrast, a natural hydrothermal “reservoir”—which is essentially open and therefore unconfined (having boundaries that are highly variable)—is inherently subject to changes in natural conditions. On the other hand, the less confined, more complex, lower-pressure, and more pervasively fractured natural systems support much higher well flow rates and lower-cost development of energy generation.
Another advantage of an HDR reservoir is that its confined nature makes it highly suitable for load-following operations, whereby the rate of energy production is varied to meet the varying demand for electric power—a process that can greatly increase the economic competitiveness of the technology. This concept was evaluated near the end of the Phase II testing period, when energy production was increased by 60% for 4 hours each day, by a programmed vent-down of the high-pressure reservoir regions surrounding the production borehole. Within two days it became possible to computerize the process, such that production was automatically increased and decreased according to the desired schedule for the rest of the test period. The transitions between the two production levels took less than 5 minutes, and at each level steady-state production was consistently maintained. Such load-following operations could not be implemented in a natural hydrothermal system or even in an EGS system because of the unconfined volume and boundary conditions. Load following almost never improves economics for geothermal development because the fuel cost is effectively paid up front, so delaying use just hurts the economics. Normal geothermal systems have also (by necessity) been applied to follow loads but this kind of generation increases maintenance costs and generally reduces revenue (in spite of the higher prices for some of the load).
The experiments at Fenton Hill have clearly demonstrated that HDR technology is unique, not only with respect to how the pressurized reservoir is created and then circulated, but also because of the management flexibility it offers. It has in common with normal hydrothermal technology only that both are based on wells that produce hot water that runs generators.
== Soultz tests ==
In 1986 the HDR system project of France and Germany in Soultz-sous-Forêts was started. In 1991 wells were drilled to 2.2 km depth and were stimulated. However, the attempt to create a reservoir was unsuccessful, as high water losses were observed.
In 1995 wells were deepened to 3.9 km and stimulated. A reservoir was created successfully in 1997, and a four-month circulation test with a flow rate of 25 L/s (6.6 US gal/s) was attained without water loss.
In 2003 wells were deepened to 5.1 km. Stimulations were carried out to create a third reservoir; during circulation tests in 2005-2008, water was produced at a temperature of about 160 °C with low water loss. Construction of a power plant was begun.
The power plant started to produce electricity in 2016; it was installed with a gross capacity of 1.7 MWe. The 1.7 MW plant is purely a demonstration plant. In comparison, normal geothermal power plant development typically involves initial plants of 10 to 100 MW. Such plants can be commercially successful yet are much cheaper than HDR systems, because their shallower wells produce orders of magnitude more energy into inexpensive pipelines and power plants. It seems possible that breakthroughs will occur that allow access to the tremendous amounts of heat stored in deep rock using HDR technology, but very few breakthroughs appear to be on the horizon, especially when compared to the rapid progress being made on much lower-risk solar/battery combinations.
== Unconfirmed systems ==
There have been numerous reports of the testing of unconfined geothermal systems pressure-stimulated in crystalline basement rock: for instance at the Rosemanowes quarry in Cornwall, England; at the Hijiori and Ogachi calderas in Japan; and in the Cooper Basin, Australia. However, all these “engineered” geothermal systems, while developed under programs directed toward the investigation of HDR technologies, have proven to be open—as evidenced by the high water losses observed during pressurized circulation. In essence, they are all EGS or hydrothermal systems, not true HDR reservoirs.
== Related terminology ==
=== Enhanced geothermal systems ===
The EGS concept was first described by Los Alamos researchers in 1990, at a geothermal symposium sponsored by the United States Department of Energy (DOE)—many years before the DOE coined the term EGS in an attempt to emphasize the geothermal aspect of heat mining rather than the unique characteristics of HDR.
=== HWR versus HDR ===
Hot Wet Rock (HWR) hydrothermal technology makes use of hot fluids found naturally in basement rock; but such HWR conditions are rare. By far the bulk of the world's geothermal resource base (over 98%) is in the form of basement rock that is hot but dry—with no naturally available water. This means that HDR technology is applicable almost everywhere on Earth (hence the claim that HDR geothermal energy is ubiquitous). On the other hand, a resource that cannot be exploited economically is effectively just stored energy rather than a useful supply.
Typically, the temperature in those vast regions of the accessible crystalline basement rock increases with depth. This geothermal gradient, which is the principal HDR resource variable, ranges from less than 20 °C/km to over 60 °C/km, depending upon location. The concomitant HDR economic variable is the cost of drilling to depths at which rock temperatures are sufficiently high to permit the development of a suitable reservoir. The advent of new technologies for drilling hard crystalline basement rocks, such as new PDC (polycrystalline diamond compact) drill bits, drilling turbines or fluid-driven percussive technologies (such as Mudhammer) may significantly improve HDR economics in the near future.
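As a rough illustration of how the geothermal gradient drives the required drilling depth (and hence cost), the short Python sketch below estimates the depth needed to reach a target rock temperature for gradients at the ends of the range quoted above. The surface and target temperatures are assumed example values, not figures from the source.

```python
# Hypothetical example: depth required to reach a target rock temperature
# for different geothermal gradients (degC per km), assuming a linear gradient.

def depth_to_temperature(target_c: float, surface_c: float, gradient_c_per_km: float) -> float:
    """Depth in km at which the rock reaches target_c."""
    return (target_c - surface_c) / gradient_c_per_km

if __name__ == "__main__":
    surface_temp = 15.0   # assumed mean surface temperature, degC
    target_temp = 200.0   # assumed target reservoir temperature, degC
    for gradient in (20.0, 40.0, 60.0):  # range quoted in the article, degC/km
        depth = depth_to_temperature(target_temp, surface_temp, gradient)
        print(f"{gradient:.0f} degC/km -> about {depth:.1f} km of drilling")
```

Under these assumptions, a low-gradient site needs roughly three times the drilling depth of a high-gradient one, which is why the gradient dominates HDR economics.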
=== Possible confusion ===
As noted above, in the late 1990s the DOE began referring to all attempts to extract geothermal energy from basement rock as "EGS," which has led to both bibliographic and technical confusion. Bibliographically, a large number of publications exist that discuss work to extract energy from HDR without any mention of the term EGS. Thus, an internet search using the term EGS would not identify these publications.
But the technical distinction between HDR and EGS, as clarified in this article, may be even more important. Some sources describe the permeability of the Earth's basement rock as a continuum ranging from totally impermeable HDR to slightly permeable HWR to highly permeable conventional hydrothermal. However, this continuum concept is not technically correct. A more appropriate view would be to consider impermeable HDR rock as a separate state from that of the continuum of permeable rock—just as one would consider a completely closed faucet as distinct from one that is open to any degree, whether the flow be a trickle or a flood. In the same way, HDR technology should be regarded as totally distinct from EGS. Unfortunately it is not easy to open the faucet to obtain significant flow.
== Further reading ==
A definitive book on HDR development, including a full account of the experiments at Fenton Hill, was published by Springer-Verlag in April 2012.
== Glossary ==
DOE, Department of Energy (United States)
EGS, Enhanced geothermal system
HDR, Hot dry rock
HWR, Hot wet rock
ICFT, Initial closed-loop flow test
LTFT, Long-term flow test
MHF, Massive hydraulic fracturing
PDC, Polycrystalline diamond compact (drill bit)
== References == | Wikipedia/Hot_dry_rock_geothermal_energy |
Renewable and Sustainable Energy Reviews is a peer-reviewed scientific journal covering research on sustainable energy. It is published in 12 issues per year by Elsevier and the editor-in-chief is Aoife M. Foley (Queen's University Belfast). According to the Journal Citation Reports, the journal has a 2021 impact factor of 16.799.
According to the most recent data from 2023, the journal ranks 7th out of 270 in Renewable Energy, Sustainability and the Environment (based on Scopus), and 9th out of 170 in Energy & Fuels (based on the Web of Science impact factor).
The journal considers articles based on the themes of energy resources, applications, utilization, environment, techno-socio-economic aspects, systems, and sustainability.
== References ==
== External links ==
Official website | Wikipedia/Renewable_and_Sustainable_Energy_Reviews |
A kinetic energy recovery system (KERS) is an automotive system for recovering a moving vehicle's kinetic energy under braking. The recovered energy is stored in a reservoir (for example a flywheel or high voltage batteries) for later use under acceleration. Examples include complex high end systems such as the Zytek, Flybrid, Torotrak and Xtrac used in Formula One racing and simple, easily manufactured and integrated differential based systems such as the Cambridge Passenger/Commercial Vehicle Kinetic Energy Recovery System (CPC-KERS).
Xtrac and Flybrid are both licensees of Torotrak's technologies, which employ a small and sophisticated ancillary gearbox incorporating a continuously variable transmission (CVT). The CPC-KERS is similar as it also forms part of the driveline assembly. However, the whole mechanism including the flywheel sits entirely in the vehicle's hub (looking like a drum brake). In the CPC-KERS, a differential replaces the CVT and transfers torque between the flywheel, drive wheel and road wheel.
== Use in motorsport ==
=== History ===
The first of these systems to be revealed was the Flybrid. This system weighs 24 kg (53 lbs) and has an energy capacity of 400 kJ after allowing for internal losses. A maximum power boost of 60 kW (81.6 PS, 80.4 HP) for 6.67 seconds is available. The 240 mm (9.4 in) diameter flywheel weighs 5.0 kg (11 lbs) and revolves at up to 64,500 rpm. Maximum torque at the flywheel is 18 Nm (13.3 ftlbs), and the torque at the gearbox connection is correspondingly higher for the change in speed. The system occupies a volume of 13 litres.
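The quoted figures can be cross-checked with basic rotational mechanics. The Python sketch below is an illustration only: it assumes a uniform solid disc (which a real composite flywheel is not) to estimate the raw kinetic energy at top speed, and derives the boost duration implied by the stated 400 kJ usable capacity and 60 kW power limit.

```python
import math

# Flywheel figures quoted in the article
mass_kg = 5.0
diameter_m = 0.240
rpm_max = 64_500
usable_energy_j = 400e3
boost_power_w = 60e3

# Assumption: treat the flywheel as a uniform solid disc, I = 1/2 m r^2
radius_m = diameter_m / 2
inertia = 0.5 * mass_kg * radius_m**2          # kg m^2
omega = rpm_max * 2 * math.pi / 60             # rad/s

kinetic_energy_j = 0.5 * inertia * omega**2
print(f"Kinetic energy at {rpm_max} rpm: {kinetic_energy_j / 1e3:.0f} kJ")   # roughly 0.8 MJ for a solid disc
print(f"Usable capacity quoted: {usable_energy_j / 1e3:.0f} kJ")
print(f"Boost duration at {boost_power_w / 1e3:.0f} kW: {usable_energy_j / boost_power_w:.2f} s")  # about 6.67 s
```

The boost duration reproduces the 6.67 seconds quoted above, and the fact that the raw kinetic energy at full speed exceeds the 400 kJ usable figure is consistent with that capacity being defined over an operating speed range and after internal losses.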
As early as 2006, a first KERS based on supercapacitors was studied at EPFL (École Polytechnique Fédérale de Lausanne) in the framework of the development of the "Formula S2000". A 180 kJ system was developed in collaboration with other institutes.
Two minor incidents were reported during testing of various KERS systems in 2008. The first occurred when the Red Bull Racing team tested their KERS battery for the first time in July: it malfunctioned and caused a fire scare that led to the team's factory being evacuated. The second was less than a week later when a BMW Sauber mechanic was given an electric shock when he touched Christian Klien's KERS-equipped car during a test at the Jerez circuit.
=== Formula One ===
Formula One has stated that they support responsible solutions to the world's environmental challenges, and the FIA allowed the use of 60 kW (82 PS; 80 bhp) KERS in the regulations for the 2009 Formula One season. Teams began testing systems in 2008: energy can either be stored as mechanical energy (as in a flywheel) or as electrical energy (as in a battery or supercapacitor).
With the introduction of KERS in the 2009 season, only four teams used it at some point in the season: Ferrari, Renault, BMW and McLaren. Eventually, during the season, Renault and BMW stopped using the system. Nick Heidfeld was the first driver to take a podium position with a KERS equipped car, at the Malaysian Grand Prix. McLaren Mercedes became the first team to win an F1 GP using a KERS equipped car when Lewis Hamilton won the Hungarian Grand Prix on July 26, 2009. Their second KERS equipped car finished fifth. At the following race, Lewis Hamilton became the first driver to take pole position with a KERS car, his teammate, Heikki Kovalainen qualifying second. This was also the first instance of an all KERS front row. On August 30, 2009, Kimi Räikkönen won the Belgian Grand Prix with his KERS equipped Ferrari. It was the first time that KERS contributed directly to a race victory, with second placed Giancarlo Fisichella claiming "Actually, I was quicker than Kimi. He only took me because of KERS at the beginning".
Although KERS was still legal in F1 in the 2010 season, all the teams had agreed not to use it. New rules for the 2011 F1 season which raised the minimum weight limit of the car and driver by 20 kg to 640 kg, along with the FOTA teams agreeing to the use of KERS devices once more, meant that KERS returned for the 2011 season. Use of KERS was still optional as in the 2009 season; and at the start of the 2011 season three teams chose not to use it.
WilliamsF1 developed their own flywheel-based KERS system but decided not to use it in their F1 cars due to packaging issues, and have instead developed their own electrical KERS system. However, they set up Williams Hybrid Power to sell their developments. In 2012 it was announced that the Audi Le Mans R18 hybrid cars would use Williams Hybrid Power.
Since 2014, the power capacity of the KERS units was increased from 60 kilowatts (80 bhp) to 120 kilowatts (160 bhp). This was introduced to balance the sport's move from 2.4 litre V8 engines to 1.6 litre V6 turbo engines.
=== Working diagram for KERS ===
=== Autopart makers ===
Bosch Motorsport Service is developing a KERS for use in motor racing. These electricity storage systems for hybrid and engine functions include a lithium-ion battery with scalable capacity or a flywheel, a four to eight kilogram electric motor (with a maximum power level of 60 kW (81 hp)), as well as the KERS controller for power and battery management. Bosch also offers a range of electric hybrid systems for commercial and light-duty applications.
=== Car manufacturers ===
Several automakers have been testing KERS systems. At the 2008 1000 km of Silverstone, Peugeot Sport unveiled the Peugeot 908 HY, a hybrid electric variant of the diesel 908, with KERS. Peugeot planned to campaign the car in the 2009 Le Mans Series season, although it was not allowed to score championship points.
McLaren began testing of their KERS system in September 2008 at Jerez in preparation for the 2009 F1 season, although at that time it was not yet known if they would be operating an electrical or mechanical system. In November 2008, it was announced that Freescale Semiconductor would collaborate with McLaren Electronic Systems to further develop its KERS for McLaren's Formula One cars from 2010 onwards. Both parties believed this collaboration would improve McLaren's KERS system and help the system to transfer its technology to road cars.
Toyota has used a supercapacitor for regeneration on its Supra HV-R hybrid race car that won the Tokachi 24-Hour endurance race in July 2007. This Supra became the first hybrid car in the history of motorsport to win such a race.
At the NAIAS 2011, Porsche unveiled a RSR variant of their Porsche 918 concept car which uses a flywheel-based KERS that sits beside the driver in the passenger compartment and boosts the dual electric motors driving the front wheels and the 565 BHP V8 gasoline engine driving the rear to a combined power output of 767 BHP. This system has many problems including the imbalance caused to the vehicle due to the flywheel. Porsche is currently developing an electrical storage system.
In 2011, Mazda announced i-ELOOP, a system which uses a variable-voltage alternator to convert kinetic energy to electric power during deceleration. The energy, stored in a double-layer capacitor, is used to supply power needed by vehicle electrical systems. When used in conjunction with Mazda's start-stop system, i-Stop, the company claims fuel savings of up to 10%.
Bosch and PSA Peugeot Citroën have developed a hybrid system that uses hydraulics as a way to transfer energy to and from a compressed nitrogen tank. An up to 45% reduction in fuel consumption is claimed, corresponding to 2.9 L/100 km (81 mpg, 69 g CO2/km) on the NEDC cycle for a compact frame like Peugeot 208. The system is claimed to be much more affordable than competing electric and flywheel systems and was expected on road cars by 2016 but was abandoned in 2015.
In 2020, FIAT launched the FIAT Panda mild-hybrid series with KERS technology.
=== Motorcycles ===
KTM racing boss Harald Bartol revealed that the factory raced with a secret kinetic energy recovery system fitted to Tomoyoshi Koyama's motorcycle during the 125cc race of the 2008 Valencian Community motorcycle Grand Prix. Koyama finished 7th. The system was later ruled illegal and thus was banned. The Lit C-1 electric motorcycle will also use a KERS as a regenerative braking system.
=== Bicycles ===
KERS is also possible on a bicycle. The EPA, working with students from the University of Michigan, developed the hydraulic Regenerative Brake Launch Assist (RBLA).
This has also been demonstrated by mounting a flywheel on a bike frame and connecting it with a CVT to the back wheel. By shifting the gear, 20% of the kinetic energy can be stored in the flywheel, ready to give an acceleration boost by reshifting the gear.
=== Races ===
Automobile Club de l'Ouest, the organizer behind the annual 24 Hours of Le Mans event and the Le Mans Series, has promoted the use of kinetic energy recovery systems in the LMP1 class since the late 2000s. Peugeot was the first manufacturer to unveil a fully functioning LMP1 car in the form of the 908 HY at the 2008 Autosport 1000 km race at Silverstone.
The 2011 24 Hours of Le Mans saw Hope Racing enter with a Flybrid Systems mechanical KERS, making it the first hybrid car ever to compete at the event. The system consisted of high-speed slipping clutches which transfer torque to and from the vehicle, coupled to a 60,000 rpm flywheel.
Audi and Toyota both developed LMP1 cars with kinetic energy recovery systems for the 2012 and 2013 24 Hours of Le Mans. The Audi R18 e-tron quattro uses a flywheel-based system, while the Toyota TS030 Hybrid uses a supercapacitor-based system. When Porsche announced its return to Le Mans in 2014, it also unveiled an LMP1 car with a kinetic energy recovery system. The Porsche 919 Hybrid, introduced in 2014, uses a battery system, in contrast to the previous Porsche 911 GT3 R Hybrid that used a flywheel system.
== Use in public transport ==
=== London buses ===
A KERS using a carbon fibre flywheel, originally developed for the Williams Formula One racing team, has been modified for retrofitting to existing London double-decker buses. Buses (500 from the Go-Ahead Group) were fitted with this technology from 2014 to 2016, anticipating a fuel efficiency improvement of approximately 20%. The team who developed the technology were awarded the Dewar Trophy of the Royal Automobile Club in 2015.
=== Parry People Mover ===
Parry People Mover railcars use a small engine and large flywheel to move. The system also supports regenerative braking.
== See also ==
Regenerative brake
Make Cars Green
== References == | Wikipedia/Kinetic_Energy_Recovery_System |
Approximately 6% of primary energy in French Polynesia is generated from renewable energy sources. Approximately 30% of electricity is generated renewably, primarily from hydroelectricity and solar power. Renewable generation is concentrated on Tahiti, with other parts of French Polynesia almost entirely reliant on fossil fuels. Wind power is not used; the territory's only two small facilities both became non-functional due to lack of maintenance.
In December 2013 the Assembly of French Polynesia adopted a Law on the Guiding Principles of the Energy Policy of French Polynesia, requiring that a minimum of 50% of electricity be generated from renewable sources by 2020. This was replaced in November 2015 by the 2015-2030 Energy Transition Plan (PTE), which set a target of 75% renewables by 2030. The ETP was replaced in February 2022 by a multi-annual energy plan (PPE), and the 75% by 2030 target was retained.
In July 2016 the government announced that hybrid solar PV / battery / diesel power plants would be constructed on eight remote islands. In April 2021 the government called for tenders for 30MW of solar farms with batteries for Tahiti. Winners of the tenders were announced in March 2022.
In September 2022 Électricité de Tahiti performed a test to run the island of Tahiti entirely on renewables for an hour, using hydroelectricity and photovoltaics, with the Putu Uira battery system stabilising the grid. This was followed by a longer test a week later. Following the tests, EDT announced that it would increasingly rely on renewables to power Tahiti during periods of good weather and low demand.
In July 2021 the French government agreed to provide a 7.1 billion XPF energy transition fund to decarbonise electricity production, particularly on remote islands. An agreement to implement the fund was signed in February 2023.
== References == | Wikipedia/Renewable_energy_in_French_Polynesia |
Energy storage as a service (ESaaS) allows a facility to benefit from the advantages of an energy storage system by entering into a service agreement without purchasing the system. Energy storage systems provide a range of services to generate revenue, create savings, and improve electricity resiliency. The operation of the ESaaS system is a unique combination of an advanced battery storage system, an energy management system, and a service contract which can deliver value to a business by providing reliable power more economically.
== History ==
Scott Foster, Energy Director of the United Nations Economic Commission for Europe, is one of the leading global advocates for energy as service. He coined the term 'iEnergy' to propagate an annual/monthly subscription fee for energy, rather than the present-day commodity-led pay per kilowatt of electricity system. Foster believes a service-led system would put the onus on the energy supplier to improve reliability and offer the best possible service to customers.
The term ESaaS was developed and trademarked by Constant Power Inc., a Toronto-based company, in 2016. The service has been designed to work in the North American open electricity markets. Notable other companies offering Energy Storage-as-a-Service include GI Energy, AES Corporation, TROES Corp., Stem Inc, and Younicos.
== Components ==
ESaaS is the combination of an energy storage system, a control and monitoring system, and a service contract.
The most common energy storage systems used for ESaaS are lithium-ion or flow batteries due to their compact size, non-invasive installation, high efficiencies, and fast reaction times, but other storage media may be used, such as compressed air, flywheels, or pumped hydro. The batteries are sized based on the facility's needs and are paired with a power inverter to convert the DC power to AC power in order to connect directly to the facility's electricity supply.
ESaaS systems are remotely monitored and controlled by the ESaaS operator using a Supervisory Control and Data Acquisition (SCADA) system. The SCADA communicates with the facility's Energy Management System (EMS), Power Conversion System (PCS), and Battery Management System (BMS). The ESaaS operator is responsible for ensuring the ESaaS system is monitoring and responding to the facility’s needs as well as overriding commands to participate in regional incentive programs such as coincident peak management and demand response programs in real time.
The facility benefiting from the ESaaS system is linked to the ESaaS system operator through a service contract. The contract specifies the length of the service term, payment structure, and list of services the facility wishes to participate in.
== Services ==
ESaaS is used to perform a variety of services including:
Coincident Peak Management
During times of high regional demand, Independent System Operators (ISOs)/Regional Transmission Organizations (RTOs) offer incentives for facilities to reduce or curtail their load. ESaaS allows a facility to isolate or offset its load during these high regional demand periods to decrease demand from the electricity grid and benefit from the incentives. The system is designed to work in conjunction with or independently of facility curtailment.
Demand Response
ISOs/RTOs offer facilities payment for curtailing their energy demand when dispatched by the grid operator. ESaaS allows facilities to participate in these programs by off-setting all or a portion of a facility load during a demand response occurrence. A facility can benefit from the incentive without interrupting their facility operation.
Power Factor Correction
During charging and discharging, active and reactive power may be balanced prior to supplying a facility. By balancing the amount of active and reactive power to a facility, the power factor and resulting facility electrical efficiency may be improved. This improvement may reduce a facility's monthly peak demand charge.
Power Quality
ESaaS actively monitors electricity supply to a facility. In times of intermittent power supply, ESaaS acts as an uninterruptible power supply (UPS) to ensure uninterrupted, reliable power supply to eliminate unexpected fluctuations. Fluctuating and intermittent power affects equipment operation which may cause costly delays and defects in production.
Back-up Power
If the electricity grid experiences a power outage, ESaaS offers a back-up power service to continue powering all or a portion of a facility's electricity demand. Depending on the size of the ESaaS installation, ESaaS may maintain facility operation for the duration of a grid failure.
Peak Shaving
ESaaS actively monitors a facility’s energy profile to normalize the electricity draw from the electricity grid. The ESaaS system stores energy when the facility demand is lower than average and discharges the stored energy when the facility demand is higher than average. The result is a steady draw of electricity from the electricity grid and a lower monthly peak demand charge.
Energy Arbitrage
ESaaS actively monitors local electricity spot prices to store energy when the price is low, to be utilized when electricity prices are high. This is commonly referred to as arbitrage. The net difference in price results in cost savings (a simplified sketch of this dispatch logic is given after the list of services below).
Market Ancillary Services
ESaaS enables facilities to participate in the local ISO/RTO markets to provide services such as frequency regulation, operating reserve, and dispatchable generation. By participating in the local market, facilities can generate revenue through the ESaaS contract.
Transmission Support
ESaaS may provide services to ease congestion and constraint on electricity transmission networks by storing energy during heavy transmission periods to be released during less congested periods. The use of this service can prolong the life of infrastructure and defers system upgrades.
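As a very simplified illustration of the arbitrage and peak-shaving logic described above, the Python sketch below charges a battery when the spot price is below one threshold and discharges when it is above another, subject to power and capacity limits. The prices, thresholds, and battery parameters are invented for the example and are not from the source.

```python
# Hypothetical hourly dispatch rule for simple price arbitrage (illustration only).

def dispatch(prices, capacity_kwh=500.0, power_kw=250.0,
             charge_below=30.0, discharge_above=80.0, efficiency=0.9):
    """Return the state of charge after each hour and the net revenue in dollars."""
    soc = 0.0
    revenue = 0.0
    history = []
    for price in prices:                       # price in $/MWh
        if price <= charge_below and soc < capacity_kwh:
            energy = min(power_kw, capacity_kwh - soc)     # charge for one hour
            soc += energy
            revenue -= energy * price / 1000.0             # cost of grid energy
        elif price >= discharge_above and soc > 0.0:
            energy = min(power_kw, soc)                    # discharge for one hour
            soc -= energy
            revenue += energy * efficiency * price / 1000.0
        history.append(soc)
    return history, revenue

if __name__ == "__main__":
    example_prices = [25, 28, 27, 95, 110, 40, 22, 130]    # invented $/MWh spot prices
    _, net = dispatch(example_prices)
    print(f"Net revenue over the example day: ${net:.2f}")
```

A real ESaaS controller would also account for demand charges, charging losses, battery degradation, and price forecasts rather than fixed thresholds, but the basic buy-low/sell-high structure is the same.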
== Markets served ==
ESaaS primarily benefits large energy consumers with an average demand of over 500 kW, although, the service may benefit smaller facilities depending on regional incentives. Current early adopters of ESaaS are manufacturers (chemical, electrical, lighting, metal, petrochemical, plastics), commercial (retail, large offices, medium offices, multi-residential, supermarkets), public facilities (colleges, universities, hotels, hospitality, schools), and resources (oil & extraction, pulp & paper, metals & ore, food processing, greenhouses).
== Benefits ==
=== System benefactor does not require installation capital ===
To participate in an ESaaS service, the installation system benefactor does not require any capital outlay. Upon installing an ESaaS service, the facility sees immediate savings and/or revenue generation. Initial capital is often a hurdle for facilities to adopt an energy storage system since in most cases, the payback period of an energy storage system is 5–10 years.
=== System operated by a third-party system operator ===
ESaaS is a contracted service that is automatically controlled by a third party. This eliminates responsibility for the facility to allocate resources to manage their energy profile allowing a facility to operate their core business. The system operators have knowledge of local electricity sectors that continually monitor and update system protocols as regional markets change. The information is used to optimize the value realized by the ESaaS system while still meeting facility requirements.
=== Environmental ===
For most ESaaS services, energy is stored during night-time, off-peak hours, when electricity is generated from non-carbon-emitting sources. The energy is then used to offset the carbon-emitting generation that would otherwise be required during peak times. The load-shifting capability provided by ESaaS thus displaces heavily emitting generation.
== Pricing ==
ESaaS contracts may be structured as a cost sharing model or a fixed monthly price over a contracted term. Cost sharing models share the economical benefits of ESaaS after they are realized by the customer. The fixed price is based on potential economic benefit and applicable programs in the region of deployment. The ESaaS contract price is always less than the economic value provided by the service to ensure the client retains a net positive value through the service.
== See also ==
as a service
== References == | Wikipedia/Energy_storage_as_a_service |
Blade element momentum theory is a theory that combines both blade element theory and momentum theory. It is used to calculate the local forces on a propeller or wind-turbine blade. Blade element theory is combined with momentum theory to alleviate some of the difficulties in calculating the induced velocities at the rotor.
This article emphasizes application of blade element theory to ground-based wind turbines, but the principles apply as well to propellers. Whereas the streamtube area is reduced by a propeller, it is expanded by a wind turbine. For either application, a highly simplified but useful approximation is the Rankine–Froude "momentum" or "actuator disk" model (1865, 1889). This article explains the application of the "Betz limit" to the efficiency of a ground-based wind turbine.
Froude's blade element theory (1878) is a mathematical process to determine the behavior of propellers, later refined by Glauert (1926). Betz (1921) provided an approximate correction to momentum "Rankine–Froude actuator-disk" theory to account for the sudden rotation imparted to the flow by the actuator disk (NACA TN 83, "The Theory of the Screw Propeller" and NACA TM 491, "Propeller Problems"). In blade element momentum theory, angular momentum is included in the model, meaning that the wake (the air after interaction with the rotor) has angular momentum. That is, the air begins to rotate about the z-axis immediately upon interaction with the rotor (see diagram below). Angular momentum must be taken into account since the rotor, which is the device that extracts the energy from the wind, is rotating as a result of the interaction with the wind.
== Rankine–Froude model ==
The "Betz limit," not yet taking advantage of Betz' contribution to account for rotational flow with emphasis on propellers, applies the Rankine–Froude "actuator disk" theory to obtain the maximum efficiency of a stationary wind turbine. The following analysis is restricted to axial motion of the air:
In our streamtube we have fluid flowing from left to right, and an actuator disk that represents the rotor. We will assume that the rotor is infinitesimally thin. From above, we can see that at the start of the streamtube, fluid flow is normal to the actuator disk. The fluid interacts with the rotor, thus transferring energy from the fluid to the rotor. The fluid then continues to flow downstream. Thus we can break our system/streamtube into two sections: pre-actuator disk and post-actuator disk. Before interaction with the rotor, the total energy in the fluid is constant. Furthermore, after interacting with the rotor, the total energy in the fluid is constant.
Bernoulli's equation describes the different forms of energy that are present in fluid flow where the net energy is constant, i.e. when a fluid is not transferring any energy to some other entity such as a rotor. The energy consists of static pressure, gravitational potential energy, and kinetic energy. Mathematically, we have the following expression:
{\displaystyle {\frac {1}{2}}\rho v^{2}+P+\rho gh={\text{const.}}}
where ρ is the density of the fluid, v is the velocity of the fluid along a streamline, P is the static pressure energy, g is the acceleration due to gravity, and h is the height above the ground. For the purposes of this analysis, we will assume that gravitational potential energy is unchanging during fluid flow from left to right such that we have the following:
{\displaystyle {\frac {1}{2}}\rho v^{2}+P=\mathrm {const.} }
Thus, if we have two points on a streamline, point 1 and point 2, and at point 1 the velocity of the fluid along the streamline is v_1 and the pressure at 1 is P_1, and at point 2 the velocity of the fluid along the streamline is v_2 and the pressure at 2 is P_2, and no energy has been extracted from the fluid between points 1 and 2, then we have the following expression:
{\displaystyle {\frac {1}{2}}\rho v_{1}^{2}+P_{1}={\frac {1}{2}}\rho v_{2}^{2}+P_{2}}
Now let us return to our initial diagram. Consider pre-actuator flow. Far upstream, the fluid velocity is v_∞; the fluid velocity then decreases and the pressure increases as it approaches the rotor. In accordance with mass conservation, the mass flow rate through the rotor must be constant. The mass flow rate, ṁ, through a surface of area A is given by the following expression:
{\displaystyle {\dot {m}}=\rho Av}
where ρ is the density and v is the velocity of the fluid along a streamline. Thus, if the mass flow rate is constant, increases in area must result in decreases in fluid velocity along a streamline. This means the kinetic energy of the fluid is decreasing. If the flow is expanding but not transferring energy, then Bernoulli applies, and the reduction in kinetic energy is countered by an increase in static pressure energy.
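A tiny numerical illustration of this continuity argument (the numbers are examples only, not from the source): for a fixed mass flow rate, the axial velocity must fall as the streamtube cross-section grows.

```python
# Continuity: m_dot = rho * A * v  =>  v = m_dot / (rho * A)
rho = 1.225          # air density, kg/m^3 (assumed)
m_dot = 1000.0       # mass flow rate, kg/s (assumed)
for area in (80.0, 100.0, 125.0):   # expanding streamtube cross-sections, m^2
    v = m_dot / (rho * area)
    print(f"A = {area:6.1f} m^2 -> v = {v:5.2f} m/s")
```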
So we have the following situation pre-rotor: far upstream, fluid pressure is the same as atmospheric, P_∞; just before interaction with the rotor, fluid pressure has increased and so kinetic energy has decreased. This can be described mathematically using Bernoulli's equation:
{\displaystyle {\frac {1}{2}}\rho v_{\infty }^{2}+P_{\infty }={\frac {1}{2}}\rho \left(v_{\infty }(1-a)\right)^{2}+P_{D+}}
where we have written the fluid velocity at the rotor as v_∞(1 − a), where a is the axial induction factor. The pressure of the fluid on the upstream side of the actuator disk is P_D+. We are treating the rotor as an actuator disk that is infinitely thin. Thus we will assume no change in fluid velocity across the actuator disk. Since energy has been extracted from the fluid, the pressure must have decreased.
Now consider post-rotor: immediately after interacting with the rotor, the fluid velocity is still v_∞(1 − a), but the pressure has dropped to a value P_D−; far downstream, the pressure of the fluid has reached equilibrium with the atmosphere. This has been accomplished in the natural and dynamically slow process of decreasing the velocity of flow in the stream tube in order to maintain dynamic equilibrium (i.e. P → P_∞ far downstream). Assuming no further energy transfer, we can apply Bernoulli downstream:
{\displaystyle {\frac {1}{2}}\rho \left(v_{\infty }(1-a)\right)^{2}+P_{D-}={\frac {1}{2}}\rho v_{w}^{2}+P_{\infty }}
where v_w is the velocity of the fluid far downstream in the wake.
Thus we can obtain an expression for the pressure difference between fore and aft of the rotor:
{\displaystyle P_{D+}-P_{D-}={\frac {1}{2}}\rho (v_{\infty }^{2}-v_{w}^{2})}
If we have a pressure difference across the area of the actuator disc, there is a force acting on the actuator disk, which can be determined from F = ΔP A:
{\displaystyle F={\frac {1}{2}}\rho (v_{\infty }^{2}-v_{w}^{2})A_{D}}
where A_D is the area of the actuator disk. If the rotor is the only thing absorbing energy from the fluid, the rate of change in axial momentum of the fluid is the force that is acting on the rotor. The rate of change of axial momentum can be expressed as the difference between the initial and final axial velocities of the fluid, multiplied by the mass flow rate:
{\displaystyle F={\frac {\mathrm {d} p}{\mathrm {d} t}}={\dot {m}}(v_{\infty }-v_{w})=\rho A_{D}v_{D}(v_{\infty }-v_{w})=\rho A_{D}(1-a)v_{\infty }(v_{\infty }-v_{w})}
Thus we can arrive at an expression for the fluid velocity far downstream:
{\displaystyle v_{w}=(1-2a)v_{\infty }}
This force is acting at the rotor. The power taken from the fluid is the force acting on the fluid multiplied by the velocity of the fluid at the point of power extraction:
{\displaystyle \mathrm {Power} _{ext}=Fv_{D}=2a(1-a)^{2}v_{\infty }^{3}\rho A_{D}}
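To make these actuator-disk relations concrete, the short Python sketch below evaluates the disc velocity, far-wake velocity, thrust and extracted power for an assumed set of example values; the free-stream speed, air density, rotor radius and induction factor are illustrative choices, not figures from the source.

```python
import math

# Example actuator-disk evaluation (illustrative numbers only).
rho = 1.225          # air density, kg/m^3 (assumed sea-level value)
v_inf = 10.0         # free-stream wind speed, m/s (assumed)
radius = 40.0        # rotor radius, m (assumed)
a = 1.0 / 3.0        # axial induction factor (the Betz optimum, derived below)

area = math.pi * radius**2
v_disc = v_inf * (1 - a)                                  # velocity at the disc
v_wake = v_inf * (1 - 2 * a)                              # v_w = (1 - 2a) v_inf
thrust = rho * area * v_disc * (v_inf - v_wake)           # F = rho A_D (1-a) v_inf (v_inf - v_w)
power = 2 * a * (1 - a)**2 * rho * area * v_inf**3        # Power_ext = 2 a (1-a)^2 rho A_D v_inf^3

print(f"Disc velocity : {v_disc:.2f} m/s")
print(f"Wake velocity : {v_wake:.2f} m/s")
print(f"Thrust        : {thrust / 1e3:.1f} kN")
print(f"Power         : {power / 1e6:.2f} MW")
```

With these example numbers the extracted power is about 59% of the ½ρA v³ available in the wind, consistent with the Betz limit derived in the next subsection.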
=== Maximum power ===
Suppose we are interested in finding the maximum power that can be extracted from the fluid. The power in the fluid is given by the following expression:
{\displaystyle \mathrm {Power} ={\frac {1}{2}}\rho Av^{3}}
where ρ is the fluid density as before, v is the fluid velocity, and A is the area of an imaginary surface through which the fluid is flowing. The power extracted from the fluid by a rotor in the scenario described above is some fraction of this power expression. We will call this fraction the power coefficient, C_p. Thus the power extracted, Power_ext, is given by the following expression:
{\displaystyle \mathrm {Power} _{ext}=\mathrm {Power} \times C_{p}}
Our question is this: what is the maximum value of C_p using the Betz model?
Let us return to our derived expression for the power transferred from the fluid to the rotor (Power_ext). We can see that the power extracted is dependent on the axial induction factor. If we differentiate Power_ext with respect to a, we get the following result:
{\displaystyle {\frac {\mathrm {d} \mathrm {Power} _{ext}}{\mathrm {d} a}}=2v_{\infty }^{3}\rho A_{D}\times \left((1-a)^{2}-2a(1-a)\right)}
If we have maximised our power extraction, we can set the above to zero. This allows us to determine the value of a which yields maximum power extraction: a = 1/3. Thus we are able to find that C_P,max = 16/27 ≈ 0.593. In other words, the rotor cannot extract more than about 59 per cent of the power in the fluid.
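This maximisation can be checked symbolically. The sketch below (using Python's sympy library) writes the power coefficient as C_p = 4a(1 − a)², which follows from dividing Power_ext by ½ρA_D v_∞³, and confirms the optimum a = 1/3 and C_p,max = 16/27.

```python
import sympy as sp

a = sp.symbols('a', positive=True)
Cp = 4 * a * (1 - a)**2              # power coefficient: Power_ext / (1/2 rho A_D v_inf^3)

critical_points = sp.solve(sp.diff(Cp, a), a)   # roots of dCp/da; the physical optimum is a = 1/3
a_opt = sp.Rational(1, 3)
Cp_max = Cp.subs(a, a_opt)                       # 16/27

print(critical_points)                # [1/3, 1] (order may vary)
print(Cp_max, float(Cp_max))          # 16/27  ~0.593
```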
== Blade element momentum theory ==
Compared to the Rankine–Froude model, Blade element momentum theory accounts for the angular momentum of the rotor. Consider the left hand side of the figure below. We have a streamtube, in which there is the fluid and the rotor. We will assume that there is no interaction between the contents of the streamtube and everything outside of it. That is, we are dealing with an isolated system. In physics, isolated systems must obey conservation laws. An example of such is the conservation of angular momentum. Thus, the angular momentum within the streamtube must be conserved. Consequently, if the rotor acquires angular momentum through its interaction with the fluid, something else must acquire equal and opposite angular momentum. Since the system consists of just the fluid and the rotor, the fluid must acquire angular momentum in the wake. As we related the change in axial momentum with some induction factor a, we will relate the change in angular momentum of the fluid with the tangential induction factor, a′.
Consider the following setup.
We will break the rotor area up into annular rings of infinitesimally small thickness. We are doing this so that we can assume that axial induction factors and tangential induction factors are constant throughout the annular ring. An assumption of this approach is that annular rings are independent of one another i.e. there is no interaction between the fluids of neighboring annular rings.
=== Bernoulli for rotating wake ===
Let us now go back to Bernoulli:
{\displaystyle {\frac {1}{2}}\rho v_{1}^{2}+P_{1}={\frac {1}{2}}\rho v_{2}^{2}+P_{2}}
The velocity is the velocity of the fluid along a streamline. The streamline may not necessarily run parallel to a particular co-ordinate axis, such as the z-axis. Thus the velocity may consist of components in the axes that make up the co-ordinate system. For this analysis, we will use cylindrical polar co-ordinates {\displaystyle (r,~\theta ,~z)}. Thus {\displaystyle v^{2}=v_{r}^{2}+v_{\theta }^{2}+v_{z}^{2}}.
NOTE: We will, in fact, be working in cylindrical co-ordinates for all aspects, e.g. {\displaystyle \mathbf {F} =F_{r}{\hat {\mathbf {r} }}+F_{\theta }{\hat {\theta }}+F_{z}{\hat {\mathbf {z} }}}
Now consider the setup shown above. As before, we can break the setup into two components: upstream and downstream.
==== Pre-rotor ====
{\displaystyle P_{\infty }+{\frac {1}{2}}\rho v_{u}^{2}=P_{D+}+{\frac {1}{2}}\rho v_{D}^{2}}
where {\displaystyle v_{u}} is the velocity of the fluid along a streamline far upstream, and {\displaystyle v_{D}} is the velocity of the fluid just prior to the rotor. Written in cylindrical polar co-ordinates, we have the following expression:
{\displaystyle P_{\infty }+{\frac {1}{2}}\rho v_{\infty }^{2}=P_{D+}+{\frac {1}{2}}\rho (v_{\infty }(1-a))^{2}}
where {\displaystyle v_{\infty }} and {\displaystyle v_{\infty }(1-a)} are the z-components of the velocity far upstream and just prior to the rotor respectively. This is exactly the same as the upstream equation from the Betz model.
As can be seen from the figure above, the flow expands as it approaches the rotor, a consequence of the increase in static pressure and the conservation of mass. This would imply that {\displaystyle v_{r}\neq 0} upstream. However, for the purpose of this analysis, that effect will be neglected.
==== Post-rotor ====
{\displaystyle P_{D-}+{\frac {1}{2}}\rho v_{D}^{2}=P_{\infty }+{\frac {1}{2}}\rho v_{w}^{2}}
where {\displaystyle v_{D}} is the velocity of the fluid just after interacting with the rotor. This can be written as {\displaystyle v_{D}^{2}=v_{D,~r}^{2}+v_{D,~\theta }^{2}+v_{D,~z}^{2}}. The radial component of the velocity will be zero; this must be true if we are to use the annular ring approach, since to assume otherwise would suggest interference between annular rings at some point downstream. Since we assume that there is no change in axial velocity across the disc, {\displaystyle v_{D,~z}=(1-a)v_{\infty }}. Angular momentum must be conserved in an isolated system, so the rotation of the wake must not die away; thus {\displaystyle v_{\theta }} in the downstream section is constant. Bernoulli therefore simplifies in the downstream section:
{\displaystyle P_{D-}+{\frac {1}{2}}\rho v_{D,~z}^{2}=P_{\infty }+{\frac {1}{2}}\rho v_{w,~z}^{2}=P_{D-}+{\frac {1}{2}}\rho (v_{\infty }(1-a))^{2}}
In other words, the Bernoulli equations up and downstream of the rotor are the same as the Bernoulli expressions in the Betz model. Therefore, we can use results such as power extraction and wake speed that were derived in the Betz model i.e.
{\displaystyle v_{w,z}=(1-2a)v_{\infty }}
{\displaystyle \mathrm {Power} =2a(1-a)^{2}v_{\infty }^{3}\rho A_{D}}
This allows us to calculate maximum power extraction for a system that includes a rotating wake. This can be shown to give the same value as that of the Betz model i.e. 0.59. This method involves recognising that the torque generated in the rotor is given by the following expression:
{\displaystyle \delta \mathbf {Q} =2\pi r\delta r\times \rho U_{\infty }(1-a)\times 2a'r\omega }
with the necessary terms defined immediately below.
=== Blade forces ===
Consider fluid flow around an airfoil. The flow of the fluid around the airfoil gives rise to lift and drag forces. By definition, lift is the force that acts on the airfoil normal to the apparent fluid flow speed seen by the airfoil. Drag is the force that acts tangential to the apparent fluid flow speed seen by the airfoil. What do we mean by an apparent speed? Consider the diagram below:
The speed seen by the rotor blade is dependent on three things: the axial velocity of the fluid, {\displaystyle v_{\infty }(1-a)}; the tangential velocity of the fluid due to the acceleration round an airfoil, {\displaystyle a'\omega r}; and the rotor motion itself, {\displaystyle \omega r}. That is, the apparent fluid velocity is given as below:
{\displaystyle \mathbf {v} =\omega r(1+a'){\hat {\mathbf {\theta } }}+v_{\infty }(1-a){\hat {\mathbf {z} }}}
Thus the apparent wind speed is just the magnitude of this vector i.e.:
{\displaystyle |\mathbf {v} |^{2}=(\omega r(1+a'))^{2}+(v_{\infty }(1-a))^{2}=W^{2}}
We can also work out the angle {\displaystyle \phi } from the above figure:
{\displaystyle \sin \phi ={\frac {v_{\infty }(1-a)}{W}}}
Supposing we know the angle {\displaystyle \beta }, we can then work out {\displaystyle \alpha } simply by using the relation {\displaystyle \alpha =\phi -\beta }; we can then work out the lift co-efficient, {\displaystyle c_{L}}, and the drag co-efficient, {\displaystyle c_{D}}, from which we can work out the lift and drag forces acting on the blade.
Consider the annular ring, which is partially occupied by blade elements. The length of each blade section occupying the annular ring is {\displaystyle \delta r} (see figure below).
The lift acting on those parts of the blades/airfoils, each with chord {\displaystyle c}, is given by the following expression:
{\displaystyle \delta L={\frac {1}{2}}\rho NW^{2}c\times c_{L}(\alpha )\delta r}
where {\displaystyle c_{L}} is the lift co-efficient, which is a function of the angle of attack, and {\displaystyle N} is the number of blades. Additionally, the drag acting on that part of the blades/airfoils with chord {\displaystyle c} is given by the following expression:
{\displaystyle \delta D={\frac {1}{2}}\rho NW^{2}c\times c_{D}(\alpha )\delta r}
Remember that these forces are calculated normal and tangential to the apparent speed. We are interested in forces along the {\displaystyle {\hat {\mathbf {z} }}} and {\displaystyle {\hat {\theta }}} axes. Thus we need to consider the diagram below:
Thus we can see the following:
{\displaystyle \delta F_{\theta }=\delta L\sin \phi -\delta D\cos \phi }
{\displaystyle \delta F_{z}=\delta L\cos \phi +\delta D\sin \phi }
{\displaystyle F_{\theta }} is the force that is responsible for the rotation of the rotor blades; {\displaystyle F_{z}} is the force that is responsible for the bending of the blades.
Recall that for an isolated system the net angular momentum of the system is conserved. If the rotor acquires angular momentum, so must the fluid in the wake. Let us suppose that the fluid in the wake acquires a tangential velocity {\displaystyle v_{\theta }=2a'\omega r}. Thus the torque in the air is given by
{\displaystyle |\mathbf {\delta {Q}} |=\rho (2\pi r\delta r)U_{\infty }(1-a)\times (2\Omega a'r^{2})}
By the conservation of angular momentum, this balances the torque in the blades of the rotor; thus,
{\displaystyle {\frac {1}{2}}\rho W^{2}Nc(c_{l}\sin \phi -c_{d}\cos \phi )r\delta r=\rho (2\pi r\delta r)U_{\infty }(1-a)\times (2\Omega a'r^{2})}
{\displaystyle {\frac {1}{2}}W^{2}Nc(c_{l}\sin \phi -c_{d}\cos \phi )=4\pi U_{\infty }(1-a)\times \Omega a'r^{2}}
Furthermore, the rate of change of linear momentum in the air is balanced by the out-of-plane bending force acting on the blades, {\displaystyle \delta F_{z}}. From momentum theory, the rate of change of linear momentum in the air is as follows:
{\displaystyle \delta F_{z}=\rho (2\pi r\delta r)U_{\infty }(1-a)\times (v_{\infty }-v_{w})}
which may be expressed as
{\displaystyle \delta F_{z}=\rho (4\pi r\delta r)U_{\infty }^{2}a(1-a)}
Balancing this with the out-of-plane bending force gives
{\displaystyle {\frac {1}{2}}\rho W^{2}Nc(c_{l}\cos \phi +c_{d}\sin \phi )\delta r=\rho (4\pi r\delta r)U_{\infty }^{2}a(1-a)}
Let us now make the following definitions:
{\displaystyle C_{y}=c_{l}\sin \phi -c_{d}\cos \phi }
{\displaystyle C_{x}=c_{l}\cos \phi +c_{d}\sin \phi }
Substituting these definitions into the torque and axial-force balances above, and making use of the geometric relation {\displaystyle \sin \phi ={\frac {v_{\infty }(1-a)}{W}}} obtained from the figure, it is possible to get the following result through some algebraic manipulation:
{\displaystyle {\frac {a}{1-a}}={\frac {C_{x}\sigma _{r}}{4\sin ^{2}\phi }}}
We can derive an expression for {\displaystyle a'} in a similar manner. This allows us to understand what is going on with the rotor and the fluid. Equations of this sort are then solved by iterative techniques.
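As a rough sketch of such an iterative technique (not the article's own implementation), the Python fragment below solves for a and a′ on a single annular element. It assumes the usual local solidity definition σ_r = Nc/(2πr) and the companion tangential relation a′/(1 + a′) = σ_r C_y /(4 sin φ cos φ), neither of which is written out above, and it uses placeholder thin-airfoil lift/drag polars purely for illustration.

```python
import math

# Illustrative one-element BEM iteration (a sketch, not the article's implementation).
# Assumed relations beyond those shown above:
#   local solidity        sigma_r = N*c / (2*pi*r)
#   tangential induction  a'/(1+a') = sigma_r*C_y / (4*sin(phi)*cos(phi))
# Placeholder aerodynamic polars: c_l = 2*pi*alpha, c_d = 0.01 (purely illustrative).

def solve_element(v_inf, omega, r, N, c, beta, iters=200, relax=0.5):
    sigma_r = N * c / (2.0 * math.pi * r)         # assumed local solidity
    a, a_p = 0.3, 0.0                             # initial guesses
    for _ in range(iters):
        phi = math.atan2(v_inf * (1.0 - a), omega * r * (1.0 + a_p))  # flow angle
        alpha = phi - beta                        # angle of attack
        c_l, c_d = 2.0 * math.pi * alpha, 0.01    # placeholder polars
        C_x = c_l * math.cos(phi) + c_d * math.sin(phi)   # axial force coefficient
        C_y = c_l * math.sin(phi) - c_d * math.cos(phi)   # tangential force coefficient
        k_x = sigma_r * C_x / (4.0 * math.sin(phi) ** 2)
        k_y = sigma_r * C_y / (4.0 * math.sin(phi) * math.cos(phi))
        a_new, a_p_new = k_x / (1.0 + k_x), k_y / (1.0 - k_y)
        a += relax * (a_new - a)                  # under-relax to aid convergence
        a_p += relax * (a_p_new - a_p)
    return a, a_p, phi

a, a_p, phi = solve_element(v_inf=10.0, omega=2.0, r=20.0, N=3, c=1.5, beta=math.radians(5))
print(f"a = {a:.3f}, a' = {a_p:.4f}, phi = {math.degrees(phi):.1f} deg")
```

In a full solver this loop would be repeated for every annular ring and supplemented with tip-loss and yaw corrections, as noted in the assumptions below.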
=== Assumptions and possible drawbacks of BEM models ===
Assumes that each annular ring is independent of every other annular ring.
Does not account for wake expansion.
Does not account for tip losses, though correction factors can be included.
Does not account for yaw, though it can be made to do so.
Based on steady flow (non-turbulent).
== References == | Wikipedia/Blade_element_momentum_theory |
Compressed-air energy storage (CAES) is a way to store energy for later use using compressed air. At a utility scale, energy generated during periods of low demand can be stored and released during peak load periods.
The first utility-scale CAES project was the Huntorf power plant in Elsfleth, Germany, which is still operational as of 2024. The Huntorf plant was initially developed as a load balancer for fossil-fuel-generated electricity, but the global shift towards renewable energy has renewed interest in CAES systems, which can help highly intermittent energy sources like photovoltaics and wind satisfy fluctuating electricity demands.
One ongoing challenge in large-scale design is the management of thermal energy, since the compression of air leads to an unwanted temperature increase that not only reduces operational efficiency but can also cause damage. The main difference between the various architectures lies in their thermal engineering. Small-scale systems, by contrast, have long been used for propulsion of mine locomotives. Compared with traditional batteries, CAES systems can store energy for longer periods of time and require less upkeep.
== Types ==
Compression of air creates heat; the air is warmer after compression. Expansion removes heat. If no extra heat is added, the air will be much colder after expansion. If the heat generated during compression can be stored and used during expansion, then the efficiency of the storage improves considerably. There are several ways in which a CAES system can deal with heat. Air storage can be adiabatic, diabatic, isothermal, or near-isothermal.
=== Adiabatic ===
Adiabatic storage retains the heat produced by compression and returns it to the air as it is expanded to generate power. This is the subject of ongoing study, with no utility-scale plants as of 2015. The theoretical efficiency of adiabatic storage approaches 100% with perfect insulation, but in practice, round-trip efficiency is expected to be 70%. Heat can be stored in a solid such as concrete or stone, or in a fluid such as hot oil (up to 300 °C) or molten salt solutions (600 °C). Storing the heat in hot water may yield an efficiency of around 65%.
Packed beds have been proposed as thermal storage units for adiabatic systems. A study numerically simulated an adiabatic compressed air energy storage system using packed bed thermal energy storage. The efficiency of the simulated system under continuous operation was calculated to be between 70.5% and 71%.
Advancements in adiabatic CAES involve the development of high-efficiency thermal energy storage systems that capture and reuse the heat generated during compression. This innovation has led to system efficiencies exceeding 70%, significantly higher than those of traditional diabatic systems.
=== Diabatic ===
Diabatic storage dissipates much of the heat of compression with intercoolers (thus approaching isothermal compression) into the atmosphere as waste, essentially wasting the energy used to perform the work of compression. Upon removal from storage, the temperature of this compressed air is the only indicator of the amount of stored energy that remains in the air. Consequently, if the air temperature is too low for the energy recovery process, then the air must be substantially re-heated prior to expansion in the turbine to power a generator. This reheating can be accomplished with a natural-gas-fired burner for utility-grade storage or with a heated metal mass. As recovery is often most needed when renewable sources are quiescent, fuel must be burned to make up for the wasted heat. This degrades the efficiency of the storage-recovery cycle. While this approach is relatively simple, the burning of fuel adds to the cost of the recovered electrical energy and compromises the ecological benefits associated with most renewable energy sources. Nevertheless, this is thus far the only system that has been implemented commercially.
The McIntosh, Alabama, CAES plant requires 2.5 MJ of electricity and 1.2 MJ lower heating value (LHV) of gas for each MJ of energy output, corresponding to an energy recovery efficiency of about 27%. A General Electric 7FA 2x1 combined cycle plant, one of the most efficient natural gas plants in operation, uses 1.85 MJ (LHV) of gas per MJ generated, a 54% thermal efficiency.
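Those efficiency figures follow directly from the quoted inputs, treating the electricity and gas inputs as simply additive, as the comparison above does; the minimal check below reproduces the arithmetic.

```python
# Reproduce the efficiency figures quoted above (inputs treated as simply additive).
mcintosh_inputs_per_mj_out = 2.5 + 1.2   # MJ electricity + MJ gas (LHV) per MJ output
print(f"McIntosh recovery efficiency ≈ {1 / mcintosh_inputs_per_mj_out:.0%}")  # ≈ 27%

ge_7fa_gas_per_mj_out = 1.85             # MJ gas (LHV) per MJ generated
print(f"GE 7FA thermal efficiency    ≈ {1 / ge_7fa_gas_per_mj_out:.0%}")       # ≈ 54%
```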
To improve the efficiency of diabatic CAES systems, modern designs incorporate heat recovery units that capture waste heat during compression, thereby reducing energy losses and enhancing overall performance.
=== Isothermal ===
Isothermal compression and expansion approaches attempt to maintain operating temperature by constant heat exchange to the environment. In a reciprocating compressor, this can be achieved by using a finned piston and low cycle speeds. Current challenges in effective heat exchangers mean that they are only practical for low power levels. The theoretical efficiency of isothermal energy storage approaches 100% for perfect heat transfer to the environment. In practice, neither of these perfect thermodynamic cycles is obtainable, as some heat losses are unavoidable, leading to a near-isothermal process. Recent developments in isothermal CAES focus on advanced thermal management techniques and materials that maintain constant air temperatures during compression and expansion, minimizing energy losses and improving system efficiency.
=== Near-isothermal ===
Near-isothermal compression (and expansion) is a process in which a gas is compressed in very close proximity to a large incompressible thermal mass such as a heat-absorbing and -releasing structure (HARS) or a water spray. A HARS is usually made up of a series of parallel fins. As the gas is compressed, the heat of compression is rapidly transferred to the thermal mass, so the gas temperature is stabilized. An external cooling circuit is then used to maintain the temperature of the thermal mass. The isothermal efficiency (Z) is a measure of where the process lies between an adiabatic and isothermal process. If the efficiency is 0%, then it is totally adiabatic; with an efficiency of 100%, it is totally isothermal. Typically with a near-isothermal process, an isothermal efficiency of 90–95% can be expected.
=== Hybrid CAES systems ===
Hybrid Compressed Air Energy Storage (H-CAES) systems integrate renewable energy sources, such as wind or solar power, with traditional CAES technology. This integration allows for the storage of excess renewable energy generated during periods of low demand, which can be released during peak demand to enhance grid stability and reduce reliance on fossil fuels. For instance, the Apex CAES Plant in Texas combines wind energy with CAES to provide a consistent energy output, addressing the intermittency of renewable energy sources.
=== Other ===
One implementation of isothermal CAES uses high-, medium-, and low-pressure pistons in series. Each stage is followed by an airblast venturi pump that draws ambient air over an air-to-air (or air-to-seawater) heat exchanger between each expansion stage. Early compressed-air torpedo designs used a similar approach, substituting seawater for air. The venturi warms the exhaust of the preceding stage and admits this preheated air to the following stage. This approach was widely adopted in various compressed-air vehicles such as H. K. Porter, Inc.'s mining locomotives and trams. Here, the heat of compression is effectively stored in the atmosphere (or sea) and returned later on.
== Compressors and expanders ==
Compression can be done with electrically-powered turbo-compressors and expansion with turbo-expanders or air engines driving electrical generators to produce electricity.
== Storage ==
Air storage vessels vary in the thermodynamic conditions of the storage and on the technology used:
Constant volume storage (solution-mined caverns, above-ground vessels, aquifers, automotive applications, etc.)
Constant pressure storage (underwater pressure vessels, hybrid pumped hydro / compressed air storage)
=== Constant-volume storage ===
This storage system uses a chamber with specific boundaries to store large amounts of air. This means from a thermodynamic point of view that this system is a constant-volume and variable-pressure system. This causes some operational problems for the compressors and turbines, so the pressure variations have to be kept below a certain limit, as do the stresses induced on the storage vessels.
The storage vessel is often a cavern created by solution mining (salt is dissolved in water for extraction) or by using an abandoned mine; use of porous and permeable rock formations (rocks that have interconnected holes, through which liquid or air can pass), such as those in which reservoirs of natural gas are found, has also been studied.
In some cases, an above-ground pipeline was tested as a storage system, giving some good results. Obviously, the cost of the system is higher, but it can be placed wherever the designer chooses, whereas an underground system needs some particular geologic formations (salt domes, aquifers, depleted gas fields, etc.).
=== Constant-pressure storage ===
In this case, the storage vessel is kept at constant pressure, while the gas is contained in a variable-volume vessel. Many types of storage vessels have been proposed, generally relying on liquid displacement to achieve isobaric operation. In such cases, the storage vessel is positioned hundreds of meters below ground level, and the hydrostatic pressure (head) of the water column above the storage vessel maintains the pressure at the desired level.
This configuration allows:
Improvement of the energy density of the storage system because all the air contained can be used (the pressure is constant in all charge conditions, full or empty, so the turbine has no problem exploiting it, while with constant-volume systems, if the pressure goes below a safety limit, then the system needs to stop).
Removal of the requirement of throttling prior to the expansion.
Avoidance of mixing of heat at different temperatures in the Thermal Energy Storage system, which leads to irreversibility.
Improvement of the efficiency of the turbomachinery, which will work under constant-inlet conditions.
Use of various geographic locations for the positioning of the CAES plant (coastal lines, floating platforms, etc.).
On the other hand, the cost of this storage system is higher due to the need to position the storage vessel on the bottom of the chosen water reservoir (often the ocean) and due to the cost of the vessel itself.
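To get a rough sense of the depths involved in the hydrostatic-head arrangement described above, the relation p = ρgh can be evaluated directly. The sketch below uses illustrative round numbers; a real design would also account for ambient pressure, water density variations, and losses.

```python
# Depth of water needed to maintain a given storage pressure by hydrostatic head alone.
# Illustrative figures only.
RHO_WATER = 1000.0   # kg/m^3
G = 9.81             # m/s^2

def depth_for_pressure(p_pa: float) -> float:
    """Water depth (m) whose hydrostatic head equals the gauge pressure p_pa."""
    return p_pa / (RHO_WATER * G)

for p_bar in (10, 40, 70):
    print(f"{p_bar:>3} bar -> about {depth_for_pressure(p_bar * 1e5):,.0f} m of water")
```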
A different approach consists of burying a large bag under several meters of sand instead of water.
Plants operate on a peak-shaving daily cycle, charging at night and discharging during the day. Heating the compressed air using natural gas or geothermal heat to increase the amount of energy being extracted has been studied by the Pacific Northwest National Laboratory.
Compressed-air energy storage can also be employed on a smaller scale, such as exploited by air cars and air-driven locomotives, and can use high-strength (e.g., carbon-fiber) air-storage tanks. In order to retain the energy stored in compressed air, this tank should be thermally isolated from the environment; otherwise, the energy stored will escape in the form of heat, because compressing air raises its temperature.
== Environmental Impact ==
CAES systems are often considered an environmentally friendly alternative to other large-scale energy storage technologies due to their reliance on naturally occurring resources, such as salt caverns for air storage and ambient air as the working medium. Unlike lithium-ion batteries, which require the extraction of finite resources such as lithium and cobalt, CAES has a minimal environmental footprint during its lifecycle.
However, the construction of CAES facilities presents unique challenges. Underground air storage requires geological formations such as salt domes, which are geographically limited. Inappropriate siting or mismanagement during construction can lead to disruptions in local ecosystems, land subsidence, or groundwater contamination.
On the positive side, CAES systems integrated with renewable energy sources contribute to a significant reduction in greenhouse gas emissions by enabling the storage and dispatch of clean energy during peak demand. Additionally, repurposing depleted natural gas fields or other geological formations for air storage can mitigate environmental impacts and extend the usefulness of existing infrastructure.
=== Economic Considerations ===
The cost of implementing CAES systems depends heavily on the geological conditions of the site, the scale of the facility, and the type of CAES process used (adiabatic, diabatic, or isothermal). Initial capital expenditures are significant, often ranging from $500 to $1,200 per kW for large-scale systems. These costs primarily include the development of underground storage caverns, compression and expansion equipment, and thermal energy storage units (for advanced systems).
Despite the high upfront costs, CAES facilities have long operational lifespans, often exceeding 30 years, with low maintenance and operational costs compared to lithium-ion battery storage systems, which require periodic replacements. This long-term cost efficiency makes CAES particularly attractive for electric utility companies and grid operators.
=== Policy and Regulation ===
Market trends suggest growing interest in CAES technology due to increasing renewable energy integration and the need for grid-scale energy storage. Government incentives and declining costs of advanced components, such as high-efficiency compressors and turbines, are further enhancing the economic feasibility of CAES.
Government policies and regulatory frameworks are critical in determining the pace of CAES adoption and development. Countries like Germany and the United States have implemented various incentives, including tax credits and grants, to promote energy storage technologies. For instance, the U.S. Department of Energy's Energy Storage Grand Challenge includes CAES as a key focus area for research and development funding.
One of the significant regulatory hurdles for CAES is the permitting process for underground air storage facilities. Environmental impact assessments, land use approvals, and safety standards for high-pressure storage systems can delay or increase costs for CAES projects. For example, projects sited near urban areas often face additional scrutiny due to concerns about noise pollution, air quality, and potential risks associated with high-pressure air storage.
Internationally, efforts are underway to standardize the design, operation, and safety protocols for CAES systems. Organizations like the International Energy Agency (IEA) and regional bodies such as the European Union have been instrumental in developing frameworks to support the integration of CAES into modern energy grids. As renewable energy adoption accelerates, policies aimed at addressing intermittency challenges will likely prioritize grid-scale solutions like CAES.
== History ==
Citywide compressed air energy systems for delivering mechanical power directly via compressed air have been built since 1870. Cities such as Paris, France; Birmingham, England; Dresden, Rixdorf, and Offenbach, Germany; and Buenos Aires, Argentina, installed such systems. Victor Popp constructed the first systems to power clocks by sending a pulse of air every minute to change their pointer arms. They quickly evolved to deliver power to homes and industries. As of 1896, the Paris system had 2.2 MW of generation distributed at 550 kPa in 50 km of air pipes for motors in light and heavy industry. Usage was measured in cubic meters. The systems were the main source of house-delivered energy in those days and also powered the machines of dentists, seamstresses, printing facilities, and bakeries.
The first utility-scale diabatic compressed-air energy storage project was the 290-megawatt Huntorf plant opened in 1978 in Germany using a salt dome cavern with a capacity of 580 megawatt-hours (2,100 GJ) and a 42% efficiency.
A plant that could store up to 2,860 megawatt-hours (10,300 GJ) (and produce up to 110 MW for 26 hours) was built in McIntosh, Alabama in 1991. The Alabama facility's $65 million cost equals $590 per kW of power capacity and about $23 per kW⋅h of storage capacity. It uses a nineteen-million-cubic-foot (540,000 m3) solution-mined salt cavern to store air at up to 1,100 psi (7,600 kPa). Although the compression phase is approximately 82% efficient, the expansion phase requires the combustion of natural gas at one-third the rate of a gas turbine producing the same amount of electricity at 54% efficiency.
In 2012, General Compression completed construction of a two-megawatt near-isothermal project in Gaines County, Texas, the world's third such project. The project uses no fuel. It appears to have stopped operating in 2016.
In 2017, FLASC from the University of Malta deployed an isothermal compressed energy storage prototype in the Grand Harbour on the Maltese islands. The prototype was a 300 W / 530 Wh small-scale test operating at 11.5 bar, which achieved more than 96% thermal efficiency. There are ongoing projects to set up these systems for offshore wind energy storage in the Netherlands, and a "one stop shop" for renewable energy and storage designed for small islands is to be trialed on Oinousses in Greece.
A 60 MW, 300 MW⋅h facility with 60% efficiency opened in Jiangsu, China, using a salt cavern (2022).
A 2.5 MW, 4 MW⋅h compressed CO2 closed-cycle facility started operating in Sardinia, Italy (2022).
In 2022, Zhangjiakou connected the world's first 100 MW storage system to the grid in north China. It uses supercritical thermal storage, supercritical heat exchange, and high-load compression and expansion technologies. The plant can store 400 MW⋅h with 70.4% efficiency. Construction of a 350 MW, 1.4 GW⋅h salt cavern project started in Shandong at a cost of $208 million, with operation in 2024 at 64% efficiency, and construction of a four-hour, 700 MW, 2.8 GW⋅h facility started in China in 2024.
== Largest CAES facilities ==
== Projects ==
In 2009, the US Department of Energy awarded $24.9 million in matching funds for phase one of a 300 MW, $356 million Pacific Gas and Electric Company installation using a saline porous rock formation being developed near Bakersfield in Kern County, California. The goals of the project were to build and validate an advanced design.
In 2010, the US Department of Energy provided $29.4 million in funding to conduct preliminary work on a 150-MW salt-based project being developed by Iberdrola USA in Watkins Glen, New York. The goal is to incorporate smart grid technology to balance renewable intermittent energy sources.
The first adiabatic project, a 200-megawatt facility called ADELE, was planned for construction in Germany (2013) with a target of 70% efficiency by using 600 °C (1,112 °F) air at 100 bars of pressure. This project was delayed for undisclosed reasons until at least 2016.
Storelectric Ltd planned to build a 40-MW 100% renewable energy pilot plant in Cheshire, UK, with 800 MWh of storage capacity (2017).
Hydrostor completed the first commercial A-CAES system in Goderich, Ontario, supplying service with 2.2 MW / 10 MWh of storage to the Ontario grid (2019). It was the first A-CAES system to achieve commercial operation in decades.
The European-Union-funded RICAS (adiabatic) project in Austria was to use crushed rock to store heat from the compression process to improve efficiency (2020). The system was expected to achieve 70–80% efficiency.
Apex planned a plant for Anderson County, Texas, to go online in 2016. This project has been delayed until at least 2020.
Canadian company Hydrostor planned to build four Advance plants in Toronto, Goderich, Angas, and Rosamond (2020). Some included partial heat storage in water, improving efficiency to 65%.
As of 2022, the Gem project at Rosamond in Kern County, California, was planned to provide 500 MW / 4,000 MWh of storage. The Pecho project in San Luis Obispo, California, was planned to be 400 MW / 3,200 MWh. The Broken Hill project in New South Wales, Australia was 200 MW / 1,600 MWh.
In 2023, Alliant Energy announced plans to construct a 200-MWh compressed CO2 facility based on the Sardinia facility in Columbia County, Wisconsin. It will be the first of its kind in the United States.
For energy storage, compressed air may be stored in undersea caves in Northern Ireland.
== Storage thermodynamics ==
In order to achieve a near-thermodynamically-reversible process so that most of the energy is saved in the system and can be retrieved, and losses are kept negligible, a near-reversible isothermal process or an isentropic process is desired.
=== Isothermal storage ===
In an isothermal compression process, the gas in the system is kept at a constant temperature throughout. This necessarily requires an exchange of heat with the gas; otherwise, the temperature would rise during charging and drop during discharge. This heat exchange can be achieved by heat exchangers (intercooling) between subsequent stages in the compressor, regulator, and tank. To avoid wasted energy, the intercoolers must be optimized for high heat transfer and low pressure drop. Smaller compressors can approximate isothermal compression even without intercooling, due to the relatively high ratio of surface area to volume of the compression chamber and the resulting improvement in heat dissipation from the compressor body itself.
When one obtains perfect isothermal storage (and discharge), the process is said to be "reversible". This requires that the heat transfer between the surroundings and the gas occur over an infinitesimally small temperature difference. In that case, there is no exergy loss in the heat transfer process, and so the compression work can be completely recovered as expansion work: 100% storage efficiency. However, in practice, there is always a temperature difference in any heat transfer process, and so all practical energy storage obtains efficiencies lower than 100%.
To estimate the compression/expansion work in an isothermal process, it may be assumed that the compressed air obeys the ideal gas law:
{\displaystyle pV=nRT={\text{constant}}.}
For a process from an initial state A to a final state B, with absolute temperature {\displaystyle T=T_{A}=T_{B}} held constant, one finds the work required for compression (negative) or done by the expansion (positive) to be
{\displaystyle {\begin{aligned}W_{A\to B}&=\int _{V_{A}}^{V_{B}}p\,dV=\int _{V_{A}}^{V_{B}}{\frac {nRT}{V}}dV=nRT\int _{V_{A}}^{V_{B}}{\frac {1}{V}}dV\\&=nRT(\ln {V_{B}}-\ln {V_{A}})=nRT\ln {\frac {V_{B}}{V_{A}}}=p_{A}V_{A}\ln {\frac {p_{A}}{p_{B}}}=p_{B}V_{B}\ln {\frac {p_{A}}{p_{B}}},\\\end{aligned}}}
where {\displaystyle pV=p_{A}V_{A}=p_{B}V_{B}}, and so {\displaystyle {\frac {V_{B}}{V_{A}}}={\frac {p_{A}}{p_{B}}}}.
Here {\displaystyle p} is the absolute pressure, {\displaystyle V_{A}} is the (unknown) volume of gas compressed, {\displaystyle V_{B}} is the volume of the vessel, {\displaystyle n} is the amount of substance of gas (mol), and {\displaystyle R} is the ideal gas constant.
If there is a constant pressure outside of the vessel, which is equal to the starting pressure {\displaystyle p_{A}}, the positive work of the outer pressure reduces the exploitable energy (negative value). This adds a term to the equation above:
{\displaystyle W_{A\to B}=p_{A}V_{A}\ln {\frac {p_{A}}{p_{B}}}+(V_{A}-V_{B})p_{A}=p_{B}V_{B}\ln {\frac {p_{A}}{p_{B}}}+(p_{B}-p_{A})V_{B}.}
Example
How much energy can be stored in a 1 m3 storage vessel at a pressure of 70 bars (7.0 MPa), if the ambient pressure is 1 bar (0.10 MPa)? In this case, the process work is
{\displaystyle W=p_{B}V_{B}\ln {\frac {p_{A}}{p_{B}}}+(p_{B}-p_{A})V_{B}}
= 7.0 MPa × 1 m3 × ln(0.1 MPa/7.0 MPa) + (7.0 MPa − 0.1 MPa) × 1 m3 = −22.8 MJ.
The negative sign means that work is done on the gas by the surroundings. Process irreversibilities (such as in heat transfer) will result in less energy being recovered from the expansion process than is required for the compression process. If the environment is at a constant temperature, for example, then the thermal resistance in the intercoolers will mean that the compression occurs at a temperature somewhat higher than the ambient temperature, and the expansion will occur at a temperature somewhat lower than the ambient temperature. So a perfect isothermal storage system is impossible to achieve.
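The worked example above can be reproduced in a few lines; the sketch below evaluates the same expression with the same figures.

```python
import math

# Reproduce the isothermal storage example above: 1 m^3 vessel at 70 bar, 1 bar ambient.
p_A = 0.1e6    # ambient pressure, Pa
p_B = 7.0e6    # storage pressure, Pa
V_B = 1.0      # vessel volume, m^3

W = p_B * V_B * math.log(p_A / p_B) + (p_B - p_A) * V_B
print(f"W = {W / 1e6:.1f} MJ")   # about -22.8 MJ (work done on the gas)
```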
=== Adiabatic (isentropic) storage ===
An adiabatic process is one where there is no heat transfer between the fluid and the surroundings: the system is insulated against heat transfer. If the process is furthermore internally reversible (frictionless, to the ideal limit), then it will additionally be isentropic.
An adiabatic storage system does away with the intercooling during the compression process and simply allows the gas to heat up during compression and likewise cool down during expansion. This is attractive since the energy losses associated with the heat transfer are avoided, but the downside is that the storage vessel must be insulated against heat loss. It should also be mentioned that real compressors and turbines are not isentropic, but instead have an isentropic efficiency of around 85%. The result is that round-trip storage efficiency for adiabatic systems is also considerably less than perfect.
=== Large storage system thermodynamics ===
Energy storage systems often use large caverns. This is the preferred system design due to the very large volume and thus the large quantity of energy that can be stored with only a small pressure change. The gas is compressed adiabatically with little temperature change (approaching a reversible isothermal system) and heat loss (approaching an isentropic system). This advantage is in addition to the low cost of constructing the gas storage system, using the underground walls to assist in containing the pressure. The cavern space can be insulated to improve efficiency.
Undersea insulated airbags that have similar thermodynamic properties to large cavern storage have been suggested.
== Vehicle applications ==
=== Practical constraints in transportation ===
In order to use air storage in vehicles or aircraft for practical land or air transportation, the energy storage system must be compact and lightweight. Energy density and specific energy are the engineering terms that define these desired qualities.
==== Specific energy, energy density, and efficiency ====
As explained in the thermodynamics of the gas storage section above, compressing air heats it, and expansion cools it. Therefore, practical air engines require heat exchangers in order to avoid excessively high or low temperatures, and even so do not reach ideal constant-temperature conditions or ideal thermal insulation.
Nevertheless, as stated above, it is useful to describe the maximum energy storable using the isothermal case, which works out to about 100 kJ/m3 · ln(PA/PB).
Thus if 1.0 m3 of air from the atmosphere is very slowly compressed into a 5 L bottle at 20 MPa (200 bar), then the potential energy stored is 530 kJ. A highly efficient air motor can transfer this into kinetic energy if it runs very slowly and manages to expand the air from its initial 20 MPa pressure down to 100 kPa (bottle completely "empty" at atmospheric pressure). Achieving high efficiency is a technical challenge both due to heat loss to the ambient and to unrecoverable internal gas heat. If the bottle above is emptied to 1 MPa, then the extractable energy is about 300 kJ at the motor shaft.
A standard 20-MPa, 5-L steel bottle has a mass of 7.5 kg, and a superior one 5 kg. High-tensile-strength fibers such as carbon fiber or Kevlar can weigh below 2 kg in this size, consistent with the legal safety codes. One cubic meter of air at 20 °C has a mass of 1.204 kg at standard temperature and pressure. Thus, theoretical specific energies are from roughly 70 kJ/kg at the motor shaft for a plain steel bottle to 180 kJ/kg for an advanced fiber-wound one, whereas practical achievable specific energies for the same containers would be from 40 to 100 kJ/kg.
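These round figures can be checked against the ideal isothermal relation used earlier in this article (energy ≈ pV multiplied by the natural log of the pressure ratio); the sketch below reproduces the 530 kJ and roughly 300 kJ values, ignoring motor and heat-transfer losses.

```python
import math

# Ideal isothermal energy figures for the 5 L / 20 MPa bottle discussed above.
p_atm, V_atm = 100e3, 1.0          # 1 m^3 of ambient air at 100 kPa
p_full, V_bottle = 20e6, 0.005     # 5 L bottle filled to 20 MPa
p_empty = 1e6                      # "empty" pressure for the second case

stored = p_atm * V_atm * math.log(p_full / p_atm)        # ≈ 530 kJ
usable = p_full * V_bottle * math.log(p_full / p_empty)   # ≈ 300 kJ
print(f"stored (expansion to 100 kPa): {stored / 1e3:.0f} kJ")
print(f"usable (expansion to 1 MPa):   {usable / 1e3:.0f} kJ")
```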
==== Safety ====
As with most technologies, compressed air has safety concerns, mainly catastrophic tank rupture. Safety regulations make this a rare occurrence at the cost of higher weight and additional safety features such as pressure relief valves. Regulations may limit the legal working pressure to less than 40% of the rupture pressure for steel bottles (for a safety factor of 2.5) and less than 20% for fiber-wound bottles (safety factor 5). Commercial designs adopt the ISO 11439 standard. High-pressure bottles are fairly strong so that they generally do not rupture in vehicle crashes.
=== Comparison with batteries ===
Advanced fiber-reinforced bottles are comparable to the rechargeable lead–acid battery in terms of energy density. Batteries provide nearly-constant voltage over their entire charge level, whereas the pressure varies greatly while using a pressure vessel from full to empty. It is technically challenging to design air engines to maintain high efficiency and sufficient power over a wide range of pressures. Compressed air can transfer power at very high flux rates, which meets the principal acceleration and deceleration objectives of transportation systems, particularly for hybrid vehicles.
Compressed air systems have advantages over conventional batteries, including longer lifetimes of pressure vessels and lower material toxicity. Newer battery designs such as those based on lithium iron phosphate chemistry suffer from neither of these problems. Compressed air costs are potentially lower; however, advanced pressure vessels are costly to develop and safety-test and at present are more expensive than mass-produced batteries.
As with electric storage technology, compressed air is only as "clean" as the source of the energy that it stores. Life cycle assessment addresses the question of overall emissions from a given energy storage technology combined with a given mix of generation on a power grid.
=== Engine ===
A pneumatic motor or compressed-air engine uses the expansion of compressed air to drive the pistons of an engine, turn the axle, or to drive a turbine.
The following methods can increase efficiency:
A continuous expansion turbine at high efficiency
Multiple expansion stages
Use of waste heat, notably in a hybrid heat engine design
Use of environmental heat
A highly efficient arrangement uses high, medium, and low pressure pistons in series, with each stage followed by an airblast venturi that draws ambient air over an air-to-air heat exchanger. This warms the exhaust of the preceding stage and admits this preheated air to the following stage. The only exhaust gas from each stage is cold air, which can be as cold as −15 °C (5 °F); the cold air may be used for air conditioning in a car.
Additional heat can be supplied by burning fuel, as in 1904 for the Whitehead torpedo. This improves the range and speed available for a given tank volume at the cost of the additional fuel.
==== Cars ====
Since about 1990, several companies have claimed to be developing compressed-air cars, but none is available. Typically, the main claimed advantages are no roadside pollution, low cost, use of cooking oil for lubrication, and integrated air conditioning.
The time required to refill a depleted tank is important for vehicle applications. "Volume transfer" moves pre-compressed air from a stationary tank to the vehicle tank almost instantaneously. Alternatively, a stationary or on-board compressor can compress air on demand, possibly requiring several hours.
==== Ships ====
Large marine diesel engines have started using compressed air, typically stored in large bottles between 20 and 30 bar, acting directly on the pistons via special starting valves to turn the crankshaft prior to beginning fuel injection. This arrangement is more compact and cheaper than an electric starter motor would be at such scales and able to supply the necessary burst of extremely high power without placing a prohibitive load on the ship's electrical generators and distribution system. Compressed air is commonly also used, at lower pressures, to control the engine and act as the spring force acting on the cylinder exhaust valves, and to operate other auxiliary systems and power tools on board, sometimes including pneumatic PID controllers. One advantage of this approach is that, in the event of an electrical blackout, ship systems powered by stored compressed air can continue functioning uninterrupted, and generators can be restarted without an electrical supply. Another is that pneumatic tools can be used in commonly-wet environments without the risk of electric shock.
==== Hybrid vehicles ====
While the air storage system offers a relatively low power density and vehicle range, its high efficiency is attractive for hybrid vehicles that use a conventional internal combustion engine as the main power source. The air storage can be used for regenerative braking and to optimize the cycle of the piston engine, which is not equally efficient at all power/RPM levels.
Bosch and PSA Peugeot Citroën have developed a hybrid system that uses hydraulics as a way to transfer energy to and from a compressed nitrogen tank. An up-to-45% reduction in fuel consumption is claimed, corresponding to 2.9 L / 100 km (81 mpg, 69 g CO2/km) on the New European Driving Cycle (NEDC) for a compact frame like Peugeot 208. The system is claimed to be much more affordable than competing electric and flywheel KERS systems and is expected on road cars by 2016.
=== History of air engines ===
Air engines have been used since the 19th century to power mine locomotives, pumps, drills, and trams, via centralized, city-level distribution. Racecars use compressed air to start their internal combustion engine (ICE), and large diesel engines may have starting pneumatic motors.
== Types of systems ==
=== Hybrid systems ===
Brayton cycle engines compress and heat air with a fuel suitable for an internal combustion engine. For example, burning natural gas or biogas heats compressed air, and then a conventional gas turbine engine or the rear portion of a jet engine expands it to produce work.
Compressed air engines can recharge an electric battery. The apparently-defunct Energine promoted its Pne-PHEV or Pneumatic Plug-in Hybrid Electric Vehicle-system.
==== Existing hybrid systems ====
Huntorf, Germany in 1978, and McIntosh, Alabama, U.S. in 1991 commissioned hybrid power plants. Both systems use off-peak energy for air compression and burn natural gas in the compressed air during the power-generating phase.
==== Future hybrid systems ====
The Iowa Stored Energy Park (ISEP) would have used aquifer storage rather than cavern storage. The ISEP was an innovative, 270-megawatt, $400 million compressed air energy storage (CAES) project proposed for in-service near Des Moines, Iowa, in 2015. The project was terminated after eight years in development because of site geological limitations, according to the U.S. Department of Energy.
Additional facilities are under development in Norton, Ohio. FirstEnergy, an Akron, Ohio, electric utility, obtained development rights to the 2,700-MW Norton project in November 2009.
The RICAS2020 project attempts to use an abandoned mine for adiabatic CAES with heat recovery. The compression heat is stored in a tunnel section filled with loose stones, so the compressed air is nearly cool when entering the main pressure storage chamber. The cool compressed air regains the heat stored in the stones when released back through a surface turbine, leading to higher overall efficiency. A two-stage process has a higher theoretical efficiency of around 70%.
=== Underwater storage ===
==== Bag/tank ====
Deep water in lakes and the ocean can provide pressure without requiring high-pressure vessels or drilling. The air goes into inexpensive, flexible containers such as plastic bags. Obstacles include the limited number of suitable locations and the need for high-pressure pipelines between the surface and the containers. Given the low cost of the containers, great pressure (and great depth) may not be as important. A key benefit of such systems is that charge and discharge pressures are a constant function of depth. Carnot-related inefficiencies can be reduced by using multiple charge and discharge stages and by using inexpensive heat sources and sinks such as cold water from rivers or hot water from solar ponds.
==== Hydroelectric ====
A nearly isobaric solution is possible by using the compressed gas to drive a hydroelectric system. This solution requires large pressure tanks on land (as well as underwater airbags). Hydrogen gas is the preferred fluid, since other gases suffer from substantial hydrostatic pressures at even relatively modest depths (~500 meters).
European electrical utility company E.ON has provided €1.4 million (£1.1 million) in funding to develop undersea air storage bags. Hydrostor in Canada is developing a commercial system of underwater storage "accumulators" for compressed air energy storage, starting at the 1- to 4-MW scale.
==== Buoy ====
When excess wind energy is available from offshore wind turbines, a spool-tethered buoy can be pushed below the surface. When electricity demand rises, the buoy is allowed to rise towards the surface, generating power.
=== Nearly isothermal compression ===
A number of methods of nearly isothermal compression are being developed. Fluid Mechanics has a system with a heat absorbing and releasing structure (HARS) attached to a reciprocating piston. Light Sail injects a water spray into a reciprocating cylinder. SustainX uses an air-water foam mix inside a semi-custom, 120-rpm compressor/expander. All these systems ensure that the air is compressed with high thermal diffusivity compared to the speed of compression. Typically these compressors can run at speeds up to 1000 rpm. To ensure high thermal diffusivity, the average distance a gas molecule is from a heat-absorbing surface is about 0.5 mm. These nearly-isothermal compressors can also be used as nearly-isothermal expanders and are being developed to improve the round-trip efficiency of CAES.
== See also ==
Alternative fuel vehicle
Fireless locomotive
Grid energy storage
Hydraulic accumulator
List of energy storage power plants
Pneumatics
Zero-emissions vehicle
Cryogenic energy storage
Compressed-air engine
== References ==
== External links ==
Compressed Air System of Paris – technical notes Part 1 Part 2 Part 3 Part 4 Part 5 Part 6 (Special supplement, Scientific American, 1921)
Solution to some of country's energy woes might be little more than hot air (Sandia National Labs, DoE).
MSNBC article, Cities to Store Wind Power for Later Use, January 4, 2006
Power storage: Trapped wind
Catching The Wind In A Bottle A group of Midwest utilities is building a plant that will store excess wind power underground
New York Times Article: Technology; Using Compressed Air To Store Up Electricity
Compressed Air Energy Storage, Entropy and Efficiency | Wikipedia/Compressed_air_energy_storage |
Net Energy Gain (NEG) is a concept used in energy economics that refers to the difference between the energy expended to harvest an energy source and the amount of energy gained from that harvest. When the NEG of a resource is greater than zero, extraction yields excess energy. If the NEG is below zero, it requires more energy to extract the resource than can be extracted from it. The net energy gain, which can be expressed in joules, differs from the net financial gain that may result from the energy harvesting process, in that various sources of energy (e.g. natural gas, coal, etc.) may be priced differently for the same amount of energy.
== Calculating NEG ==
A net energy gain is achieved by expending less energy acquiring a source of energy than is contained in the source to be consumed. That is
{\displaystyle NEG=Energy_{\hbox{Consumable}}-Energy_{\hbox{Expended}}.}
Factors to consider when calculating NEG include the type of energy, the way the energy is used and acquired, and the methods used to store or transport it. The calculation can also be complicated indefinitely by the many externalities and inefficiencies that may be present during the energy harvesting process.
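As a minimal illustration of the definition (with entirely made-up figures), the sketch below applies the formula to a hypothetical extraction project.

```python
# Minimal illustration of the NEG definition with made-up figures.
def net_energy_gain(energy_consumable_j: float, energy_expended_j: float) -> float:
    """NEG = Energy_Consumable - Energy_Expended (both in joules)."""
    return energy_consumable_j - energy_expended_j

harvested = 50e9   # J delivered to users (hypothetical)
expended = 10e9    # J spent finding, extracting, refining, shipping (hypothetical)
neg = net_energy_gain(harvested, expended)
print(f"NEG = {neg / 1e9:.0f} GJ ({'worth extracting' if neg > 0 else 'net energy sink'})")
```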
== Sources of energy ==
The definition of an energy source is not rigorous. Anything that can provide energy to anything else can qualify. Wood in a stove is full of potential thermal energy; in a car, mechanical energy is acquired from the combustion of gasoline; and the energy from the combustion of coal is converted from thermal to mechanical, and then to electrical, form.
Examples of energy sources include:
Fossil fuels
Nuclear fuels (e.g., uranium and plutonium)
Radiation from the sun
Mechanical energy from wind, rivers, tides, etc.
Bio-fuels derived from biomass, in turn having consumed soil nutrients during growth.
Heat from within the earth (geothermal energy)
The term net energy gain can be used in slightly different ways:
=== Non-sustainables ===
The usual definition of net energy gain compares the energy required to extract energy (that is, to find it, remove it from the ground, refine it, and ship it to the energy user) with the amount of energy produced and transmitted to a user from some (typically underground) energy resource. To better understand this, assume an economy has a certain amount of finite oil reserves that are still underground, unextracted. To get to that energy, some of the extracted oil needs to be consumed in the extraction process to run the engines driving the pumps; therefore, after extraction the net energy produced will be less than the amount of energy in the ground before extraction, because some had to be used up.
The extraction energy can be viewed in one of two ways: profitable extractable (NEG>0) or nonprofitable extractable (NEG<0). For instance, in the Athabasca Oil Sands, the highly diffuse nature of the tar sands and low price of crude oil rendered them uneconomical to mine until the late 1950s (NEG<0). Since then, the price of oil has risen and a new steam extraction technique has been developed, allowing the sands to become the largest oil provider in Alberta (NEG>0).
=== Sustainables ===
The situation is different with sustainable energy sources, such as hydroelectric, wind, solar, and geothermal energy sources, because there is no bulk reserve to account for (other than the Sun's lifetime), but the energy continuously trickles, so only the energy required for extraction is considered.
In all energy extraction cases, the life cycle of the energy-extraction device is crucial for the NEG ratio. If an extraction device is defunct after 10 years, its NEG will be significantly lower than if it operates for 30 years. Therefore, the energy payback time (sometimes referred to as energy amortization) can be used instead: the time, usually given in years, that a plant must operate until the running NEG becomes positive (i.e. until the amount of energy needed for the plant infrastructure has been harvested from the plant).
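A minimal sketch of that bookkeeping, using illustrative numbers rather than data for any real plant:

```python
# Energy payback time: years until cumulative net harvest covers the plant's embodied energy.
# All numbers below are illustrative, not data about any particular plant.
embodied_energy_gj = 5_000      # energy invested in building the plant
annual_output_gj = 2_200        # energy delivered per year
annual_running_input_gj = 200   # energy consumed per year to operate it

payback_years = embodied_energy_gj / (annual_output_gj - annual_running_input_gj)
print(f"energy payback time ≈ {payback_years:.1f} years")
```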
=== Biofuels ===
Net energy gain of biofuels has been a particular source of controversy for ethanol derived from corn (bioethanol). The actual net energy of biofuel production is highly dependent on both the bio source that is converted into energy, how it is grown and harvested (and in particular the use of petroleum-derived fertilizer), and how efficient the process of conversion to usable energy is. Details on this can be found in the Ethanol fuel energy balance article. Similar considerations also apply to biodiesel and other fuels.
== ISO 13602 ==
ISO 13602-1 provides methods to analyse, characterize and compare technical energy systems (TES) with all their inputs, outputs and risk factors. It contains rules and guidelines for the methodology for such analyses.
ISO 13602-1 describes a means to establish relations between inputs and outputs (net energy) and thus to facilitate certification, marking, and labelling, comparable characterizations, coefficient of performance, energy resource planning, environmental impact assessments, meaningful energy statistics, and forecasting of the direct natural energy resource or energyware inputs, technical energy system investments, and the performed and expected future energy service outputs.
In ISO 13602-1:2002, renewable resource is defined as "natural resource for which the ratio of the creation of the natural resource to the output of that resource from nature to the technosphere is equal to or greater than one".
=== Examples ===
During the 1920s, 50 barrels (7.9 m3) of crude oil were extracted for every barrel of crude used in the extraction and refining process. Today only 5 barrels (0.79 m3) are harvested for every barrel used. When the net energy gain of an energy source reaches zero, then the source is no longer contributing energy to an economy.
== See also ==
ISO 13600
Energy economics
Energy return on investment
Energyware and energy carrier
Solar cell#Declining costs and exponential capacity growth
Energy cannibalism
== References ==
== External links ==
ISO 13602-1:2002 Methods for analysis of technical energy systems.
The Importance of ISO and IEC International Energy Standards.
Technical energy systems
Thinking clearly about biofuels: ending the irrelevant net energy debate and developing better performance metrics for alternative fuels. | Wikipedia/Net_energy_gain |
Seasonal thermal energy storage (STES), also known as inter-seasonal thermal energy storage, is the storage of heat or cold for periods of up to several months. The thermal energy can be collected whenever it is available and be used whenever needed, such as in the opposing season. For example, heat from solar collectors or waste heat from air conditioning equipment can be gathered in hot months for space heating use when needed, including during winter months. Waste heat from industrial processes can similarly be stored and used much later, or the natural cold of winter air can be stored for summertime air conditioning.
STES stores can serve district heating systems, as well as single buildings or complexes. Among seasonal storages used for heating, the design peak annual temperatures generally are in the range of 27 to 80 °C (81 to 180 °F), and the temperature difference occurring in the storage over the course of a year can be several tens of degrees. Some systems use a heat pump to help charge and discharge the storage during part or all of the cycle. For cooling applications, often only circulation pumps are used.
Sorption and thermochemical heat storage are considered the most suitable for seasonal storage due to the theoretical absence of heat loss between charging and discharging. However, studies have shown that actual heat losses currently are usually significant.
Examples for district heating include Drake Landing Solar Community, where ground storage provides 97% of yearly consumption without heat pumps, and Danish pond storage with boosting.
== STES technologies ==
There are several types of STES technology, covering a range of applications from single small buildings to community district heating networks. Generally, efficiency increases and the specific construction cost decreases with size.
=== Underground thermal energy storage ===
UTES (underground thermal energy storage), in which the storage medium may be geological strata ranging from earth or sand to solid bedrock, or aquifers.
UTES technologies include:
ATES (aquifer thermal energy storage). An ATES store is composed of a doublet, totaling two or more wells into a deep aquifer that is contained between impermeable geological layers above and below. One half of the doublet is for water extraction and the other half for reinjection, so the aquifer is kept in hydrological balance, with no net extraction. The heat (or cold) storage medium is the water and the substrate it occupies. Germany's Reichstag building has been both heated and cooled since 1999 with ATES stores, in two aquifers at different depths. In the Netherlands there are well over 1,000 ATES systems, which are now a standard construction option. A significant system has been operating at Richard Stockton College (New Jersey) for several years. ATES has a lower installation cost than borehole thermal energy storage (BTES) because usually fewer holes are drilled, but ATES has a higher operating cost. Also, ATES requires particular underground conditions to be feasible, including the presence of an aquifer.
BTES (borehole thermal energy storage). BTES stores can be constructed wherever boreholes can be drilled, and are composed of one to hundreds of vertical boreholes, typically 155 mm (6.1 in) in diameter. Systems of all sizes have been built, including many quite large. The strata can be anything from sand to crystalline hardrock, and depending on engineering factors the depth can be from 50 to 300 metres (164 to 984 ft). Spacings have ranged from 3 to 8 metres (9.8 to 26.2 ft). Thermal models can be used to predict seasonal temperature variation in the ground, including the establishment of a stable temperature regime which is achieved by matching the inputs and outputs of heat over one or more annual cycles. Warm-temperature seasonal heat stores can be created using borehole fields to store surplus heat captured in summer to actively raise the temperature of large thermal banks of soil so that heat can be extracted more easily (and more cheaply) in winter. Interseasonal Heat Transfer uses water circulating in pipes embedded in asphalt solar collectors to transfer heat to Thermal Banks created in borehole fields. A ground source heat pump is used in winter to extract the warmth from the Thermal Bank to provide space heating via underfloor heating. A high coefficient of performance is obtained because the heat pump starts with a warm temperature of 25 °C (77 °F) from the thermal store, instead of a cold temperature of 10 °C (50 °F) from the ground. A BTES operating at Richard Stockton College since 1995 at a peak of about 29 °C (84.2 °F) consists of 400 boreholes 130 metres (427 ft) deep under a 3.5-acre (1.4 ha) parking lot. It has a heat loss of 2% over six months. The upper temperature limit for a BTES store is 85 °C (185 °F) due to characteristics of the PEX pipe used for BHEs, but most do not approach that limit. Boreholes can be either grout- or water-filled depending on geological conditions, and usually have a life expectancy in excess of 100 years. Both a BTES and its associated district heating system can be expanded incrementally after operation begins, as at Neckarsulm, Germany. BTES stores generally do not impair use of the land, and can exist under buildings, agricultural fields and parking lots. One example illustrates well the capability of interseasonal heat storage: in Alberta, Canada, the homes of the Drake Landing Solar Community (in operation since 2007) get 97% of their year-round heat from a district heat system supplied by solar heat from solar-thermal panels on garage roofs. This feat – a world record – is enabled by interseasonal heat storage in a large mass of native rock under a central park. The thermal exchange occurs via a cluster of 144 boreholes, drilled 37 metres (121 ft) into the earth. Each borehole is 155 mm (6.1 in) in diameter and contains a simple heat exchanger made of small-diameter plastic pipe, through which water is circulated. No heat pumps are involved.
CTES (cavern or mine thermal energy storage). STES stores are possible in flooded mines, purpose-built chambers, or abandoned underground oil stores (e.g. those mined into crystalline hardrock in Norway), if they are close enough to a heat (or cold) source and market.
Energy Pilings. During construction of large buildings, BHE heat exchangers much like those used for BTES stores have been spiraled inside the cages of reinforcement bars for pilings, with concrete then poured in place. The pilings and surrounding strata then become the storage medium.
GIITS (geo interseasonal insulated thermal storage). During construction of any building with a primary slab floor, an area approximately the footprint of the building to be heated, and > 1 m in depth, is insulated on all 6 sides typically with HDPE closed cell insulation. Pipes are used to transfer solar energy into the insulated area, as well as extracting heat as required on demand. If there is significant internal ground water flow, remedial actions are needed to prevent it.
=== Surface and above ground technologies ===
Pit storage. Lined, shallow dug pits that are filled with gravel and water as the storage medium are used for STES in many Danish district heating systems. Storage pits are covered with a layer of insulation and then soil, and are used for agriculture or other purposes. A system in Marstal, Denmark, includes a pit storage supplied with heat from a field of solar-thermal panels. It initially provided 20% of the year-round heat for the village and is being expanded to provide twice that. The world's largest pit store (200,000 m3 (7,000,000 cu ft)) was commissioned in Vojens, Denmark, in 2015, and allows solar heat to provide 50% of the annual energy for the world's largest solar-enabled district heating system. In these Danish systems, a capital expenditure between €0.40 and €0.60 per kWh of storage capacity has been achieved (a rough capacity estimate is sketched after this list).
Large-scale thermal storage with water. Large scale STES water storage tanks can be built above ground, insulated, and then covered with soil.
Horizontal heat exchangers. For small installations, a heat exchanger of corrugated plastic pipe can be shallow-buried in a trench to create a STES.
Earth-bermed buildings. Stores heat passively in surrounding soil.
Salt hydrate technology. This technology achieves significantly higher storage densities than water-based heat storage. See Thermal energy storage: Salt hydrate technology
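The capacity estimate referred to in the pit-storage item above can be sketched as follows. The water-equivalent heat capacity and the 40 K usable temperature swing are illustrative assumptions (a real gravel–water pit stores somewhat less per cubic metre), so the result is only an order of magnitude.

# Order-of-magnitude sensible-heat capacity of a large pit store.
VOLUME_M3 = 200_000              # pit volume quoted for Vojens above
HEAT_CAP_J_PER_M3_K = 4.186e6    # volumetric heat capacity of water (assumed medium)
DELTA_T_K = 40                   # assumed usable annual temperature swing

energy_kwh = VOLUME_M3 * HEAT_CAP_J_PER_M3_K * DELTA_T_K / 3.6e6
print(f"capacity ~ {energy_kwh / 1e6:.1f} GWh")                        # ~9.3 GWh
print(f"capital at 0.50 EUR/kWh ~ {0.5 * energy_kwh / 1e6:.1f} M EUR")  # ~4.7 M EUR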
== Conferences and organizations ==
The International Energy Agency's Energy Conservation through Energy Storage (ECES) Programme has held triennial global energy conferences since 1981. The conferences originally focused exclusively on STES, but now that those technologies are mature other topics such as phase change materials (PCM) and electrical energy storage are also being covered. Since 1985 each conference has had "stock" (for storage) at the end of its name; e.g. EcoStock, ThermaStock. They are held at various locations around the world. The most recent were InnoStock 2012 (the 12th International Conference on Thermal Energy Storage) in Lleida, Spain, and GreenStock 2015 in Beijing.
EnerStock 2018 will be held in Adana, Turkey in April 2018.
The IEA-ECES programme continues the work of the earlier International Council for Thermal Energy Storage which from 1978 to 1990 had a quarterly newsletter and was initially sponsored by the U.S. Department of Energy. The newsletter was initially called ATES Newsletter, and after BTES became a feasible technology it was changed to STES Newsletter.
== Use of STES for small, passively heated buildings ==
Small passively heated buildings typically use the soil adjoining the building as a low-temperature seasonal heat store that in the annual cycle reaches a maximum temperature similar to average annual air temperature, with the temperature drawn down for heating in colder months. Such systems are a feature of building design, as some simple but significant differences from 'traditional' buildings are necessary. At a depth of about 20 feet (6 m) in the soil, the temperature is naturally stable within a year-round range, if the drawdown does not exceed the natural capacity for solar restoration of heat. Such storage systems operate within a narrow range of storage temperatures over the course of a year, as opposed to the other STES systems described above for which large annual temperature differences are intended.
Two basic passive solar building technologies were developed in the US during the 1970s and 1980s. They use direct heat conduction to and from thermally isolated, moisture-protected soil as a seasonal storage method for space heating, with direct conduction as the heat return mechanism. In one method, "passive annual heat storage" (PAHS), the building's windows and other exterior surfaces capture solar heat which is transferred by conduction through the floors, walls, and sometimes the roof, into adjoining thermally buffered soil. When the interior spaces are cooler than the storage medium, heat is conducted back to the living space.
The other method, “annualized geothermal solar” (AGS) uses a separate solar collector to capture heat. The collected heat is delivered to a storage device (soil, gravel bed or water tank) either passively by the convection of the heat transfer medium (e.g. air or water) or actively by pumping it. This method is usually implemented with a capacity designed for six months of heating.
A number of examples of the use of solar thermal storage from across the world include: Suffolk One a college in East Anglia, England, that uses a thermal collector of pipe buried in the bus turning area to collect solar energy that is then stored in 18 boreholes each 100 metres (330 ft) deep for use in winter heating. Drake Landing Solar Community in Canada uses solar thermal collectors on the garage roofs of 52 homes, which is then stored in an array of 35 metres (115 ft) deep boreholes. The ground can reach temperatures in excess of 70 °C which is then used to heat the houses passively. The scheme has been running successfully since 2007. In Brædstrup, Denmark, some 8,000 square metres (86,000 sq ft) of solar thermal collectors are used to collect some 4,000,000 kWh/year similarly stored in an array of 50 metres (160 ft) deep boreholes.
=== Liquid engineering ===
Architect Matyas Gutai obtained an EU grant to construct a house in Hungary which uses extensive water filled wall panels as heat collectors and reservoirs with underground heat storage water tanks. The design uses microprocessor control.
== Small buildings with internal STES water tanks ==
A number of homes and small apartment buildings have demonstrated combining a large internal water tank for heat storage with roof-mounted solar-thermal collectors. Storage temperatures of 90 °C (194 °F) are sufficient to supply both domestic hot water and space heating. The first such house was MIT Solar House #1, in 1939. An eight-unit apartment building in Oberburg, Switzerland, was built in 1989 with three tanks totalling 118 m3 (4,167 cubic feet), which store more heat than the building requires. Since 2011, that design has been replicated in new buildings.
In Berlin, the “Zero Heating Energy House” was built in 1997 as part of the IEA Task 13 low-energy housing demonstration project. It stores water at temperatures up to 90 °C (194 °F) inside a 20 m3 (706 cubic feet) tank in the basement.
A similar example was built in Ireland in 2009, as a prototype. The solar seasonal store consists of a 23 m3 (812 cu ft) tank, filled with water, which was installed in the ground, heavily insulated all around, to store heat from evacuated solar tubes during the year. The system was installed as an experiment to heat the world's first standardized pre-fabricated passive house in Galway, Ireland. The aim was to find out if this heat would be sufficient to eliminate the need for any electricity in the already highly efficient home during the winter months.
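For a sense of scale, the sensible heat held by tanks of the sizes mentioned in this section can be estimated with the sketch below; the 60 K usable swing (roughly 90 °C down to 30 °C) is an assumption for illustration, not a figure reported by any of these projects.

def tank_storage_kwh(volume_m3, delta_t_k=60):
    """Sensible heat stored in a water tank over the assumed temperature swing."""
    water_density = 1000     # kg/m3
    specific_heat = 4186     # J/(kg*K)
    joules = volume_m3 * water_density * specific_heat * delta_t_k
    return joules / 3.6e6    # J -> kWh

for volume in (20, 23, 118):  # tank sizes mentioned above, in m3
    print(f"{volume} m3 -> {tank_storage_kwh(volume):,.0f} kWh")
# 20 m3 -> ~1,400 kWh; 23 m3 -> ~1,600 kWh; 118 m3 -> ~8,200 kWh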
Thanks to improvements in glazing, zero-heating-energy buildings are now possible without seasonal energy storage.
== Use of STES in greenhouses ==
STES is also used extensively for the heating of greenhouses. ATES is the kind of storage commonly in use for this application. In summer, the greenhouse is cooled with ground water, pumped from the “cold well” in the aquifer. The water is heated in the process, and is returned to the “warm well” in the aquifer. When the greenhouse needs heat, such as to extend the growing season, water is withdrawn from the warm well, becomes chilled while serving its heating function, and is returned to the cold well. This is a very efficient system of free cooling, which uses only circulation pumps and no heat pumps.
== Annualized geo-solar ==
Annualized geo-solar (AGS) enables passive solar heating in even cold, foggy north temperate areas. It uses the ground under or around a building as thermal mass to heat and cool the building. After a designed, conductive thermal lag of 6 months the heat is returned to, or removed from, the inhabited spaces of the building. In hot climates, exposing the collector to the frigid night sky in winter can cool the building in summer.
The six-month thermal lag is provided by about three meters (ten feet) of dirt. A six-meter-wide (20 ft) buried skirt of insulation around the building keeps rain and snow melt out of the dirt, which is usually under the building. The dirt does radiant heating and cooling through the floor or walls. A thermal siphon moves the heat between the dirt and the solar collector. The solar collector may be a sheet-metal compartment in the roof, or a wide flat box on the side of a building or hill. The siphons may be made from plastic pipe and carry air. Using air prevents water leaks and water-caused corrosion. Plastic pipe doesn't corrode in damp earth, as metal ducts can.
AGS heating systems typically consist of:
A very well-insulated, energy efficient, eco-friendly living space;
Heat captured in the summer months from a sun-warmed sub-roof or attic space, a sunspace or greenhouse, a ground-based, flat-plate, thermosyphon collector, or other solar-heat collection device;
Heat transported from the collection source into (typically) the earth mass under the living space (for storage), this mass surrounded by a sub-surface perimeter "cape" or "umbrella" providing both insulation from easy heat-loss back up to the outdoors air and a barrier against moisture migration through that heat-storage mass;
A high-density floor whose thermal properties are designed to radiate heat back into the living space, but only after the proper sub-floor-insulation-regulated time-lag;
A control-scheme or system which activates (often PV-powered) fans and dampers, when the warm-season air is sensed to be hotter in the collection area(s) than in the storage mass, or allows the heat to be moved into the storage-zone by passive convection (often using a solar chimney and thermally activated dampers.)
Usually it requires several years for the storage earth-mass to fully preheat from the local at-depth soil temperature (which varies widely by region and site-orientation) to an optimum Fall level at which it can provide up to 100% of the heating requirements of the living space through the winter. This technology continues to evolve, with a range of variations (including active-return devices) being explored. The listserve where this innovation is most often discussed is "Organic Architecture" at Yahoo.
This system is almost exclusively deployed in northern Europe. One system has been built at Drake Landing in North America. A more recent system is a Do-it-yourself energy-neutral home in progress in Collinsville, IL that will rely solely on Annualized Solar for conditioning.
== See also ==
== References ==
== External links ==
DOE EERE Research Reports
December 2005, Seasonal thermal store being fitted in an ENERGETIKhaus100
October 1998, Fujita Research report
Earth Notes: Milk Tanker Thermal Store with Heat Pump
Heliostats used for concentrating solar power (photos)
Wofati Eco building with annualized thermal inertia | Wikipedia/Seasonal_thermal_energy_storage |
This timeline of sustainable energy research from 2020 to the present documents research and development in renewable energy, solar energy, and nuclear energy, particularly regarding energy production that is sustainable within the Earth system.
Events currently not included in the timelines include:
goal-codifying policy about, commercialization of, adoption of, deployment statistics of, announced developments of, announced funding for, and dissemination of sustainable-energy technologies and infrastructure/systems
research about related phase-outs in general – such as about the fossil fuel phase out
research about relevant alternative technologies – such as in transport, HVAC, refrigeration, passive cooling, heat pumps and district heating
research about related public awareness, media, policy-making and education
research about related geopolitics, policies, and integrated strategies
== Grids ==
=== Smart grids ===
==== 2022 ====
A study provides results of simulations and analysis of "transactive energy mechanisms to engage the large-scale deployment of flexible distributed energy resources (DERs), such as air conditioners, water heaters, batteries, and electric vehicles, in the operation of the electric power system".
=== Super grids ===
==== 2022 ====
Researchers describe a novel strategy to create a global sustainable interconnected energy system based on deep-ocean-compressed hydrogen transportation.
=== Microgrids and off-the-grid ===
Researchers describe a way for "inherently robust, scalable method of integration using multiple energy storage systems and distributed energy resources, which does not require any means of dedicated communication improvised controls", which could make microgrids easy and low cost "where they are needed most" such as during a power outage or after a disaster.
== Solar power ==
=== 2020 ===
Solar cell efficiency of perovskite solar cells have increased from 3.8% in 2009 to 25.2% in 2020 in single-junction architectures, and, in silicon-based tandem cells, to 29.1%, exceeding the maximum efficiency achieved in single-junction silicon solar cells.
6 March – Scientists show that adding a layer of perovskite crystals on top of textured or planar silicon to create a tandem solar cell enhances its performance up to a power conversion efficiency of 26%. This could be a low cost way to increase efficiency of solar cells.
13 July – The first global assessment into promising approaches of solar photovoltaic modules recycling is published. Scientists recommend "research and development to reduce recycling costs and environmental impacts compared to disposal while maximizing material recovery" as well as facilitation and use of techno–economic analyses.
3 July – Scientists show that adding an organic-based ionic solid into perovskites can result in substantial improvement in solar cell performance and stability. The study also reveals a complex degradation route that is responsible for failures in aged perovskite solar cells. The understanding could help the future development of photovoltaic technologies with industrially relevant longevity.
=== 2021 ===
12 April – Scientists develop a prototype and design rules for both-sides-contacted silicon solar cells with conversion efficiencies of 26% and above, Earth's highest for this type of solar cell.
7 May – Researchers address a key problem of perovskite solar cells by increasing their stability and long-term reliability with a form of "molecular glue".
21 May – The first industrial commercial production line of perovskite solar panels, using an inkjet printing procedure, is launched in Poland.
13 December – Researchers report the development of a database and analysis tool about perovskite solar cells which systematically integrates over 15,000 publications, in particular device-data about over 42,400 of such photovoltaic devices.
16 December – ML System from Jasionka, Poland, opens first quantum glass production line. The factory started the production of windows integrating a transparent quantum-dots layer that can produce electricity while also capable of cooling buildings.
=== 2022 ===
30 May – A team at Fraunhofer ISE led by Frank Dimroth developed a 4-junction solar cell with an efficiency of 47.6% – a new world record for solar energy conversion.
13 July – Researchers report the development of semitransparent solar cells that are as large as windows, after team members achieved record efficiency with high transparency in 2020. On 4 July, researchers report the fabrication of solar cells with a record average visible transparency of 79%, being nearly invisible.
9 December – Researchers report the development of 3D-printed flexible paper-thin organic photovoltaics.
19 December – A new world record solar cell efficiency for a silicon-perovskite tandem solar cell is achieved, with a German team of scientists converting 32.5% of sunlight into electrical energy.
=== 2024 ===
12 March – Scientists demonstrate the first monolithically integrated tandem solar cell using selenium as the photoabsorbing layer in the top cell, and silicon as the photoabsorbing layer in the bottom cell.
=== 2025 ===
=== High-altitude and space-based solar power ===
Ongoing research and development projects include SSPS-OMEGA, SPS-ALPHA, and the Solaris program.
==== 2020 ====
The US Naval Research Laboratory conducts its first test of solar power generation in a satellite, the PRAM experiment aboard the Boeing X-37.
==== 2023 ====
Researchers demonstrate flexible organic solar cells on balloons in the 35 km stratosphere.
Caltech reports the first successful beaming of solar energy from space down to a receiver on the ground, via the MAPLE instrument on its SSPD-1 spacecraft, launched into orbit in January.
=== Floating solar ===
==== 2020 ====
A study concludes that deploying floating solar panels on existing hydro reservoirs could generate 16%–40% (4,251 to 10,616 TWh/year) of global energy needs when not considering project-siting constraints, local development regulations, "economic or market potential" and potential future technology improvements.
==== 2022 ====
Researchers develop floating artificial leaves for light-driven hydrogen and syngas fuel production. The lightweight, flexible perovskite devices are scalable and can float on water similar to lotus leaves.
==== 2023 ====
An analysis concludes there is large potential (≈9,400 TWh/yr) for floating solar photovoltaics on reservoirs, at the upper range of the prior 2020 study (see above).
=== Agrivoltaics ===
2021 – An improved agrivoltaic system with a grooved glass plate is demonstrated.
2021 – A report reviews several studies about the potential of agrivoltaics, which partly suggest "high potential of agrivoltaics as a viable and efficient technology" and outline concerns for refinements to the technology.
2022 – Researchers report the development of greenhouses (or solar modules) by a startup that generate electricity from a portion of the spectrum of sunlight, allowing spectra that interior plants use to pass through.
2023 – Demonstration of another agrivoltaic greenhouse which outperforms a conventional glass-roof greenhouse.
=== Solar-powered production ===
==== Water production ====
===== Early 2020s =====
Hydrogels are used to develop systems that capture moisture (e.g. at night in a desert) to cool solar panels or to produce fresh water – including for irrigating crops, as demonstrated in solar panel integrated systems where the hydrogels are enclosed next to or beneath the panels.
== Wind power ==
=== 2021 ===
A study using simulations finds that large scale vertical-axis wind turbines could outcompete conventional HAWTs (horizontal axis) wind farm turbines.
Scientists report that due to decreases in power generation efficiency of wind farms downwind of offshore wind farms, cross-national limits and potentials for optimization need to be considered in strategic decision-making.
Researchers report, based on simulations, how large wind-farm performance can be significantly improved using windbreaks.
The world's first fully autonomous commercial "airborne wind energy" system (an airborne wind turbine) is launched by a company.
A U.S. congressionally directed report concludes that "the resource potential of wind energy available to AWE systems is likely similar to that available to traditional wind energy systems" but that "AWE would need significant further development before it could deploy at meaningful scales at the national level".
=== 2023 ===
The first kWh is generated by a TLP floating airborne wind turbine system (X30), possibly as part of a "new wave of startups" in this area.
Completion of the first functional 105 meters tall more-modular Modvion wooden wind turbine is reported.
=== 2024 ===
Minesto's Dragon 12 underwater tidal kite turbines are demonstrated successfully, connected to the Faroe Island's power grid.
== Hydrogen energy ==
=== 2022 ===
Researchers increase water electrolysis performance of renewable hydrogen via capillary-fed electrolysis cells.
A novel energy-efficient strategy for hydrogen release from liquid hydrogen carriers with the potential to reduce costs of storage and transportation is reported.
Researchers report the development of a potential efficient, secure and convenient method to separate, purify, store and transport large amounts of hydrogen for energy storage in renewables-based energy systems as powder using ball milling.
A method for hydrogen production from the air, useful for off-the-grid settings, is demonstrated.
A novel type of effective hydrogen storage using readily available salts is reported.
An electrolysis system for viable hydrogen production from seawater without requiring a pre-desalination process is reported, which could allow for more flexible and less costly hydrogen production.
Chemical engineers report a method to substantially increase conversion efficiency and reduce material costs of green hydrogen production by using sound waves during electrolysis.
=== 2023 ===
Separate teams of researchers report substantial improvements to green hydrogen production methods, enabling higher efficiencies and durable use of untreated seawater.
A DVGW report suggests gas pipeline infrastructures (in Germany) are suitable to be repurposed to transport hydrogen, showing limited corrosion.
A concentrated solar-to-hydrogen device approaching viability is demonstrated.
Record solar-to-hydrogen efficiencies, using photoelectrochemical cells, are reported.
== Hydroelectricity and marine energy ==
=== 2021 ===
Engineers report the development of a prototype wave energy converter that is twice as efficient as similar existing experimental technologies, which could be a major step towards practical viability of tapping into the sustainable energy source.
A study investigates how tidal energy could be best integrated into the Orkney energy system. A few days earlier, a review assesses the potential of tidal energy in the UK's energy systems, finding that it could, according to their considerations that include an economic cost-benefit analysis, deliver 34 TWh/y or 11% of its energy demand.
== Energy storage ==
=== Electric batteries ===
=== 2022 ===
In a paywalled article, scientists provide 3D imaging and model analysis to reveal main causes, mechanics, and potential mitigations of the prevalent lithium-ion battery degradation over charge cycles.
=== 2023 ===
In two studies, researchers report that substitution of PET adhesive tapes could nearly prevent self-discharge in the widely used lithium-ion batteries, extending battery life.
=== Thermal energy storage ===
2022 – Researchers report the development of a system that combines the MOST solar thermal energy storage system that can store energy for 18 years with a chip-sized thermoelectric generator to generate electricity from it.
=== Novel and emerging types ===
2021 – A company generates its first power from a gravity battery at a site in Edinburgh. Other gravity batteries are also under construction by other companies.
2022 – A study describes using lifts and empty apartments in tall buildings to store energy, estimating global potential around 30 to 300 GWh.
== Nuclear fusion ==
== Geothermal energy ==
=== 2022 ===
A study describes a way by which geothermal power plants could store their energy within their reservoirs for dispatch to (better) help manage intermittency of solar and wind.
== Waste heat recovery ==
=== 2020 ===
Reviews about WHR in the aluminium industry and cement industry are published.
=== 2023 ===
A report by the company Danfoss estimates EU's excess heat recovery potential, suggesting there is "huge, unharnessed potential" and that action could involve initial mapping of existing waste heat sources.
== Bioenergy, chemical engineering and biotechnology ==
=== 2020 ===
Scientists report the development of micro-droplets for algal cells or synergistic algal-bacterial multicellular spheroid microbial reactors capable of producing oxygen as well as hydrogen via photosynthesis in daylight under air.
=== 2022 ===
Researchers report the development of 3D-printed nano-"skyscraper" electrodes that house cyanobacteria for extracting substantially more sustainable bioenergy from their photosynthesis than before.
News outlets report about the development of algae biopanels by a company for sustainable energy generation with unclear viability after other researchers built the self-powered BIQ house prototype in 2013.
==== 2023 ====
A bacterial hydrogenase enzyme, Huc, for biohydrogen energy from the air is reported.
== General ==
Research about sustainable energy in general or across different types.
=== Other energy-need reductions ===
Research and development of (technical) means to substantially or systematically reduce need for energy beyond smart grids, education / educational technology (such as about differential environmental impacts of diets), transportation infrastructure (bicycles and rail transport) and conventional improvements of energy efficiency on the level of the energy system.
==== 2020 ====
A study shows a set of different scenarios of minimal energy requirements for providing decent living standards globally, finding that – according to their models, assessments and data – by 2050 global energy use could be reduced to 1960 levels despite 'sufficiency' still being materially relatively generous.
==== 2022 ====
An online trial that showed the estimated monetary energy cost of refrigerators alongside EU energy-efficiency class (EEEC) labels finds that the labelling approach involves a trade-off: financial savings must be weighed against the extra effort or time needed to select a product from the many available options, which are often unlabelled and face no EEEC requirement to be bought, used, or sold within the EU.
=== Materials and recycling ===
==== 2020 ====
Researchers report that mining for renewable energy production will increase threats to biodiversity and publish a map of areas that contain needed materials as well as estimations of their overlaps with "Key Biodiversity Areas", "Remaining Wilderness" and "Protected Areas". The authors assess that careful strategic planning is needed.
==== 2021 ====
Neodymium, an essential rare-earth element (REE), plays a key role in making permanent magnets for wind turbines. Demand for REEs is expected to double by 2035 due to renewable energy growth, posing environmental risks, including radioactive waste from their extraction.
==== 2023 ====
A study finds that the world has enough rare earths and other raw materials to switch from fossil fuels to renewable energy.
A new viable lithium-ion battery recycling method is reported.
A study suggests incentives and regulations are needed for producers to design solar panels that can be more easily recycled.
==== Seabed mining ====
===== 2020 =====
Researchers assess to what extent international law and existing policy support the practice of a proactive knowledge management system that enables systematic addressing of uncertainties about the environmental effects of seabed mining via regulations that, for example, enable the International Seabed Authority to actively engage in generating and synthesizing information.
===== 2021 =====
A moratorium on deep-sea mining until rigorous and transparent impact assessments are carried out is adopted at the 2021 world congress of the International Union for the Conservation of Nature (IUCN). The vote, however, has no legal implications given that deep-sea mining regulations continue to be governed by the International Seabed Authority as established by UNCLOS. Researchers have outlined why there is a need to avoid mining the deep sea.
Nauru requested the ISA to finalize rules so that The Metals Company be approved to begin work in 2023.
China's COMRA tested its polymetallic nodules collection system at 4,200 feet of depth in the East and South China Seas. The Dayang Yihao was exploring the Clarion–Clipperton zone (CCZ) for China Minmetals when it crossed into the U.S. exclusive economic zone near Hawaii, where for five days it looped south of Honolulu without having requested entry into US waters.
Belgian company Global Sea Mineral Resources (GSR) and the German Federal Institute for Geosciences and Natural Resources (BGR) conduct a test in the CCZ with a prototype mining vehicle named Patania II. This test was the first of its kind since the late 1970s.
===== 2022 =====
Impossible Metals announces its first underwater robotic vehicle, 'Eureka 1', has completed its first trial of selectively harvesting polymetallic nodule rocks from the seabed to help address the rising global need for metals for renewable energy system components, mainly batteries.
===== 2023 =====
Supporters of mining were led by Norway, Mexico, and the United Kingdom, and supported by The Metals Company.
Chinese prospecting ship Dayang Hao prospected in China-licensed areas in the Clarion Clipperton Zone.
===== 2024 =====
Norway approved commercial deep-sea mining. 80% of Parliament voted to approve.
On February 7, 2024, the European Parliament voted in favor of a Motion for Resolution, expressing environmental concerns regarding Norway's decision to open vast areas in Arctic waters for deep-sea mining activities and reaffirming its support for a moratorium.
In July 2024, at the 29th General Assembly of the International Seabed Authority in Kingston, Jamaica, 32 countries united against the imminent start of mining for metallic nodules on the seafloor. In his address titled "Upholding the Common Heritage of Humankind", President Surangel S. Whipps Jr. of Palau highlighted the critical need to protect the deep ocean from exploitation and modern-day colonialism.
In November 2024, the People's Republic of China unveiled its first deep-sea drilling vehicle.
In December 2024 Norway suspended deep sea mining, after the Socialist Left (SV) party said that otherwise, it would not support the budget.
===== 2025 =====
In April 2025, U.S. President Trump signed an Executive Order instructing the National Oceanic and Atmospheric Administration to expedite permits for companies to mine in both international and U.S. territorial waters, which would undermine the authority of the International Seabed Authority.
=== Maintenance ===
Maintenance of sustainable energy systems could be automated, standardized, and simplified, and the resources and effort it requires could be reduced, via research relevant to their design and to processes such as waste management.
==== 2022 ====
Researchers demonstrate electrostatic dust removal from solar panels.
=== Economics ===
==== 2021 ====
A review finds that the pace of cost-decline of renewables has been underestimated and that an "open cost-database would greatly benefit the energy scenario community". A 2022 study comes to similar conclusions.
==== 2022 ====
A study investigates funding allocations for public investment in energy research, development and demonstration. It provides insights about potential past impacts of drivers, that may be relevant to adjusting (or facilitating) "investment in clean energy" "to come close to achieving meaningful global decarbonization", suggesting advancement of impactful "coopetition".
=== Feasibility studies and energy system models ===
==== 2020 ====
A study suggests that all sector defossilisation can be achieved worldwide even for nations with severe conditions. The study suggests that integration impacts depend on "demand profiles, flexibility and storage cost".
==== 2021 ====
Researchers develop an energy system model for 100% renewable energy, examining feasibility and grid stability in the U.S.
==== 2022 ====
A revised or updated version of a major worldwide 100% renewable energy proposed plan and model is published.
Researchers review the scientific literature on 100% renewable energy, addressing various issues, outlining open research questions, and concluding there to be growing consensus, research and empirical evidence concerning its feasibility worldwide.
==== 2023 ====
A study indicates that in building heating in the EU, the feasibility of staying within planetary boundaries is possible only through electrification, with green hydrogen heating being 2–3 times more expensive than heat pump costs. A separate study indicates that replacing gas boilers with heat pumps is the fastest way to cut German gas consumption, despite "gas-industry lobbyists and [...] politicians" at the time making "the case for hydrogen" amid some heating transition policy changes, for which the former study revealed a need to "mitigate increased costs for [many of the] consumers".
== See also ==
Climate change adaptation
Energy development
Energy policy
Funding of science
Energy transition
Green recovery
Public research and development
Policy studies
Energy system
Renewable energy#Emerging technologies
List of emerging technologies#Energy
Technology transfer
Outline of energy
Not yet included
Standardization#Environmental protection such as for certifications and policies
Open energy system models
Open energy system databases
Power-to-X
Nanogeneration such as synthetic molecular motors for microbots and nanobots
Timelines of related areas
Timeline of materials technology#20th century
Timeline of computing 2020–present
Timeline of transportation technology#21st century
== References == | Wikipedia/Timeline_of_sustainable_energy_research_2020–present |
Renewable energy in the Cook Islands is primarily provided by solar energy and biomass. Since 2011 the Cook Islands has embarked on a programme of renewable energy development to improve its energy security and reduce greenhouse gas emissions, with an initial goal of reaching 50% renewable electricity by 2015, and 100% by 2020. The programme has been assisted by the governments of Japan, Australia, and New Zealand, and the Asian Development Bank.
Funding to provide solar panels with battery backup to the Northern atolls was provided by a NZ$20.5 million aid programme from the New Zealand Ministry of Foreign Affairs and Trade, with construction provided by PowerSmart Solar of New Zealand. The first solar site at Rakahanga was completed in September 2014. Pukapuka and Nassau were next, going online at Christmas 2014. Construction began at Tongareva on 23 February 2015 and just 10 weeks later both villages Omoka and Te Tautua were running on solar power. Manihiki was progressed at the same time. In June 2015 all of the northern atolls were fully solar powered, reducing the need to send ships north during the November to April cyclone season. A second phase of the project to provide solar farms to Atiu, Mangaia, Mauke and Mitiaro was completed in July 2019.
In 2014 construction began on the 960 kW Te Mana O Te Ra solar farm at Rarotonga International Airport. The solar farm was commissioned in October 2014. In September 2022 three battery-electric storage systems with a combined capacity of 13 MWh were installed on Rarotonga.
== See also ==
Energy in the Cook Islands
== References == | Wikipedia/Renewable_energy_in_the_Cook_Islands |
The primary application of wind turbines is to generate energy using the wind. Hence, the aerodynamics is a very important aspect of wind turbines. Like most machines, wind turbines come in many different types, all of them based on different energy extraction concepts.
Though the details of the aerodynamics depend very much on the topology, some fundamental concepts apply to all turbines. Every topology has a maximum power for a given flow, and some topologies are better than others. The method used to extract power has a strong influence on this. In general, all turbines may be classified as either lift-based or drag-based, the former being more efficient. The difference between these groups is the aerodynamic force that is used to extract the energy.
The most common topology is the horizontal-axis wind turbine. It is a lift-based wind turbine with very good performance. Accordingly, it is a popular choice for commercial applications and much research has been applied to this turbine. Despite being a popular lift-based alternative in the latter part of the 20th century, the Darrieus wind turbine is rarely used today. The Savonius wind turbine is the most common drag type turbine. Despite its low efficiency, it remains in use because of its robustness and simplicity to build and maintain.
== General aerodynamic considerations ==
The governing equation for power extraction is:
{\displaystyle P=\mathbf {F} \cdot \mathbf {u} \qquad {\text{(1)}}}
where P is the power, F is the force vector, and u is the velocity of the moving wind turbine part.
The force F is generated by the wind's interaction with the blade. The magnitude and distribution of this force is the primary focus of wind-turbine aerodynamics. The most familiar type of aerodynamic force is drag. The direction of the drag force is parallel to the relative wind. Typically, the wind turbine parts are moving, altering the flow around the part. An example of relative wind is the wind one would feel cycling on a calm day.
To extract power, the turbine part must move in the direction of the net force. In the drag force case, the relative wind speed decreases as the part speeds up, and so does the drag force. The relative wind aspect dramatically limits the maximum power that can be extracted by a drag-based wind turbine. Lift-based wind turbines typically have lifting surfaces moving perpendicular to the flow. Here, the relative wind does not decrease; rather, it increases with rotor speed. Thus, the maximum power limits of these machines are much higher than those of drag-based machines.
== Characteristic parameters ==
Wind turbines come in a variety of sizes. Once in operation, a wind turbine experiences a wide range of conditions. This variability complicates the comparison of different types of turbines. To deal with this, nondimensionalization is applied to various quantities. Nondimensionalization allows one to make comparisons between different turbines without having to consider the effects of things like size and wind conditions. One property of nondimensionalization is that, although geometrically similar turbines produce the same non-dimensional results, other factors (differences in scale, wind properties) cause them to produce very different dimensional properties.
=== Power Coefficient ===
The coefficient of power is the most important variable in wind-turbine aerodynamics. The Buckingham π theorem can be applied to show that the non-dimensional variable for power is given by the equation below. This equation is similar to efficiency, so values between 0 and 1 are typical. However, this is not exactly the same as efficiency, and thus in practice some turbines can exhibit greater-than-unity power coefficients. In these circumstances, one cannot conclude that the first law of thermodynamics is violated, because this is not an efficiency term by the strict definition of efficiency.
{\displaystyle C_{P}={\frac {P}{{\tfrac {1}{2}}\rho AV^{3}}}\qquad {\text{(CP)}}}
where C_P is the coefficient of power, ρ is the air density, A is the area of the wind turbine, and V is the wind speed.
=== Thrust coefficient ===
The thrust coefficient is another important dimensionless number in wind turbine aerodynamics.
=== Speed ratio ===
Equation (1) shows two important dependents. The first is the speed (U) of the machine. The speed at the tip of the blade is usually used for this purpose, and is written as the product of the blade radius r and the rotational speed of the rotor: U = ωr, where ω is the rotational speed in radians per second. This variable is nondimensionalized by the wind speed, to obtain the speed ratio:
{\displaystyle \lambda ={\frac {U}{V}}={\frac {\omega r}{V}}\qquad {\text{(SpeedRatio)}}}
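As a worked example of these two quantities, the sketch below evaluates C_P and λ for a hypothetical operating point; all of the numbers are made up for illustration and do not describe any particular turbine.

import math

rho = 1.225         # air density, kg/m3
radius = 40.0       # rotor radius, m
wind_speed = 10.0   # free-stream wind speed V, m/s
omega = 1.5         # rotor speed, rad/s
power = 1.4e6       # extracted power P, W

swept_area = math.pi * radius ** 2                     # A used in the power coefficient
cp = power / (0.5 * rho * swept_area * wind_speed ** 3)
tip_speed_ratio = omega * radius / wind_speed          # lambda = omega * r / V

print(f"C_P    = {cp:.3f}")               # ~0.45, below the 16/27 Betz limit derived later
print(f"lambda = {tip_speed_ratio:.1f}")  # 6.0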
=== Lift and drag ===
The force vector is not straightforward; as stated earlier, there are two types of aerodynamic forces, lift and drag. Accordingly, there are two non-dimensional parameters. However, both variables are non-dimensionalized in a similar way. The formula for lift is given below; the formula for drag follows it:
{\displaystyle C_{L}={\frac {L}{{\tfrac {1}{2}}\rho W^{2}A}}\qquad {\text{(CL)}}}
{\displaystyle C_{D}={\frac {D}{{\tfrac {1}{2}}\rho W^{2}A}}\qquad {\text{(CD)}}}
where C_L is the lift coefficient, C_D is the drag coefficient, W is the relative wind speed as experienced by the wind-turbine blade, and A is the area. Note that A may not be the same area used in the non-dimensionalization of power.
=== Relative speed ===
The aerodynamic forces depend on W; this speed is the relative speed, and it is given by the equation below. Note that this is vector subtraction.
{\displaystyle \mathbf {W} =\mathbf {V} -\mathbf {U} \qquad {\text{(RelativeSpeed)}}}
where V is the wind velocity and U is the velocity of the moving turbine part.
== Drag- versus lift-based machines ==
All wind turbines extract energy from the wind through aerodynamic forces. There are two important aerodynamic forces: drag and lift. Drag applies a force on the body in the direction of the relative flow, while lift applies a force perpendicular to the relative flow. Many machine topologies could be classified by the primary force used to extract the energy. For example, a Savonius wind turbine is a drag-based machine, while a Darrieus wind turbine and conventional horizontal-axis wind turbines are lift-based machines. Drag-based machines are conceptually simple, yet suffer from poor efficiency. Efficiency in this analysis is based on the power extracted vs. the plan-form area. Considering that the wind is free, but the blade materials are not, a plan-form-based definition of efficiency is more appropriate.
The analysis is focused on comparing the maximum power extraction modes and nothing else. Accordingly, several idealizations are made to simplify the analysis; further considerations are required to apply this analysis to real turbines. For example, in this comparison the effects of axial momentum theory are ignored. Axial momentum theory demonstrates how the wind turbine imparts an influence on the wind which in turn decelerates the flow and limits the maximum power. For more details see Betz's law. Since this effect is the same for both lift- and drag-based machines it can be ignored for comparison purposes. The topology of the machine can introduce additional losses; for example, trailing vorticity in horizontal axis machines degrades the performance at the tip. Typically these losses are minor and can be ignored in this analysis (for example, tip loss effects can be reduced by using high aspect-ratio blades).
=== Maximum power of a drag-based wind turbine ===
Equation (1) will be the starting point in this derivation. Equation (CD) is used to define the force, and equation (RelativeSpeed) is used for the relative speed. These substitutions give the following formula for power:
{\displaystyle P={\tfrac {1}{2}}\rho C_{D}A(V-U)^{2}U\qquad {\text{(DragPower)}}}
The formulas (CP) and (SpeedRatio) are applied to express (DragPower) in nondimensional form:
{\displaystyle C_{P}=C_{D}\lambda (1-\lambda )^{2}\qquad {\text{(DragCP)}}}
It can be shown through calculus that equation (DragCP) achieves a maximum at λ = 1/3. By inspection one can see that equation (DragPower) will achieve larger values for λ > 1. In these circumstances, the scalar product in equation (1) makes the result negative. Thus, one can conclude that the maximum power is given by:
{\displaystyle C_{P}={\frac {4}{27}}C_{D}}
Experimentally it has been determined that a large C_D is 1.2, thus the maximum C_P is approximately 0.1778.
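A quick numerical check of this result, using the drag power coefficient from (DragCP), recovers both the λ = 1/3 optimum and the ≈0.178 ceiling:

# Numerical check of the drag-machine optimum for C_D = 1.2.
CD = 1.2

cp_max, lam_opt = max(
    (CD * lam * (1 - lam) ** 2, lam)
    for lam in (i / 10000 for i in range(10001))   # scan lambda over [0, 1]
)
print(f"optimal lambda ~ {lam_opt:.3f}")   # ~0.333
print(f"maximum C_P    ~ {cp_max:.4f}")    # ~0.1778 = (4/27) * 1.2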
=== Maximum power of a lift-based wind turbine ===
The derivation of the maximum power of a lift-based machine is similar, with some modifications. First we must recognize that drag is always present, and thus cannot be ignored. It will be shown that neglecting drag leads to a final solution of infinite power. This result is clearly invalid, hence we will proceed with drag. As before, equations (1), (CD) and (RelativeSpeed) will be used along with (CL) to define the power expression below:
{\displaystyle P={\tfrac {1}{2}}\rho AWU(C_{L}V-C_{D}U)}
Similarly, this is non-dimensionalized with equations (CP) and (SpeedRatio). However, in this derivation the parameter γ = C_D/C_L is also used:
{\displaystyle C_{P}=C_{L}\lambda {\sqrt {1+\lambda ^{2}}}\,(1-\gamma \lambda )}
Solving for the optimal speed ratio is complicated by the dependency on γ and by the fact that the optimal speed ratio is a solution to a cubic polynomial. Numerical methods can be applied to determine this solution and the corresponding C_P for a range of γ values. Some sample solutions are given in the table below.
Experiments have shown that it is not unreasonable to achieve a drag ratio (γ) of about 0.01 at a lift coefficient of 0.6. This would give a C_P of about 889. This is substantially better than the best drag-based machine, and explains why lift-based machines are superior.
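The same check can be run for the lift-based expression above; with C_L = 0.6 and γ = 0.01 it reproduces the ≈889 figure. This is only a sketch of the idealized analysis, using the per-wing-area power coefficient given in this section.

import math

def lift_cp(lam, cl=0.6, gamma=0.01):
    """Idealized per-wing-area power coefficient of a lift-based machine."""
    return cl * lam * math.sqrt(1 + lam ** 2) * (1 - gamma * lam)

# Scan speed ratios to locate the optimum for this gamma.
cp_max, lam_opt = max((lift_cp(l / 10), l / 10) for l in range(1, 2001))
print(f"optimal lambda ~ {lam_opt:.0f}")   # ~67
print(f"maximum C_P    ~ {cp_max:.0f}")    # ~889

Setting gamma=0 in the same function shows the power coefficient growing without bound as λ increases, which is the infinite-power result mentioned above for the drag-free idealization.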
In the analysis given here, there is an inconsistency compared to typical wind turbine non-dimensionalization. As stated in the preceding section, the A (area) in the C_P non-dimensionalization is not always the same as the A in the force equations (CL) and (CD). Typically for C_P the A is the area swept by the rotor blade in its motion. For C_L and C_D, A is the area of the turbine wing section. For drag-based machines, these two areas are almost identical so there is little difference. To make the lift-based results comparable to the drag results, the area of the wing section was used to non-dimensionalize power. The results here could be interpreted as power per unit of material. Given that the material represents the cost (wind is free), this is a better variable for comparison.
If one were to apply conventional non-dimensionalization, more information on the motion of the blade would be required. However, the discussion on horizontal-axis wind turbines will show that the maximum C_P there is 16/27. Thus, even by conventional non-dimensional analysis, lift-based machines are superior to drag-based machines.
There are several idealizations in the analysis. In any lift-based machine (aircraft included) with finite wings, there is a wake that affects the incoming flow and creates induced drag. This phenomenon exists in wind turbines and was neglected in this analysis. Including induced drag requires information specific to the topology. In these cases it is expected that both the optimal speed ratio and the optimal C_P would be less. The analysis focused on the aerodynamic potential but neglected structural aspects. In reality, the optimal wind-turbine design becomes a compromise between optimal aerodynamic design and optimal structural design.
== Horizontal-axis wind turbine ==
The aerodynamics of a horizontal-axis wind turbine are not straightforward. The air flow at the blades is not the same as the airflow further away from the turbine. The very nature of the way in which energy is extracted from the air also causes air to be deflected by the turbine. In addition, the aerodynamics of a wind turbine at the rotor surface exhibit phenomena rarely seen in other aerodynamic fields.
== Axial momentum and the Lanchester–Betz–Joukowsky limit ==
Energy in fluid is contained in four different forms: gravitational potential energy, thermodynamic pressure, kinetic energy from the velocity and finally thermal energy. Gravitational and thermal energy have a negligible effect on the energy extraction process. From a macroscopic point of view, the air flow around the wind turbine is at atmospheric pressure. If pressure is constant then only kinetic energy is extracted. However up close near the rotor itself the air velocity is constant as it passes through the rotor plane. This is because of conservation of mass: the air that passes through the rotor cannot slow down because it needs to stay out of the way of the air behind it. So at the rotor the energy is extracted by a pressure drop. The air directly behind the wind turbine is at sub-atmospheric pressure; the air in front is at greater than atmospheric pressure. It is this high pressure in front of the wind turbine that deflects some of the upstream air around the turbine.
Frederick W. Lanchester was the first to study this phenomenon in application to ship propellers; five years later Nikolai Yegorovich Zhukovsky and Albert Betz independently arrived at the same results. It is believed that each researcher was not aware of the others' work because of World War I and the Bolshevik Revolution. Formally, the limit derived below should thus be referred to as the Lanchester–Betz–Joukowsky limit. In general, Albert Betz is credited with this accomplishment because he published his work in a journal that had wide circulation, while the other two published it in the publications associated with their respective institutions. Thus it is widely known simply as the Betz limit.
This limit is derived by looking at the axial momentum of the air passing through the wind turbine. As stated above, some of the air is deflected away from the turbine. This causes the air passing through the rotor plane to have a smaller velocity than the free stream velocity. The ratio of this reduction to that of the air velocity far away from the wind turbine is called the axial induction factor. It is defined as
{\displaystyle a\equiv {\frac {U_{1}-U_{2}}{U_{1}}}}
where a is the axial induction factor, U1 is the wind speed far away upstream from the rotor, and U2 is the wind speed at the rotor.
The first step in deriving the Betz limit is to apply the principle of conservation of linear (axial) momentum. As stated above, the effect of the wind turbine is to attenuate the flow: a location downstream of the turbine sees a lower wind speed than a location upstream of it. This would violate conservation of momentum if the wind turbine were not applying a thrust force on the flow. This thrust force manifests itself through the pressure drop across the rotor: the front operates at high pressure while the back operates at low pressure, and the pressure difference from front to back produces the thrust. The momentum lost by the flow is balanced by the thrust force.
Another equation is needed to relate the pressure difference to the velocity of the flow near the turbine. Here the Bernoulli equation is used between the far-field flow and the flow near the wind turbine. There is one limitation to the Bernoulli equation: it cannot be applied to fluid passing through the wind turbine itself. Instead, conservation of mass is used to relate the incoming air to the outlet air. Betz used these equations to solve for the velocities of the flow in the far wake and near the wind turbine in terms of the far-field flow and the axial induction factor. The velocities are given below as:
{\displaystyle {\begin{aligned}U_{2}&=U_{1}(1-a)\\U_{4}&=U_{1}(1-2a)\end{aligned}}}
U4 is introduced here as the wind velocity in the far wake. This is important because the power extracted from the turbine is defined by the following equation. However, the Betz limit is given in terms of the coefficient of power C_p. The coefficient of power is similar to efficiency but not the same. The formula for the coefficient of power is given beneath the formula for power:
{\displaystyle {\begin{aligned}P&=0.5\rho AU_{2}(U_{1}^{2}-U_{4}^{2})\\C_{p}&\equiv {\frac {P}{0.5\rho AU_{1}^{3}}}\end{aligned}}}
Betz was able to develop an expression for C_p in terms of the induction factor. This is done by substituting the velocity relations into the expression for the power, and then substituting the power into the definition of the coefficient of power. The relationship Betz developed is given below:
{\displaystyle C_{p}=4a(1-a)^{2}}
The Betz limit is defined by the maximum value that can be given by the above formula. This is found by taking the derivative with respect to the axial induction factor, setting it to zero and solving for the axial induction factor. Betz was able to show that the optimum axial induction factor is one third. The optimum axial induction factor was then used to find the maximum coefficient of power; this maximum coefficient is the Betz limit. Betz was able to show that the maximum coefficient of power of a wind turbine is 16/27. Operating at higher thrust causes the axial induction factor to rise above the optimum value; the higher thrust deflects more air away from the turbine. When the axial induction factor falls below the optimum value, the wind turbine is not extracting all the energy it can. This reduces the pressure drop across the turbine and allows more air to pass through it, but not enough to make up for the energy not being extracted.
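As a quick numerical check of this result, the short Python sketch below (illustrative only, not part of the original derivation) scans the relation C_p = 4a(1 - a)^2 and confirms that the maximum occurs near a = 1/3 with C_p close to 16/27, or about 0.593.

# Numerical check of the Betz limit: C_p = 4*a*(1 - a)**2 is maximized at a = 1/3.
best_a, best_cp = 0.0, 0.0
for i in range(100001):
    a = 0.5 * i / 100000            # scan induction factors from 0 to 0.5
    cp = 4.0 * a * (1.0 - a) ** 2   # Betz relation for the coefficient of power
    if cp > best_cp:
        best_a, best_cp = a, cp
print(f"optimum a  = {best_a:.4f}   (theory: 1/3   = {1/3:.4f})")
print(f"maximum Cp = {best_cp:.4f}   (theory: 16/27 = {16/27:.4f})")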
The derivation of the Betz limit shows a simple analysis of wind turbine aerodynamics. In reality there is much more to it. A more rigorous analysis would include wake rotation, the effect of variable geometry, the important effect of airfoils on the flow, and so on. Within airfoils alone, the wind turbine aerodynamicist has to consider the effects of surface roughness, dynamic stall, tip losses, and solidity, among other problems.
== Angular momentum and wake rotation ==
The wind turbine described by Betz does not actually exist. It is merely an idealized wind turbine described as an actuator disk: a disk in space through which fluid energy is simply extracted from the air. In the Betz turbine the energy extraction manifests itself through thrust. The equivalent turbine described by Betz would be a horizontal propeller-type machine operating at an infinite tip speed ratio with no losses. The tip speed ratio is the ratio of the speed of the blade tip to that of the free-stream flow. Actual turbines try to run very high-L/D airfoils at high tip speed ratios to approximate this, but there are still additional losses in the wake because of these limitations.
One key difference between actual turbines and the actuator disk is that energy is extracted through torque. The wind imparts a torque on the wind turbine, and thrust is a necessary by-product of torque. Newtonian physics dictates that for every action there is an equal and opposite reaction: if the wind imparts a torque on the blades, then the blades must impart a torque on the wind. This torque causes the flow to rotate. Thus the flow in the wake has two components: axial and tangential. This tangential flow is referred to as wake rotation.
Torque is necessary for energy extraction; wake rotation, however, is considered a loss. Accelerating the flow in the tangential direction increases its absolute velocity, which in turn increases the amount of kinetic energy in the near wake. This rotational energy is not dissipated in any form that would allow for a greater pressure drop (energy extraction). Thus any rotational energy in the wake is energy that is lost and unavailable.
This loss is minimized by allowing the rotor to rotate very quickly. To an observer it may seem that the rotor is not moving fast; however, it is common for the tips to be moving through the air at 8–10 times the speed of the free stream. Newtonian mechanics defines power as torque multiplied by rotational speed, so the same amount of power can be extracted by allowing the rotor to rotate faster while producing less torque. Less torque means less wake rotation, and less wake rotation means more energy is available to extract. However, very high tip speeds also increase the drag on the blades, decreasing power production. Balancing these factors is what leads most modern horizontal-axis wind turbines to run at a tip speed ratio of around 9. In addition, wind turbines usually limit the tip speed to around 80–90 m/s because of leading-edge erosion and high noise levels. At wind speeds above about 10 m/s (where a turbine running at a tip speed ratio of 9 would reach a 90 m/s tip speed), turbines usually do not continue to increase rotational speed for this reason, which slightly reduces efficiency.
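To put numbers on the preceding paragraph, the short sketch below converts a tip speed ratio and wind speed into a tip speed and rotor rotational speed; the rotor radius used is an assumed value for illustration only.

# Illustrative tip-speed calculation (assumed values, not from the article).
import math

wind_speed = 10.0        # m/s, free-stream wind speed
tip_speed_ratio = 9.0    # typical value for modern horizontal-axis turbines
rotor_radius = 60.0      # m, assumed rotor radius

tip_speed = tip_speed_ratio * wind_speed   # m/s at the blade tip
omega = tip_speed / rotor_radius           # rad/s
rpm = omega * 60.0 / (2.0 * math.pi)

print(f"tip speed = {tip_speed:.0f} m/s, rotor speed = {rpm:.1f} rpm")
# At 10 m/s and a tip speed ratio of 9 the tips reach 90 m/s, the erosion and
# noise limit mentioned above, so the controller stops increasing rotor speed.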
== Blade element and momentum theory ==
The simplest model for horizontal-axis wind turbine aerodynamics is blade element momentum theory. The theory is based on the assumption that the flow at a given annulus does not affect the flow at adjacent annuli. This allows the rotor blade to be analyzed in sections, where the resulting forces are summed over all sections to get the overall forces of the rotor. The theory uses both axial and angular momentum balances to determine the flow and the resulting forces at the blade.
The momentum equations for the far-field flow dictate that the thrust and torque will induce a secondary flow in the approaching wind. This in turn affects the flow geometry at the blade. The blade itself is the source of these thrust and torque forces. The force response of the blades is governed by the geometry of the flow, better known as the angle of attack. Refer to the Airfoil article for more information on how airfoils create lift and drag forces at various angles of attack. This interplay between the far-field momentum balances and the local blade forces requires the momentum equations and the airfoil equations to be solved simultaneously. Typically computers and numerical methods are employed to solve these models.
There is a lot of variation between different versions of blade element momentum theory. First, one can consider the effect of wake rotation or not. Second, one can go further and consider the pressure drop induced by wake rotation. Third, the tangential induction factors can be solved with a momentum equation, an energy balance or an orthogonal geometric constraint; the latter is a result of the Biot–Savart law in vortex methods. These all lead to different sets of equations that need to be solved. The simplest and most widely used equations are those that consider wake rotation with the momentum equation but ignore the pressure drop from wake rotation. Those equations are given below, where a is the axial component of the induced flow and a' is the tangential component of the induced flow.
σ is the solidity of the rotor, ϕ is the local inflow angle, and C_n and C_t are the coefficient of normal force and the coefficient of tangential force respectively. Both of these coefficients are defined from the resulting lift and drag coefficients of the airfoil:
{\displaystyle {\begin{aligned}a&={\frac {1}{{\frac {4}{C_{n}\sigma }}\sin ^{2}\phi +1}}\\a'&={\frac {1}{{\frac {4}{C_{t}\sigma }}\sin \phi \cos \phi -1}}\end{aligned}}}
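A common way to solve the coupled momentum and airfoil relations above is simple fixed-point iteration on a and a'. The Python sketch below is a minimal illustration under assumed constant airfoil properties and an assumed local speed ratio and solidity; a real blade element momentum code would instead look up the lift and drag coefficients from airfoil polars at the local angle of attack on every iteration.

# Minimal fixed-point BEM iteration for one annulus (illustrative values only).
import math

tsr_local = 5.0        # assumed local speed ratio, omega*r / U_inf
sigma = 0.035          # assumed local solidity
cl, cd = 1.0, 0.01     # assumed constant lift and drag coefficients

a, a_prime = 0.3, 0.0
for _ in range(200):
    # inflow angle implied by the current induction factors
    phi = math.atan2(1.0 - a, tsr_local * (1.0 + a_prime))
    # resolve lift and drag into normal and tangential force coefficients
    cn = cl * math.cos(phi) + cd * math.sin(phi)
    ct = cl * math.sin(phi) - cd * math.cos(phi)
    # momentum-balance updates (the equations given above)
    a_new = 1.0 / (4.0 * math.sin(phi) ** 2 / (cn * sigma) + 1.0)
    ap_new = 1.0 / (4.0 * math.sin(phi) * math.cos(phi) / (ct * sigma) - 1.0)
    if abs(a_new - a) < 1e-8 and abs(ap_new - a_prime) < 1e-8:
        a, a_prime = a_new, ap_new
        break
    a, a_prime = a_new, ap_new

print(f"a = {a:.4f}, a' = {a_prime:.5f}, inflow angle = {math.degrees(phi):.2f} deg")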
=== Corrections to blade element momentum theory ===
Blade element momentum theory alone fails to represent accurately the true physics of real wind turbines. Two major shortcomings are the effects of a discrete number of blades and far-field effects when the turbine is heavily loaded. Secondary shortcomings arise from having to deal with transient effects like dynamic stall, rotational effects like the Coriolis force and centrifugal pumping, and geometric effects from coned and yawed rotors. The current state of the art in blade element momentum theory uses corrections to deal with the two major shortcomings; these corrections are discussed below. There is as yet no accepted treatment for the secondary shortcomings, which remain a highly active area of research in wind turbine aerodynamics.
The effect of the discrete number of blades is dealt with by applying the Prandtl tip-loss factor. The most common form of this factor is given below, where B is the number of blades, R is the outer radius and r is the local radius. The definition of F is based on actuator-disk models and is not directly applicable to blade element momentum theory. However, the most common application multiplies the induced-velocity term by F in the momentum equations. As with the momentum equation itself, there are many variations for applying F: some argue that the mass flow should be corrected in either the axial equation or in both the axial and tangential equations, while others have suggested a second tip-loss term to account for the reduced blade forces at the tip. Shown below are the above momentum equations with the most common application of F:
{\displaystyle {\begin{aligned}F&={\frac {2}{\pi }}\arccos \left[e^{-{\frac {B(R-r)}{2r\sin \phi }}}\right]\\a&={\frac {1}{{\frac {4}{C_{n}\sigma }}F\sin ^{2}\phi +1}}\\a'&={\frac {1}{{\frac {4}{C_{t}\sigma }}F\sin \phi \cos \phi -1}}\end{aligned}}}
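Continuing the illustrative sketch above, the Prandtl factor F and the corrected induction factors can be evaluated directly; all numerical values below (blade count, radii, inflow angle, solidity, force coefficients) are assumptions chosen only for illustration.

# Prandtl tip-loss factor and corrected momentum equations (illustrative values).
import math

B, R, r = 3, 60.0, 55.0            # assumed blade count, rotor radius, local radius
phi = math.radians(7.0)            # assumed local inflow angle
sigma, cn, ct = 0.035, 0.99, 0.12  # assumed local solidity and force coefficients

# Prandtl tip-loss factor
F = (2.0 / math.pi) * math.acos(math.exp(-B * (R - r) / (2.0 * r * math.sin(phi))))

# corrected momentum equations from above
a = 1.0 / (4.0 * F * math.sin(phi) ** 2 / (cn * sigma) + 1.0)
a_prime = 1.0 / (4.0 * F * math.sin(phi) * math.cos(phi) / (ct * sigma) - 1.0)

print(f"F = {F:.3f}, a = {a:.3f}, a' = {a_prime:.5f}")
# Near the tip F drops below 1, which raises the local induced velocity and
# lowers the blade loading relative to the uncorrected equations.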
The typical momentum theory is effective only for axial induction factors up to 0.4 (thrust coefficient of 0.96). Beyond this point the wake collapses and turbulent mixing occurs. This state is highly transient and largely unpredictable by theoretical means. Accordingly, several empirical relations have been developed. As is usually the case there are several versions; however, a simple one that is commonly used is the linear curve fit given below, with a_c = 0.2. The turbulent-wake function given excludes the tip-loss function; however, the tip loss is applied simply by multiplying the resulting axial induction by the tip-loss function.
{\displaystyle C_{T}=4\left[a_{c}^{2}+(1-2a_{c})a\right]}
when a > a_c.
The terms C_T and C_t represent different quantities. The first one is the thrust coefficient of the rotor, which is the one that should be corrected for high rotor loading (i.e., for high values of a), while the second one (C_t) is the tangential aerodynamic coefficient of an individual blade element, which is given by the aerodynamic lift and drag coefficients.
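The switch between the momentum result and the empirical high-load relation can be written compactly. The sketch below is illustrative only, using a_c = 0.2 as in the text and the standard momentum result C_T = 4a(1 - a) below the transition, and evaluates the rotor thrust coefficient for a few values of the axial induction factor.

# Thrust coefficient with the simple linear high-load correction (a_c = 0.2).
def thrust_coefficient(a, a_c=0.2):
    if a <= a_c:
        return 4.0 * a * (1.0 - a)                        # standard momentum result
    return 4.0 * (a_c ** 2 + (1.0 - 2.0 * a_c) * a)       # empirical linear fit

for a in (0.1, 0.2, 0.4, 0.6):
    print(f"a = {a:.1f}  ->  C_T = {thrust_coefficient(a):.3f}")
# The two branches agree at a = a_c, so the correction joins smoothly onto the
# momentum result before the wake becomes turbulent at high loading.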
A "Unified momentum model for rotor aerodynamics across operating regimes", which claims to extend validity also to 0.5 < a < 1, was published recently (https://doi.org/10.1038/s41467-024-50756-5).
== Aerodynamic modeling ==
Blade element momentum theory is widely used due to its simplicity and overall accuracy, but its originating assumptions limit its use when the rotor disk is yawed, or when other non-axisymmetric effects (like the rotor wake) influence the flow. Limited success in improving predictive accuracy has been achieved using computational fluid dynamics (CFD) solvers based on the Reynolds-averaged Navier–Stokes equations and other similar three-dimensional models such as free vortex methods. These are very computationally intensive simulations to perform for several reasons. First, the solver must accurately model the far-field flow conditions, which can extend several rotor diameters upstream and downstream and include atmospheric boundary layer turbulence, while at the same time resolving the small-scale boundary-layer flow conditions at the blades' surface (necessary to capture blade stall). In addition, many CFD solvers have difficulty meshing parts that move and deform, such as the rotor blades. Finally, there are many dynamic flow phenomena that are not easily modelled by Reynolds-averaged Navier–Stokes equations, such as dynamic stall and tower shadow. Due to the computational complexity, it is not currently practical to use these advanced methods for wind turbine design, though research continues in these and other areas related to helicopter and wind turbine aerodynamics.
Free vortex models and Lagrangian particle vortex methods are both active areas of research that seek to increase modelling accuracy by accounting for more of the three-dimensional and unsteady flow effects than either blade element momentum theory or Reynolds-averaged Navier–Stokes equations. Free vortex models are similar to lifting-line theory in that they assume that the wind turbine rotor is shedding either a continuous vortex filament from the blade tips (and often the root), or a continuous vortex sheet from the blades' trailing edges. Lagrangian particle vortex methods can use a variety of methods to introduce vorticity into the wake. Biot–Savart summation is used to determine the induced flow field of these wake vortices' circulations, allowing for better approximations of the local flow over the rotor blades. These methods have largely confirmed much of the applicability of blade element momentum theory and shed insight into the structure of wind turbine wakes. Free vortex models have limitations due to their origin in potential flow theory, such as not explicitly modeling viscous behavior (without semi-empirical core models), whereas the Lagrangian particle vortex method is a fully viscous method. Lagrangian particle vortex methods are more computationally intensive than either free vortex models or Reynolds-averaged Navier–Stokes equations, and free vortex models still rely on blade element theory for the blade forces.
== See also ==
Blade solidity
Wind turbine design
== References ==
== Sources ==
Hansen, M.O.L. Aerodynamics of Wind Turbines, 3rd ed., Routledge, 2015 ISBN 978-1138775077
Schmitz, S. Aerodynamics of Wind Turbines: A Physical Basis for Analysis and Design, Wiley, 2019 ISBN 978-1-119-40564-1
Schaffarczyk, A.P. Introduction to Wind Turbine Aerodynamics, 3rd ed., SpringerNature, 2024 doi:10.1007/978-3-031-56924 | Wikipedia/Wind-turbine_aerodynamics |
Enphase Energy, Inc. is an American energy technology company headquartered in Fremont, California, that develops and manufactures solar micro-inverters, battery energy storage, and EV charging stations primarily for residential customers. Enphase was established in 2006 and is the first company to successfully commercialize the solar micro-inverter, which converts the direct current (DC) power generated by a solar panel into grid-compatible alternating current (AC) for use or export. The company has shipped more than 48 million microinverters to 2.5 million solar systems in more than 140 countries.
== History ==
Most solar photovoltaic systems use a central inverter, where the panels are connected together in a series creating a string, which delivers all the direct current (DC) power produced into the inverter for conversion into grid-compatible alternating current (AC). The major drawback to this approach is that, unless DC power optimizers are used, the entire string's output is limited by the output of the lowest-performing panel. Solar micro-inverters address this problem by converting the DC into AC in a small inverter placed behind each individual solar panel.
Enphase founder Martin Fornage discovered this issue when he saw the low performance of the central inverter for the solar array on his ranch. Fornage was looking for a new opportunity after the 2001 Telecoms crash and brought an idea to build micro-inverters to his former Cerent Corporation colleague, Raghu Belur, and they formed PVI Solutions. The two hired Paul Nahi to be CEO at the end of 2006 and the trio formed Enphase Energy, Inc. in early 2007. Enphase raised $6 million in private equity, and in 2008, released its first microinverter, the M175. Their second generation product, 2009's M190, had sales of about 400,000 units in 2009 and early 2010. Enphase grew to 13% marketshare for residential systems by mid-2010.
Enphase went public in March 2012 and began trading on the Nasdaq with the stock symbol ENPH.
In October 2014, Enphase announced it would enter the battery home energy storage market. The first batteries were installed in Australia and New Zealand in mid-2016, but the launch of any Enphase battery system in the North American market was delayed until July 2020. When released in the North American market, the battery system was part of the Ensemble energy management system, and was substantially different from the first-generation on-grid-only battery previously released.
Enphase experienced leadership changes in September 2017 when the President and CEO, Paul Nahi, announced his resignation from the company. Badri Kothandaraman was appointed the company's new president and CEO. Kothandaraman was previously the company's chief operating officer.
As of 2020, Enphase had about a 48% market share for residential installations in the US, which represents 72% of the entire world micro-inverter market. In the global market for inverters for all customers (residential, commercial and industrial), microinverters have a 1.7% share of the inverter market.
In 2021, Enphase completed a series of acquisitions that focused on software-as-a-service and home electrification: Sofdesk's Solargraf, a software platform offering digital tools and services to support the sales process for solar installers; the Solar Design Services business of DIN Engineering Services, a software service provider for solar proposal drawings and permit plan sets; 365 Pronto, a software platform that connects solar installers with operations and maintenance providers; and ClipperCreek, a company that offers electric vehicle (EV) charging solutions for residential and commercial customers in the U.S.
Also in 2021, the company launched its eighth-generation microinverter technology, the IQ8 series, to customers in North America. As of 2022, Enphase has shipped more than 48 million microinverters and deployed more than two million Enphase-based systems in more than 140 countries.
In 2022, Enphase completed the acquisition of SolarLeadFactory LLC, a company that provides leads to solar installers.
== Products ==
All Enphase microinverters are completely self-contained power converters. In the case of a rooftop photovoltaic (PV) inverter, the unit converts DC from a single solar panel into grid-compliant AC power, following the maximum power point of the panel. Since the "S" series microinverters (e.g. the S280), all Enphase microinverters have been both Advanced Grid Function and bidirectional-power capable. This allows a microinverter to convert power in the DC-to-AC direction for solar applications, or in both the DC-to-AC and AC-to-DC directions for battery use. The microinverter(s) in the Enphase battery products are the same units as installed on the roof, with only software settings changed.
=== Legacy products ===
The M175 was the first product from Enphase, released in 2008. It was designed to output 175 Watts of AC power. The M175 was packaged in a relatively large cast aluminum box. Wiring was passed through the case using compression fittings and the inverters connected to each other using a twist-lock connection. The product saw modest sales.
Sales picked up with the second generation M190, released in 2009. The M190 had a slightly higher power rating of 190 Watts, but in a much smaller case with built-in cable connections replacing the earlier compression fittings.
Around the same time the company also released the D380, which was essentially two M190s in a single larger case. For small inverters like the M190, the case and its assembly represented a significant portion of the total cost of production, so placing two in a single box spread that cost out. The D380 also introduced a new inter-inverter cabling system based on a "drop cable" design. This placed a single connector on a short cable on the inverter, and used a separate cable with either one or three connectors on it. Arrays were constructed by linking together up to three D380s with a single drop cable, and then connecting them to other drop cables using larger twist-fit connectors.
The third generation M215 was introduced in 2011 bumping up the power rating to 215 Watts and adding trunk cabling, which increased installation speed by using one long cable run, with the inverters spliced in as necessary.
The fourth generation M250 was released in 2013, increasing the power rating to 250 Watts and efficiency to 96.5%. The fourth generation added an integrated grounding system, eliminating the external grounding conductor. Enphase continued to offer the M215 but updated it with the integrated grounding system.
In 2015, the company launched its fifth generation of products. The "S" series S230 and S280 microinverters with power ratings of 230 and 280 Watts, increased efficiency of 97% and added advanced grid functionality like reactive power control along with bidirectional capabilities allowing the micro-inverter to also convert AC into DC for battery use.
The next-generation Envoy-S offers revenue-grade metering of solar production, consumption monitoring, and integrated Wi-Fi. The company also moved into home energy storage with a system featuring the AC Battery, a modular 1.2 kWh lithium iron phosphate unit aimed at residential users that is part of a Home Energy Solution. The Home Energy Solution launched in Australia in mid-2016.
=== Current products ===
Since 2017, Enphase has been offering its "IQ" series microinverters which use a simplified cabling system with two conductors (down from four) that eliminated the need for a neutral line. The first to be introduced was the IQ6, with the older M215, M250 and S280 remaining on sale but updated to use the new cabling system. The updated IQ7 series was launched in 2018.
In 2021, the IQ8 Microinverter was introduced as a grid-forming microinverter, enabling solar-only backup during grid outages. It features a split-phase power conversion capability to convert DC power to AC power more efficiently and an application-specific integrated circuit (ASIC), which enables the device to operate in grid-tied or off-grid modes. This chip is built in 55 nm technology with high-speed digital logic and has fast response times to changing loads and grid events, alleviating constraints on battery sizing for home energy systems.
In 2020, the company introduced the Enphase Encharge storage system, now known as the IQ Battery, to customers in North America, and expansion into parts of Europe began in 2021. The IQ Battery features lithium iron phosphate (LFP) battery chemistry and comes in two capacity configurations, 10.08 kWh and 3.36 kWh. Both configurations are compatible with new and existing Enphase solar systems with IQ6, IQ7, or IQ8 Microinverters.
All Enphase Energy Systems with microinverters and batteries are paired with an IQ System Controller, which provides microgrid interconnection device (MID) functionality by automatically detecting and transitioning the system from grid power to backup power in the event of a grid failure.
In 2021, Enphase Energy Systems added the option of including software to integrate most AC home standby generators. The IQ Load Controller is a hardware add-on that enables systems to shed non-essential loads automatically or manually to further extend battery life and system capabilities.
All Enphase microinverter models use power line communications to pass monitoring data between the inverters and the Envoy communications gateway, now known as the IQ Gateway. The IQ Gateway stores daily performance data for up to a year, and, when available, allows Enphase's web service platform to download data approximately every 15 minutes. Customers and installers can review the data on the web services platform and Enphase App.
== References ==
== External links ==
Official website
Business data for Enphase Energy, Inc. | Wikipedia/Enphase_Energy
Compressed-air energy storage (CAES) is a way to store energy for later use in the form of compressed air. At a utility scale, energy generated during periods of low demand can be released during peak load periods.
The first utility-scale CAES project was the Huntorf power plant in Elsfleth, Germany, which is still operational as of 2024. The Huntorf plant was initially developed as a load balancer for fossil-fuel-generated electricity, but the global shift towards renewable energy has renewed interest in CAES systems, to help highly intermittent energy sources like photovoltaics and wind satisfy fluctuating electricity demands.
One ongoing challenge in large-scale design is the management of thermal energy, since the compression of air leads to an unwanted temperature increase that not only reduces operational efficiency but can also cause damage. The main difference between various architectures lies in their thermal engineering. Small-scale systems, by contrast, have long been used for the propulsion of mine locomotives. Compared with traditional batteries, CAES systems can store energy for longer periods of time and require less upkeep.
== Types ==
Compression of air creates heat; the air is warmer after compression. Expansion cools the air; if no extra heat is added, the air will be much colder after expansion. If the heat generated during compression can be stored and used during expansion, then the efficiency of the storage improves considerably. There are several ways in which a CAES system can deal with this heat: air storage can be adiabatic, diabatic, isothermal, or near-isothermal.
=== Adiabatic ===
Adiabatic storage retains the heat produced by compression and returns it to the air as it is expanded to generate power. This is the subject of ongoing study, with no utility-scale plants as of 2015. The theoretical efficiency of adiabatic storage approaches 100% with perfect insulation, but in practice the round-trip efficiency is expected to be about 70%. Heat can be stored in a solid such as concrete or stone, or in a fluid such as hot oil (up to 300 °C) or molten salt solutions (600 °C). Storing the heat in hot water may yield an efficiency of around 65%.
Packed beds have been proposed as thermal storage units for adiabatic systems. A study numerically simulated an adiabatic compressed air energy storage system using packed bed thermal energy storage. The efficiency of the simulated system under continuous operation was calculated to be between 70.5% and 71%.
Advancements in adiabatic CAES involve the development of high-efficiency thermal energy storage systems that capture and reuse the heat generated during compression. This innovation has led to system efficiencies exceeding 70%, significantly higher than those of traditional diabatic systems.
=== Diabatic ===
Diabatic storage dissipates much of the heat of compression with intercoolers (thus approaching isothermal compression), releasing it into the atmosphere as waste and essentially wasting the energy used to perform the work of compression. Upon removal from storage, the temperature of this compressed air is one indicator of the amount of stored energy that remains in it. Consequently, if the air temperature is too low for the energy recovery process, the air must be substantially re-heated prior to expansion in the turbine that powers a generator. This reheating can be accomplished with a natural-gas-fired burner for utility-grade storage or with a heated metal mass. As recovery is often most needed when renewable sources are quiescent, fuel must be burned to make up for the wasted heat. This degrades the efficiency of the storage-recovery cycle. While this approach is relatively simple, the burning of fuel adds to the cost of the recovered electrical energy and compromises the ecological benefits associated with most renewable energy sources. Nevertheless, this is thus far the only system that has been implemented commercially.
The McIntosh, Alabama, CAES plant requires 2.5 MJ of electricity and 1.2 MJ lower heating value (LHV) of gas for each MJ of energy output, corresponding to an energy recovery efficiency of about 27%. A General Electric 7FA 2x1 combined cycle plant, one of the most efficient natural gas plants in operation, uses 1.85 MJ (LHV) of gas per MJ generated, a 54% thermal efficiency.
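The quoted figure can be checked with simple arithmetic: counting the electricity and gas inputs on an equal energy basis gives roughly 1 MJ out per 3.7 MJ in, as the short Python check below (illustrative only) shows.

# Rough energy-recovery check using the McIntosh figures quoted above.
electricity_in = 2.5   # MJ of electricity per MJ of output
gas_in_lhv = 1.2       # MJ (LHV) of natural gas per MJ of output
print(f"energy recovery ~ {1.0 / (electricity_in + gas_in_lhv):.0%}")   # about 27%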
To improve the efficiency of diabatic CAES systems, modern designs incorporate heat recovery units that capture waste heat during compression, thereby reducing energy losses and enhancing overall performance.
=== Isothermal ===
Isothermal compression and expansion approaches attempt to maintain operating temperature by constant heat exchange to the environment. In a reciprocating compressor, this can be achieved by using a finned piston and low cycle speeds. Current challenges in effective heat exchangers mean that they are only practical for low power levels. The theoretical efficiency of isothermal energy storage approaches 100% for perfect heat transfer to the environment. In practice, neither of these perfect thermodynamic cycles is obtainable, as some heat losses are unavoidable, leading to a near-isothermal process. Recent developments in isothermal CAES focus on advanced thermal management techniques and materials that maintain constant air temperatures during compression and expansion, minimizing energy losses and improving system efficiency.
=== Near-isothermal ===
Near-isothermal compression (and expansion) is a process in which a gas is compressed in very close proximity to a large incompressible thermal mass such as a heat-absorbing and -releasing structure (HARS) or a water spray. A HARS is usually made up of a series of parallel fins. As the gas is compressed, the heat of compression is rapidly transferred to the thermal mass, so the gas temperature is stabilized. An external cooling circuit is then used to maintain the temperature of the thermal mass. The isothermal efficiency (Z) is a measure of where the process lies between an adiabatic and isothermal process. If the efficiency is 0%, then it is totally adiabatic; with an efficiency of 100%, it is totally isothermal. Typically with a near-isothermal process, an isothermal efficiency of 90–95% can be expected.
=== Hybrid CAES systems ===
Hybrid Compressed Air Energy Storage (H-CAES) systems integrate renewable energy sources, such as wind or solar power, with traditional CAES technology. This integration allows for the storage of excess renewable energy generated during periods of low demand, which can be released during peak demand to enhance grid stability and reduce reliance on fossil fuels. For instance, the Apex CAES Plant in Texas combines wind energy with CAES to provide a consistent energy output, addressing the intermittency of renewable energy sources.
=== Other ===
One implementation of isothermal CAES uses high-, medium-, and low-pressure pistons in series. Each stage is followed by an airblast venturi pump that draws ambient air over an air-to-air (or air-to-seawater) heat exchanger between each expansion stage. Early compressed-air torpedo designs used a similar approach, substituting seawater for air. The venturi warms the exhaust of the preceding stage and admits this preheated air to the following stage. This approach was widely adopted in various compressed-air vehicles such as H. K. Porter, Inc.'s mining locomotives and trams. Here, the heat of compression is effectively stored in the atmosphere (or sea) and returned later on.
== Compressors and expanders ==
Compression can be done with electrically-powered turbo-compressors and expansion with turbo-expanders or air engines driving electrical generators to produce electricity.
== Storage ==
Air storage vessels vary in the thermodynamic conditions of the storage and on the technology used:
Constant volume storage (solution-mined caverns, above-ground vessels, aquifers, automotive applications, etc.)
Constant pressure storage (underwater pressure vessels, hybrid pumped hydro / compressed air storage)
=== Constant-volume storage ===
This storage system uses a chamber with specific boundaries to store large amounts of air. This means from a thermodynamic point of view that this system is a constant-volume and variable-pressure system. This causes some operational problems for the compressors and turbines, so the pressure variations have to be kept below a certain limit, as do the stresses induced on the storage vessels.
The storage vessel is often a cavern created by solution mining (salt is dissolved in water for extraction) or by using an abandoned mine; use of porous and permeable rock formations (rocks that have interconnected holes, through which liquid or air can pass), such as those in which reservoirs of natural gas are found, has also been studied.
In some cases, an above-ground pipeline has been tested as a storage system, with good results. The cost of such a system is higher, but it can be placed wherever the designer chooses, whereas an underground system needs particular geologic formations (salt domes, aquifers, depleted gas fields, etc.).
=== Constant-pressure storage ===
In this case, the storage vessel is kept at constant pressure, while the gas is contained in a variable-volume vessel. Many types of storage vessels have been proposed, generally relying on liquid displacement to achieve isobaric operation. In such cases, the storage vessel is positioned hundreds of meters below ground level, and the hydrostatic pressure (head) of the water column above the storage vessel maintains the pressure at the desired level.
This configuration allows:
Improvement of the energy density of the storage system because all the air contained can be used (the pressure is constant in all charge conditions, full or empty, so the turbine has no problem exploiting it, while with constant-volume systems, if the pressure goes below a safety limit, then the system needs to stop).
Removal of the requirement of throttling prior to the expansion.
Avoidance of mixing of heat at different temperatures in the Thermal Energy Storage system, which leads to irreversibility.
Improvement of the efficiency of the turbomachinery, which will work under constant-inlet conditions.
Use of various geographic locations for the positioning of the CAES plant (coastal lines, floating platforms, etc.).
On the other hand, the cost of this storage system is higher due to the need to position the storage vessel on the bottom of the chosen water reservoir (often the ocean) and due to the cost of the vessel itself.
A different approach consists of burying a large bag under several meters of sand instead of water.
Plants operate on a peak-shaving daily cycle, charging at night and discharging during the day. Heating the compressed air using natural gas or geothermal heat to increase the amount of energy being extracted has been studied by the Pacific Northwest National Laboratory.
Compressed-air energy storage can also be employed on a smaller scale, such as exploited by air cars and air-driven locomotives, and can use high-strength (e.g., carbon-fiber) air-storage tanks. In order to retain the energy stored in compressed air, this tank should be thermally isolated from the environment; otherwise, the energy stored will escape in the form of heat, because compressing air raises its temperature.
== Environmental impact ==
CAES systems are often considered an environmentally friendly alternative to other large-scale energy storage technologies due to their reliance on naturally occurring resources, such as salt caverns for air storage and ambient air as the working medium. Unlike lithium-ion batteries, which require the extraction of finite resources such as lithium and cobalt, CAES has a minimal environmental footprint during its lifecycle.
However, the construction of CAES facilities presents unique challenges. Underground air storage requires geological formations such as salt domes, which are geographically limited. Inappropriate siting or mismanagement during construction can lead to disruptions in local ecosystems, land subsidence, or groundwater contamination.
On the positive side, CAES systems integrated with renewable energy sources contribute to a significant reduction in greenhouse gas emissions by enabling the storage and dispatch of clean energy during peak demand. Additionally, repurposing depleted natural gas fields or other geological formations for air storage can mitigate environmental impacts and extend the usefulness of existing infrastructure.
=== Economic considerations ===
The cost of implementing CAES systems depends heavily on the geological conditions of the site, the scale of the facility, and the type of CAES process used (adiabatic, diabatic, or isothermal). Initial capital expenditures are significant, often ranging from $500 to $1,200 per kW for large-scale systems. These costs primarily include the development of underground storage caverns, compression and expansion equipment, and thermal energy storage units (for advanced systems).
Despite the high upfront costs, CAES facilities have long operational lifespans, often exceeding 30 years, with low maintenance and operational costs compared to lithium-ion battery storage systems, which require periodic replacements. This long-term cost efficiency makes CAES particularly attractive for electric utility companies and grid operators.
=== Policy and regulation ===
Market trends suggest growing interest in CAES technology due to increasing renewable energy integration and the need for grid-scale energy storage. Government incentives and declining costs of advanced components, such as high-efficiency compressors and turbines, are further enhancing the economic feasibility of CAES.
Government policies and regulatory frameworks are critical in determining the pace of CAES adoption and development. Countries like Germany and the United States have implemented various incentives, including tax credits and grants, to promote energy storage technologies. For instance, the U.S. Department of Energy's Energy Storage Grand Challenge includes CAES as a key focus area for research and development funding.
One of the significant regulatory hurdles for CAES is the permitting process for underground air storage facilities. Environmental impact assessments, land use approvals, and safety standards for high-pressure storage systems can delay or increase costs for CAES projects. For example, projects sited near urban areas often face additional scrutiny due to concerns about noise pollution, air quality, and potential risks associated with high-pressure air storage.
Internationally, efforts are underway to standardize the design, operation, and safety protocols for CAES systems. Organizations like the International Energy Agency (IEA) and regional bodies such as the European Union have been instrumental in developing frameworks to support the integration of CAES into modern energy grids. As renewable energy adoption accelerates, policies aimed at addressing intermittency challenges will likely prioritize grid-scale solutions like CAES.
== History ==
Citywide compressed air energy systems for delivering mechanical power directly via compressed air have been built since 1870. Cities such as Paris, France; Birmingham, England; Dresden, Rixdorf, and Offenbach, Germany; and Buenos Aires, Argentina, installed such systems. Victor Popp constructed the first systems to power clocks by sending a pulse of air every minute to change their pointer arms. They quickly evolved to deliver power to homes and industries. As of 1896, the Paris system had 2.2 MW of generation distributed at 550 kPa in 50 km of air pipes for motors in light and heavy industry. Usage was measured in cubic meters. The systems were the main source of house-delivered energy in those days and also powered the machines of dentists, seamstresses, printing facilities, and bakeries.
The first utility-scale diabatic compressed-air energy storage project was the 290-megawatt Huntorf plant opened in 1978 in Germany using a salt dome cavern with a capacity of 580 megawatt-hours (2,100 GJ) and a 42% efficiency.
A plant that could store up to 2,860 megawatt-hours (10,300 GJ) (and produce up to 110 MW for 26 hours) was built in McIntosh, Alabama in 1991. The Alabama facility's $65 million cost equals $590 per kW of power capacity and about $23 per kW⋅h of storage capacity. It uses a nineteen-million-cubic-foot (540,000 m3) solution-mined salt cavern to store air at up to 1,100 psi (7,600 kPa). Although the compression phase is approximately 82% efficient, the expansion phase requires the combustion of natural gas at one-third the rate of a gas turbine producing the same amount of electricity at 54% efficiency.
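Dividing the quoted plant cost by its power and storage capacities reproduces the unit costs given above; the short Python check below is purely illustrative.

# Unit-cost check for the McIntosh plant figures quoted above.
cost_usd = 65e6        # plant cost
power_kw = 110e3       # 110 MW of power capacity
energy_kwh = 2.86e6    # 2,860 MWh of storage capacity
print(f"${cost_usd / power_kw:.0f} per kW, ${cost_usd / energy_kwh:.0f} per kWh")
# roughly $590 per kW and $23 per kWh, matching the values in the text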
In 2012, General Compression completed construction of a two-megawatt near-isothermal project in Gaines County, Texas, the world's third such project. The project uses no fuel. It appears to have stopped operating in 2016.
In 2017 FLASC, from the University of Malta, deployed an isothermal compressed-air energy storage prototype in the Grand Harbour on the Maltese islands. The prototype was a small-scale 300 W, 530 Wh test unit operating at 11.5 bar, which achieved more than 96% thermal efficiency. There are ongoing projects to set up these systems for offshore wind energy storage in the Netherlands, and a "one stop shop" for renewable energy and storage designed for small islands is to be trialed on Oinousses in Greece.
A 60 MW, 300 MW⋅h facility with 60% efficiency opened in Jiangsu, China, using a salt cavern (2022).
A 2.5 MW, 4 MW⋅h compressed CO2 closed-cycle facility started operating in Sardinia, Italy (2022).
In 2022, Zhangjiakou connected the world's first 100 MW storage system to the grid in north China. It uses supercritical thermal storage, supercritical heat exchange, and high-load compression and expansion technologies. The plant can store 400 MW⋅h with 70.4% efficiency. Construction of a 350 MW, 1.4 GW⋅h salt-cavern project started in Shandong at a cost of $208 million, operating in 2024 with 64% efficiency, and construction of a four-hour, 700 MW, 2.8 GW⋅h facility started in China in 2024.
== Largest CAES facilities ==
== Projects ==
In 2009, the US Department of Energy awarded $24.9 million in matching funds for phase one of a 300 MW, $356 million Pacific Gas and Electric Company installation using a saline porous rock formation being developed near Bakersfield in Kern County, California. The goals of the project were to build and validate an advanced design.
In 2010, the US Department of Energy provided $29.4 million in funding to conduct preliminary work on a 150-MW salt-based project being developed by Iberdrola USA in Watkins Glen, New York. The goal is to incorporate smart grid technology to balance renewable intermittent energy sources.
The first adiabatic project, a 200-megawatt facility called ADELE, was planned for construction in Germany (2013) with a target of 70% efficiency by using 600 °C (1,112 °F) air at 100 bars of pressure. This project was delayed for undisclosed reasons until at least 2016.
Storelectric Ltd planned to build a 40-MW 100% renewable energy pilot plant in Cheshire, UK, with 800 MWh of storage capacity (2017).
Hydrostor completed the first commercial A-CAES system in Goderich, Ontario, supplying 2.2 MW / 10 MWh of storage to the Ontario grid (2019). It was the first A-CAES system to achieve commercial operation in decades.
The European-Union-funded RICAS (adiabatic) project in Austria was to use crushed rock to store heat from the compression process to improve efficiency (2020). The system was expected to achieve 70–80% efficiency.
Apex planned a plant for Anderson County, Texas, to go online in 2016. This project has been delayed until at least 2020.
Canadian company Hydrostor planned to build four A-CAES plants, in Toronto, Goderich, Angas, and Rosamond (2020). Some included partial heat storage in water, improving efficiency to 65%.
As of 2022, the Gem project at Rosamond in Kern County, California, was planned to provide 500 MW / 4,000 MWh of storage. The Pecho project in San Luis Obispo, California, was planned to be 400 MW / 3,200 MWh. The Broken Hill project in New South Wales, Australia was 200 MW / 1,600 MWh.
In 2023, Alliant Energy announced plans to construct a 200-MWh compressed CO2 facility based on the Sardinia facility in Columbia County, Wisconsin. It will be the first of its kind in the United States.
Compressed air for energy storage may also be stored in undersea caves in Northern Ireland.
== Storage thermodynamics ==
In order to keep losses negligible, so that most of the energy put into the system is saved and can be retrieved, the process must be close to thermodynamically reversible; a near-reversible isothermal process or an isentropic process is therefore desired.
=== Isothermal storage ===
In an isothermal compression process, the gas in the system is kept at a constant temperature throughout. This necessarily requires an exchange of heat with the gas; otherwise, the temperature would rise during charging and drop during discharge. This heat exchange can be achieved by heat exchangers (intercooling) between subsequent stages in the compressor, regulator, and tank. To avoid wasted energy, the intercoolers must be optimized for high heat transfer and low pressure drop. Smaller compressors can approximate isothermal compression even without intercooling, due to the relatively high ratio of surface area to volume of the compression chamber and the resulting improvement in heat dissipation from the compressor body itself.
When one obtains perfect isothermal storage (and discharge), the process is said to be "reversible". This requires that the heat transfer between the surroundings and the gas occur over an infinitesimally small temperature difference. In that case, there is no exergy loss in the heat transfer process, and so the compression work can be completely recovered as expansion work: 100% storage efficiency. However, in practice, there is always a temperature difference in any heat transfer process, and so all practical energy storage obtains efficiencies lower than 100%.
To estimate the compression/expansion work in an isothermal process, it may be assumed that the compressed air obeys the ideal gas law:
{\displaystyle pV=nRT={\text{constant}}.}
For a process from an initial state A to a final state B, with absolute temperature T = T_A = T_B constant, one finds the work required for compression (negative) or done by the expansion (positive) to be
{\displaystyle {\begin{aligned}W_{A\to B}&=\int _{V_{A}}^{V_{B}}p\,dV=\int _{V_{A}}^{V_{B}}{\frac {nRT}{V}}dV=nRT\int _{V_{A}}^{V_{B}}{\frac {1}{V}}dV\\&=nRT(\ln {V_{B}}-\ln {V_{A}})=nRT\ln {\frac {V_{B}}{V_{A}}}=p_{A}V_{A}\ln {\frac {p_{A}}{p_{B}}}=p_{B}V_{B}\ln {\frac {p_{A}}{p_{B}}},\\\end{aligned}}}
where pV = p_A V_A = p_B V_B, and so V_B/V_A = p_A/p_B.
Here p is the absolute pressure, V_A is the (unknown) volume of gas compressed, V_B is the volume of the vessel, n is the amount of substance of gas (mol), and R is the ideal gas constant.
If there is a constant pressure outside of the vessel, equal to the starting pressure p_A, the positive work of the outer pressure reduces the exploitable energy (negative value). This adds a term to the equation above:
{\displaystyle W_{A\to B}=p_{A}V_{A}\ln {\frac {p_{A}}{p_{B}}}+(V_{A}-V_{B})p_{A}=p_{B}V_{B}\ln {\frac {p_{A}}{p_{B}}}+(p_{B}-p_{A})V_{B}.}
Example
How much energy can be stored in a 1 m3 storage vessel at a pressure of 70 bars (7.0 MPa), if the ambient pressure is 1 bar (0.10 MPa)? In this case, the process work is
{\displaystyle W=p_{B}V_{B}\ln {\frac {p_{A}}{p_{B}}}+(p_{B}-p_{A})V_{B}}
= 7.0 MPa × 1 m3 × ln(0.1 MPa/7.0 MPa) + (7.0 MPa − 0.1 MPa) × 1 m3 = −22.8 MJ.
The negative sign means that work is done on the gas by the surroundings. Process irreversibilities (such as in heat transfer) will result in less energy being recovered from the expansion process than is required for the compression process. If the environment is at a constant temperature, for example, then the thermal resistance in the intercoolers will mean that the compression occurs at a temperature somewhat higher than the ambient temperature, and the expansion will occur at a temperature somewhat lower than the ambient temperature. So a perfect isothermal storage system is impossible to achieve.
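The worked example can be reproduced directly from the expression above; the short Python sketch below (purely illustrative) evaluates the isothermal term together with the correction for the surrounding atmospheric pressure.

# Reproduces the 70 bar / 1 m^3 worked example above (isothermal, ideal gas).
import math

p_A = 0.1e6      # Pa, ambient pressure (state A, uncompressed)
p_B = 7.0e6      # Pa, storage pressure (state B)
V_B = 1.0        # m^3, vessel volume

W = p_B * V_B * math.log(p_A / p_B) + (p_B - p_A) * V_B   # J
print(f"W = {W / 1e6:.1f} MJ")   # about -22.8 MJ: work done on the gas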
=== Adiabatic (isentropic) storage ===
An adiabatic process is one where there is no heat transfer between the fluid and the surroundings: the system is insulated against heat transfer. If the process is furthermore internally reversible (frictionless, to the ideal limit), then it will additionally be isentropic.
An adiabatic storage system does away with the intercooling during the compression process and simply allows the gas to heat up during compression and likewise cool down during expansion. This is attractive since the energy losses associated with the heat transfer are avoided, but the downside is that the storage vessel must be insulated against heat loss. It should also be mentioned that real compressors and turbines are not isentropic, but instead have an isentropic efficiency of around 85%. The result is that round-trip storage efficiency for adiabatic systems is also considerably less than perfect.
=== Large storage system thermodynamics ===
Energy storage systems often use large caverns. This is the preferred system design due to the very large volume and thus the large quantity of energy that can be stored with only a small pressure change. The gas is compressed adiabatically with little temperature change (approaching a reversible isothermal system) and heat loss (approaching an isentropic system). This advantage is in addition to the low cost of constructing the gas storage system, using the underground walls to assist in containing the pressure. The cavern space can be insulated to improve efficiency.
Undersea insulated airbags that have similar thermodynamic properties to large cavern storage have been suggested.
== Vehicle applications ==
=== Practical constraints in transportation ===
In order to use air storage in vehicles or aircraft for practical land or air transportation, the energy storage system must be compact and lightweight. Energy density and specific energy are the engineering terms that define these desired qualities.
==== Specific energy, energy density, and efficiency ====
As explained in the thermodynamics of the gas storage section above, compressing air heats it, and expansion cools it. Therefore, practical air engines require heat exchangers in order to avoid excessively high or low temperatures, and even so do not reach ideal constant-temperature conditions or ideal thermal insulation.
Nevertheless, as stated above, it is useful to describe the maximum energy storable using the isothermal case, which works out to about 100 kJ/m3 × ln(P_A/P_B).
Thus if 1.0 m3 of air from the atmosphere is very slowly compressed into a 5 L bottle at 20 MPa (200 bar), then the potential energy stored is 530 kJ. A highly efficient air motor can transfer this into kinetic energy if it runs very slowly and manages to expand the air from its initial 20 MPa pressure down to 100 kPa (bottle completely "empty" at atmospheric pressure). Achieving high efficiency is a technical challenge both due to heat loss to the ambient and to unrecoverable internal gas heat. If the bottle above is emptied to 1 MPa, then the extractable energy is about 300 kJ at the motor shaft.
A standard 20-MPa, 5-L steel bottle has a mass of 7.5 kg, and a superior one 5 kg. High-tensile-strength fibers such as carbon fiber or Kevlar can weigh below 2 kg in this size, consistent with the legal safety codes. One cubic meter of air at 20 °C has a mass of 1.204 kg at standard temperature and pressure. Thus, theoretical specific energies are from roughly 70 kJ/kg at the motor shaft for a plain steel bottle to 180 kJ/kg for an advanced fiber-wound one, whereas practical achievable specific energies for the same containers would be from 40 to 100 kJ/kg.
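The figures above follow from the same isothermal expression; the short Python sketch below (purely illustrative, using the steel-bottle mass quoted in the text) recomputes the ideal stored energy and the corresponding specific energy.

# Ideal isothermal energy stored in a 5 L bottle filled to 20 MPa (example above).
import math

p0, p1 = 0.1e6, 20.0e6    # Pa: atmospheric pressure and bottle pressure
V_ambient = 1.0           # m^3 of ambient air slowly compressed into the 5 L bottle

E = p0 * V_ambient * math.log(p1 / p0)       # J, ideal isothermal stored energy
print(f"stored energy ~ {E / 1e3:.0f} kJ")    # about 530 kJ, as in the text
print(f"steel bottle  ~ {E / 1e3 / 7.5:.0f} kJ/kg of bottle mass")   # roughly 70 kJ/kg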
==== Safety ====
As with most technologies, compressed air has safety concerns, mainly catastrophic tank rupture. Safety regulations make this a rare occurrence at the cost of higher weight and additional safety features such as pressure relief valves. Regulations may limit the legal working pressure to less than 40% of the rupture pressure for steel bottles (for a safety factor of 2.5) and less than 20% for fiber-wound bottles (safety factor 5). Commercial designs adopt the ISO 11439 standard. High-pressure bottles are fairly strong so that they generally do not rupture in vehicle crashes.
=== Comparison with batteries ===
Advanced fiber-reinforced bottles are comparable to the rechargeable lead–acid battery in terms of energy density. Batteries provide nearly-constant voltage over their entire charge level, whereas the pressure varies greatly while using a pressure vessel from full to empty. It is technically challenging to design air engines to maintain high efficiency and sufficient power over a wide range of pressures. Compressed air can transfer power at very high flux rates, which meets the principal acceleration and deceleration objectives of transportation systems, particularly for hybrid vehicles.
Compressed air systems have advantages over conventional batteries, including longer lifetimes of pressure vessels and lower material toxicity. Newer battery designs such as those based on lithium iron phosphate chemistry suffer from neither of these problems. Compressed air costs are potentially lower; however, advanced pressure vessels are costly to develop and safety-test and at present are more expensive than mass-produced batteries.
As with electric storage technology, compressed air is only as "clean" as the source of the energy that it stores. Life cycle assessment addresses the question of overall emissions from a given energy storage technology combined with a given mix of generation on a power grid.
=== Engine ===
A pneumatic motor or compressed-air engine uses the expansion of compressed air to drive the pistons of an engine, turn the axle, or to drive a turbine.
The following methods can increase efficiency:
A continuous expansion turbine at high efficiency
Multiple expansion stages
Use of waste heat, notably in a hybrid heat engine design
Use of environmental heat
A highly efficient arrangement uses high, medium, and low pressure pistons in series, with each stage followed by an airblast venturi that draws ambient air over an air-to-air heat exchanger. This warms the exhaust of the preceding stage and admits this preheated air to the following stage. The only exhaust gas from each stage is cold air, which can be as cold as −15 °C (5 °F); the cold air may be used for air conditioning in a car.
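To see why staged expansion with interstage re-warming helps, the sketch below compares the ideal shaft work per kilogram of air for one, three, and ten expansion stages, each re-warmed to ambient temperature before expanding. The gas properties and the 200:1 overall pressure ratio are illustrative assumptions of mine, not figures from the source.

```python
GAMMA = 1.4            # ratio of specific heats for air
CP = 1005.0            # J/(kg*K), specific heat of air at constant pressure
T_AMBIENT = 293.0      # K, assumed ambient temperature
PRESSURE_RATIO = 200   # e.g. a 20 MPa tank expanded down to 100 kPa

def staged_expansion_work(n_stages, r=PRESSURE_RATIO, t0=T_AMBIENT):
    """Ideal isentropic expansion work with re-warming to t0 before every stage, J/kg."""
    per_stage_ratio = r ** (1.0 / n_stages)
    w_stage = CP * t0 * (1.0 - per_stage_ratio ** (-(GAMMA - 1.0) / GAMMA))
    return n_stages * w_stage

for n in (1, 3, 10):
    print(f"{n} stage(s): {staged_expansion_work(n) / 1e3:.0f} kJ/kg")
# Output climbs from ~230 kJ/kg (single stage) toward the ~446 kJ/kg isothermal limit.
```

More stages approach the isothermal limit for this pressure ratio, which is why practical designs chain several stages with heat exchangers between them.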
Additional heat can be supplied by burning fuel, as in 1904 for the Whitehead torpedo. This improves the range and speed available for a given tank volume at the cost of the additional fuel.
==== Cars ====
Since about 1990, several companies have claimed to be developing compressed-air cars, but none is commercially available. Typically, the main claimed advantages are no roadside pollution, low cost, use of cooking oil for lubrication, and integrated air conditioning.
The time required to refill a depleted tank is important for vehicle applications. "Volume transfer" moves pre-compressed air from a stationary tank to the vehicle tank almost instantaneously. Alternatively, a stationary or on-board compressor can compress air on demand, possibly requiring several hours.
==== Ships ====
Large marine diesel engines have started using compressed air, typically stored in large bottles between 20 and 30 bar, acting directly on the pistons via special starting valves to turn the crankshaft prior to beginning fuel injection. This arrangement is more compact and cheaper than an electric starter motor would be at such scales and able to supply the necessary burst of extremely high power without placing a prohibitive load on the ship's electrical generators and distribution system. Compressed air is commonly also used, at lower pressures, to control the engine and act as the spring force acting on the cylinder exhaust valves, and to operate other auxiliary systems and power tools on board, sometimes including pneumatic PID controllers. One advantage of this approach is that, in the event of an electrical blackout, ship systems powered by stored compressed air can continue functioning uninterrupted, and generators can be restarted without an electrical supply. Another is that pneumatic tools can be used in commonly-wet environments without the risk of electric shock.
==== Hybrid vehicles ====
While the air storage system offers a relatively low power density and vehicle range, its high efficiency is attractive for hybrid vehicles that use a conventional internal combustion engine as the main power source. The air storage can be used for regenerative braking and to optimize the cycle of the piston engine, which is not equally efficient at all power/RPM levels.
Bosch and PSA Peugeot Citroën have developed a hybrid system that uses hydraulics as a way to transfer energy to and from a compressed nitrogen tank. An up-to-45% reduction in fuel consumption is claimed, corresponding to 2.9 L / 100 km (81 mpg, 69 g CO2/km) on the New European Driving Cycle (NEDC) for a compact frame like the Peugeot 208. The system is claimed to be much more affordable than competing electric and flywheel KERS systems and was expected on road cars by 2016.
=== History of air engines ===
Air engines have been used since the 19th century to power mine locomotives, pumps, drills, and trams, via centralized, city-level distribution. Racecars use compressed air to start their internal combustion engine (ICE), and large diesel engines may have starting pneumatic motors.
== Types of systems ==
=== Hybrid systems ===
Brayton cycle engines compress and heat air with a fuel suitable for an internal combustion engine. For example, burning natural gas or biogas heats compressed air, and then a conventional gas turbine engine or the rear portion of a jet engine expands it to produce work.
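For reference, the ideal (cold-air-standard) Brayton efficiency depends only on the pressure ratio. The small sketch below uses illustrative pressure ratios of my choosing and ignores the fact that in a CAES hybrid the compression work is supplied earlier from stored off-peak energy.

```python
GAMMA = 1.4  # ratio of specific heats for air

def brayton_efficiency(pressure_ratio, gamma=GAMMA):
    """Ideal (cold-air-standard) Brayton-cycle thermal efficiency."""
    return 1.0 - pressure_ratio ** (-(gamma - 1.0) / gamma)

for r in (10, 20, 30):
    print(f"pressure ratio {r}: ideal thermal efficiency {brayton_efficiency(r):.0%}")
```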
Compressed air engines can recharge an electric battery. The apparently-defunct Energine promoted its Pne-PHEV or Pneumatic Plug-in Hybrid Electric Vehicle-system.
==== Existing hybrid systems ====
Hybrid power plants were commissioned at Huntorf, Germany, in 1978 and at McIntosh, Alabama, United States, in 1991. Both systems use off-peak energy for air compression and burn natural gas in the compressed air during the power-generating phase.
==== Future hybrid systems ====
The Iowa Stored Energy Park (ISEP) would have used aquifer storage rather than cavern storage. The ISEP was an innovative, 270-megawatt, $400 million compressed air energy storage (CAES) project proposed for in-service near Des Moines, Iowa, in 2015. The project was terminated after eight years in development because of geological limitations of the site, according to the U.S. Department of Energy.
Additional facilities are under development in Norton, Ohio. FirstEnergy, an Akron, Ohio, electric utility, obtained development rights to the 2,700-MW Norton project in November 2009.
The RICAS2020 project attempts to use an abandoned mine for adiabatic CAES with heat recovery. The compression heat is stored in a tunnel section filled with loose stones, so the compressed air is nearly cool when entering the main pressure storage chamber. The cool compressed air regains the heat stored in the stones when released back through a surface turbine, leading to higher overall efficiency. A two-stage process has a higher theoretical efficiency of around 70%.
=== Underwater storage ===
==== Bag/tank ====
Deep water in lakes and the ocean can provide pressure without requiring high-pressure vessels or drilling. The air goes into inexpensive, flexible containers such as plastic bags. Obstacles include the limited number of suitable locations and the need for high-pressure pipelines between the surface and the containers. Given the low cost of the containers, great pressure (and great depth) may not be as important. A key benefit of such systems is that charge and discharge pressures are a constant function of depth. Carnot inefficiencies can be reduced by using multiple charge and discharge stages and by using inexpensive heat sources and sinks such as cold water from rivers or hot water from solar ponds.
==== Hydroelectric ====
A nearly isobaric solution is possible by using the compressed gas to drive a hydroelectric system. This solution requires large pressure tanks on land (as well as underwater airbags). Hydrogen gas is the preferred fluid, since other gases suffer from substantial hydrostatic pressures at even relatively modest depths (~500 meters).
European electrical utility company E.ON has provided €1.4 million (£1.1 million) in funding to develop undersea air storage bags. Hydrostor in Canada is developing a commercial system of underwater storage "accumulators" for compressed air energy storage, starting at the 1- to 4-MW scale.
==== Buoy ====
When excess wind energy is available from offshore wind turbines, a spool-tethered buoy can be pushed below the surface. When electricity demand rises, the buoy is allowed to rise towards the surface, generating power.
=== Nearly isothermal compression ===
A number of methods of nearly isothermal compression are being developed. Fluid Mechanics has a system with a heat absorbing and releasing structure (HARS) attached to a reciprocating piston. Light Sail injects a water spray into a reciprocating cylinder. SustainX uses an air-water foam mix inside a semi-custom, 120-rpm compressor/expander. All these systems ensure that the air is compressed with high thermal diffusivity compared to the speed of compression. Typically these compressors can run at speeds up to 1000 rpm. To ensure high thermal diffusivity, the average distance a gas molecule is from a heat-absorbing surface is about 0.5 mm. These nearly-isothermal compressors can also be used as nearly-isothermal expanders and are being developed to improve the round-trip efficiency of CAES.
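A rough way to see what "high thermal diffusivity compared to the speed of compression" means is to compare the heat-diffusion time over the quoted 0.5 mm with the stroke time at the quoted compressor speed. The property values below are typical textbook figures, not numbers from the source, so this is only an order-of-magnitude sketch.

```python
ALPHA_AIR = 2.0e-5   # m^2/s, approximate thermal diffusivity of air near room temperature
L_GAS = 0.5e-3       # m, quoted average distance from a gas molecule to a heat-absorbing surface
RPM = 1000           # upper end of the quoted compressor speed

diffusion_time = L_GAS ** 2 / ALPHA_AIR    # ~0.0125 s
stroke_time = 0.5 * 60.0 / RPM             # ~0.03 s for a half-revolution compression stroke

print(f"diffusion ~{diffusion_time * 1e3:.0f} ms vs. stroke ~{stroke_time * 1e3:.0f} ms")
# Heat escapes faster than the stroke proceeds, so the gas stays close to the temperature
# of the injected water, foam, or absorber structure: compression is nearly isothermal.
```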
== See also ==
Alternative fuel vehicle
Fireless locomotive
Grid energy storage
Hydraulic accumulator
List of energy storage power plants
Pneumatics
Zero-emissions vehicle
Cryogenic energy storage
Compressed-air engine
== References ==
== External links ==
Compressed Air System of Paris – technical notes Part 1 Part 2 Part 3 Part 4 Part 5 Part 6 (Special supplement, Scientific American, 1921)
Solution to some of country's energy woes might be little more than hot air (Sandia National Labs, DoE).
MSNBC article, Cities to Store Wind Power for Later Use, January 4, 2006
Power storage: Trapped wind
Catching The Wind In A Bottle A group of Midwest utilities is building a plant that will store excess wind power underground
New York Times Article: Technology; Using Compressed Air To Store Up Electricity
Compressed Air Energy Storage, Entropy and Efficiency | Wikipedia/Compressed-air_energy_storage |
Specialized wind energy software applications aid in the development and operation of wind farms.
== Pre-feasibility and feasibility analysis ==
The RETScreen software wind power model is designed to evaluate energy production and savings, costs, emission reductions, financial viability and risk for central-grid, isolated-grid and off-grid wind energy projects, for multi-turbine and single-turbine hybrid systems. Developed by the Government of Canada, the software is multilingual, and includes links to wind energy resource maps.
The Wind Data Generator (WDG) is a wind energy software tool capable of running the WRF (Weather Research and Forecasting) model to create a wind atlas and to generate wind data at resolutions of 3 km to 10 km.
== Turbine design ==
Software helps design wind turbines. There are several aero-elastic packages that are used in this design process.
FOCUS6 aids in the design of wind turbines and turbine components such as rotor blades. It was developed by Knowledge Centre Wind turbine Materials and Constructions (WMC) and Energy Research Centre of the Netherlands (ECN).
The National Wind Technology Center (NWTC), a division of the U.S. National Renewable Energy Laboratory (NREL), has developed many packages which are used by turbine manufacturers and researchers. NWTC has developed a suite of turbine design and performance prediction codes which rely on Blade Element Momentum (BEM) theory. WTPerf uses steady BEM theory to model turbine performance. FAST is a comprehensive aero-elastic simulator which uses unsteady BEM theory to model a turbine as a collection of rigid and flexible bodies in a spatiotemporal field of turbulent flow. Germanischer Lloyd found FAST suitable for "the calculation of onshore wind turbine loads for design and certification." OpenFAST is an open-source wind turbine simulation tool that was established with the FAST v8 code as its starting point in 2018.
The open source software QBlade, developed by the wind energy research group of the Hermann Föttinger Institute of TU Berlin (Chair of Fluid Dynamics), is a BEM code coupled with the airfoil simulation code XFOIL. It allows the user to develop or import airfoil shapes, simulate them, and use them for the design and simulation of wind turbine blades and rotors with steady-state BEM theory. The software is built with the Qt framework and thus includes a graphical user interface.
The open source software Vortexje, developed by Baayen & Heinz GmbH in Berlin, is an unsteady 3D panel method implementation suitable for dynamic simulation of vertical and horizontal axis wind turbines. Easily coupled with other simulation environments such as Simulink and Dymola, it is suitable for aerodynamic optimization, fluid-structure interaction problems, and unsteady control system simulation.
Ashes is a software package for analyzing aerodynamic and mechanical forces for onshore and offshore horizontal axis wind turbines. It is based on research done at the Norwegian University of Science and Technology in Trondheim, Norway.
== Flow modeling ==
Wind flow modeling software predicts important wind characteristics at locations where measurements are not available. Furow offers both a linear flow model and a computational fluid dynamics model in the same package. WAsP was created at Denmark's Risø National Laboratory. WAsP uses a potential flow model to predict how wind flows over terrain at a site. Meteodyn WT, Windie, WindSim, WindStation and the open-source code ZephyTOOLS use computational fluid dynamics instead, which is potentially more accurate but more computationally intensive.
== Farm modeling ==
This software simulates wind farm behavior, most importantly to calculate its energy output. The user can usually input wind data, height and roughness contour lines (topography), turbine specifications, background maps, and define environmental restrictions. Processing this information produces the design of a wind farm that maximizes energy production while accounting for restrictions and construction issues. Packages include Furow, Meteodyn WT, openWind, WindFarm, WindFarmer: Analyst, WindPRO, WindSim and WindStation. WakeBlaster is a specialised CFD service for modelling the wind farm wake losses.
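At its core, the energy-output calculation these packages perform combines a site wind-speed distribution with a turbine power curve. The sketch below is a deliberately simplified stand-in: the Weibull parameters, power-curve shape, and turbine rating are all assumptions of mine, not values taken from any package named above.

```python
import math

def weibull_pdf(v, k=2.0, c=8.0):
    """Weibull probability density of wind speed v (m/s) with shape k and scale c."""
    return (k / c) * (v / c) ** (k - 1) * math.exp(-((v / c) ** k))

def power_curve(v, rated_kw=2000.0, cut_in=3.0, rated_speed=12.0, cut_out=25.0):
    """Very simplified power curve: cubic ramp between cut-in and rated speed."""
    if v < cut_in or v > cut_out:
        return 0.0
    if v >= rated_speed:
        return rated_kw
    return rated_kw * ((v ** 3 - cut_in ** 3) / (rated_speed ** 3 - cut_in ** 3))

dv = 0.1
aep_kwh = 8760.0 * sum(power_curve(v) * weibull_pdf(v) * dv
                       for v in [i * dv for i in range(1, 400)])
print(f"estimated annual energy production: {aep_kwh / 1e6:.1f} GWh")
```

Real tools add wake losses, availability, air-density corrections, and terrain effects on top of this basic integral.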
== Farm visualization ==
Wind farm visualization software graphically presents a proposed wind farm, most importantly for the purpose of obtaining building permits. The primary techniques include photomontages, zone-of-visual-impact maps and three-dimensional visualization (perspective views of the landscape often incorporating aerial photography and including turbines and other objects).
== Farm monitoring ==
Wind farm monitoring software lets operators check whether the turbines are running properly and spot developing faults. Other functions of monitoring software include reporting, analysis of measurement data (e.g., power curves), and tools for monitoring environmental constraints (bat control, etc.).
== Prediction software ==
For existing wind farms, several software systems exist which produce short and medium term forecasts for the generated power (single farms or complete forecast regions) using existing numerical weather prediction data (NWP) and live (SCADA) farm data as input. Examples of numerical weather prediction models used for this purpose are the European HiRLAM (High Resolution Limited Area Model) and the GFS (Global Forecast System) from NOAA. Open-source systems like Anemos, developed through European research initiatives, provide advanced forecasting capabilities for wind power integration into energy grids, while proprietary tools such as WindCast© by WindDeep employ machine learning to enhance prediction accuracy and optimize operational efficiency.
== References == | Wikipedia/Wind_energy_software |
In chemistry, a hydrate is a substance that contains water or its constituent elements. The chemical state of the water varies widely between different classes of hydrates, some of which were so labeled before their chemical structure was understood.
== Chemical nature ==
=== Inorganic chemistry ===
Hydrates are inorganic salts "containing water molecules combined in a definite ratio as an integral part of the crystal" that are either bound to a metal center or that have crystallized with the metal complex. Such hydrates are also said to contain water of crystallization or water of hydration. If the water is heavy water, in which the constituent hydrogen is the isotope deuterium, then the term deuterate may be used in place of hydrate.
A colorful example is cobalt(II) chloride, which turns from blue to red upon hydration, and can therefore be used as a water indicator.
The notation "hydrated compound⋅nH2O", where n is the number of water molecules per formula unit of the salt, is commonly used to show that a salt is hydrated. The n is usually a low integer, though it is possible for fractional values to occur. For example, in a monohydrate n = 1, and in a hexahydrate n = 6. Numerical prefixes, mostly of Greek origin, are: hemi- (1/2), mono- (1), sesqui- (3/2), di- (2), tri- (3), tetra- (4), penta- (5), hexa- (6), hepta- (7), octa- (8), nona- (9), deca- (10), undeca- (11) and dodeca- (12).
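As a small worked example of what the ⋅nH2O notation implies for composition, the sketch below computes the water mass fraction of copper(II) sulfate pentahydrate from standard atomic masses; the compound is chosen here purely for illustration and is not mentioned in this article.

```python
M_H2O = 18.015                            # g/mol
M_CUSO4 = 63.546 + 32.06 + 4 * 15.999     # g/mol for the anhydrous salt

n = 5                                     # pentahydrate
m_total = M_CUSO4 + n * M_H2O
water_fraction = n * M_H2O / m_total
print(f"CuSO4.5H2O is {water_fraction:.1%} water by mass")   # about 36%
```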
A hydrate that has lost water is referred to as an anhydride; the remaining water, if any exists, can only be removed with very strong heating. A substance that does not contain any water is referred to as anhydrous. Some anhydrous compounds are hydrated so easily that they are said to be hygroscopic and are used as drying agents or desiccants.
=== Organic chemistry ===
In organic chemistry, a hydrate is a compound formed by the hydration, i.e. "Addition of water or of the elements of water (i.e. H and OH) to a molecular entity". For example: ethanol, CH3−CH2−OH, is the product of the hydration reaction of ethene, CH2=CH2, formed by the addition of H to one C and OH to the other C, and so can be considered as the hydrate of ethene. A molecule of water may be eliminated, for example, by the action of sulfuric acid. Another example is chloral hydrate, CCl3−CH(OH)2, which can be formed by reaction of water with chloral, CCl3−CH=O.
Many organic molecules, as well as inorganic molecules, form crystals that incorporate water into the crystalline structure without chemical alteration of the organic molecule (water of crystallization). The sugar trehalose, for example, exists in both an anhydrous form (melting point 203 °C) and as a dihydrate (melting point 97 °C). Protein crystals commonly have as much as 50% water content.
Molecules are also labeled as hydrates for historical reasons not covered above. Glucose, C6H12O6, was originally thought of as C6(H2O)6 and described as a carbohydrate.
Hydrate formation is common for active pharmaceutical ingredients. Many manufacturing processes provide an opportunity for hydrates to form, and the state of hydration can change with environmental humidity and time. The state of hydration of an active pharmaceutical ingredient can significantly affect its solubility and dissolution rate and therefore its bioavailability.
=== Clathrate hydrates ===
Clathrate hydrates (also known as gas hydrates, gas clathrates, etc.) are water ice with gas molecules trapped within; they are a form of clathrate. An important example is methane hydrate (also known as gas hydrate, methane clathrate, etc.).
Nonpolar molecules, such as methane, can form clathrate hydrates with water, especially under high pressure. Although there is no hydrogen bonding between water and guest molecules when methane is the guest molecule of the clathrate, guest–host hydrogen bonding often forms when the guest is a larger organic molecule such as tetrahydrofuran. In such cases, the guest–host hydrogen bonds result in the formation of L-type Bjerrum defects in the clathrate lattice.
== Stability ==
The stability of hydrates is generally determined by the nature of the compounds, their temperature, and the relative humidity (if they are exposed to air).
== See also ==
== References == | Wikipedia/Hydrate |
Superconducting magnetic energy storage (SMES) systems store energy in the magnetic field created by the flow of direct current in a superconducting coil that has been cryogenically cooled to a temperature below its superconducting critical temperature. This use of superconducting coils to store magnetic energy was invented by M. Ferrier in 1970.
A typical SMES system includes three parts: superconducting coil, power conditioning system and cryogenically cooled refrigerator. Once the superconducting coil is energized, the current will not decay and the magnetic energy can be stored indefinitely.
The stored energy can be released back to the network by discharging the coil. The power conditioning system uses an inverter/rectifier to transform alternating current (AC) power to direct current or convert DC back to AC power. The inverter/rectifier accounts for about 2–3% energy loss in each direction. SMES loses the least amount of electricity in the energy storage process compared to other methods of storing energy. SMES systems are highly efficient; the round-trip efficiency is greater than 95%.
Due to the energy requirements of refrigeration and the high cost of superconducting wire, SMES is currently used for short duration energy storage. Therefore, SMES is most commonly devoted to improving power quality.
== Advantages over other energy storage methods ==
There are several reasons for using superconducting magnetic energy storage instead of other energy storage methods. The most important advantage of SMES is that the time delay during charge and discharge is quite short. Power is available almost instantaneously and very high power output can be provided for a brief period of time. Other energy storage methods, such as pumped hydro or compressed air, have a substantial time delay associated with the energy conversion of stored mechanical energy back into electricity. Thus if demand is immediate, SMES is a viable option. Another advantage is that the loss of power is less than other storage methods because electric currents encounter almost no resistance. Additionally the main parts in a SMES are motionless, which results in high reliability.
== Current use ==
There are several small SMES units available for commercial use and several larger test bed projects. Several 1 MW·h units are used for power quality control in installations around the world, especially to provide power quality at manufacturing plants requiring ultra-clean power, such as microchip fabrication facilities.
These facilities have also been used to provide grid stability in distribution systems. SMES is also used in utility applications. In northern Wisconsin, a string of distributed SMES units were deployed to enhance stability of a transmission loop. The transmission line is subject to large, sudden load changes due to the operation of a paper mill, with the potential for uncontrolled fluctuations and voltage collapse.
The Engineering Test Model is a large SMES with a capacity of approximately 20 MW·h, capable of providing 40 MW of power for 30 minutes or 10 MW of power for 2 hours.
== System architecture ==
A SMES system typically consists of four parts
Superconducting magnet and supporting structure
This system includes the superconducting coil, a magnet and the coil protection. Here the energy is stored by disconnecting the coil from the larger system and then using electromagnetic induction from the magnet to induce a current in the superconducting coil. This coil then preserves the current until the coil is reconnected to the larger system, after which the coil partly or fully discharges.
Refrigeration system
The refrigeration system maintains the superconducting state of the coil by cooling the coil to the operating temperature.
Power conditioning system
The power conditioning system typically contains a power conversion system that converts DC to AC current and the other way around.
Control system
The control system monitors the power demand of the grid and controls the power flow from and to the coil. The control system also manages the condition of the SMES coil by controlling the refrigerator.
== Working principle ==
As a consequence of Faraday's law of induction, any loop of wire that generates a changing magnetic field also generates an electric field. This process takes energy out of the wire through the electromotive force (EMF). EMF is defined as the electromagnetic work done on a unit charge when it has traveled once around a conductive loop. The energy can now be seen as stored in the field. This process draws energy from the wire with a power equal to the electric potential ℰ (the EMF) times the total charge, divided by time. From the power we can calculate the work needed to create such a field; by conservation of energy, this work must equal the energy stored in the field.
{\displaystyle P=Q{\mathcal {E}}/t}
This formula can be rewritten in terms of the more easily measured electric current by substituting Q/t = I:
{\displaystyle P=Q{\mathcal {E}}/t=I{\mathcal {E}},}
where I is the electric current in amperes. For a coil, the EMF ℰ is related to the inductance and can be written as:
{\displaystyle {\mathcal {E}}=L{\frac {dI}{dt}}}
Substitution now gives:
{\displaystyle P=IL{\frac {dI}{dt}},}
where L is a proportionality constant called the inductance, measured in henries. Now that the power is known, all that is left to do is substitute it into the work equation to find the work.
{\displaystyle W=\int _{0}^{T}Pdt=\int _{0}^{I}IL{\frac {dI}{dt}}dt=\int _{0}^{I}ILdI={\frac {LI^{2}}{2}}}
As said earlier the work has to be equal to the energy stored in the field. This entire calculation is based on a single looped wire. For wires that are looped multiple times the inductance L increases, as L is simply defined as the ratio between the voltage and rate of change of the current. In conclusion the stored energy in the coil is equal to:
{\displaystyle E={\frac {LI^{2}}{2}}}
where
E = energy measured in joules
L = inductance measured in henries
I = current measured in amperes
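A quick numerical illustration of the E = LI²/2 result derived above follows; the coil parameters are invented for illustration and are not taken from any real SMES installation.

```python
def stored_energy(inductance_h, current_a):
    """Magnetic energy stored in a coil, E = L*I^2/2, in joules."""
    return 0.5 * inductance_h * current_a ** 2

L_COIL = 10.0  # henries, an assumed coil inductance
for current in (1000.0, 2000.0):
    print(f"I = {current:.0f} A -> {stored_energy(L_COIL, current) / 1e6:.0f} MJ")
# Doubling the current quadruples the stored energy (5 MJ -> 20 MJ here), which is why
# raising the conductor's critical current is so valuable for SMES.
```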
Consider a cylindrical coil with conductors of a rectangular cross section. The mean radius of coil is R. a and b are width and depth of the conductor. f is called form function, which is different for different shapes of coil. ξ (xi) and δ (delta) are two parameters to characterize the dimensions of the coil. We can therefore write the magnetic energy stored in such a cylindrical coil as shown below. This energy is a function of coil dimensions, number of turns and carrying current.
{\displaystyle E=RN^{2}I^{2}f(\xi ,\delta )/2}
where
E = energy measured in joules
I = current measured in amperes
f(ξ, δ) = form function, joules per ampere-meter
N = number of turns of coil
== Solenoid versus toroid ==
Besides the properties of the wire, the configuration of the coil itself is an important issue from a mechanical engineering aspect. There are three factors that affect the design and the shape of the coil – they are: Inferior strain tolerance, thermal contraction upon cooling and Lorentz forces in an energized coil. Among them, the strain tolerance is crucial not because of any electrical effect, but because it determines how much structural material is needed to keep the SMES from breaking. For small SMES systems, the optimistic value of 0.3% strain tolerance is selected. Toroidal geometry can help to lessen the external magnetic forces and therefore reduces the size of mechanical support needed. Also, due to the low external magnetic field, toroidal SMES can be located near a utility or customer load.
For small SMES, solenoids are usually used because they are easy to coil and no pre-compression is needed. In toroidal SMES, the coil is always under compression by the outer hoops and two disks, one of which is on the top and the other is on the bottom to avoid breakage. Currently, there is little need for toroidal geometry for small SMES, but as the size increases, mechanical forces become more important and the toroidal coil is needed.
The older large SMES concepts usually featured a low aspect ratio solenoid approximately 100 m in diameter buried in earth. At the low extreme of size is the concept of micro-SMES solenoids, for energy storage range near 1 MJ.
== Low-temperature versus high-temperature superconductors ==
Under steady state conditions and in the superconducting state, the coil resistance is negligible. However, the refrigerator necessary to keep the superconductor cool requires electric power and this refrigeration energy must be considered when evaluating the efficiency of SMES as an energy storage device.
Although high-temperature superconductors (HTS) have higher critical temperature, flux lattice melting takes place in moderate magnetic fields around a temperature lower than this critical temperature. The heat loads that must be removed by the cooling system include conduction through the support system, radiation from warmer to colder surfaces, AC losses in the conductor (during charge and discharge), and losses from the cold–to-warm power leads that connect the cold coil to the power conditioning system. Conduction and radiation losses are minimized by proper design of thermal surfaces. Lead losses can be minimized by good design of the leads. AC losses depend on the design of the conductor, the duty cycle of the device and the power rating.
The refrigeration requirements for HTSC and low-temperature superconductor (LTSC) toroidal coils, for the baseline temperatures of 77 K, 20 K, and 4.2 K, increase in that order. The refrigeration requirement here is defined as the electrical power needed to operate the refrigeration system. As the stored energy increases by a factor of 100, refrigeration cost only goes up by a factor of 20. Also, the savings in refrigeration for an HTSC system are larger (by 60% to 70%) than for LTSC systems.
== Cost ==
Whether HTSC or LTSC systems are more economical depends on other major components that determine the cost of SMES: the conductor, consisting of superconductor and copper stabilizer, and the cold support are major costs in themselves. They must be judged against the overall efficiency and cost of the device. Other components, such as vacuum vessel insulation, have been shown to be a small part compared to the large coil cost. The combined costs of conductors, structure and refrigerator for toroidal coils are dominated by the cost of the superconductor. The same trend is true for solenoid coils. HTSC coils cost more than LTSC coils by a factor of 2 to 4. HTSC was expected to be cheaper due to lower refrigeration requirements, but this is not the case.
To gain some insight into costs, consider a breakdown by major components of both HTSC and LTSC coils corresponding to three typical stored energy levels, 2, 20 and 200 MW·h. The conductor cost dominates the three costs for all HTSC cases and is particularly important at small sizes. The principal reason lies in the comparative current density of LTSC and HTSC materials. The critical current of HTSC wire is generally lower than that of LTSC wire in the operating magnetic field, about 5 to 10 teslas (T). Assume the wire costs are the same by weight. Because HTSC wire has a lower critical current density (Jc) than LTSC wire, it takes much more wire to create the same inductance. Therefore, the cost of HTSC wire is much higher than that of LTSC wire. Also, as the SMES size goes up from 2 to 20 to 200 MW·h, the LTSC conductor cost also goes up about a factor of 10 at each step. The HTSC conductor cost rises a little more slowly but is still by far the costliest item.
The structure costs of either HTSC or LTSC go up uniformly (a factor of 10) with each step from 2 to 20 to 200 MW·h. But HTSC structure cost is higher because the strain tolerance of the HTSC (ceramics cannot carry much tensile load) is less than that of LTSC conductors such as NbTi or Nb3Sn, which demands more structural material. Thus, in the very large cases, the HTSC cost cannot be offset by simply reducing the coil size at a higher magnetic field.
It is worth noting here that the refrigerator cost in all cases is so small that there is very little percentage savings associated with reduced refrigeration demands at high temperature. This means that if a HTSC, BSCCO for instance, works better at a low temperature, say 20K, it will certainly be operated there. For very small SMES, the reduced refrigerator cost will have a more significant positive impact.
Clearly, the volume of superconducting coils increases with the stored energy. Also, we can see that the LTSC torus maximum diameter is always smaller for a HTSC magnet than LTSC due to higher magnetic field operation. In the case of solenoid coils, the height or length is also smaller for HTSC coils, but still much higher than in a toroidal geometry (due to low external magnetic field).
An increase in peak magnetic field yields a reduction in both volume (higher energy density) and cost (reduced conductor length). Smaller volume means higher energy density and cost is reduced due to the decrease of the conductor length. There is an optimum value of the peak magnetic field, about 7 T in this case. If the field is increased past the optimum, further volume reductions are possible with minimal increase in cost. The limit to which the field can be increased is usually not economic but physical and it relates to the impossibility of bringing the inner legs of the toroid any closer together and still leave room for the bucking cylinder.
The superconductor material is a key issue for SMES. Superconductor development efforts focus on increasing Jc and strain range and on reducing the wire manufacturing cost.
== Applications ==
The energy density, efficiency and the high discharge rate make SMES useful systems to incorporate into modern energy grids and green energy initiatives. The SMES system's uses can be categorized into three categories: power supply systems, control systems and emergency/contingency systems.
FACTS
FACTS (flexible AC transmission system) devices are static devices that can be installed in electricity grids. These devices are used to enhance the controllability and power transfer capability of an electric power grid. The application of SMES in FACTS devices was the first application of SMES systems. The first realization of SMES in a FACTS device was installed by the Bonneville Power Administration in 1980. This system uses SMES to damp low-frequency oscillations, which contributes to the stabilization of the power grid. In 2000, SMES-based FACTS systems were introduced at key points in the northern Wisconsin power grid to enhance the stability of the grid.
Load leveling
The use of electric power requires a stable energy supply that delivers constant power. This stability depends on the amount of power used and the amount of power generated. Power usage varies throughout the day and across the seasons. SMES systems can be used to store energy when the generated power is higher than the demand (load) and to release power when the load is higher than the generated power, thereby compensating for power fluctuations. Using these systems makes it possible for conventional generating units to operate at a constant output, which is more efficient and convenient. However, when the power imbalance between supply and demand lasts for a long time, the SMES may become completely discharged.
Load frequency control
When the load does not meet the generated power output, due to a load perturbation, this can cause the load to be larger than the rated power output of the generators. This for example can happen when wind generators don't spin due to a sudden lack of wind. This load perturbation can cause a load-frequency control problem. This problem can be amplified in DFIG-based wind power generators. This load disparity can be compensated by power output from SMES systems that store energy when the generation is larger than the load. SMES based load frequency control systems have the advantage of a fast response when compared to contemporary control systems.
Uninterruptible power supplies
Uninterruptible Power Supplies (UPS) are used to protect against power surges and shortfalls by supplying a continuous power supply. This compensation is done by switching from the failing power supply to an SMES system that can almost instantaneously supply the necessary power to continue the operation of essential systems. SMES-based UPS are most useful in systems that need to be kept at certain critical loads.
Circuit breaker reclosing
When the power angle difference across a circuit breaker is too large, protective relays prevent the reclosing of the circuit breakers. SMES systems can be used in these situations to reduce the power angle difference across the circuit breaker. Thereby allowing the reclosing of the circuit breaker. These systems allow the quick restoration of system power after major transmission line outages.
Spinning reserve
Spinning reserve is the extra generating capacity that is available by increasing the power generation of systems that are connected to the grid. This capacity is reserved by the system operator to compensate for disruptions in the power grid. Due to the fast recharge times and fast alternating-current-to-direct-current conversion of SMES systems, these systems can be used as spinning reserve when a major grid component or transmission line is out of service.
SFCL
Superconducting fault current limiters (SFCL) are used to limit current under a fault in the grid. In this system a superconductor is quenched (raised in temperature) when a fault in the gridline is detected. By quenching the superconductor the resistance rises and the current is diverted to other grid lines. This is done without interrupting the larger grid. Once the fault is cleared, the SFCL temperature is lowered and becomes invisible to the larger grid.
Electromagnetic launchers
Electromagnetic launchers are electric projectile weapons that use a magnetic field to accelerate projectiles to a very high velocity. These launchers require high power pulse sources in order to work. These launchers can be realised by the use of the quick release capability and the high power density of the SMES system.
== Future developments for SMES systems ==
Future developments in the components of SMES systems could make them more viable for other applications; specifically, superconductors with higher critical temperatures and critical current densities. These limits are the same as those faced in other industrial uses of superconductors. Recent development of HTS wire made of YBCO with a superconducting transition temperature of around 90 K shows promise. Typically, the higher the superconducting transition temperature, the higher the maximum current density the superconductor can sustain before Cooper pair breakdown. A substance with a high critical temperature will generally have a higher critical current at low temperature than a superconductor with a lower critical temperature. This higher critical current will raise the energy storage quadratically, which may make SMES and other industrial applications of superconductors cost-effective.
== Technical challenges ==
The energy content of current SMES systems is usually quite small. Methods to increase the energy stored in SMES often resort to large-scale storage units. As with other superconducting applications, cryogenics are a necessity. A robust mechanical structure is usually required to contain the very large Lorentz forces generated by and on the magnet coils. The dominant cost for SMES is the superconductor, followed by the cooling system and the rest of the mechanical structure.
Mechanical support
Needed because of large Lorentz forces generated by the strong magnetic field acting on the coil, and the strong magnetic field generated by the coil on the larger structure.
Size
To achieve commercially useful levels of storage, around 5 GW·h (18 TJ), an SMES installation would need a loop of around 800 m. This is traditionally pictured as a circle, though in practice it could be more like a rounded rectangle. In either case it would require access to a significant amount of land to house the installation.
Manufacturing
There are two manufacturing issues around SMES. The first is the fabrication of bulk cable suitable to carry the current. The HTSC superconducting materials found to date are relatively delicate ceramics, making it difficult to use established techniques to draw extended lengths of superconducting wire. Much research has focused on layer deposit techniques, applying a thin film of material onto a stable substrate, but this is currently only suitable for small-scale electrical circuits.
Infrastructure
The second problem is the infrastructure required for an installation. Until room-temperature superconductors are found, the 800 m loop of wire would have to be contained within a vacuum flask of liquid nitrogen. This in turn would require stable support, most commonly envisioned by burying the installation.
Critical magnetic field
Above a certain field strength, known as the critical field, the superconducting state is destroyed. This means that there exists a maximum charging rate for the superconducting material, given that the magnitude of the magnetic field determines the flux captured by the superconducting coil.
Critical current
In general power systems look to maximize the current they are able to handle. This makes any losses due to inefficiencies in the system relatively insignificant. Unfortunately, large currents may generate magnetic fields greater than the critical field due to Ampere's Law. Current materials struggle, therefore, to carry sufficient current to make a commercial storage facility economically viable.
Several issues at the onset of the technology have hindered its proliferation:
Expensive refrigeration units and high power cost to maintain operating temperatures
Existence and continued development of adequate technologies using normal conductors
These still pose problems for superconducting applications but are improving over time. Advances have been made in the performance of superconducting materials. Furthermore, the reliability and efficiency of refrigeration systems has improved significantly.
Long precooling time
At the moment it takes four months to cool the coil from room temperature to its operating temperature. This also means that the SMES takes equally long to return to operating temperature after maintenance and when restarting after operating failures.
Protection
Due to the large amount of energy stored, certain measures need to be taken to protect the coils from damage in the case of coil failure. The rapid release of energy in the case of coil failure might damage surrounding systems. Some conceptual designs propose to incorporate a superconducting cable into the design with the goal of absorbing energy after a coil failure. The system also needs to be kept in excellent electrical isolation in order to prevent loss of energy.
== See also ==
Grid energy storage
== References ==
== Bibliography ==
Sheahen, T., P. (1994). Introduction to High-Temperature Superconductivity. Plenum Press, New York. pp. 66, 76–78, 425–430, 433–446.
El-Wakil, M., M. (1984). Powerplant Technology. McGraw-Hill, pp. 685–689, 691–695.
Wolsky, A., M. (2002). The status and prospects for flywheels and SMES that incorporate HTS. Physica C 372–376, pp. 1,495–1,499.
Hassenzahl, W.V. (March 2001). "Superconductivity, an enabling technology for 21st century power systems?". IEEE Transactions on Applied Superconductivity. 11 (1): 1447–1453. Bibcode:2001ITAS...11.1447H. doi:10.1109/77.920045.
== Further reading ==
Browne, Malcome W. (January 6, 1988). "New Hunt for Ideal Energy Storage System". The New York Times.
== External links ==
Cost Analysis of Energy Storage Systems for Electric Utility Applications
Loyola SMES summary | Wikipedia/Superconducting_magnetic_energy_storage |
A rechargeable battery, storage battery, or secondary cell (formally a type of energy accumulator), is a type of electrical battery which can be charged, discharged into a load, and recharged many times, as opposed to a disposable or primary battery, which is supplied fully charged and discarded after use. It is composed of one or more electrochemical cells. The term "accumulator" is used as it accumulates and stores energy through a reversible electrochemical reaction. Rechargeable batteries are produced in many different shapes and sizes, ranging from button cells to megawatt systems connected to stabilize an electrical distribution network. Several different combinations of electrode materials and electrolytes are used, including lead–acid, zinc–air, nickel–cadmium (NiCd), nickel–metal hydride (NiMH), lithium-ion (Li-ion), lithium iron phosphate (LiFePO4), and lithium-ion polymer (Li-ion polymer).
Rechargeable batteries typically initially cost more than disposable batteries but have a much lower total cost of ownership and environmental impact, as they can be recharged inexpensively many times before they need replacing. Some rechargeable battery types are available in the same sizes and voltages as disposable types, and can be used interchangeably with them. Billions of dollars in research are being invested around the world for improving batteries as industry focuses on building better batteries.
== Applications ==
Devices which use rechargeable batteries include automobile starters, portable consumer devices, light vehicles (such as motorized wheelchairs, golf carts, electric bicycles, and electric forklifts), road vehicles (cars, vans, trucks, motorbikes), trains, small airplanes, tools, uninterruptible power supplies, and battery storage power stations. Emerging applications in hybrid internal combustion-battery and electric vehicles drive the technology to reduce cost, weight, and size, and increase lifetime.
Older rechargeable batteries self-discharge relatively rapidly and require charging before first use; some newer low self-discharge NiMH batteries hold their charge for many months, and are typically sold factory-charged to about 70% of their rated capacity.
Battery storage power stations use rechargeable batteries for load-leveling (storing electric energy at times of low demand for use during peak periods) and for renewable energy uses (such as storing power generated from photovoltaic arrays during the day to be used at night). Load-leveling reduces the maximum power which a plant must be able to generate, reducing capital cost and the need for peaking power plants.
According to a report from Research and Markets, the analysts forecast the global rechargeable battery market to grow at a CAGR of 8.32% during the period 2018–2022.
Small rechargeable batteries can power portable electronic devices, power tools, appliances, and so on. Heavy-duty batteries power electric vehicles, ranging from scooters to locomotives and ships. They are used in distributed electricity generation and in stand-alone power systems.
== Charging and discharging ==
During charging, the positive active material is oxidized, releasing electrons, and the negative material is reduced, absorbing electrons. These electrons constitute the current flow in the external circuit. The electrolyte may serve as a simple buffer for internal ion flow between the electrodes, as in lithium-ion and nickel-cadmium cells, or it may be an active participant in the electrochemical reaction, as in lead–acid cells.
The energy used to charge rechargeable batteries usually comes from a battery charger using AC mains electricity, although some are equipped to use a vehicle's 12-volt DC power outlet. The voltage of the source must be higher than that of the battery to force current to flow into it, but not too much higher or the battery may be damaged.
Chargers take from a few minutes to several hours to charge a battery. Slow "dumb" chargers without voltage or temperature-sensing capabilities will charge at a low rate, typically taking 14 hours or more to reach a full charge. Rapid chargers can typically charge cells in two to five hours, depending on the model, with the fastest taking as little as fifteen minutes. Fast chargers must have multiple ways of detecting when a cell reaches full charge (change in terminal voltage, temperature, etc.) to stop charging before harmful overcharging or overheating occurs. The fastest chargers often incorporate cooling fans to keep the cells from overheating. Battery packs intended for rapid charging may include a temperature sensor that the charger uses to protect the pack; the sensor will have one or more additional electrical contacts.
Different battery chemistries require different charging schemes. For example, some battery types can be safely recharged from a constant voltage source. Other types need to be charged with a regulated current source that tapers as the battery reaches fully charged voltage. Charging a battery incorrectly can damage a battery; in extreme cases, batteries can overheat, catch fire, or explosively vent their contents.
=== Rate of discharge ===
Battery charging and discharging rates are often discussed by referencing a "C" rate of current. The C rate is that which would theoretically fully charge or discharge the battery in one hour. For example, trickle charging might be performed at C/20 (or a "20-hour" rate), while typical charging and discharging may occur at C/2 (two hours for full capacity). The available capacity of electrochemical cells varies depending on the discharge rate. Some energy is lost in the internal resistance of cell components (plates, electrolyte, interconnections), and the rate of discharge is limited by the speed at which chemicals in the cell can move about. For lead-acid cells, the relationship between time and discharge rate is described by Peukert's law; a lead-acid cell that can no longer sustain a usable terminal voltage at a high current may still have usable capacity, if discharged at a much lower rate. Data sheets for rechargeable cells often list the discharge capacity on 8-hour or 20-hour or other stated time; cells for uninterruptible power supply systems may be rated at 15-minute discharge.
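As a sketch of how C rates and Peukert's law play out numerically, the snippet below uses an assumed 100 Ah capacity, a 20-hour rating, and a Peukert exponent typical of lead-acid cells; these are example values of mine, not figures from this article.

```python
def c_rate_current(capacity_ah, c_rate):
    """Current in amperes corresponding to a given C rate, e.g. C/20 -> c_rate = 0.05."""
    return capacity_ah * c_rate

def peukert_runtime_h(capacity_ah, current_a, k=1.2, rated_hours=20.0):
    """Approximate discharge time under Peukert's law: t = H * (C / (I*H))**k."""
    return rated_hours * (capacity_ah / (current_a * rated_hours)) ** k

cap = 100.0  # Ah, rated at the 20-hour discharge
print(c_rate_current(cap, 0.05), "A is the C/20 current")            # 5 A
print(f"{peukert_runtime_h(cap, 5.0):.1f} h at 5 A")                 # ~20 h, as rated
print(f"{peukert_runtime_h(cap, 50.0):.1f} h at 50 A (C/2 rate)")    # well under the ideal 2 h
```

The last line illustrates the point made above: at high discharge rates a lead-acid cell delivers noticeably less than its nominal capacity.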
The terminal voltage of the battery is not constant during charging and discharging. Some types have relatively constant voltage during discharge over much of their capacity. Non-rechargeable alkaline and zinc–carbon cells output 1.5 V when new, but this voltage drops with use. Most NiMH AA and AAA cells are rated at 1.2 V, but have a flatter discharge curve than alkalines and can usually be used in equipment designed to use alkaline batteries.
Battery manufacturers' technical notes often refer to voltage per cell (VPC) for the individual cells that make up the battery. For example, to charge a 12 V lead-acid battery (containing 6 cells of 2 V each) at 2.3 VPC requires a voltage of 13.8 V across the battery's terminals.
=== Damage from cell reversal ===
Subjecting a discharged cell to a current in the direction which tends to discharge it further to the point the positive and negative terminals switch polarity causes a condition called cell reversal. Generally, pushing current through a discharged cell in this way causes undesirable and irreversible chemical reactions to occur, resulting in permanent damage to the cell.
Cell reversal can occur under a number of circumstances, the two most common being:
When a battery or cell is connected to a charging circuit the wrong way around.
When a battery made of several cells connected in series is deeply discharged.
In the latter case, the problem occurs due to the different cells in a battery having slightly different capacities. When one cell reaches discharge level ahead of the rest, the remaining cells will force the current through the discharged cell.
Many battery-operated devices have a low-voltage cutoff that prevents deep discharges from occurring that might cause cell reversal. A smart battery has voltage monitoring circuitry built inside.
Cell reversal can occur to a weakly charged cell even before it is fully discharged. If the battery drain current is high enough, the cell's internal resistance can create a resistive voltage drop that is greater than the cell's forward emf. This results in the reversal of the cell's polarity while the current is flowing. The higher the required discharge rate of a battery, the better matched the cells should be, both in the type of cell and state of charge, in order to reduce the chances of cell reversal.
In some situations, such as when correcting NiCd batteries that have been previously overcharged, it may be desirable to fully discharge a battery. To avoid damage from the cell reversal effect, it is necessary to access each cell separately: each cell is individually discharged by connecting a load clip across the terminals of each cell, thereby avoiding cell reversal.
=== Damage during storage in fully discharged state ===
If a multi-cell battery is fully discharged, it will often be damaged due to the cell reversal effect mentioned above.
It is possible however to fully discharge a battery without causing cell reversal—either by discharging each cell separately, or by allowing each cell's internal leakage to dissipate its charge over time.
Even if a cell is brought to a fully discharged state without reversal, however, damage may occur over time simply due to remaining in the discharged state. An example of this is the sulfation that occurs in lead-acid batteries that are left sitting on a shelf for long periods.
For this reason it is often recommended to charge a battery that is intended to remain in storage, and to maintain its charge level by periodically recharging it.
Since damage may also occur if the battery is overcharged, the optimal level of charge during storage is typically around 30% to 70%.
=== Depth of discharge ===
Depth of discharge (DOD) is normally stated as a percentage of the nominal ampere-hour capacity; 0% DOD means no discharge. As the usable capacity of a battery system depends on the rate of discharge and the allowable voltage at the end of discharge, the depth of discharge must be qualified to show the way it is to be measured. Due to variations during manufacture and aging, the DOD for complete discharge can change over time or number of charge cycles. Generally a rechargeable battery system will tolerate more charge/discharge cycles if the DOD is lower on each cycle. Lithium batteries can discharge to about 80 to 90% of their nominal capacity, lead-acid batteries to about 50–60%, and flow batteries to 100%.
=== Lifespan and cycle stability ===
If batteries are used repeatedly even without mistreatment, they lose capacity as the number of charge cycles increases, until they are eventually considered to have reached the end of their useful life. Different battery systems have differing mechanisms for wearing out. For example, in lead-acid batteries, not all the active material is restored to the plates on each charge/discharge cycle; eventually enough material is lost that the battery capacity is reduced. In lithium-ion types, especially on deep discharge, some reactive lithium metal can be formed on charging, which is no longer available to participate in the next discharge cycle. Sealed batteries may lose moisture from their liquid electrolyte, especially if overcharged or operated at high temperature. This reduces the cycling life.
=== Recharging time ===
Recharging time is an important parameter to the user of a product powered by rechargeable batteries. Even if the charging power supply provides enough power to operate the device as well as recharge the battery, the device is attached to an external power supply during the charging time. For electric vehicles used industrially, charging during off-shifts may be acceptable. For highway electric vehicles, rapid charging is necessary for charging in a reasonable time.
A rechargeable battery cannot be recharged at an arbitrarily high rate. The internal resistance of the battery will produce heat, and excessive temperature rise will damage or destroy a battery. For some types, the maximum charging rate will be limited by the speed at which active material can diffuse through a liquid electrolyte. High charging rates may produce excess gas in a battery, or may result in damaging side reactions that permanently lower the battery capacity. Very roughly, and with many exceptions and caveats, restoring a battery's full capacity in one hour or less is considered fast charging. A battery charger system will include more complex control-circuit- and charging strategies for fast charging, than for a charger designed for slower recharging.
== Active components ==
The active components in a secondary cell are the chemicals that make up the positive and negative active materials, and the electrolyte. The positive and negative electrodes are made up of different materials, with the positive exhibiting a reduction potential and the negative having an oxidation potential. The sum of the potentials from these half-reactions is the standard cell potential or voltage.
In primary cells the positive and negative electrodes are known as the cathode and anode, respectively. Although this convention is sometimes carried through to rechargeable systems—especially with lithium-ion cells, because of their origins in primary lithium cells—this practice can lead to confusion. In rechargeable cells the positive electrode is the cathode on discharge and the anode on charge, and vice versa for the negative electrode.
== Types ==
=== Commercial types ===
The lead–acid battery, invented in 1859 by French physicist Gaston Planté, is the oldest type of rechargeable battery. Despite having a very low energy-to-weight ratio and a low energy-to-volume ratio, its ability to supply high surge currents means that the cells have a relatively large power-to-weight ratio. These features, along with the low cost, makes it attractive for use in motor vehicles to provide the high current required by automobile starter motors.
The nickel–cadmium battery (NiCd) was invented by Waldemar Jungner of Sweden in 1899. It uses nickel oxide hydroxide and metallic cadmium as electrodes. Cadmium is a toxic element, and was banned for most uses by the European Union in 2004. Nickel–cadmium batteries have been almost completely superseded by nickel–metal hydride (NiMH) batteries.
The nickel–iron battery (NiFe) was also developed by Waldemar Jungner in 1899 and commercialized by Thomas Edison in 1901 in the United States for electric vehicles and railway signalling. It is composed of only non-toxic elements, unlike many kinds of batteries that contain toxic mercury, cadmium, or lead.
The nickel–metal hydride battery (NiMH) became available in 1989. These are now a common consumer and industrial type. The battery has a hydrogen-absorbing alloy for the negative electrode instead of cadmium.
The lithium-ion battery, introduced to the market in 1991, is the choice in most consumer electronics, having the best energy density and a very slow loss of charge when not in use. It does have drawbacks, particularly the risk of unexpected ignition from the heat generated by the battery. Such incidents are rare, and according to experts they can be minimized "via appropriate design, installation, procedures and layers of safeguards" so the risk is acceptable.
Lithium-ion polymer batteries (LiPo) are light in weight, offer slightly higher energy density than Li-ion at slightly higher cost, and can be made in any shape. They are available but have not displaced Li-ion in the market. A primary use for LiPo batteries is powering remote-controlled cars, boats and airplanes. LiPo packs are readily available on the consumer market, in various configurations, up to 44.4 V, for powering certain R/C vehicles and helicopters or drones. Some test reports warn of the risk of fire when the batteries are not used in accordance with the instructions. Independent reviews of the technology discuss the risk of fire and explosion from lithium-ion batteries under certain conditions because they use liquid electrolytes.
=== Other experimental types ===
[A comparison table of experimental battery types appeared here. Its notes defined the tabulated parameters: nominal cell voltage (V), energy density (by weight and by volume), specific power (W/kg), energy per consumer price (W·h/US$), self-discharge rate (%/month), cycle durability (number of cycles), time durability (years), "VRLA or recombinant" (gel and absorbed-glass-mat types), and pilot-production status.]
Several types of lithium–sulfur battery have been developed, and numerous research groups and organizations have demonstrated that batteries based on lithium sulfur can achieve superior energy density to other lithium technologies. Whereas lithium-ion batteries offer energy density in the range of 150–260 Wh/kg, batteries based on lithium-sulfur are expected to achieve 450–500 Wh/kg, and can eliminate cobalt, nickel and manganese from the production process. Furthermore, while initially lithium-sulfur batteries suffered from stability problems, recent research has made advances in developing lithium-sulfur batteries that cycle as long as (or longer than) batteries based on conventional lithium-ion technologies.
The thin-film battery (TFB) is a refinement of lithium-ion technology by Excellatron. The developers claim a large increase in recharge cycles (to around 40,000), higher charge and discharge rates (at least a 5 C charge rate, sustained 60 C discharge, and a 1000 C peak discharge rate), and a significant increase in specific energy and energy density.
Lithium iron phosphate batteries are used in some applications.
UltraBattery, a hybrid lead–acid battery and ultracapacitor invented by Australia's national science organisation CSIRO, exhibits tens of thousands of partial state of charge cycles and has outperformed traditional lead-acid, lithium, and NiMH-based cells when compared in testing in this mode against variability management power profiles. UltraBattery has kW and MW-scale installations in place in Australia, Japan, and the U.S. It has also been subjected to extensive testing in hybrid electric vehicles and has been shown to last more than 100,000 vehicle miles in on-road commercial testing in a courier vehicle. The technology is claimed to have a lifetime of 7 to 10 times that of conventional lead-acid batteries in high rate partial state-of-charge use, with safety and environmental benefits claimed over competitors like lithium-ion. Its manufacturer suggests an almost 100% recycling rate is already in place for the product.
The potassium-ion battery delivers around a million cycles, due to the extraordinary electrochemical stability of potassium insertion/extraction materials such as Prussian blue.
The sodium-ion battery is meant for stationary storage and competes with lead–acid batteries. It aims at a low total cost of ownership per kWh of storage. This is achieved by a long and stable lifetime. The effective number of cycles is above 5000 and the battery is not damaged by deep discharge. The energy density is rather low, somewhat lower than lead–acid.
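One simple way to see why deep discharge and long cycle life matter for "total cost of ownership per kWh" is to divide a purchase price by the energy the system can be expected to deliver over its lifetime. The sketch below does this for two hypothetical packs; every number in it (prices, capacities, cycle counts, efficiency) is a made-up placeholder, not data from the text.

```python
def cost_per_kwh_delivered(price_usd: float, nominal_kwh: float, usable_dod: float,
                           cycles: int, round_trip_efficiency: float = 0.9) -> float:
    """Purchase price divided by the total energy delivered over the pack's cycle life."""
    delivered_kwh = nominal_kwh * usable_dod * cycles * round_trip_efficiency
    return price_usd / delivered_kwh

# Hypothetical deep-cycling pack (full depth of discharge, long cycle life)
deep = cost_per_kwh_delivered(price_usd=3000, nominal_kwh=10, usable_dod=1.0, cycles=5000)
# Hypothetical shallow-cycling pack (limited depth of discharge, shorter cycle life)
shallow = cost_per_kwh_delivered(price_usd=1500, nominal_kwh=10, usable_dod=0.5, cycles=1200)

print(f"Deep-cycling pack:    ${deep:.3f} per kWh delivered")
print(f"Shallow-cycling pack: ${shallow:.3f} per kWh delivered")
```

In this toy example the longer-lived, deeper-cycling pack delivers energy at a lower lifetime cost even though its purchase price is higher.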
== Alternatives ==
A rechargeable battery is only one of several types of rechargeable energy storage systems. Several alternatives to rechargeable batteries exist or are under development. For uses such as portable radios, rechargeable batteries may be replaced by clockwork mechanisms which are wound up by hand, driving dynamos, although this system may be used to charge a battery rather than to operate the radio directly. Flashlights may be driven by a dynamo directly. For transportation, uninterruptible power supply systems and laboratories, flywheel energy storage systems store energy in a spinning rotor for conversion to electric power when needed; such systems may be used to provide large pulses of power that would otherwise be objectionable on a common electrical grid.
Ultracapacitors – capacitors of extremely high value – are also used; an electric screwdriver which charges in 90 seconds and will drive about half as many screws as a device using a rechargeable battery was introduced in 2007, and similar flashlights have been produced. In keeping with the concept of ultracapacitors, betavoltaic batteries may be utilized as a method of providing a trickle-charge to a secondary battery, greatly extending the life and energy capacity of the battery system being employed; this type of arrangement is often referred to as a "hybrid betavoltaic power source" by those in the industry.
Ultracapacitors are being developed for transportation, using a large capacitor to store energy instead of the rechargeable battery banks used in hybrid vehicles. One drawback of capacitors compared to batteries is that the terminal voltage drops rapidly; a capacitor that has 25% of its initial energy left in it will have one-half of its initial voltage. By contrast, battery systems tend to have a terminal voltage that does not decline rapidly until nearly exhausted. This terminal voltage drop complicates the design of power electronics for use with ultracapacitors. However, there are potential benefits in cycle efficiency, lifetime, and weight compared with rechargeable systems. China started using ultracapacitors on two commercial bus routes in 2006; one of them is route 11 in Shanghai.
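The rapid voltage drop follows directly from the ideal capacitor energy relation E = ½CV²: terminal voltage scales with the square root of the remaining energy, so 25% of the stored energy corresponds to 50% of the initial voltage. A minimal sketch of that calculation:

```python
import math

def voltage_fraction(energy_fraction: float) -> float:
    """Remaining terminal voltage as a fraction of the initial voltage for an
    ideal capacitor, from E = 1/2 * C * V**2 (so V scales as sqrt(E))."""
    return math.sqrt(energy_fraction)

for remaining_energy in (1.00, 0.50, 0.25, 0.10):
    v = voltage_fraction(remaining_energy)
    print(f"{remaining_energy:.0%} energy remaining -> {v:.0%} of initial voltage")
```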
Flow batteries, used for specialized applications, are recharged by replacing the electrolyte liquid. A flow battery can be considered to be a type of rechargeable fuel cell.
== Research ==
Rechargeable battery research includes development of new electrochemical systems as well as improving the life span and capacity of current types. Contemporary research in rechargeable batteries increasingly prioritizes unlocking anionic redox mechanisms (e.g., oxygen, sulfur) to transcend the capacity constraints of traditional transition metal cationic redox. While foundational in lithium-rich cathodes (e.g., Li₂MnO₃), this paradigm now extends to sodium-, potassium-, and multivalent-ion systems, where anion participation in charge compensation—enabled by tailored coordination environments or "orphaned" non-bonding orbitals—offers pathways to higher energy densities. Computational tools (density functional theory, machine learning) and operando characterization guide material design, balancing redox activity with structural stability. Innovations like surface fluorination, cation doping, and lattice strain engineering mitigate degradation (e.g., oxygen loss, phase collapse), while emerging solid-state architectures leverage anion mobility for safer, high-rate performance. These trends reflect a broader shift toward redox-flexible frameworks that harmonize resource efficiency with electrochemical resilience.
== See also ==
History of the battery
Comparison of commercial battery types
Energy storage
List of battery types
Battery management system
== References ==
== Further reading ==
Belli, Brita. 'Battery University' Aims to Train a Work Force for Next-Generation Energy Storage, The New York Times, 8 April 2013. Discusses a professional development program at San Jose State University.
Vlasic, Bill. Chinese Firm Wins Bid for Auto Battery Maker, The New York Times, published online 9 December 2012, p. B1.
Cardwell, Diane. Battery Seen as Way to Cut Heat-Related Power Losses, The New York Times, 16 July 2013 online and 17 July 2013 in print, p. B1 (New York City edition). Discusses Eos Energy Systems' zinc–air batteries.
Cardwell, Diane. SolarCity to Use Batteries From Tesla for Energy Storage, The New York Times, 4 December 2013 online and 5 December 2013 in print, p. B2 (New York City edition). Discusses SolarCity, DemandLogic and Tesla Motors.
Galbraith, Kate. In Presidio, a Grasp at the Holy Grail of Energy Storage, The New York Times, 6 November 2010.
Galbraith, Kate. Filling the Gaps in the Flow of Renewable Energy, The New York Times, 22 October 2013.
Witkin, Jim. Building Better Batteries for Electric Cars, The New York Times, 31 March 2011, p. F4. Published online 30 March 2011. Discusses rechargeable batteries and the new-technology lithium ion battery.
Wald, Matthew L. Hold That Megawatt!, The New York Times, 7 January 2011. Discusses AES Energy Storage.
Wald, Matthew L. Green Blog: Is That Onions You Smell? Or Battery Juice?, The New York Times, 9 May 2012. Discusses vanadium redox battery technology.
Wald, Matthew L. Green Blog: Cutting the Electric Bill with a Giant Battery, The New York Times, 27 June 2012. Discusses Saft Groupe S.A.
Wald, Matthew L. Seeking to Start a Silicon Valley for Battery Science, The New York Times, 30 November 2012.
Wald, Matthew L. From Harvard, a Cheaper Storage Battery, The New York Times, 8 January 2014. Discusses research into flow-batteries utilizing carbon-based molecules called quinones.
Witkin, Jim. Green Blog: A Second Life for the Electric Car Battery, The New York Times, 27 April 2011. Discusses ABB and Community Energy Storage and the use of electric vehicle batteries for grid energy storage.
Woody, Todd. Green Blog: When It Comes to Car Batteries, Moore's Law Does Not Compute, The New York Times, 6 September 2010. Discusses lithium-air batteries.
Jang Wook Choi. Promise and reality of post-lithium-ion batteries with high energy densities.
The expression military–industrial complex (MIC) describes the relationship between a country's military and the defense industry that supplies it, seen together as a vested interest which influences public policy. A driving factor behind the relationship between the military and the defense-minded corporations is that both sides benefit—one side from obtaining weapons, and the other from being paid to supply them. The term is most often used in reference to the system behind the armed forces of the United States, where the relationship is most prevalent due to close links among defense contractors, the Pentagon, and politicians. The expression gained popularity after a warning of the relationship's detrimental effects, in the farewell address of U.S. President Dwight D. Eisenhower on January 17, 1961.
Conceptually, it is closely related to the ideas of the iron triangle in the U.S. (the three-sided relationship between Congress, the executive branch bureaucracy, and interest groups) and the defense industrial base (the network of organizations, facilities, and resources that supplies governments with defense-related goods and services).
== Etymology ==
U.S. President Dwight D. Eisenhower originally coined the term in his Farewell Address to the Nation on January 17, 1961:
A vital element in keeping the peace is our military establishment. Our arms must be mighty, ready for instant action, so that no potential aggressor may be tempted to risk his own destruction...
This conjunction of an immense military establishment and a large arms industry is new in the American experience. The total influence—economic, political, even spiritual—is felt in every city, every statehouse, every office of the federal government. We recognize the imperative need for this development. Yet we must not fail to comprehend its grave implications. Our toil, resources and livelihood are all involved; so is the very structure of our society. In the councils of government, we must guard against the acquisition of unwarranted influence, whether sought or unsought, by the military–industrial complex. The potential for the disastrous rise of misplaced power exists, and will persist. We must never let the weight of this combination endanger our liberties or democratic processes. We should take nothing for granted. Only an alert and knowledgeable citizenry can compel the proper meshing of the huge industrial and military machinery of defense with our peaceful methods and goals so that security and liberty may prosper together. [emphasis added]
The phrase was thought to have been "war-based" industrial complex before becoming "military" in later drafts of Eisenhower's speech, a claim passed on only by oral history. Geoffrey Perret, in his biography of Eisenhower, claims that, in one draft of the speech, the phrase was "military–industrial–congressional complex", indicating the essential role that the United States Congress plays in the propagation of the military industry, but the word "congressional" was dropped from the final version to appease elected officials. James Ledbetter calls this a "stubborn misconception" not supported by any evidence; likewise a claim by Douglas Brinkley that it was originally "military–industrial–scientific complex". Henry Giroux claims that it was originally "military–industrial–academic complex". The actual authors of the speech were Eisenhower's speechwriters Ralph E. Williams and Malcolm Moos.
== The MIC and the Cold War ==
Attempts to conceptualize something similar to a modern "military–industrial complex" did exist before 1961, as the underlying phenomenon described by the term is generally agreed to have emerged during or shortly after World War II. For example, a similar phrase was used in a 1947 Foreign Affairs article in a sense close to that which it would later acquire, and sociologist C. Wright Mills contended in his 1956 book The Power Elite that a democratically unaccountable class of military, business, and political leaders with convergent interests exercised the preponderance of power in the contemporary West.
Following its coinage in Eisenhower's address, the MIC became a staple of American political and sociological discourse. Many Vietnam War–era activists and polemicists, such as Seymour Melman and Noam Chomsky employed the concept in their criticism of U.S. foreign policy, while other academics and policymakers found it to be a useful analytical framework. Although the MIC was bound up in its origins with the bipolar international environment of the Cold War, some contended that the MIC might endure under different geopolitical conditions (for example, George F. Kennan wrote in 1987 that "were the Soviet Union to sink tomorrow under the waters of the ocean, the American military–industrial complex would have to remain, substantially unchanged, until some other adversary could be invented."). The collapse of the USSR and the resultant decrease in global military spending (the so-called 'peace dividend') did in fact lead to decreases in defense industrial output and consolidation among major arms producers, although global expenditures rose again following the September 11 attacks and the ensuing "War on terror", as well as the more recent increase in geopolitical tensions associated with strategic competition between the United States, Russia, and China.
== Eras ==
=== First era ===
Some sources divide the history of the United States military–industrial complex into three eras. From 1797 to 1941, the U.S. government relied on civilian industries only while the country was actually at war. The government owned its own shipyards and weapons manufacturing facilities, which it relied on through World War I. With World War II came a massive shift in the way that the U.S. government armed the military.
In World War II, the U.S. President Franklin D. Roosevelt established the War Production Board to coordinate civilian industries and shift them into wartime production. Arms production in the U.S. went from around one percent of annual Gross domestic product (GDP) to 40 percent of GDP. U.S. companies, such as Boeing and General Motors, maintained and expanded their defense divisions. These companies have gone on to develop various technologies that have improved civilian life as well, such as night-vision goggles and GPS.
=== Second era ===
The second era is identified as beginning with the coining of the term by U.S. President Dwight D. Eisenhower. This era continued through the Cold War period, up to the end of the Warsaw Pact and the collapse of the Soviet Union. A 1965 article written by Marc Pilisuk and Thomas Hayden says benefits of the military–industrial complex of the U.S. include the advancement of the civilian technology market as civilian companies benefit from innovations from the MIC and vice versa. In 1993, the Pentagon urged defense contractors to consolidate due to the fall of communism and a shrinking defense budget.
=== Third era ===
In the third era, U.S. defense contractors either consolidated or shifted their focus to civilian innovation. From 1992 to 1997 there was a total of US$55 billion worth of mergers in the defense industry, with major defense companies purchasing smaller competitors. The U.S. domestic economy is now tied to the success of the MIC which has led to concerns of repression as Cold War-era attitudes are still prevalent among the American public. Shifts in values and the collapse of communism have ushered in a new era for the U.S. military–industrial complex. The Department of Defense works in coordination with traditional military–industrial complex aligned companies such as Lockheed Martin and Northrop Grumman. Many former defense contractors have shifted operations to the civilian market and sold off their defense departments. In recent years, traditional defense contracting firms have faced competition from Silicon Valley and other tech companies, like Anduril Industries and Palantir, over Pentagon contracts. This represents a shift in defense strategy away from the procurement of more armaments and toward an increasing role of technologies like cloud computing and cybersecurity in military affairs. From 2019 to 2022, venture capital funding for defense technologies doubled.
== Military subsidy theory ==
According to the military subsidy theory, the Cold War–era mass production of aircraft benefited the U.S. civilian aircraft industry. The theory asserts that the technologies developed during the Cold War along with the financial backing of the military led to the dominance of U.S. aviation companies. There is also strong evidence that the United States federal government intentionally paid a higher price for these innovations to serve as a subsidy for civilian aircraft advancement.
== Current applications ==
According to the Stockholm International Peace Research Institute (SIPRI), total world military expenditure in 2022 was $2,240 billion. 39% of this total, or about $877 billion, was spent by the United States. China was the second largest spender, with $292 billion and 13% of the global share. The privatization of the production and invention of military technology also creates a complicated relationship with significant research and development of many technologies. In 2011, the United States spent more (in absolute numbers) on its military than the next 13 countries combined.
The military budget of the United States for the 2009 fiscal year was $515.4 billion. Adding emergency discretionary spending and supplemental spending brings the sum to $651.2 billion. This does not include many military-related items that are outside of the Defense Department's budget. Overall, the U.S. federal government is spending about $1 trillion annually on military-related purposes.
U.S. President Joe Biden signed a record $886 billion defense spending bill into law on December 22, 2023.
In a 2012 story, Salon reported, "Despite a decline in global arms sales in 2010 due to recessionary pressures, the United States increased its market share, accounting for a whopping 53 percent of the trade that year. Last year saw the United States on pace to deliver more than $46 billion in foreign arms sales." The U.S. military and arms industry also tend to contribute heavily to incumbent members of Congress.
== Political geography ==
The datagraphic represents the 20 largest US defense contractors based on the amount of their defense revenue. Among these corporations, 53.5% of total revenues are derived from defense, and the median proportion is 63.4%; 6 firms derive over 75% of their revenue from defense.
According to the Wikipedia entries for the companies, the headquarters of 11 of these corporations are located in the Washington metropolitan area, of which 5 are in Reston, Virginia.
== Similar concepts ==
A thesis similar to the military–industrial complex was originally expressed by Daniel Guérin, in his 1936 book Fascism and Big Business, about the fascist government ties to heavy industry. It can be defined as, "an informal and changing coalition of groups with vested psychological, moral, and material interests in the continuous development and maintenance of high levels of weaponry, in preservation of colonial markets and in military-strategic conceptions of internal affairs."
An exhibit of the trend was made in Franz Leopold Neumann's book Behemoth: The Structure and Practice of National Socialism in 1942, a study of how Nazism came into a position of power in a democratic state.
Within decades of its inception, the idea of the military–industrial complex gave rise to the ideas of other similar industrial complexes, including:
Animal–industrial complex;
Prison–industrial complex;
Pharmaceutical–industrial complex;
Entertainment–industrial complex;
Medical–industrial complex;
Corporate consumption complex.
Virtually all institutions in sectors ranging from agriculture, medicine, entertainment, and media to education, criminal justice, security, and transportation began to be reconceived and reconstructed in accordance with capitalist, industrial, and bureaucratic models, with the aim of realizing profit, growth, and other imperatives. According to Steven Best, all these systems interrelate and reinforce one another.
The concept of the military–industrial complex has been also expanded to include the entertainment and creative industries as well. For an example in practice, Matthew Brummer describes Japan's Manga Military and how the Ministry of Defense uses popular culture and the moe that it engenders to shape domestic and international perceptions.
An alternative term describing the interdependence between the military–industrial complex and the entertainment industry was coined by James Der Derian: the "Military-Industrial-Media-Entertainment-Network". Ray McGovern extended this appellation to the Military-Industrial-Congressional-Intelligence-Media-Academia-Think-Tank complex, or MICIMATT.
=== Tech–industrial complex ===
In his 2025 farewell address, outgoing U.S. President Joe Biden warned of a 'tech–industrial complex', stating that "Americans are being buried under an avalanche of misinformation and disinformation, enabling the abuse of power." Commentators noted that this statement came amid Elon Musk's upcoming role in the second Donald Trump administration and public overtures toward Trump by technology industry leaders, including Meta's Mark Zuckerberg and Amazon's Jeff Bezos, as well as the dismantling of Facebook's fact-checking program.
== See also ==
Literature and media
The Complex: How the Military Invades Our Everyday Lives (2008 book by Nick Turse)
The Power Elite (1956 book by C. Wright Mills)
War Is a Racket (1935 book by Smedley Butler)
War Made Easy: How Presidents & Pundits Keep Spinning Us to Death (2007 documentary film)
Why We Fight (2005 documentary film by Eugene Jarecki)
Other complexes or axes
List of industrial complexes
Miscellaneous
Last Supper (Defense industry)
== References ==
=== Citations ===
=== Sources ===
== Further reading ==
== External links ==
Khaki capitalism, The Economist, December 3, 2011
Militaryindustrialcomplex.com, Features running daily, weekly and monthly defense spending totals plus Contract Archives section.
C. Wright Mills, Structure of Power in American Society, British Journal of Sociology, Vol. 9. No. 1 1958
Dwight David Eisenhower, Farewell Address On the military–industrial complex and the government–universities collusion – January 17, 1961
Dwight D. Eisenhower, Farewell Address As delivered transcript and complete audio from AmericanRhetoric.com
William McGaffin and Erwin Knoll, The military–industrial complex, An analysis of the phenomenon written in 1969
The Cost of War & Today's Military Industrial Complex, National Public Radio, January 8, 2003.
Human Rights First; Private Security Contractors at War: Ending the Culture of Impunity (2008)
Fifty Years After Eisenhower's Farewell Address, A Look at the Military–Industrial Complex – video report by Democracy Now!
Online documents, Dwight D. Eisenhower Presidential Library
50th Anniversary of Eisenhower's Farewell Address – Eisenhower Institute
Part 1 – Anniversary Discussion of Eisenhower's Farewell Address – Gettysburg College
Part 2 – Anniversary Discussion of Eisenhower's Farewell Address – Gettysburg College
The scientific method is an empirical method for acquiring knowledge that has been referred to while doing science since at least the 17th century. Historically, it was developed through the centuries from the ancient and medieval world. The scientific method involves careful observation coupled with rigorous skepticism, because cognitive assumptions can distort the interpretation of the observation. Scientific inquiry includes creating a testable hypothesis through inductive reasoning, testing it through experiments and statistical analysis, and adjusting or discarding the hypothesis based on the results.
Although procedures vary across fields, the underlying process is often similar. In more detail: the scientific method involves making conjectures (hypothetical explanations), predicting the logical consequences of the hypothesis, then carrying out experiments or empirical observations based on those predictions. A hypothesis is a conjecture based on knowledge obtained while seeking answers to the question. Hypotheses can be very specific or broad but must be falsifiable, meaning that it must be possible to identify an outcome of an experiment or observation that would conflict with predictions deduced from the hypothesis; otherwise, the hypothesis cannot be meaningfully tested.
While the scientific method is often presented as a fixed sequence of steps, it actually represents a set of general principles. Not all steps take place in every scientific inquiry (nor to the same degree), and they are not always in the same order. Numerous discoveries have not followed the textbook model of the scientific method and chance has played a role, for instance.
== History ==
The history of the scientific method considers changes in the methodology of scientific inquiry, not the history of science itself. The development of rules for scientific reasoning has not been straightforward; the scientific method has been the subject of intense and recurring debate throughout the history of science, and eminent natural philosophers and scientists have argued for the primacy of various approaches to establishing scientific knowledge.
Different early expressions of empiricism and the scientific method can be found throughout history, for instance with the ancient Stoics, Aristotle, Epicurus, Alhazen, Avicenna, Al-Biruni, Roger Bacon, and William of Ockham.
In the Scientific Revolution of the 16th and 17th centuries, some of the most important developments were the furthering of empiricism by Francis Bacon and Robert Hooke, the rationalist approach described by René Descartes, and inductivism, brought to particular prominence by Isaac Newton and those who followed him. Experiments were advocated by Francis Bacon and performed by Giambattista della Porta, Johannes Kepler, and Galileo Galilei. Development was aided in particular by the theoretical work of the skeptic Francisco Sanches, and by idealists and empiricists such as John Locke, George Berkeley, and David Hume. C. S. Peirce formulated the hypothetico-deductive model in the 20th century, and the model has undergone significant revision since.
The term "scientific method" emerged in the 19th century, as a result of significant institutional development of science, and terminologies establishing clear boundaries between science and non-science, such as "scientist" and "pseudoscience". Throughout the 1830s and 1850s, when Baconianism was popular, naturalists like William Whewell, John Herschel, and John Stuart Mill engaged in debates over "induction" and "facts," and were focused on how to generate knowledge. In the late 19th and early 20th centuries, a debate over realism vs. antirealism was conducted as powerful scientific theories extended beyond the realm of the observable.
=== Modern use and critical thought ===
The term "scientific method" came into popular use in the twentieth century; Dewey's 1910 book, How We Think, inspired popular guidelines. It appeared in dictionaries and science textbooks, although there was little consensus on its meaning. Although there was growth through the middle of the twentieth century, by the 1960s and 1970s numerous influential philosophers of science such as Thomas Kuhn and Paul Feyerabend had questioned the universality of the "scientific method," and largely replaced the notion of science as a homogeneous and universal method with that of it being a heterogeneous and local practice. In particular, Paul Feyerabend, in the 1975 first edition of his book Against Method, argued against there being any universal rules of science; Karl Popper, and Gauch 2003, disagreed with Feyerabend's claim.
Later stances include physicist Lee Smolin's 2013 essay "There Is No Scientific Method", in which he espouses two ethical principles, and historian of science Daniel Thurs' chapter in the 2015 book Newton's Apple and Other Myths about Science, which concluded that the scientific method is a myth or, at best, an idealization. As myths are beliefs, they are subject to the narrative fallacy, as pointed out by Taleb. Philosophers Robert Nola and Howard Sankey, in their 2007 book Theories of Scientific Method, said that debates over the scientific method continue, and argued that Feyerabend, despite the title of Against Method, accepted certain rules of method and attempted to justify those rules with a meta methodology.
Staddon (2017) argues it is a mistake to try following rules in the absence of an algorithmic scientific method; in that case, "science is best understood through examples". But algorithmic methods, such as the disproof of existing theory by experiment, have been used since Alhazen (1027) and his Book of Optics, and Galileo (1638) and his Two New Sciences and The Assayer, which still stand as scientific method.
== Elements of inquiry ==
=== Overview ===
The scientific method is the process by which science is carried out. As in other areas of inquiry, science (through the scientific method) can build on previous knowledge, and unify understanding of its studied topics over time. Historically, the development of the scientific method was critical to the Scientific Revolution.
The overall process involves making conjectures (hypotheses), predicting their logical consequences, then carrying out experiments based on those predictions to determine whether the original conjecture was correct. However, there are difficulties in a formulaic statement of method. Though the scientific method is often presented as a fixed sequence of steps, these actions are more accurately general principles. Not all steps take place in every scientific inquiry (nor to the same degree), and they are not always done in the same order.
==== Factors of scientific inquiry ====
There are different ways of outlining the basic method used for scientific inquiry. The scientific community and philosophers of science generally agree on the following classification of method components. These methodological elements and organization of procedures tend to be more characteristic of experimental sciences than social sciences. Nonetheless, the cycle of formulating hypotheses, testing and analyzing the results, and formulating new hypotheses, will resemble the cycle described below.
The scientific method is an iterative, cyclical process through which information is continually revised. It is generally recognized to develop advances in knowledge through the following elements, in varying combinations or contributions:
Characterizations (observations, definitions, and measurements of the subject of inquiry)
Hypotheses (theoretical, hypothetical explanations of observations and measurements of the subject)
Predictions (inductive and deductive reasoning from the hypothesis or theory)
Experiments (tests of all of the above)
Each element of the scientific method is subject to peer review for possible mistakes. These activities do not describe all that scientists do but apply mostly to experimental sciences (e.g., physics, chemistry, biology, and psychology). The elements above are often taught in the educational system as "the scientific method".
The scientific method is not a single recipe: it requires intelligence, imagination, and creativity. In this sense, it is not a mindless set of standards and procedures to follow but is rather an ongoing cycle, constantly developing more useful, accurate, and comprehensive models and methods. For example, when Einstein developed the Special and General Theories of Relativity, he did not in any way refute or discount Newton's Principia. On the contrary, if the astronomically massive, the feather-light, and the extremely fast are removed from Einstein's theories – all phenomena Newton could not have observed – Newton's equations are what remain. Einstein's theories are expansions and refinements of Newton's theories and, thus, increase confidence in Newton's work.
An iterative, pragmatic scheme of the four points above is sometimes offered as a guideline for proceeding:
Define a question
Gather information and resources (observe)
Form an explanatory hypothesis
Test the hypothesis by performing an experiment and collecting data in a reproducible manner
Analyze the data
Interpret the data and draw conclusions that serve as a starting point for a new hypothesis
Publish results
Retest (frequently done by other scientists)
The iterative cycle inherent in this step-by-step method goes from point 3 to 6 and back to 3 again.
While this schema outlines a typical hypothesis/testing method, many philosophers, historians, and sociologists of science, including Paul Feyerabend, claim that such descriptions of scientific method have little relation to the ways that science is actually practiced.
=== Characterizations ===
The basic elements of the scientific method are illustrated by the following example, from the discovery of the structure of DNA, which occurred from 1944 to 1953.
In 1950, it was known that genetic inheritance had a mathematical description, starting with the studies of Gregor Mendel, and that DNA contained genetic information (Oswald Avery's transforming principle). But the mechanism of storing genetic information (i.e., genes) in DNA was unclear. Researchers in Bragg's laboratory at Cambridge University made X-ray diffraction pictures of various molecules, starting with crystals of salt, and proceeding to more complicated substances. Using clues painstakingly assembled over decades, beginning with its chemical composition, it was determined that it should be possible to characterize the physical structure of DNA, and the X-ray images would be the vehicle.
The scientific method depends upon increasingly sophisticated characterizations of the subjects of investigation. (The subjects can also be called unsolved problems or the unknowns.) For example, Benjamin Franklin conjectured, correctly, that St. Elmo's fire was electrical in nature, but it has taken a long series of experiments and theoretical changes to establish this. While seeking the pertinent properties of the subjects, careful thought may also entail some definitions and observations; these observations often demand careful measurements and/or counting, which can take the form of expansive empirical research.
A scientific question can refer to the explanation of a specific observation, as in "Why is the sky blue?" but can also be open-ended, as in "How can I design a drug to cure this particular disease?" This stage frequently involves finding and evaluating evidence from previous experiments, personal scientific observations or assertions, as well as the work of other scientists. If the answer is already known, a different question that builds on the evidence can be posed. When applying the scientific method to research, determining a good question can be very difficult and it will affect the outcome of the investigation.
The systematic, careful collection of measurements or counts of relevant quantities is often the critical difference between pseudo-sciences, such as alchemy, and science, such as chemistry or biology. Scientific measurements are usually tabulated, graphed, or mapped, and statistical manipulations, such as correlation and regression, performed on them. The measurements might be made in a controlled setting, such as a laboratory, or made on more or less inaccessible or unmanipulatable objects such as stars or human populations. The measurements often require specialized scientific instruments such as thermometers, spectroscopes, particle accelerators, or voltmeters, and the progress of a scientific field is usually intimately tied to their invention and improvement.
I am not accustomed to saying anything with certainty after only one or two observations.
==== Definition ====
The scientific definition of a term sometimes differs substantially from its natural language usage. For example, mass and weight overlap in meaning in common discourse, but have distinct meanings in mechanics. Scientific quantities are often characterized by their units of measure which can later be described in terms of conventional physical units when communicating the work.
New theories are sometimes developed after realizing certain terms have not previously been sufficiently clearly defined. For example, Albert Einstein's first paper on relativity begins by defining simultaneity and the means for determining length. These ideas were skipped over by Isaac Newton with, "I do not define time, space, place and motion, as being well known to all." Einstein's paper then demonstrates that they (viz., absolute time and length independent of motion) were approximations. Francis Crick cautions us that when characterizing a subject, however, it can be premature to define something when it remains ill-understood. In Crick's study of consciousness, he actually found it easier to study awareness in the visual system, rather than to study free will, for example. His cautionary example was the gene; the gene was much more poorly understood before Watson and Crick's pioneering discovery of the structure of DNA; it would have been counterproductive to spend much time on the definition of the gene, before them.
=== Hypothesis development ===
Linus Pauling proposed that DNA might be a triple helix. This hypothesis was also considered by Francis Crick and James D. Watson but discarded. When Watson and Crick learned of Pauling's hypothesis, they understood from existing data that Pauling was wrong, and that Pauling would soon admit his difficulties with that structure.
A hypothesis is a suggested explanation of a phenomenon, or alternately a reasoned proposal suggesting a possible correlation between or among a set of phenomena. Normally, hypotheses have the form of a mathematical model. Sometimes, but not always, they can also be formulated as existential statements, stating that some particular instance of the phenomenon being studied has some characteristic and causal explanations, which have the general form of universal statements, stating that every instance of the phenomenon has a particular characteristic.
Scientists are free to use whatever resources they have – their own creativity, ideas from other fields, inductive reasoning, Bayesian inference, and so on – to imagine possible explanations for a phenomenon under study. Albert Einstein once observed that "there is no logical bridge between phenomena and their theoretical principles." Charles Sanders Peirce, borrowing a page from Aristotle (Prior Analytics, 2.25), described the incipient stages of inquiry, instigated by the "irritation of doubt" to venture a plausible guess, as abductive reasoning. The history of science is filled with stories of scientists claiming a "flash of inspiration", or a hunch, which then motivated them to look for evidence to support or refute their idea. Michael Polanyi made such creativity the centerpiece of his discussion of methodology.
William Glen observes that
the success of a hypothesis, or its service to science, lies not simply in its perceived "truth", or power to displace, subsume or reduce a predecessor idea, but perhaps more in its ability to stimulate the research that will illuminate ... bald suppositions and areas of vagueness.
In general, scientists tend to look for theories that are "elegant" or "beautiful". Scientists often use these terms to refer to a theory that fits the known facts but is nevertheless relatively simple and easy to handle. Occam's razor serves as a rule of thumb for choosing the most desirable amongst a group of equally explanatory hypotheses.
To minimize the confirmation bias that results from entertaining a single hypothesis, strong inference emphasizes the need for entertaining multiple alternative hypotheses, and avoiding artifacts.
=== Predictions from the hypothesis ===
James D. Watson, Francis Crick, and others hypothesized that DNA had a helical structure. This implied that DNA's X-ray diffraction pattern would be 'x shaped'. This prediction followed from the work of Cochran, Crick and Vand (and independently by Stokes). The Cochran-Crick-Vand-Stokes theorem provided a mathematical explanation for the empirical observation that diffraction from helical structures produces x-shaped patterns.
In their first paper, Watson and Crick also noted that the double helix structure they proposed provided a simple mechanism for DNA replication, writing, "It has not escaped our notice that the specific pairing we have postulated immediately suggests a possible copying mechanism for the genetic material".
Any useful hypothesis will enable predictions, by reasoning including deductive reasoning. It might predict the outcome of an experiment in a laboratory setting or the observation of a phenomenon in nature. The prediction can also be statistical and deal only with probabilities.
It is essential that the outcome of testing such a prediction be currently unknown. Only in this case does a successful outcome increase the probability that the hypothesis is true. If the outcome is already known, it is called a consequence and should have already been considered while formulating the hypothesis.
If the predictions are not accessible by observation or experience, the hypothesis is not yet testable and so will remain to that extent unscientific in a strict sense. A new technology or theory might make the necessary experiments feasible. For example, while a hypothesis on the existence of other intelligent species may be convincing with scientifically based speculation, no known experiment can test this hypothesis. Therefore, science itself can have little to say about the possibility. In the future, a new technique may allow for an experimental test and the speculation would then become part of accepted science.
For example, Einstein's theory of general relativity makes several specific predictions about the observable structure of spacetime, such as that light bends in a gravitational field, and that the amount of bending depends in a precise way on the strength of that gravitational field. Arthur Eddington's observations made during a 1919 solar eclipse supported General Relativity rather than Newtonian gravitation.
=== Experiments ===
Watson and Crick showed an initial (and incorrect) proposal for the structure of DNA to a team from King's College London – Rosalind Franklin, Maurice Wilkins, and Raymond Gosling. Franklin immediately spotted the flaws which concerned the water content. Later Watson saw Franklin's photo 51, a detailed X-ray diffraction image, which showed an X-shape and was able to confirm the structure was helical.
Once predictions are made, they can be sought by experiments. If the test results contradict the predictions, the hypotheses which entailed them are called into question and become less tenable. Sometimes the experiments are conducted incorrectly or are not very well designed when compared to a crucial experiment. If the experimental results confirm the predictions, then the hypotheses are considered more likely to be correct, but might still be wrong and continue to be subject to further testing. The experimental control is a technique for dealing with observational error. This technique uses the contrast between multiple samples, or observations, or populations, under differing conditions, to see what varies or what remains the same. We vary the conditions for the acts of measurement, to help isolate what has changed. Mill's canons can then help us figure out what the important factor is. Factor analysis is one technique for discovering the important factor in an effect.
Depending on the predictions, the experiments can have different shapes. It could be a classical experiment in a laboratory setting, a double-blind study or an archaeological excavation. Even taking a plane from New York to Paris is an experiment that tests the aerodynamical hypotheses used for constructing the plane.
Institutions that fund and host large-scale experiments thereby reduce the research function to a cost/benefit calculation, expressed as money and as the time and attention of the researchers to be expended, in exchange for a report to their constituents. Current large instruments, such as CERN's Large Hadron Collider (LHC), or LIGO, or the National Ignition Facility (NIF), or the International Space Station (ISS), or the James Webb Space Telescope (JWST), entail expected costs of billions of dollars, and timeframes extending over decades. These kinds of institutions affect public policy, on a national or even international basis, and the researchers would require shared access to such machines and their adjunct infrastructure.
Scientists assume an attitude of openness and accountability on the part of those experimenting. Detailed record-keeping is essential, to aid in recording and reporting on the experimental results, and supports the effectiveness and integrity of the procedure. They will also assist in reproducing the experimental results, likely by others. Traces of this approach can be seen in the work of Hipparchus (190–120 BCE), when determining a value for the precession of the Earth, while controlled experiments can be seen in the works of al-Battani (853–929 CE) and Alhazen (965–1039 CE).
=== Communication and iteration ===
Watson and Crick then produced their model, using this information along with the previously known information about DNA's composition, especially Chargaff's rules of base pairing. After considerable fruitless experimentation, being discouraged by their superior from continuing, and numerous false starts, Watson and Crick were able to infer the essential structure of DNA by concrete modeling of the physical shapes of the nucleotides which comprise it. They were guided by the bond lengths which had been deduced by Linus Pauling and by Rosalind Franklin's X-ray diffraction images.
The scientific method is iterative. At any stage, it is possible to refine its accuracy and precision, so that some consideration will lead the scientist to repeat an earlier part of the process. Failure to develop an interesting hypothesis may lead a scientist to re-define the subject under consideration. Failure of a hypothesis to produce interesting and testable predictions may lead to reconsideration of the hypothesis or of the definition of the subject. Failure of an experiment to produce interesting results may lead a scientist to reconsider the experimental method, the hypothesis, or the definition of the subject.
This manner of iteration can span decades and sometimes centuries. Published papers can be built upon. For example: by 1027, Alhazen, based on his measurements of the refraction of light, was able to deduce that outer space was less dense than air, that is: "the body of the heavens is rarer than the body of air". In 1079, Ibn Mu'adh, in his Treatise On Twilight, was able to infer that Earth's atmosphere was 50 miles thick, based on atmospheric refraction of the sun's rays.
This is why the scientific method is often represented as circular – new information leads to new characterisations, and the cycle of science continues. Measurements collected can be archived, passed onwards and used by others. Other scientists may start their own research and enter the process at any stage. They might adopt the characterization and formulate their own hypothesis, or they might adopt the hypothesis and deduce their own predictions. Often the experiment is not done by the person who made the prediction, and the characterization is based on experiments done by someone else. Published results of experiments can also serve as a hypothesis predicting their own reproducibility.
=== Confirmation ===
Science is a social enterprise, and scientific work tends to be accepted by the scientific community when it has been confirmed. Crucially, experimental and theoretical results must be reproduced by others within the scientific community. Researchers have given their lives for this vision; Georg Wilhelm Richmann was killed by ball lightning (1753) when attempting to replicate the 1752 kite-flying experiment of Benjamin Franklin.
If an experiment cannot be repeated to produce the same results, this implies that the original results might have been in error. As a result, it is common for a single experiment to be performed multiple times, especially when there are uncontrolled variables or other indications of experimental error. For significant or surprising results, other scientists may also attempt to replicate the results for themselves, especially if those results would be important to their own work. Replication has become a contentious issue in social and biomedical science where treatments are administered to groups of individuals. Typically an experimental group gets the treatment, such as a drug, and the control group gets a placebo. John Ioannidis in 2005 pointed out that the method being used has led to many findings that cannot be replicated.
The process of peer review involves the evaluation of the experiment by experts, who typically give their opinions anonymously. Some journals request that the experimenter provide lists of possible peer reviewers, especially if the field is highly specialized. Peer review does not certify the correctness of the results, only that, in the opinion of the reviewer, the experiments themselves were sound (based on the description supplied by the experimenter). If the work passes peer review, which occasionally may require new experiments requested by the reviewers, it will be published in a peer-reviewed scientific journal. The specific journal that publishes the results indicates the perceived quality of the work.
Scientists typically are careful in recording their data, a requirement promoted by Ludwik Fleck (1896–1961) and others. Though not typically required, they might be requested to supply this data to other scientists who wish to replicate their original results (or parts of their original results), extending to the sharing of any experimental samples that may be difficult to obtain. To protect against bad science and fraudulent data, government research-granting agencies such as the National Science Foundation, and science journals, including Nature and Science, have a policy that researchers must archive their data and methods so that other researchers can test the data and methods and build on the research that has gone before. Scientific data archiving can be done at several national archives in the U.S. or the World Data Center.
== Foundational principles ==
=== Honesty, openness, and falsifiability ===
The unfettered principles of science are to strive for accuracy and to hold to a creed of honesty; openness, by contrast, is already a matter of degree, restricted by the general rigour of scepticism and, of course, by the question of what counts as non-science.
Smolin, in 2013, espoused ethical principles rather than giving any potentially limited definition of the rules of inquiry. His ideas stand in the context of the scale of data–driven and big science, which has seen increased importance of honesty and consequently reproducibility. His thought is that science is a community effort by those who have accreditation and are working within the community. He also warns against overzealous parsimony.
Popper previously took ethical principles even further, going as far as to ascribe value to theories only if they were falsifiable. Popper used the falsifiability criterion to demarcate a scientific theory from a theory like astrology: both "explain" observations, but the scientific theory takes the risk of making predictions that decide whether it is right or wrong:
"Those among us who are unwilling to expose their ideas to the hazard of refutation do not take part in the game of science."
=== Theory's interactions with observation ===
Science has limits. Those limits are usually taken to concern questions that are not in science's domain, such as matters of faith. Science has other limits as well, as it seeks to make true statements about reality. The nature of truth, and the discussion of how scientific statements relate to reality, is best left to the article on the philosophy of science. More immediately topical limitations show themselves in the observation of reality.
It is a natural limitation of scientific inquiry that there is no pure observation: theory is required to interpret empirical data, and observation is therefore influenced by the observer's conceptual framework. As science is an unfinished project, this does lead to difficulties; namely, false conclusions can be drawn because of limited information.
An example here is the experiments of Kepler and Brahe, used by Hanson to illustrate the concept. Despite observing the same sunrise, the two scientists came to different conclusions, their intersubjectivity leading them to interpret it differently. Johannes Kepler used Tycho Brahe's method of observation, which was to project the image of the Sun on a piece of paper through a pinhole aperture, instead of looking directly at the Sun. He disagreed with Brahe's conclusion that total eclipses of the Sun were impossible because, contrary to Brahe, he knew that there were historical accounts of total eclipses. Instead, he deduced that the images taken would become more accurate the larger the aperture; this fact is now fundamental for optical system design. Another historic example is the discovery of Neptune, credited as having been found via mathematics because previous observers did not know what they were looking at.
=== Empiricism, rationalism, and more pragmatic views ===
Scientific endeavour can be characterised as the pursuit of truths about the natural world or as the elimination of doubt about the same. The former is the direct construction of explanations from empirical data and logic, the latter the reduction of potential explanations. It was established above how the interpretation of empirical data is theory-laden, so neither approach is trivial.
The ubiquitous element in the scientific method is empiricism, which holds that knowledge is created by a process involving observation; scientific theories generalize observations. This is in opposition to stringent forms of rationalism, which hold that knowledge is created by the human intellect, later clarified by Popper to be built on prior theory. The scientific method embodies the position that reason alone cannot solve a particular scientific problem; it unequivocally refutes claims that revelation, political or religious dogma, appeals to tradition, commonly held beliefs, common sense, or currently held theories pose the only possible means of demonstrating truth.
In 1877, C. S. Peirce characterized inquiry in general not as the pursuit of truth per se but as the struggle to move away from irritating, inhibitory doubts born of surprises, disagreements, and the like, and to reach a secure belief, the belief being that on which one is prepared to act. His pragmatic views framed scientific inquiry as part of a broader spectrum and as spurred, like inquiry generally, by actual doubt, not mere verbal or "hyperbolic doubt", which he held to be fruitless. This "hyperbolic doubt" that Peirce argues against is another name for the Cartesian doubt associated with René Descartes: a methodological route to certain knowledge by identifying what cannot be doubted.
A strong formulation of the scientific method is not always aligned with a form of empiricism in which empirical data is put forward only in the form of direct experience or other abstracted forms of knowledge; in current scientific practice, the use of scientific modelling and reliance on abstract typologies and theories is normally accepted. In 2010, Hawking suggested that physics' models of reality should simply be accepted where they prove to make useful predictions, a position he called model-dependent realism.
== Rationality ==
Rationality embodies the essence of sound reasoning, a cornerstone not only in philosophical discourse but also in the realms of science and practical decision-making. According to the traditional viewpoint, rationality serves a dual purpose: it governs beliefs, ensuring they align with logical principles, and it steers actions, directing them towards coherent and beneficial outcomes. This understanding underscores the pivotal role of reason in shaping our understanding of the world and in informing our choices and behaviours. The following section first explores beliefs and biases, and then turns to the rational reasoning most associated with the sciences.
=== Beliefs and biases ===
Scientific methodology often directs that hypotheses be tested in controlled conditions wherever possible. This is frequently possible in certain areas, such as in the biological sciences, and more difficult in other areas, such as in astronomy.
The practice of experimental control and reproducibility can have the effect of diminishing the potentially harmful effects of circumstance, and to a degree, personal bias. For example, pre-existing beliefs can alter the interpretation of results, as in confirmation bias; this is a heuristic that leads a person with a particular belief to see things as reinforcing their belief, even if another observer might disagree (in other words, people tend to observe what they expect to observe).
[T]he action of thought is excited by the irritation of doubt, and ceases when belief is attained.
A historical example is the belief that the legs of a galloping horse are splayed at the point when none of the horse's legs touch the ground, to the point of this image being included in paintings by its supporters. However, the first stop-action pictures of a horse's gallop by Eadweard Muybridge showed this to be false, and that the legs are instead gathered together.
Another important human bias that plays a role is a preference for new, surprising statements (see Appeal to novelty), which can result in a search for evidence that the new is true. Poorly attested beliefs can be believed and acted upon via a less rigorous heuristic.
Goldhaber and Nieto published in 2010 the observation that if theoretical structures with "many closely neighboring subjects are described by connecting theoretical concepts, then the theoretical structure acquires a robustness which makes it increasingly hard – though certainly never impossible – to overturn". When a narrative is constructed its elements become easier to believe.
Fleck (1979), p. 27 notes "Words and ideas are originally phonetic and mental equivalences of the experiences coinciding with them. ... Such proto-ideas are at first always too broad and insufficiently specialized. ... Once a structurally complete and closed system of opinions consisting of many details and relations has been formed, it offers enduring resistance to anything that contradicts it". Sometimes, these relations have their elements assumed a priori, or contain some other logical or methodological flaw in the process that ultimately produced them. Donald M. MacKay has analyzed these elements in terms of limits to the accuracy of measurement and has related them to instrumental elements in a category of measurement.
=== Deductive and inductive reasoning ===
The idea of there being two opposed justifications for truth has shown up throughout the history of scientific method as analysis versus synthesis, non-ampliative versus ampliative, or even confirmation and verification. (And there are other kinds of reasoning.) One uses what is observed to build towards fundamental truths; the other derives from those fundamental truths more specific principles.
Deductive reasoning is the building of knowledge based on what has been shown to be true before. It requires the assumption of fact established prior, and, given the truth of the assumptions, a valid deduction guarantees the truth of the conclusion. Inductive reasoning builds knowledge not from established truth, but from a body of observations. It requires stringent scepticism regarding observed phenomena, because cognitive assumptions can distort the interpretation of initial perceptions.
An example of how inductive and deductive reasoning work can be found in the history of gravitational theory. It took thousands of years of measurements, from the Chaldean, Indian, Persian, Greek, Arabic, and European astronomers, to fully record the motion of planet Earth. Kepler (and others) were then able to build their early theories by generalizing the collected data inductively, and Newton was able to unify prior theory and measurements into the consequences of his laws of motion, published in the Principia of 1687.
Another common example of inductive reasoning is the observation of a counterexample to current theory inducing the need for new ideas. In 1859, Le Verrier pointed out problems with the perihelion of Mercury that showed Newton's theory to be at least incomplete. The observed difference in Mercury's precession between Newtonian theory and observation was one of the things that occurred to Einstein as a possible early test of his theory of relativity. His relativistic calculations matched observation much more closely than Newtonian theory did. Although today's Standard Model of physics suggests that we still do not fully understand at least some of the concepts surrounding Einstein's theory, it holds to this day and continues to be built on deductively.
A theory being assumed as true and subsequently built on is a common example of deductive reasoning. Theory building on Einstein's achievement can simply state that 'we have shown that this case fulfils the conditions under which general/special relativity applies, therefore its conclusions apply also'. If it was properly shown that 'this case' fulfils the conditions, the conclusion follows. An extension of this is the assumption of a solution to an open problem. This weaker kind of deductive reasoning is used in current research when multiple scientists or even teams of researchers are all gradually solving specific cases in working towards proving a larger theory. This often sees hypotheses being revised again and again as new proof emerges.
This way of presenting inductive and deductive reasoning shows part of why science is often presented as being a cycle of iteration. It is important to keep in mind that the cycle's foundations lie in reasoning, and not wholly in the following of procedure.
=== Certainty, probabilities, and statistical inference ===
Claims of scientific truth can be opposed in three ways: by falsifying them, by questioning their certainty, or by asserting the claim itself to be incoherent. Incoherence, here, means internal errors in logic, like stating opposites to be true; falsification is what Popper would have called the honest work of conjecture and refutation — certainty, perhaps, is where difficulties in telling truths from non-truths arise most easily.
Measurements in scientific work are usually accompanied by estimates of their uncertainty. The uncertainty is often estimated by making repeated measurements of the desired quantity. Uncertainties may also be calculated by consideration of the uncertainties of the individual underlying quantities used. Counts of things, such as the number of people in a nation at a particular time, may also have an uncertainty due to data collection limitations. Or counts may represent a sample of desired quantities, with an uncertainty that depends upon the sampling method used and the number of samples taken.
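As an illustration of the preceding point, the following sketch computes the mean of a set of repeated measurements and the standard error of that mean; the quantity measured, the values, and the sample size are illustrative assumptions, not data from any cited study.

```python
import statistics

# Hypothetical repeated measurements of the same quantity (arbitrary units).
measurements = [9.81, 9.79, 9.83, 9.80, 9.82, 9.78, 9.81]

n = len(measurements)
mean = statistics.fmean(measurements)
sample_sd = statistics.stdev(measurements)   # spread of the individual readings
standard_error = sample_sd / n ** 0.5        # estimated uncertainty of the mean itself

print(f"mean = {mean:.3f} +/- {standard_error:.3f} (standard error, n = {n})")
```

Repeating the measurement more times shrinks the standard error roughly as one over the square root of the number of samples, which is why repeated measurement is the usual route to the uncertainty estimates described above.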
In the case of measurement imprecision, there will simply be a 'probable deviation' expressing itself in a study's conclusions. Statistics are different. Inductive statistical generalisation will take sample data and extrapolate more general conclusions, which has to be justified — and scrutinised. It can even be said that statistical models are only ever useful, but never a complete representation of circumstances.
In statistical analysis, expected and unexpected bias is a large factor. Research questions, the collection of data, and the interpretation of results are all subject to greater scrutiny than in comfortably logical environments. Statistical models go through a process of validation, for which one could even say that awareness of potential biases is more important than hard logic; errors in logic are easier to find in peer review, after all. More generally, claims to rational knowledge, and especially statistics, have to be put into their appropriate context. Simple statements such as '9 out of 10 doctors recommend' are therefore of unknown quality, because they do not justify their methodology.
Lack of familiarity with statistical methodologies can result in erroneous conclusions. Setting aside simple examples, the interaction of multiple probabilities is an area where, for example, medical professionals have shown a lack of proper understanding. Bayes' theorem is the mathematical principle that lays out how standing probabilities are adjusted given new information; the boy or girl paradox is a common example. In knowledge representation, Bayesian estimation of mutual information between random variables is a way to measure dependence, independence, or interdependence of the information under scrutiny.
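A minimal sketch of Bayes' theorem in a diagnostic setting of the kind alluded to above; the prevalence, sensitivity, and false-positive rate are illustrative assumptions chosen only to show how strongly the prior probability shapes the result.

```python
# Posterior probability of a condition given a positive test, via Bayes' theorem.
prior = 0.01           # assumed prevalence of the condition
sensitivity = 0.95     # assumed P(positive | condition)
false_positive = 0.05  # assumed P(positive | no condition)

p_positive = sensitivity * prior + false_positive * (1 - prior)
posterior = sensitivity * prior / p_positive

print(f"P(condition | positive) = {posterior:.2f}")  # about 0.16, far lower than many expect
```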
Beyond the commonly associated survey methodology of field research, the concept, together with probabilistic reasoning, is used to advance fields of science where research objects have no definitive states of being, for example in statistical mechanics.
== Methods of inquiry ==
=== Hypothetico-deductive method ===
The hypothetico-deductive model, or hypothesis-testing method, or "traditional" scientific method is, as the name implies, based on the formation of hypotheses and their testing via deductive reasoning. A hypothesis stating implications, often called predictions, that are falsifiable via experiment is of central importance here, as not the hypothesis but its implications are what is tested. Basically, scientists will look at the hypothetical consequences a (potential) theory holds and prove or disprove those instead of the theory itself. If an experimental test of those hypothetical consequences shows them to be false, it follows logically that the part of the theory that implied them was false also. If they show as true however, it does not prove the theory definitively.
The logic of this testing is what allows this method of inquiry to be reasoned deductively. The formulated hypothesis is assumed to be 'true', and from that 'true' statement implications are inferred. If the subsequent tests show the implications to be false, it follows that the hypothesis was false also. If the tests show the implications to be true, new insights will be gained. It is important to be aware that a positive test here will at best strongly imply but not definitively prove the tested hypothesis: deductive inference from (A ⇒ B) does not allow concluding A from B; only the contrapositive (¬B ⇒ ¬A) is valid logic. Positive outcomes do, however, as Hempel put it, provide "at least some support, some corroboration or confirmation for it". This is why Popper insisted that fielded hypotheses be falsifiable, as successful tests imply very little otherwise. As Gillies put it, "successful theories are those that survive elimination through falsification".
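The logical asymmetry described above can be checked mechanically. The following sketch, which is only an illustration of the propositional logic involved and not drawn from any cited source, verifies that refutation (modus tollens) holds for every truth assignment while "affirming the consequent" does not.

```python
from itertools import product

def implies(p, q):
    # Material implication: p => q is false only when p is true and q is false.
    return (not p) or q

assignments = list(product([True, False], repeat=2))

# ((A => B) and not B) => not A : valid, so a failed prediction refutes the hypothesis.
modus_tollens = all(implies(implies(a, b) and not b, not a) for a, b in assignments)

# ((A => B) and B) => A : invalid, so a passed test does not prove the hypothesis.
affirming_consequent = all(implies(implies(a, b) and b, a) for a, b in assignments)

print(modus_tollens)         # True
print(affirming_consequent)  # False
```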
Deductive reasoning in this mode of inquiry will sometimes be replaced by abductive reasoning, the search for the most plausible explanation via logical inference, for example in biology, where general laws are few and valid deductions rely on solid presuppositions.
=== Inductive method ===
The inductivist approach to deriving scientific truth first rose to prominence with Francis Bacon and particularly with Isaac Newton and those who followed him. After the establishment of the HD-method, though, it was often put aside as something of a "fishing expedition". It remains valid to some degree, but today's inductive method is often far removed from the historic approach, with the scale of the data collected lending new effectiveness to the method. It is most associated with data-mining projects or large-scale observation projects. In both these cases, it is often not at all clear what the results of proposed experiments will be, and thus knowledge arises after the collection of data through inductive reasoning.
Where the traditional method of inquiry does both, the inductive approach usually formulates only a research question, not a hypothesis. Following the initial question instead, a suitable "high-throughput method" of data-collection is determined, the resulting data processed and 'cleaned up', and conclusions drawn after. "This shift in focus elevates the data to the supreme role of revealing novel insights by themselves".
The advantage the inductive method has over methods formulating a hypothesis is that it is essentially free of "a researcher's preconceived notions" regarding their subject. On the other hand, inductive reasoning is always attached to a measure of certainty, as all inductively reasoned conclusions are. This measure of certainty can, however, reach quite high degrees, for example in the determination of large primes, which are used in encryption software.
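A common concrete case of such high but not absolute inductive certainty is probabilistic primality testing. The sketch below uses the Miller-Rabin test, in which each additional random round reduces the chance of wrongly declaring a composite number prime; the round count and the test number are illustrative choices.

```python
import random

def is_probably_prime(n, rounds=40):
    """Miller-Rabin test: False means definitely composite; True means prime
    with an error probability of at most 4 ** -rounds."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):
        if n % p == 0:
            return n == p
    # Write n - 1 as d * 2**r with d odd.
    d, r = n - 1, 0
    while d % 2 == 0:
        d //= 2
        r += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False  # 'a' witnesses that n is composite
    return True

print(is_probably_prime(2 ** 127 - 1))  # True: a known Mersenne prime
```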
=== Mathematical modelling ===
Mathematical modelling, or allochthonous reasoning, typically is the formulation of a hypothesis followed by building mathematical constructs that can be tested in place of conducting physical laboratory experiments. This approach has two main factors: simplification/abstraction and, secondly, a set of correspondence rules. The correspondence rules lay out how the constructed model will relate back to reality, that is, how truth is derived; and the simplifying steps taken in the abstraction of the given system are to remove factors that bear no relevance and thereby reduce unexpected errors. These steps can also help the researcher in understanding the important factors of the system, and how far parsimony can be taken before the system becomes ever more unchangeable and thereby stable. Parsimony and related principles are further explored below.
Once this translation into mathematics is complete, the resulting model, in place of the corresponding system, can be analysed through purely mathematical and computational means. The results of this analysis are of course also purely mathematical in nature and are translated back to the system as it exists in reality via the previously determined correspondence rules, with iteration following review and interpretation of the findings. The way such models are reasoned will often be mathematically deductive, but it need not be. An example is Monte-Carlo simulation: such simulations generate empirical data "arbitrarily", and, while they may not be able to reveal universal principles, they can nevertheless be useful.
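As a minimal sketch of a Monte-Carlo simulation generating empirical data "arbitrarily", the following estimates π by sampling random points in the unit square; the target quantity and the sample count are chosen purely for illustration.

```python
import random

def estimate_pi(samples=1_000_000):
    # Count random points in the unit square that also fall inside the quarter circle.
    inside = sum(
        1 for _ in range(samples)
        if random.random() ** 2 + random.random() ** 2 <= 1.0
    )
    return 4 * inside / samples

print(estimate_pi())  # tends towards 3.14159... as the sample count grows
```

The result is never exact, but its reliability improves with the number of samples, mirroring the point above that such models are useful without being complete representations of circumstances.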
== Scientific inquiry ==
Scientific inquiry generally aims to obtain knowledge in the form of testable explanations that scientists can use to predict the results of future experiments. This allows scientists to gain a better understanding of the topic under study, and later to use that understanding to intervene in its causal mechanisms (such as to cure disease). The better an explanation is at making predictions, the more useful it frequently can be, and the more likely it will continue to explain a body of evidence better than its alternatives. The most successful explanations – those that explain and make accurate predictions in a wide range of circumstances – are often called scientific theories.
Most experimental results do not produce large changes in human understanding; improvements in theoretical scientific understanding typically result from a gradual process of development over time, sometimes across different domains of science. Scientific models vary in the extent to which they have been experimentally tested and for how long, and in their acceptance in the scientific community. In general, explanations become accepted over time as evidence accumulates on a given topic, and the explanation in question proves more powerful than its alternatives at explaining the evidence. Often subsequent researchers re-formulate the explanations over time, or combine explanations to produce new explanations.
=== Properties of scientific inquiry ===
Scientific knowledge is closely tied to empirical findings and can remain subject to falsification if new experimental observations are incompatible with what is found. That is, no theory can ever be considered final since new problematic evidence might be discovered. If such evidence is found, a new theory may be proposed, or (more commonly) it is found that modifications to the previous theory are sufficient to explain the new evidence. The strength of a theory relates to how long it has persisted without major alteration to its core principles.
Theories can also become subsumed by other theories. For example, Newton's laws explained thousands of years of scientific observations of the planets almost perfectly. However, these laws were then determined to be special cases of a more general theory (relativity), which explained both the (previously unexplained) exceptions to Newton's laws and predicted and explained other observations such as the deflection of light by gravity. Thus, in certain cases independent, unconnected, scientific observations can be connected, unified by principles of increasing explanatory power.
Since new theories might be more comprehensive than what preceded them, and thus be able to explain more than previous ones, successor theories might be able to meet a higher standard by explaining a larger body of observations than their predecessors. For example, the theory of evolution explains the diversity of life on Earth, how species adapt to their environments, and many other patterns observed in the natural world; its most recent major modification was unification with genetics to form the modern evolutionary synthesis. In subsequent modifications, it has also subsumed aspects of many other fields such as biochemistry and molecular biology.
== Heuristics ==
=== Confirmation theory ===
During the course of history, one theory has succeeded another, and some have suggested further work while others have seemed content just to explain the phenomena. The reasons why one theory has replaced another are not always obvious or simple. The philosophy of science includes the question: what criteria are satisfied by a 'good' theory? This question has a long history, and many scientists, as well as philosophers, have considered it. The objective is to be able to choose one theory as preferable to another without introducing cognitive bias. Though different thinkers emphasize different aspects, a good theory:
is accurate (the trivial element);
is consistent, both internally and with other relevant currently accepted theories;
has explanatory power, meaning its consequences extend beyond the data it is required to explain;
has unificatory power, in that it organizes otherwise confused and isolated phenomena;
and is fruitful for further research.
In trying to look for such theories, scientists will, given a lack of guidance by empirical evidence, try to adhere to:
parsimony in causal explanations
and look for invariant observations.
Scientists will sometimes also list the very subjective criteria of "formal elegance" which can indicate multiple different things.
The goal here is to make the choice between theories less arbitrary. Nonetheless, these criteria contain subjective elements, and should be considered heuristics rather than definitive rules. Also, criteria such as these do not necessarily decide between alternative theories. Quoting Bird:
"[Such criteria] cannot determine scientific choice. First, which features of a theory satisfy these criteria may be disputable (e.g. does simplicity concern the ontological commitments of a theory or its mathematical form?). Secondly, these criteria are imprecise, and so there is room for disagreement about the degree to which they hold. Thirdly, there can be disagreement about how they are to be weighted relative to one another, especially when they conflict."
It also is debatable whether existing scientific theories satisfy all these criteria, which may represent goals not yet achieved. For example, explanatory power over all existing observations is satisfied by no one theory at the moment.
==== Parsimony ====
The desiderata of a "good" theory have been debated for centuries, going back perhaps even earlier than Occam's razor, which is often taken as an attribute of a good theory. Science tries to be simple. When gathered data supports multiple explanations, the simplest explanation for phenomena or the simplest formulation of a theory is recommended by the principle of parsimony. Scientists go as far as to call simple proofs of complex statements beautiful.
We are to admit no more causes of natural things than such as are both true and sufficient to explain their appearances.
The concept of parsimony should not be held to imply complete frugality in the pursuit of scientific truth. The general process starts at the opposite end, with a vast number of potential explanations and general disorder. An example can be seen in the process of Paul Krugman, who makes it explicit to "dare to be silly". He writes that in his work on new theories of international trade he reviewed prior work with an open frame of mind and broadened his initial viewpoint even in unlikely directions. Once he had a sufficient body of ideas, he would try to simplify and thus find what worked among what did not. Specific to Krugman here was to "question the question". He recognised that prior work had applied erroneous models to already present evidence, commenting that "intelligent commentary was ignored", thereby touching on the need to bridge the common bias against other circles of thought.
==== Elegance ====
Occam's razor might fall under the heading of "simple elegance", but it is arguable that parsimony and elegance pull in different directions. Introducing additional elements could simplify theory formulation, whereas simplifying a theory's ontology might lead to increased syntactical complexity.
Sometimes ad-hoc modifications of a failing idea may also be dismissed as lacking "formal elegance". This appeal to what may be called "aesthetic" is hard to characterise, but is essentially about a sort of familiarity. However, argument based on "elegance" is contentious, and over-reliance on familiarity will breed stagnation.
==== Invariance ====
Principles of invariance have been a theme in scientific writing, and especially in physics, since at least the early 20th century. The basic idea is that good structures to look for are those independent of perspective, an idea that had featured earlier, for example in Mill's methods of difference and agreement, methods that would later be referred back to in the context of contrast and invariance. But, as tends to be the case, there is a difference between something being a basic consideration and something being given weight. Principles of invariance have only been given weight in the wake of Einstein's theories of relativity, which reduced everything to relations and were thereby fundamentally unchangeable, unable to be varied. As David Deutsch put it in 2009: "the search for hard-to-vary explanations is the origin of all progress".
An example can be found in one of Einstein's thought experiments, that of a lab suspended in empty space, which illustrates a useful invariant observation. He imagined the absence of gravity and an experimenter free-floating in the lab. If an entity now pulls the lab upwards, accelerating uniformly, the experimenter would perceive the resulting force as gravity, while the entity would feel the work needed to accelerate the lab continuously. Through this experiment Einstein was able to equate gravitational and inertial mass, something unexplained by Newton's laws and an early but "powerful argument for a generalised postulate of relativity".
The feature, which suggests reality, is always some kind of invariance of a structure independent of the aspect, the projection.
The discussion on invariance in physics is often conducted in the more specific context of symmetry. The Einstein example above, in the parlance of Mill, would be an agreement between two values. In the context of invariance, it is a variable that remains unchanged through some kind of transformation or change in perspective. And a discussion focused on symmetry would view the two perspectives as systems that share a relevant aspect and are therefore symmetrical.
Related principles here are falsifiability and testability. The opposite of something being hard-to-vary are theories that resist falsification—a frustration that was expressed colourfully by Wolfgang Pauli as them being "not even wrong". The importance of scientific theories to be falsifiable finds especial emphasis in the philosophy of Karl Popper. The broader view here is testability, since it includes the former and allows for additional practical considerations.
== Philosophy and discourse ==
Philosophy of science looks at the underpinning logic of the scientific method, at what separates science from non-science, and the ethic that is implicit in science. There are basic assumptions, derived from philosophy by at least one prominent scientist, that form the base of the scientific method – namely, that reality is objective and consistent, that humans have the capacity to perceive reality accurately, and that rational explanations exist for elements of the real world. These assumptions from methodological naturalism form a basis on which science may be grounded. Logical positivist, empiricist, falsificationist, and other theories have criticized these assumptions and given alternative accounts of the logic of science, but each has also itself been criticized.
There are several kinds of modern philosophical conceptualizations and attempted definitions of the method of science. One is that of the unificationists, who argue for the existence of a unified definition that is useful (or at least 'works') in every context of science. The pluralists argue that the sciences are too fractured for a universal definition of method to be useful. And others argue that the very attempt at definition is already detrimental to the free flow of ideas.
Additionally, there have been views on the social framework in which science is done, and on the impact of science's social environment on research. There is also 'scientific method' as popularised by Dewey in How We Think (1910) and Karl Pearson in Grammar of Science (1892), as used in a fairly uncritical manner in education.
=== Pluralism ===
Scientific pluralism is a position within the philosophy of science that rejects various proposed unities of scientific method and subject matter. Scientific pluralists hold that science is not unified in one or more of the following ways: the metaphysics of its subject matter, the epistemology of scientific knowledge, or the research methods and models that should be used. Some pluralists believe that pluralism is necessary due to the nature of science. Others say that since scientific disciplines already vary in practice, there is no reason to believe this variation is wrong until a specific unification is empirically proven. Finally, some hold that pluralism should be allowed for normative reasons, even if unity were possible in theory.
=== Unificationism ===
Unificationism, in science, was a central tenet of logical positivism. Different logical positivists construed this doctrine in several different ways, e.g. as a reductionist thesis, that the objects investigated by the special sciences reduce to the objects of a common, putatively more basic domain of science, usually thought to be physics; as the thesis that all theories and results of the various sciences can or ought to be expressed in a common language or "universal slang"; or as the thesis that all the special sciences share a common scientific method.
Development of the idea has been troubled by accelerated advancement in technology that has opened up many new ways to look at the world.
The fact that the standards of scientific success shift with time does not only make the philosophy of science difficult; it also raises problems for the public understanding of science. We do not have a fixed scientific method to rally around and defend.
=== Epistemological anarchism ===
Paul Feyerabend examined the history of science, and was led to deny that science is genuinely a methodological process. In his 1975 book Against Method he argued that no description of scientific method could possibly be broad enough to include all the approaches and methods used by scientists, and that there are no useful and exception-free methodological rules governing the progress of science. In essence, he said that for any specific method or norm of science, one can find a historic episode where violating it has contributed to the progress of science. He jokingly suggested that, if believers in the scientific method wish to express a single universally valid rule, it should be 'anything goes'. As has been argued before him, however, this is uneconomic; problem solvers and researchers are to be prudent with their resources during their inquiry.
A more general inference against formalised method has been found through research involving interviews with scientists regarding their conception of method. This research indicated that scientists frequently encounter difficulty in determining whether the available evidence supports their hypotheses. This reveals that there are no straightforward mappings between overarching methodological concepts and precise strategies to direct the conduct of research.
=== Education ===
In science education, the idea of a general and universal scientific method has been notably influential, and numerous studies (in the US) have shown that this framing of method often forms part of both students' and teachers' conception of science. This convention of traditional education has been argued against by scientists, as there is a consensus that education's sequential elements and unified view of scientific method do not reflect how scientists actually work. Major organizations of scientists such as the American Association for the Advancement of Science (AAAS) consider the sciences to be a part of the liberal arts traditions of learning, and hold that proper understanding of science includes understanding of philosophy and history, not just science in isolation.
How the sciences make knowledge has been taught in the context of "the" scientific method (singular) since the early 20th century. Various systems of education, including but not limited to the US, have taught the method of science as a process or procedure, structured as a definitive series of steps: observation, hypothesis, prediction, experiment.
This version of the method of science has been a long-established standard in primary and secondary education, as well as the biomedical sciences. It has long been held to be an inaccurate idealisation of how some scientific inquiries are structured.
The taught presentation of science has had to answer for demerits such as:
it pays no regard to the social context of science,
it suggests a singular methodology of deriving knowledge,
it overemphasises experimentation,
it oversimplifies science, giving the impression that following a scientific process automatically leads to knowledge,
it gives the illusion of determination; that questions necessarily lead to some kind of answers and answers are preceded by (specific) questions,
and, it holds that scientific theories arise from observed phenomena only.
The scientific method no longer features in the standards for US education of 2013 (NGSS) that replaced those of 1996 (NRC). These standards, too, influenced international science education, and the standards measured for have since shifted from the singular hypothesis-testing method to a broader conception of scientific methods. These scientific methods, which are rooted in scientific practices rather than epistemology, are described as the three dimensions of scientific and engineering practices, crosscutting concepts (interdisciplinary ideas), and disciplinary core ideas.
The scientific method, as a result of simplified and universal explanations, is often held to have reached a kind of mythological status; as a tool for communication or, at best, an idealisation. Education's approach was heavily influenced by John Dewey's How We Think (1910). Van der Ploeg (2016) indicated that Dewey's views on education had long been used to further an idea of citizen education removed from "sound education", claiming that references to Dewey in such arguments were undue interpretations (of Dewey).
=== Sociology of knowledge ===
The sociology of knowledge is a concept in the discussion around scientific method, claiming the underlying method of science to be sociological. King explains that sociology distinguishes here between the system of ideas that govern the sciences through an inner logic, and the social system in which those ideas arise.
==== Thought collectives ====
A perhaps accessible lead into what is claimed is Fleck's thought, echoed in Kuhn's concept of normal science. According to Fleck, scientists' work is based on a thought-style that cannot be rationally reconstructed. It gets instilled through the experience of learning, and science is then advanced based on a tradition of shared assumptions held by what he called thought collectives. Fleck also claims this phenomenon to be largely invisible to members of the group.
Comparably, following the field research in an academic scientific laboratory by Latour and Woolgar, Karin Knorr Cetina has conducted a comparative study of two scientific fields (namely high energy physics and molecular biology) to conclude that the epistemic practices and reasonings within both scientific communities are different enough to introduce the concept of "epistemic cultures", in contradiction with the idea that a so-called "scientific method" is unique and a unifying concept.
==== Situated cognition and relativism ====
On the idea of Fleck's thought collectives, sociologists built the concept of situated cognition: that the perspective of the researcher fundamentally affects their work. More radical views followed as well.
Norwood Russell Hanson, alongside Thomas Kuhn and Paul Feyerabend, extensively explored the theory-laden nature of observation in science. Hanson introduced the concept in 1958, emphasizing that observation is influenced by the observer's conceptual framework. He used the concept of gestalt to show how preconceptions can affect both observation and description, and illustrated this with examples like the initial rejection of Golgi bodies as an artefact of staining technique, and the differing interpretations of the same sunrise by Tycho Brahe and Johannes Kepler. Intersubjectivity led to different conclusions.
Kuhn and Feyerabend acknowledged Hanson's pioneering work, although Feyerabend's views on methodological pluralism were more radical. Criticisms like those from Kuhn and Feyerabend prompted discussions leading to the development of the strong programme, a sociological approach that seeks to explain scientific knowledge without recourse to the truth or validity of scientific theories. It examines how scientific beliefs are shaped by social factors such as power, ideology, and interests.
The postmodernist critiques of science have themselves been the subject of intense controversy. This ongoing debate, known as the science wars, is the result of conflicting values and assumptions between postmodernist and realist perspectives. Postmodernists argue that scientific knowledge is merely a discourse, devoid of any claim to fundamental truth. In contrast, realists within the scientific community maintain that science uncovers real and fundamental truths about reality. Many books have been written by scientists which take on this problem and challenge the assertions of the postmodernists while defending science as a legitimate way of deriving truth.
== Limits of method ==
=== Role of chance in discovery ===
Somewhere between 33% and 50% of all scientific discoveries are estimated to have been stumbled upon, rather than sought out. This may explain why scientists so often express that they were lucky. Scientists themselves in the 19th and 20th century acknowledged the role of luck or serendipity in discoveries. Louis Pasteur is credited with the famous saying that "Luck favours the prepared mind", but some psychologists have begun to study what it means to be 'prepared for luck' in the scientific context. Research is showing that scientists are taught various heuristics that tend to harness chance and the unexpected. This is what Nassim Nicholas Taleb calls "Anti-fragility"; while some systems of investigation are fragile in the face of human error, human bias, and randomness, the scientific method is more than resistant or tough – it actually benefits from such randomness in many ways (it is anti-fragile). Taleb believes that the more anti-fragile the system, the more it will flourish in the real world.
Psychologist Kevin Dunbar says the process of discovery often starts with researchers finding bugs in their experiments. These unexpected results lead researchers to try to fix what they think is an error in their method. Eventually, the researcher decides the error is too persistent and systematic to be a coincidence. The highly controlled, cautious, and curious aspects of the scientific method are thus what make it well suited for identifying such persistent systematic errors. At this point, the researcher will begin to think of theoretical explanations for the error, often seeking the help of colleagues across different domains of expertise.
=== Relationship with statistics ===
When the scientific method employs statistics as a key part of its arsenal, there are mathematical and practical issues that can have a deleterious effect on the reliability of the output of scientific methods. This is described in a popular 2005 scientific paper "Why Most Published Research Findings Are False" by John Ioannidis, which is considered foundational to the field of metascience. Much research in metascience seeks to identify poor use of statistics and improve its use, an example being the misuse of p-values.
The points raised are both statistical and economical. Statistically, research findings are less likely to be true when studies are small and when there is significant flexibility in study design, definitions, outcomes, and analytical approaches. Economically, the reliability of findings decreases in fields with greater financial interests, biases, and a high level of competition among research teams. As a result, most research findings are considered false across various designs and scientific fields, particularly in modern biomedical research, which often operates in areas with very low pre- and post-study probabilities of yielding true findings. Nevertheless, despite these challenges, most new discoveries will continue to arise from hypothesis-generating research that begins with low or very low pre-study odds. This suggests that expanding the frontiers of knowledge will depend on investigating areas outside the mainstream, where the chances of success may initially appear slim.
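The interplay of pre-study odds, significance level, and statistical power can be sketched quantitatively. The following illustration, which ignores the bias and multiple-team effects the paper also analyses, computes the expected fraction of statistically significant findings that reflect true relationships; the numeric inputs are assumptions chosen for illustration rather than figures from the paper.

```python
def positive_predictive_value(pre_study_odds, alpha=0.05, power=0.8):
    """Expected fraction of 'significant' findings that are actually true."""
    true_positives = power * pre_study_odds   # true relationships correctly detected
    false_positives = alpha                   # null relationships wrongly flagged
    return true_positives / (true_positives + false_positives)

# Exploratory field with long-shot hypotheses vs. a well-grounded confirmatory study.
print(round(positive_predictive_value(pre_study_odds=0.01), 2))  # ~0.14
print(round(positive_predictive_value(pre_study_odds=1.0), 2))   # ~0.94
```

Under these assumptions, fields testing mostly long-shot hypotheses produce mostly false positives even with conventional significance thresholds, which is the core statistical point summarised above.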
=== Science of complex systems ===
Science applied to complex systems can involve elements such as transdisciplinarity, systems theory, control theory, and scientific modelling.
In general, the scientific method may be difficult to apply stringently to diverse, interconnected systems and large data sets. In particular, practices used within Big data, such as predictive analytics, may be considered to be at odds with the scientific method, as some of the data may have been stripped of the parameters which might be material in alternative hypotheses for an explanation; thus the stripped data would only serve to support the null hypothesis in the predictive analytics application. Fleck (1979), pp. 38–50 notes "a scientific discovery remains incomplete without considerations of the social practices that condition it".
== Relationship with mathematics ==
Science is the process of gathering, comparing, and evaluating proposed models against observables. A model can be a simulation, mathematical or chemical formula, or set of proposed steps. Science is like mathematics in that researchers in both disciplines try to distinguish what is known from what is unknown at each stage of discovery. Models, in both science and mathematics, need to be internally consistent and also ought to be falsifiable (capable of disproof). In mathematics, a statement need not yet be proved; at such a stage, that statement would be called a conjecture.
Mathematical work and scientific work can inspire each other. For example, the technical concept of time arose in science, and timelessness was a hallmark of a mathematical topic. But today, the Poincaré conjecture has been proved using time as a mathematical concept in which objects can flow (see Ricci flow).
Nevertheless, the connection between mathematics and reality (and so science to the extent it describes reality) remains obscure. Eugene Wigner's paper, "The Unreasonable Effectiveness of Mathematics in the Natural Sciences", is a very well-known account of the issue from a Nobel Prize-winning physicist. In fact, some observers (including some well-known mathematicians such as Gregory Chaitin, and others such as Lakoff and Núñez) have suggested that mathematics is the result of practitioner bias and human limitation (including cultural ones), somewhat like the post-modernist view of science.
George Pólya's work on problem solving, the construction of mathematical proofs, and heuristic show that the mathematical method and the scientific method differ in detail, while nevertheless resembling each other in using iterative or recursive steps.
In Pólya's view, understanding involves restating unfamiliar definitions in your own words, resorting to geometrical figures, and questioning what we know and do not know already; analysis, which Pólya takes from Pappus, involves free and heuristic construction of plausible arguments, working backward from the goal, and devising a plan for constructing the proof; synthesis is the strict Euclidean exposition of step-by-step details of the proof; review involves reconsidering and re-examining the result and the path taken to it.
Building on Pólya's work, Imre Lakatos argued that mathematicians actually use contradiction, criticism, and revision as principles for improving their work. In like manner to science, where truth is sought but certainty is not found, in Proofs and Refutations what Lakatos tried to establish was that no theorem of informal mathematics is final or perfect. This means that, in non-axiomatic mathematics, we should not think that a theorem is ultimately true, only that no counterexample has yet been found. Once a counterexample, i.e. an entity contradicting or not explained by the theorem, is found, we adjust the theorem, possibly extending the domain of its validity. This is a continuous way our knowledge accumulates, through the logic and process of proofs and refutations. (However, if axioms are given for a branch of mathematics, this creates a logical system; Wittgenstein 1921, Tractatus Logico-Philosophicus 5.13.) Lakatos claimed that proofs from such a system were tautological, i.e. internally logically true, by rewriting forms, as shown by Poincaré, who demonstrated the technique of transforming tautologically true forms (viz. the Euler characteristic) into or out of forms from homology, or more abstractly, from homological algebra.
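The running example of Proofs and Refutations is Euler's polyhedron formula, which in terms of the Euler characteristic mentioned above can be stated as follows; the picture-frame counterexample given in the comment is one of the kinds of counterexamples Lakatos discusses.

```latex
% Euler's conjecture for a convex polyhedron with V vertices, E edges, and F faces:
\chi \,=\, V - E + F \,=\, 2
% A polyhedral "picture frame" (a torus-like solid) has \chi = V - E + F = 0,
% a counterexample that forces the theorem's domain of validity to be adjusted.
```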
Lakatos proposed an account of mathematical knowledge based on Polya's idea of heuristics. In Proofs and Refutations, Lakatos gave several basic rules for finding proofs and counterexamples to conjectures. He thought that mathematical 'thought experiments' are a valid way to discover mathematical conjectures and proofs.
Gauss, when asked how he came about his theorems, once replied "durch planmässiges Tattonieren" (through systematic palpable experimentation).
== See also ==
Empirical limits in science – Idea that knowledge comes only/mainly from sensory experience
Evidence-based practices – Pragmatic methodology
Methodology – Study of research methods
Metascience – Scientific study of science
Outline of scientific method
Quantitative research – All procedures for the numerical representation of empirical facts
Research transparency
Scientific law – Statement based on repeated empirical observations that describes some natural phenomenon
Scientific technique – Systematic way of obtaining information
Testability – Extent to which the truth or falsity of a hypothesis/declaration can be tested
== Notes ==
=== Notes: Problem-solving via scientific method ===
=== Notes: Philosophical expressions of method ===
== References ==
== Sources ==
== Further reading ==
== External links ==
Andersen, Hanne; Hepburn, Brian. "Scientific Method". In Zalta, Edward N. (ed.). Stanford Encyclopedia of Philosophy.
"Confirmation and Induction". Internet Encyclopedia of Philosophy.
Scientific method at PhilPapers
Scientific method at the Indiana Philosophy Ontology Project
An Introduction to Science: Scientific Thinking and a scientific method Archived 2018-01-01 at the Wayback Machine by Steven D. Schafersman.
Introduction to the scientific method at the University of Rochester
The scientific method from a philosophical perspective
Theory-ladenness by Paul Newall at The Galilean Library
Lecture on Scientific Method by Greg Anderson (archived 28 April 2006)
Using the scientific method for designing science fair projects
Scientific Methods an online book by Richard D. Jarrard
Richard Feynman on the Key to Science (one minute, three seconds), from the Cornell Lectures.
Lectures on the Scientific Method by Nick Josh Karean, Kevin Padian, Michael Shermer and Richard Dawkins (archived 21 January 2013).
"How Do We Know What Is True?" (animated video; 2:52) | Wikipedia/Scientific_methodology |
The Naval Air Warfare Center Training Systems Division (NAWCTSD) is an Echelon IV command of the United States Navy, reporting to the Commander, Naval Air Warfare Center - Aircraft Division (NAWCAD) at NAS Patuxent River, Maryland. NAWCTSD is located in Orlando, Florida in the Central Florida Research Park, adjacent to the University of Central Florida (UCF). The facility is a part of a larger military installation within the Central Florida Research Park known as Naval Support Activity Orlando (NSA Orlando).
The Commanding Officer (CO) of NAWCTSD, an aeronautically designated U.S. Navy Captain, is also dual-hatted as the installation CO of NSA Orlando. This results in a dual-track command chain, answering to the Commander of NAWCAD as CO of NAWCTSD for Naval Air Systems Command (NAVAIR) issues, and to the Commander, Navy Installations Command (CNIC) as CO of NSA Orlando for installation-related issues.
NAWCTSD is the principal U.S. Navy center for research, development, test and evaluation, acquisition, life cycle program management and product support of all aviation, surface and undersea training systems, devices and programs for the U.S. Navy and all aviation training systems for the U.S. Marine Corps. It also provides interservice coordination and training systems support for the U.S. Army, U.S. Air Force and U.S. Coast Guard, especially in those instances of similar platforms and systems (i.e., USAF CV-22B Osprey, USCG MH-60T Jayhawk, etc.).
== History ==
The roots of the NAWCTSD reach back to April 1941 when then-Commander Luis de Florez became head of the new Special Devices Desk in the Engineering Division of the Navy's Bureau of Aeronautics (BuAer). De Florez championed the use of "synthetic training devices" and urged the Navy to undertake development of such devices to increase readiness. In June, the desk became the Special Devices Section.
Throughout World War II, the Section developed numerous innovative training devices including ones that used motion pictures to train aircraft gunners, a device to train precision bombing, and a kit with which to build model terrains to facilitate operational planning in the field.
The Special Devices Section grew and became the Special Devices Division. In August 1946, the Division, at its newest home at Port Washington, Long Island, NY, was commissioned the Special Devices Center.
As what would later become NAWCTSD evolved and grew, it was aligned at various times under several different parent organizations within the Navy. In 1956, it became the Naval Training Device Center (NAVTRADEVCEN). Over a three-year period in the mid-1960s, the Center moved from its Long Island location to Orlando, Florida, taking residence as a tenant activity at the then-Orlando Air Force Base, that installation subsequently becoming Naval Training Center Orlando in 1968 until its closure in 1999 pursuant to a 1993 Base Realignment and Closure Commission (BRAC) decision.
In 1985, the by then renamed Naval Training Equipment Center (NTEC) became the Naval Training Systems Center (NTSC). In 1988, the Center moved from NTC Orlando to its present headquarters approximately 15 miles east of its former location and just south of the University of Central Florida campus. The main building complex is named for its founding father, de Florez.
On October 1, 1993, the Naval Training Systems Center became today's NAWCTSD. In 2003, the command was briefly renamed Naval Air Systems Command Training Systems Division (NAVAIR-TSD), but has since reverted to its original name. In 2005, the physical facility and property was also designated as an independent base named Naval Support Activity Orlando, making it the sole remaining active duty U.S. Navy installation in the Orlando area. At approximately 40 acres in size, NSA Orlando is the second smallest shore installation in the U.S. Navy. However, it is surrounded by several "Partnership" buildings owned and maintained by the State of Florida and effectively leased at cost to various modeling, simulation and training (MS&T) commands of the Department of Defense, to include Army and Marine Corps commands, effectively increasing the military community to several hundred acres. The Air Force MS&T command in Orlando, the Air Force Agency for Modeling and Simulation (AFAMS), currently leases commercial office property immediately adjacent to, but outside the fence line of, NSA Orlando.
NAWCTSD remains a component of the Naval Air Systems Command, but continues to maintain a portfolio and lines of effort that extend beyond Naval Aviation to include other activities of the Navy, and is a collaborative partner with other DoD and non-DoD organizations.
== Naval Support Activity Orlando ==
The land and main buildings on which the main NAWCTSD facility is located inside the Central Florida Research Park is a U.S. Government installation that was designated as Naval Support Activity Orlando in 2005. Additional nearby buildings and facilities are shared in partnership with the Central Florida Research Park and the University of Central Florida and house other Department of Defense (DoD) and Department of Homeland Security (DHS) activities, to include the U.S. Army Futures Command's Synthetic Training Environment Cross-Functional Team (STE CFT), the U.S. Army Program Executive Office for Simulation, Training and Instrumentation (PEO-STRI), the U.S. Marine Corps Program Manager for Training Systems (PMTRASYS), the Air Force Agency for Modeling and Simulation (AFAMS), United States Army Simulation and Training Technology Center (STTC), Federal Law Enforcement Training Centers FLETC Orlando team and the Veterans Health Administration Simulation Learning, Education and Research Network.
As previously stated, the Commanding Officer of NAWCTSD is concurrently dual-hatted as the Commanding Officer of NSA Orlando. Command of the combined NAWCTSD and NSA Orlando is held by a Naval Aviator or Naval Flight Officer in the rank of Captain, either an unrestricted line officer dual-designated as an acquisition professional (AP), or a restricted line aeronautical engineering duty officer.
The gate guardian aircraft for the installation is a Douglas A-4 Skyhawk, BuNo 139931, in Blue Angels livery. This aircraft was previously a gate guardian at the former NTC Orlando and was relocated to NSA Orlando in 1999, just prior to the former's closure due to BRAC. Another aircraft, an F/A-18A Hornet, BuNo 161597, is located on the west side of the installation and is painted in low visibility gray Fleet markings with NAVAIR and NAWCTSD insignia. During its operational career, this aircraft flew with both Fleet squadrons and the Blue Angels. It was previously located at the former Naval Air Station Atlanta, Georgia and was relocated to NSA Orlando just prior to the former's closure due to BRAC.
== See also ==
Air Force Agency for Modeling and Simulation
United States Army Simulation and Training Technology Center
University of Central Florida research centers
== References ==
== External links ==
Official website | Wikipedia/Naval_Air_Warfare_Center_Training_Systems_Division |
The GeForce FX or "GeForce 5" series (codenamed NV30) is a line of graphics processing units from the manufacturer Nvidia.
== Overview ==
Nvidia's GeForce FX series is the fifth generation of the GeForce line. With GeForce 3, the company introduced programmable shader functionality into their 3D architecture, in line with the release of Microsoft's DirectX 8.0. The GeForce 4 Ti was an enhancement of the GeForce 3 technology. With real-time 3D graphics technology continually advancing, the release of DirectX 9.0 brought further refinement of programmable pipeline technology with the arrival of Shader Model 2.0. The GeForce FX series is Nvidia's first generation Direct3D 9-compliant hardware.
The series was manufactured on TSMC's 130 nm fabrication process. It is compliant with Shader Model 2.0/2.0A, allowing more flexibility in complex shader/fragment programs and much higher arithmetic precision. It supports a number of new memory technologies, including DDR2, GDDR2 and GDDR3 and saw Nvidia's first implementation of a memory data bus wider than 128 bits. The anisotropic filtering implementation has potentially higher quality than previous Nvidia designs. Anti-aliasing methods have been enhanced and additional modes are available compared to GeForce 4. Memory bandwidth and fill-rate optimization mechanisms have been improved. Some members of the series offer double fill-rate in z-buffer/stencil-only passes.
The series also brought improvements to the company's video processing hardware, in the form of the Video Processing Engine (VPE), which was first deployed in the GeForce 4 MX. The primary addition, compared to previous Nvidia GPUs, was per-pixel video-deinterlacing.
The initial version of the GeForce FX (the 5800) was one of the first cards to come equipped with a large dual-slot cooler. Called "Flow FX", the cooler was very large in comparison to ATI's small, single-slot cooler on the 9700 series. It was jokingly referred to as the "Dustbuster", due to a high level of fan noise.
The advertising campaign for the GeForce FX featured Dawn, a demo character created by several veterans of the computer-animated film Final Fantasy: The Spirits Within. Nvidia touted it as "The Dawn of Cinematic Computing".
Nvidia debuted a new campaign to motivate developers to optimize their titles for Nvidia hardware at the Game Developers Conference (GDC) in 2002. In exchange for prominently displaying the Nvidia logo on the outside of the game packaging, the company offered free access to a state-of-the-art test lab in Eastern Europe, that tested against 500 different PC configurations for compatibility. Developers also had extensive access to Nvidia engineers, who helped produce code optimized for the company's products.
Hardware based on the NV30 project did not launch until near the end of 2002, several months after ATI had released their competing DirectX 9 architecture.
== Overall performance ==
GeForce FX is an architecture designed with DirectX 7, 8 and 9 software in mind. Its performance for DirectX 7 and 8 was generally equal to ATI's competing products with the mainstream versions of the chips, and somewhat faster in the case of the 5900 and 5950 models, but it is much less competitive across the entire range for software that primarily uses DirectX 9 features.
Its weak performance in processing Shader Model 2 programs is caused by several factors. The NV3x design has less overall parallelism and calculation throughput than its competitors. It is more difficult, compared to GeForce 6 and ATI Radeon R300 series, to achieve high efficiency with the architecture due to architectural weaknesses and a resulting heavy reliance on optimized pixel shader code. While the architecture was compliant overall with the DirectX 9 specification, it was optimized for performance with 16-bit shader code, which is less than the 24-bit minimum that the standard requires. When 32-bit shader code is used, the architecture's performance is severely hampered. Proper instruction ordering and instruction composition of shader code is critical for making the most of the available computational resources.
== Hardware refreshes and diversification ==
Nvidia's initial release, the GeForce FX 5800, was intended as a high-end part. At the time, there were no GeForce FX products for the other segments of the market. The GeForce 4 MX continued in its role as the budget video card and the older GeForce 4 Ti cards filled in the mid-range.
In April 2003, the company introduced the GeForce FX 5600 and the GeForce FX 5200 to address the other market segments. Each had an "Ultra" variant and a slower, budget-oriented variant and all used conventional single-slot cooling solutions. The 5600 Ultra had respectable performance overall but it was slower than the Radeon 9600 Pro and sometimes slower than the GeForce 4 Ti series. The FX 5200 did not perform as well as the DirectX 7.0 generation GeForce 4 MX440 or Radeon 9000 Pro in some benchmarks.
In May 2003, Nvidia launched the GeForce FX 5900 Ultra, a new high-end product to replace the low-volume and disappointing FX 5800. Based upon a revised GPU called NV35, which fixed some of the DirectX 9 shortcomings of the discontinued NV30, this product was more competitive with the Radeon 9700 and 9800. In addition to redesigning parts of the GPU, the company moved to a 256-bit memory data bus, allowing for significantly higher memory bandwidth than the 5800 even when utilizing more common DDR SDRAM instead of DDR2. The 5900 Ultra performed somewhat better than the Radeon 9800 Pro in games not heavily using shader model 2, and had a quieter cooling system than the 5800.
In October 2003, Nvidia released the GeForce FX 5700 and GeForce FX 5950. The 5700 was a mid-range card using the NV36 GPU with technology from NV35 while the 5950 was a high-end card again using the NV35 GPU but with additional clock speed. The 5950 also featured a redesigned version of the 5800's FlowFX cooler, this time using a larger, slower fan and running much quieter as a result. The 5700 provided strong competition for the Radeon 9600 XT in games limited to light use of shader model 2. The 5950 was competitive with the Radeon 9800 XT, again as long as pixel shaders were lightly used.
In December 2003, the company launched the GeForce FX 5900XT, a graphics card intended for the mid-range segment. It was similar to the 5900 Ultra, but clocked lower and used slower memory. It competed more directly with the Radeon 9600 XT, but was still behind in a few shader-intensive scenarios.
The GeForce FX line moved to PCI Express in early 2004 with a number of models, including the PCX 5300, PCX 5750, PCX 5900, and PCX 5950. These cards were largely the same as their AGP predecessors with similar model numbers. To operate on the PCIe bus, an AGP-to-PCIe "HSI bridge" chip on the video card converted the PCIe signals into AGP signals for the GPU.
Also in 2004, the GeForce FX 5200 / 5300 series that utilized the NV34 GPU received a new member with the FX 5500.
== GeForce FX model information ==
All models support OpenGL 1.5 (with OpenGL 2.1 supported in software by the latest drivers).
The GeForce FX series runs vertex shaders in an array.
=== GeForce FX Go 5 (Go 5xxx) series ===
The GeForce FX Go 5 series was the notebook version of the architecture.
Core configurations are listed as vertex shaders : pixel shaders : texture mapping units : render output units.
The GeForce FX series has limited OpenGL 2.1 support (with the last Windows XP driver released for it, 175.19).
== Discontinued support ==
Nvidia has ceased driver support for the GeForce FX series.
=== Final drivers ===
Windows 9x & Windows Me: 81.98, released on December 21, 2005 (Product Support List Windows 95/98/Me – 81.98).
Driver version 81.98 was the last driver Nvidia ever released for Windows 9x/Me; no new official releases were made for these systems afterwards.
Windows 2000, 32-bit Windows XP & Media Center Edition: 175.19, released on June 23, 2008.
Note that the 175.19 driver is known to break Windows Remote Desktop (RDP). The last version before the problem is 174.74. This was apparently fixed in 177.83; however, that version is not available for the GeForce FX series of graphics cards. Also worth noting is that 163.75 is the last known good driver that correctly handles the adjustment of the video overlay color properties for the GeForce FX series. Subsequent WHQL drivers either do not handle the whole range of possible video overlay adjustments (169.21) or have no effect on them (175.xx).
Windows XP (32-bit): 175.40, released on August 1, 2008.
Windows Vista (32-bit): 96.85, released on October 17, 2006.
Windows Vista (64-bit): 97.34, released on November 21, 2006.
The drivers for Windows 2000/XP can also be installed on Windows Vista (and later versions), however there will be no support for desktop compositing or the Aero effects of the operating system.
Windows 95/98/Me Driver Archive
Windows XP/2000 Driver Archive
Linux/BSD/Solaris: 169.12, released on February 26, 2008.
Also available: 177.67 (beta), released on August 19, 2008.
Unix Driver Archive
== See also ==
List of Nvidia graphics processing units
Rankine (microarchitecture)
GeForce 4 series
GeForce 6 series
GeForce 7 series
== References ==
== External links ==
Nvidia: Cinematic Computing for Every User
ForceWare 81.98 drivers, Final Windows 9x/ME driver release
Geforce 175.19 drivers, Final Windows XP driver release
Museum of Interesting Tech article Picture and specifications for the FX5800
Driver Downloads
laptopvideo2go.com Contains an archive of drivers and modified .INF files for the GeForce FX series
techPowerUp! GPU Database
In parallel computing, a barrier is a type of synchronization method. A barrier for a group of threads or processes in the source code means any thread/process must stop at this point and cannot proceed until all other threads/processes reach this barrier.
Many collective routines and directive-based parallel languages impose implicit barriers. For example, a parallel do loop in Fortran with OpenMP will not be allowed to continue on any thread until the last iteration is completed. This is in case the program relies on the result of the loop immediately after its completion. In message passing, any global communication (such as reduction or scatter) may imply a barrier.
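As an illustration of such an implicit barrier, the following minimal C/OpenMP sketch (the array name and sizes are arbitrary choices, not taken from any particular source) shows that no thread continues past the end of the work-sharing loop until every iteration has finished:

```c
/* Sketch of an implicit barrier with OpenMP in C (a Fortran parallel do
 * behaves analogously): no thread proceeds past the end of the work-sharing
 * loop until every thread has finished its share of the iterations. */
#include <stdio.h>
#include <omp.h>

int main(void)
{
    double a[1000], sum = 0.0;

    #pragma omp parallel
    {
        #pragma omp for
        for (int i = 0; i < 1000; i++)
            a[i] = i * 0.5;
        /* Implicit barrier here: all iterations are done before any thread continues. */

        #pragma omp single
        {
            for (int i = 0; i < 1000; i++)
                sum += a[i];          /* safe: a[] is complete because of the barrier */
        }
        /* An explicit barrier could also be requested with #pragma omp barrier. */
    }

    printf("sum = %g\n", sum);
    return 0;
}
```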
In concurrent computing, a barrier may be in a raised or lowered state. The term latch is sometimes used to refer to a barrier that starts in the raised state and cannot be re-raised once it is in the lowered state. The term count-down latch is sometimes used to refer to a latch that is automatically lowered once a predetermined number of threads/processes have arrived.
== Implementation ==
Consider the case of threads, i.e. the thread barrier. A thread barrier needs a variable to keep track of the total number of threads that have entered the barrier. Whenever enough threads have entered the barrier, it is lifted. A synchronization primitive such as a mutex is also needed when implementing a thread barrier.
This thread barrier method is also known as Centralized Barrier as the threads have to wait in front of a "central barrier" until the expected number of threads have reached the barrier before it is lifted.
The following C code, which implements a thread barrier using POSIX Threads, demonstrates this procedure:
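The original listing is not reproduced here, so the sketch below is only one plausible reconstruction; the names struct _thread_barrier, thread_barrier_wait(), thread_func(), THREAD_BARRIERS_NUMBER and TOTAL_THREADS follow the surrounding text, while the remaining details (such as the busy-wait loop) are illustrative assumptions.

```c
/*
 * A minimal sketch of a centralized thread barrier with POSIX Threads.
 * The busy-wait loop is one simple way to realize the "central barrier".
 */
#include <pthread.h>
#include <stdio.h>

#define THREAD_BARRIERS_NUMBER 3   /* threads required to lift the barrier */
#define TOTAL_THREADS          2   /* threads actually created             */

typedef struct _thread_barrier {
    volatile int total_thread;     /* threads that have entered the barrier */
    int thread_barrier_number;     /* threads needed to lift the barrier    */
    pthread_mutex_t lock;          /* protects total_thread                 */
} thread_barrier;

static thread_barrier barrier = {0, THREAD_BARRIERS_NUMBER, PTHREAD_MUTEX_INITIALIZER};

static void thread_barrier_wait(thread_barrier *b)
{
    pthread_mutex_lock(&b->lock);
    b->total_thread++;                      /* register this thread's arrival */
    pthread_mutex_unlock(&b->lock);

    /* Busy-wait until enough threads have arrived ("central barrier"). */
    while (b->total_thread < b->thread_barrier_number)
        ;                                   /* spin */
}

static void *thread_func(void *arg)
{
    (void)arg;
    printf("Thread %lu waiting at the barrier\n", (unsigned long)pthread_self());
    thread_barrier_wait(&barrier);
    printf("Thread %lu passed the barrier\n", (unsigned long)pthread_self());
    return NULL;
}

int main(void)
{
    pthread_t threads[TOTAL_THREADS];
    for (int i = 0; i < TOTAL_THREADS; i++)
        pthread_create(&threads[i], NULL, thread_func, NULL);
    for (int i = 0; i < TOTAL_THREADS; i++)
        pthread_join(threads[i], NULL);     /* never returns while TOTAL_THREADS < 3 */
    return 0;
}
```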
In this program, the thread barrier is defined as a struct, struct _thread_barrier, which include:
total_thread: Total number of threads that have entered the barrier
thread_barrier_number: Total number of threads expected to enter the thread barrier so that it can be lifted
lock: A POSIX thread mutex lock
Based on the definition of a barrier, we need to implement a function such as thread_barrier_wait() in this program, which "monitors" the total number of threads that have arrived in order to lift the barrier.
In this program, every thread that calls thread_barrier_wait() is blocked until THREAD_BARRIERS_NUMBER threads have reached the thread barrier.
As the program shows, only 2 threads are created. Both run thread_func() as their thread function handler, which calls thread_barrier_wait(&barrier), while the thread barrier expects 3 threads to call thread_barrier_wait() (THREAD_BARRIERS_NUMBER = 3) in order to be lifted, so the program blocks indefinitely.
Changing TOTAL_THREADS to 3 allows the thread barrier to be lifted.
=== Sense-Reversal Centralized Barrier ===
Besides tracking a count of threads that have arrived at the barrier, a thread barrier can use opposite ("sense") values to mark each thread's state as passing or stopping. For example, a state value of 0 for thread 1 means it is stopping at the barrier, a state value of 1 for thread 2 means it has passed the barrier, a state value of 0 for thread 3 means it is stopping at the barrier, and so on. This is known as sense reversal.
The following C code demonstrates this:
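Again, the original listing is not shown here, so the following is only a sketch of a sense-reversal barrier; flag, local_sense and the reset of the thread counter follow the description below, while everything else is an illustrative assumption.

```c
/*
 * A minimal sketch of a sense-reversal centralized barrier with POSIX Threads.
 */
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

#define THREAD_BARRIERS_NUMBER 3
#define TOTAL_THREADS          3

typedef struct _thread_barrier {
    volatile int  total_thread;          /* threads that have arrived           */
    int           thread_barrier_number; /* threads needed to lift the barrier  */
    volatile bool flag;                  /* toggled when the barrier is lifted  */
    pthread_mutex_t lock;
} thread_barrier;

static thread_barrier barrier = {0, THREAD_BARRIERS_NUMBER, false, PTHREAD_MUTEX_INITIALIZER};

static void thread_barrier_wait(thread_barrier *b)
{
    /* Thread-local sense value (__thread is a common compiler extension). */
    static __thread bool local_sense = false;
    local_sense = !local_sense;              /* toggle on arrival */

    pthread_mutex_lock(&b->lock);
    b->total_thread++;
    if (b->total_thread == b->thread_barrier_number) {
        b->total_thread = 0;                 /* reset for the next round */
        b->flag = local_sense;               /* lift the barrier         */
        pthread_mutex_unlock(&b->lock);
        return;
    }
    pthread_mutex_unlock(&b->lock);

    while (b->flag != local_sense)           /* spin until the sense flips */
        ;
}

static void *thread_func(void *arg)
{
    (void)arg;
    thread_barrier_wait(&barrier);
    printf("Thread %lu passed the barrier\n", (unsigned long)pthread_self());
    return NULL;
}

int main(void)
{
    pthread_t threads[TOTAL_THREADS];
    for (int i = 0; i < TOTAL_THREADS; i++)
        pthread_create(&threads[i], NULL, thread_func, NULL);
    for (int i = 0; i < TOTAL_THREADS; i++)
        pthread_join(threads[i], NULL);
    return 0;
}
```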
This program has the same overall structure as the previous centralized barrier source code; it simply implements the barrier in a different way, using two new variables:
local_sense: A thread-local Boolean variable used to check whether THREAD_BARRIERS_NUMBER threads have arrived at the barrier.
flag: A Boolean member of struct _thread_barrier indicating whether THREAD_BARRIERS_NUMBER threads have arrived at the barrier.
When a thread stops at the barrier, its local_sense value is toggled. While fewer than THREAD_BARRIERS_NUMBER threads are stopped at the thread barrier, those threads keep waiting as long as the flag member of struct _thread_barrier is not equal to their private local_sense variable.
When there are exactly THREAD_BARRIERS_NUMBER threads stopping at the thread barrier, the total thread number is reset to 0, and the flag is set to local_sense.
=== Combining Tree Barrier ===
The potential problem with the Centralized Barrier is that due to all the threads repeatedly accessing the global variable for pass/stop, the communication traffic is rather high, which decreases the scalability.
This problem can be resolved by regrouping the threads and using multi-level barrier, e.g. Combining Tree Barrier. Also hardware implementations may have the advantage of higher scalability.
A Combining Tree Barrier is a hierarchical way of implementing barrier to resolve the scalability by avoiding the case that all threads are spinning at the same location.
In a k-Tree Barrier, all threads are equally divided into subgroups of k threads, and first-round synchronizations are done within these subgroups. Once all subgroups have done their synchronizations, the first thread in each subgroup enters the second level for further synchronization. In the second level, like in the first level, the threads form new subgroups of k threads and synchronize within groups, sending out one thread in each subgroup to the next level, and so on. Eventually, in the final level there is only one subgroup to be synchronized. After the final-level synchronization, the releasing signal is transmitted to upper levels and all threads get past the barrier.
=== Hardware Barrier Implementation ===
The hardware barrier uses hardware to implement the above basic barrier model.
The simplest hardware implementation uses dedicated wires to transmit the signal needed to implement the barrier. This dedicated wire performs an OR/AND operation to act as the pass/block flag and thread counter. For small systems, such a model works and communication speed is not a major concern. In large multiprocessor systems this hardware design can give the barrier implementation high latency. A network connection among processors is one implementation that lowers the latency, analogous to the Combining Tree Barrier.
== POSIX Thread barrier functions ==
The POSIX Threads standard directly provides thread barrier functions, which can be used to block the specified threads or the whole process at the barrier until the other threads reach that barrier. The three main APIs provided by POSIX to implement thread barriers are:
pthread_barrier_init()
Initializes the thread barrier with the number of threads that need to wait at the barrier in order to lift it
pthread_barrier_destroy()
Destroys the thread barrier and releases its resources
pthread_barrier_wait()
Calling this function blocks the current thread until the number of threads specified by pthread_barrier_init() have called pthread_barrier_wait(), lifting the barrier.
The following example (implemented in C with the pthread API) uses a thread barrier to block all the threads of the main process and therefore block the whole process. As in the earlier program, only two threads are created, and both run thread_func() as their thread function handler, which calls pthread_barrier_wait(&barrier), while the thread barrier expects three threads to call pthread_barrier_wait() (THREAD_BARRIERS_NUMBER = 3) in order to be lifted.
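A minimal sketch of such a program, assuming the macro names and the handler thread_func() used in the surrounding text, could look like this:

```c
/* Sketch of a process blocked by a POSIX thread barrier: only TOTAL_THREADS
 * threads call pthread_barrier_wait(), but the barrier was initialized for
 * THREAD_BARRIERS_NUMBER threads, so pthread_join() never returns. */
#include <pthread.h>
#include <stdio.h>

#define TOTAL_THREADS          2
#define THREAD_BARRIERS_NUMBER 3

static pthread_barrier_t barrier;

static void *thread_func(void *arg)
{
    (void)arg;
    printf("Thread %lu waiting at the barrier\n", (unsigned long)pthread_self());
    pthread_barrier_wait(&barrier);
    printf("Thread %lu passed the barrier\n", (unsigned long)pthread_self());
    return NULL;
}

int main(void)
{
    pthread_t threads[TOTAL_THREADS];

    pthread_barrier_init(&barrier, NULL, THREAD_BARRIERS_NUMBER);

    for (int i = 0; i < TOTAL_THREADS; i++)
        pthread_create(&threads[i], NULL, thread_func, NULL);
    for (int i = 0; i < TOTAL_THREADS; i++)
        pthread_join(threads[i], NULL);   /* blocks forever while TOTAL_THREADS < 3 */

    pthread_barrier_destroy(&barrier);
    return 0;
}
```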
Changing TOTAL_THREADS to 3 allows the thread barrier to be lifted. As main() is itself treated as a thread, i.e. the "main" thread of the process, calling pthread_barrier_wait() inside main() blocks the whole process until the other threads reach the barrier. The following example uses a thread barrier, with pthread_barrier_wait() inside main(), to block the process/main thread for 5 seconds while waiting for the two newly created threads to reach the thread barrier:
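A corresponding sketch, again assuming the names used in the text, with pthread_barrier_wait() called from main() and a 5-second sleep in each worker, could look like this:

```c
/* Sketch of blocking the main thread with a POSIX barrier: main() itself
 * calls pthread_barrier_wait(), so the process waits about 5 seconds until
 * the two worker threads finish sleeping and reach the barrier. */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

#define TOTAL_THREADS          2
#define THREAD_BARRIERS_NUMBER 3   /* two workers + the main thread */

static pthread_barrier_t barrier;

static void *thread_func(void *arg)
{
    (void)arg;
    sleep(5);                         /* simulate 5 seconds of work */
    pthread_barrier_wait(&barrier);
    return NULL;
}

int main(void)
{
    pthread_t threads[TOTAL_THREADS];

    pthread_barrier_init(&barrier, NULL, THREAD_BARRIERS_NUMBER);

    for (int i = 0; i < TOTAL_THREADS; i++)
        pthread_create(&threads[i], NULL, thread_func, NULL);

    printf("main: waiting at the barrier\n");
    pthread_barrier_wait(&barrier);   /* blocks main for ~5 seconds */
    printf("main: barrier lifted, workers are done\n");

    pthread_barrier_destroy(&barrier);
    return 0;
}
```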
This example does not use pthread_join() to wait for the two newly created threads to complete. Instead, it calls pthread_barrier_wait() inside main() to block the main thread, so that the process is blocked until the two threads finish their work after the 5-second wait (sleep(5)).
== See also ==
Fork–join model
Rendezvous (Plan 9)
Memory barrier
== References ==
== External links ==
"Parallel Programming with Barrier Synchronization". sourceallies.com. March 2012. | Wikipedia/Barrier_(computer_science) |
In computer science, a fiber is a particularly lightweight thread of execution.
Like threads, fibers share address space. However, fibers use cooperative multitasking while threads use preemptive multitasking. Threads often depend on the kernel's thread scheduler to preempt a busy thread and resume another thread; fibers yield themselves to run another fiber while executing.
== Threads, fibers and coroutines ==
The key difference between fibers and kernel threads is that fibers use cooperative context switching, instead of preemptive time-slicing. In effect, fibers extend the concurrency taxonomy:
on a single computer, multiple processes can run
within a single process, multiple threads can run
within a single thread, multiple fibers can run
Fibers (sometimes called stackful coroutines or user mode cooperatively scheduled threads) and stackless coroutines (compiler synthesized state machines) represent two distinct programming facilities with vast performance and functionality differences.
== Advantages and disadvantages ==
Because fibers multitask cooperatively, thread safety is less of an issue than with preemptively scheduled threads, and synchronization constructs including spinlocks and atomic operations are unnecessary when writing fibered code, as they are implicitly synchronized. However, many libraries yield a fiber implicitly as a method of conducting non-blocking I/O; as such, some caution and documentation reading is advised. A disadvantage is that fibers cannot utilize multiprocessor machines without also using preemptive threads; however, an M:N threading model with no more preemptive threads than CPU cores can be more efficient than either pure fibers or pure preemptive threading.
In some server programs, fibers are used to soft block themselves to allow their single-threaded parent programs to continue working. In this design, fibers are used mostly for I/O access which does not need CPU processing. This allows the main program to continue with what it is doing. Fibers yield control to the single-threaded main program, and when the I/O operation is completed fibers continue where they left off.
== Operating system support ==
Less support from the operating system is needed for fibers than for threads. They can be implemented in modern Unix systems using the library functions getcontext, setcontext and swapcontext in ucontext.h, as in GNU Portable Threads, or in assembler as boost.fiber.
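As a minimal illustration of this approach, the sketch below switches between a main context and a single "fiber" with the ucontext functions; the names fiber_func and FIBER_STACK_SIZE are illustrative choices, not part of any standard API.

```c
/* A minimal sketch of cooperative "fibers" using the ucontext API
 * (getcontext/makecontext/swapcontext). */
#include <stdio.h>
#include <stdlib.h>
#include <ucontext.h>

#define FIBER_STACK_SIZE (64 * 1024)

static ucontext_t main_ctx, fiber_ctx;

static void fiber_func(void)
{
    printf("fiber: step 1\n");
    swapcontext(&fiber_ctx, &main_ctx);   /* yield back to main */
    printf("fiber: step 2\n");
    /* returning ends the fiber; uc_link resumes main_ctx */
}

int main(void)
{
    char *stack = malloc(FIBER_STACK_SIZE);

    getcontext(&fiber_ctx);                /* initialize with the current state */
    fiber_ctx.uc_stack.ss_sp = stack;
    fiber_ctx.uc_stack.ss_size = FIBER_STACK_SIZE;
    fiber_ctx.uc_link = &main_ctx;         /* where to go when the fiber returns */
    makecontext(&fiber_ctx, fiber_func, 0);

    printf("main: starting fiber\n");
    swapcontext(&main_ctx, &fiber_ctx);    /* run fiber until it yields */
    printf("main: fiber yielded\n");
    swapcontext(&main_ctx, &fiber_ctx);    /* resume fiber to completion */
    printf("main: fiber finished\n");

    free(stack);
    return 0;
}
```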
On Microsoft Windows, fibers are created using the ConvertThreadToFiber and CreateFiber calls; a fiber that is currently suspended may be resumed in any thread. Fiber-local storage, analogous to thread-local storage, may be used to create unique copies of variables.
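A comparable sketch using the Win32 fiber API (the name fiber_proc is illustrative) might look like this:

```c
/* Sketch of Win32 fibers: ConvertThreadToFiber makes the calling thread
 * schedulable as a fiber, and SwitchToFiber performs the cooperative switch. */
#include <windows.h>
#include <stdio.h>

static LPVOID main_fiber;

VOID CALLBACK fiber_proc(LPVOID param)
{
    printf("fiber: hello from %s\n", (const char *)param);
    SwitchToFiber(main_fiber);            /* yield back to the main fiber */
    printf("fiber: resumed\n");
    SwitchToFiber(main_fiber);            /* a fiber must never simply return */
}

int main(void)
{
    main_fiber = ConvertThreadToFiber(NULL);
    LPVOID worker = CreateFiber(0, fiber_proc, "worker");

    SwitchToFiber(worker);                /* run the worker until it yields */
    printf("main: worker yielded\n");
    SwitchToFiber(worker);                /* resume it once more */
    printf("main: done\n");

    DeleteFiber(worker);
    return 0;
}
```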
Symbian OS used a similar concept to fibers in its Active Scheduler. An active object contained one fiber to be executed by the Active Scheduler when one of several outstanding asynchronous calls completed. Several Active objects could be waiting to execute (based on priority) and each one had to restrict its own execution time.
== Fiber implementation examples ==
Fibers can be implemented without operating system support, although some operating systems or libraries provide explicit support for them.
Win32 supplies a fiber API (Windows NT 3.51 SP3 and later)
The C++ Boost libraries have a fiber class since Boost version 1.62
Ruby had Green threads (before version 1.9)
Netscape Portable Runtime (includes a user-space fibers implementation)
ribs2
PHP since version 8.1
Rust fibers that use Futures under the hood
Crystal provides fibers as a part of the language and standard library
== See also ==
setcontext/getcontext library routines
Green threads and virtual threads
call-with-current-continuation
== References ==
== External links ==
GNU Portable threads
"Portable Coroutine Library". Freecode.
Fiber Pool A multicore-capable C++ framework based on fibers for Microsoft Windows.
State Threads
Protothreads
ribs2
boost.fiber
Computational fluid dynamics (CFD) is a branch of fluid mechanics that uses numerical analysis and data structures to analyze and solve problems that involve fluid flows. Computers are used to perform the calculations required to simulate the free-stream flow of the fluid, and the interaction of the fluid (liquids and gases) with surfaces defined by boundary conditions. With high-speed supercomputers, better solutions can be achieved, and are often required to solve the largest and most complex problems. Ongoing research yields software that improves the accuracy and speed of complex simulation scenarios such as transonic or turbulent flows. Initial validation of such software is typically performed using experimental apparatus such as wind tunnels. In addition, previously performed analytical or empirical analysis of a particular problem can be used for comparison. A final validation is often performed using full-scale testing, such as flight tests.
CFD is applied to a range of research and engineering problems in multiple fields of study and industries, including aerodynamics and aerospace analysis, hypersonics, weather simulation, natural science and environmental engineering, industrial system design and analysis, biological engineering, fluid flows and heat transfer, engine and combustion analysis, and visual effects for film and games.
== Background and history ==
The fundamental basis of almost all CFD problems is the Navier–Stokes equations, which define a number of single-phase (gas or liquid, but not both) fluid flows. These equations can be simplified by removing terms describing viscous actions to yield the Euler equations. Further simplification, by removing terms describing vorticity yields the full potential equations. Finally, for small perturbations in subsonic and supersonic flows (not transonic or hypersonic) these equations can be linearized to yield the linearized potential equations.
Historically, methods were first developed to solve the linearized potential equations. Two-dimensional (2D) methods, using conformal transformations of the flow about a cylinder to the flow about an airfoil were developed in the 1930s.
One of the earliest types of calculations resembling modern CFD were those by Lewis Fry Richardson, in the sense that these calculations used finite differences and divided the physical space into cells. Although they failed dramatically, these calculations, together with Richardson's book Weather Prediction by Numerical Process, set the basis for modern CFD and numerical meteorology. In fact, early CFD calculations during the 1940s using ENIAC used methods close to those in Richardson's 1922 book.
The computer power available paced development of three-dimensional methods. Probably the first work using computers to model fluid flow, as governed by the Navier–Stokes equations, was performed at Los Alamos National Lab, in the T3 group. This group was led by Francis H. Harlow, who is widely considered one of the pioneers of CFD. From 1957 to late 1960s, this group developed a variety of numerical methods to simulate transient two-dimensional fluid flows, such as particle-in-cell method, fluid-in-cell method, vorticity stream function method, and
marker-and-cell method. Fromm's vorticity-stream-function method for 2D, transient, incompressible flow was the first treatment of strongly contorting incompressible flows in the world.
The first paper with a three-dimensional model was published by John Hess and A.M.O. Smith of Douglas Aircraft in 1967. This method discretized the surface of the geometry with panels, giving rise to this class of programs being called Panel Methods. Their method itself was simplified, in that it did not include lifting flows and hence was mainly applied to ship hulls and aircraft fuselages. The first lifting Panel Code (A230) was described in a paper written by Paul Rubbert and Gary Saaris of Boeing Aircraft in 1968. In time, more advanced three-dimensional Panel Codes were developed at Boeing (PANAIR, A502), Lockheed (Quadpan), Douglas (HESS), McDonnell Aircraft (MACAERO), NASA (PMARC) and Analytical Methods (WBAERO, USAERO and VSAERO). Some (PANAIR, HESS and MACAERO) were higher order codes, using higher order distributions of surface singularities, while others (Quadpan, PMARC, USAERO and VSAERO) used single singularities on each surface panel. The advantage of the lower order codes was that they ran much faster on the computers of the time. Today, VSAERO has grown to be a multi-order code and is the most widely used program of this class. It has been used in the development of a number of submarines, surface ships, automobiles, helicopters, aircraft, and more recently wind turbines. Its sister code, USAERO, is an unsteady panel method that has also been used for modeling such things as high speed trains and racing yachts. The NASA PMARC code derives from an early version of VSAERO, and a derivative of PMARC, named CMARC, is also commercially available.
In the two-dimensional realm, a number of Panel Codes have been developed for airfoil analysis and design. The codes typically have a boundary layer analysis included, so that viscous effects can be modeled. Richard Eppler developed the PROFILE code, partly with NASA funding, which became available in the early 1980s. This was soon followed by Mark Drela's XFOIL code. Both PROFILE and XFOIL incorporate two-dimensional panel codes, with coupled boundary layer codes for airfoil analysis work. PROFILE uses a conformal transformation method for inverse airfoil design, while XFOIL has both a conformal transformation and an inverse panel method for airfoil design.
An intermediate step between Panel Codes and Full Potential codes were codes that used the Transonic Small Disturbance equations. In particular, the three-dimensional WIBCO code, developed by Charlie Boppe of Grumman Aircraft in the early 1980s has seen heavy use.
Developers turned to Full Potential codes, as panel methods could not calculate the non-linear flow present at transonic speeds. The first description of a means of using the Full Potential equations was published by Earll Murman and Julian Cole of Boeing in 1970. Frances Bauer, Paul Garabedian and David Korn of the Courant Institute at New York University (NYU) wrote a series of two-dimensional Full Potential airfoil codes that were widely used, the most important being named Program H. A further growth of Program H was developed by Bob Melnik and his group at Grumman Aerospace as Grumfoil. Antony Jameson, originally at Grumman Aircraft and the Courant Institute of NYU, worked with David Caughey to develop the important three-dimensional Full Potential code FLO22 in 1975. A number of Full Potential codes emerged after this, culminating in Boeing's Tranair (A633) code, which still sees heavy use.
The next step was the Euler equations, which promised to provide more accurate solutions of transonic flows. The methodology used by Jameson in his three-dimensional FLO57 code (1981) was used by others to produce such programs as Lockheed's TEAM program and IAI/Analytical Methods' MGAERO program. MGAERO is unique in being a structured cartesian mesh code, while most other such codes use structured body-fitted grids (with the exception of NASA's highly successful CART3D code, Lockheed's SPLITFLOW code and Georgia Tech's NASCART-GT). Antony Jameson also developed the three-dimensional AIRPLANE code which made use of unstructured tetrahedral grids.
In the two-dimensional realm, Mark Drela and Michael Giles, then graduate students at MIT, developed the ISES Euler program (actually a suite of programs) for airfoil design and analysis. This code first became available in 1986 and has been further developed to design, analyze and optimize single or multi-element airfoils, as the MSES program. MSES sees wide use throughout the world. A derivative of MSES, for the design and analysis of airfoils in a cascade, is MISES, developed by Harold Youngren while he was a graduate student at MIT.
The Navier–Stokes equations were the ultimate target of development. Two-dimensional codes, such as NASA Ames' ARC2D code first emerged. A number of three-dimensional codes were developed (ARC3D, OVERFLOW, CFL3D are three successful NASA contributions), leading to numerous commercial packages.
Recently CFD methods have gained traction for modeling the flow behavior of granular materials within various chemical processes in engineering. This approach has emerged as a cost-effective alternative, offering a nuanced understanding of complex flow phenomena while minimizing expenses associated with traditional experimental methods.
== Hierarchy of fluid flow equations ==
CFD can be seen as a group of computational methodologies (discussed below) used to solve equations governing fluid flow. In the application of CFD, a critical step is to decide which set of physical assumptions and related equations need to be used for the problem at hand. To illustrate this step, the following summarizes the physical assumptions/simplifications taken in equations of a flow that is single-phase (see multiphase flow and two-phase flow), single-species (i.e., it consists of one chemical species), non-reacting, and (unless said otherwise) compressible. Thermal radiation is neglected, and body forces due to gravity are considered (unless said otherwise). In addition, for this type of flow, the next discussion highlights the hierarchy of flow equations solved with CFD. Note that some of the following equations could be derived in more than one way.
Conservation laws (CL): These are the most fundamental equations considered with CFD in the sense that, for example, all the following equations can be derived from them. For a single-phase, single-species, compressible flow one considers the conservation of mass, conservation of linear momentum, and conservation of energy.
Continuum conservation laws (CCL): Start with the CL. Assume that mass, momentum and energy are locally conserved: These quantities are conserved and cannot "teleport" from one place to another but can only move by a continuous flow (see continuity equation). Another interpretation is that one starts with the CL and assumes a continuum medium (see continuum mechanics). The resulting system of equations is unclosed since to solve it one needs further relationships/equations: (a) constitutive relationships for the viscous stress tensor; (b) constitutive relationships for the diffusive heat flux; (c) an equation of state (EOS), such as the ideal gas law; and, (d) a caloric equation of state relating temperature with quantities such as enthalpy or internal energy.
Compressible Navier-Stokes equations (C-NS): Start with the CCL. Assume a Newtonian viscous stress tensor (see Newtonian fluid) and a Fourier heat flux (see heat flux). The C-NS need to be augmented with an EOS and a caloric EOS to have a closed system of equations.
Incompressible Navier-Stokes equations (I-NS): Start with the C-NS. Assume that density is always and everywhere constant. Another way to obtain the I-NS is to assume that the Mach number is very small and that temperature differences in the fluid are very small as well. As a result, the mass-conservation and momentum-conservation equations are decoupled from the energy-conservation equation, so one only needs to solve for the first two equations.
Compressible Euler equations (EE): Start with the C-NS. Assume a frictionless flow with no diffusive heat flux.
Weakly compressible Navier-Stokes equations (WC-NS): Start with the C-NS. Assume that density variations depend only on temperature and not on pressure. For example, for an ideal gas, use $\rho = p_0/(RT)$, where $p_0$ is a conveniently-defined reference pressure that is always and everywhere constant, $\rho$ is density, $R$ is the specific gas constant, and $T$ is temperature. As a result, the WC-NS do not capture acoustic waves. It is also common in the WC-NS to neglect the pressure-work and viscous-heating terms in the energy-conservation equation. The WC-NS are also called the C-NS with the low-Mach-number approximation.
Boussinesq equations: Start with the C-NS. Assume that density variations are always and everywhere negligible except in the gravity term of the momentum-conservation equation (where density multiplies the gravitational acceleration). Also assume that various fluid properties such as viscosity, thermal conductivity, and heat capacity are always and everywhere constant. The Boussinesq equations are widely used in microscale meteorology.
Compressible Reynolds-averaged Navier–Stokes equations and compressible Favre-averaged Navier-Stokes equations (C-RANS and C-FANS): Start with the C-NS. Assume that any flow variable $f$, such as density, velocity and pressure, can be represented as $f = F + f''$, where $F$ is the ensemble-average of any flow variable, and $f''$ is a perturbation or fluctuation from this average. $f''$ is not necessarily small. If $F$ is a classic ensemble-average (see Reynolds decomposition) one obtains the Reynolds-averaged Navier–Stokes equations. And if $F$ is a density-weighted ensemble-average one obtains the Favre-averaged Navier-Stokes equations. As a result, and depending on the Reynolds number, the range of scales of motion is greatly reduced, something which leads to much faster solutions in comparison to solving the C-NS. However, information is lost, and the resulting system of equations requires the closure of various unclosed terms, notably the Reynolds stress.
Ideal flow or potential flow equations: Start with the EE. Assume zero fluid-particle rotation (zero vorticity) and zero flow expansion (zero divergence). The resulting flowfield is entirely determined by the geometrical boundaries. Ideal flows can be useful in modern CFD to initialize simulations.
Linearized compressible Euler equations (LEE): Start with the EE. Assume that any flow variable $f$, such as density, velocity and pressure, can be represented as $f = f_0 + f'$, where $f_0$ is the value of the flow variable at some reference or base state, and $f'$ is a perturbation or fluctuation from this state. Furthermore, assume that this perturbation $f'$ is very small in comparison with some reference value. Finally, assume that $f_0$ satisfies "its own" equation, such as the EE. The LEE and its multiple variations are widely used in computational aeroacoustics.
Sound wave or acoustic wave equation: Start with the LEE. Neglect all gradients of $f_0$ and $f'$, and assume that the Mach number at the reference or base state is very small. The resulting equations for density, momentum and energy can be manipulated into a pressure equation, giving the well-known sound wave equation.
Shallow water equations (SW): Consider a flow near a wall where the wall-parallel length-scale of interest is much larger than the wall-normal length-scale of interest. Start with the EE. Assume that density is always and everywhere constant, neglect the velocity component perpendicular to the wall, and consider the velocity parallel to the wall to be spatially-constant.
Boundary layer equations (BL): Start with the C-NS (I-NS) for compressible (incompressible) boundary layers. Assume that there are thin regions next to walls where spatial gradients perpendicular to the wall are much larger than those parallel to the wall.
Bernoulli equation: Start with the EE. Assume that density variations depend only on pressure variations. See Bernoulli's Principle.
Steady Bernoulli equation: Start with the Bernoulli Equation and assume a steady flow. Or start with the EE and assume that the flow is steady and integrate the resulting equation along a streamline.
Stokes Flow or creeping flow equations: Start with the C-NS or I-NS. Neglect the inertia of the flow. Such an assumption can be justified when the Reynolds number is very low. As a result, the resulting set of equations is linear, something which simplifies greatly their solution.
Two-dimensional channel flow equation: Consider the flow between two infinite parallel plates. Start with the C-NS. Assume that the flow is steady, two-dimensional, and fully developed (i.e., the velocity profile does not change along the streamwise direction). Note that this widely-used fully-developed assumption can be inadequate in some instances, such as some compressible, microchannel flows, in which case it can be supplanted by a locally fully-developed assumption.
One-dimensional Euler equations or one-dimensional gas-dynamic equations (1D-EE): Start with the EE. Assume that all flow quantities depend only on one spatial dimension.
Fanno flow equation: Consider the flow inside a duct with constant area and adiabatic walls. Start with the 1D-EE. Assume a steady flow, no gravity effects, and introduce in the momentum-conservation equation an empirical term to recover the effect of wall friction (neglected in the EE). To close the Fanno flow equation, a model for this friction term is needed. Such a closure involves problem-dependent assumptions.
Rayleigh flow equation. Consider the flow inside a duct with constant area and either non-adiabatic walls without volumetric heat sources or adiabatic walls with volumetric heat sources. Start with the 1D-EE. Assume a steady flow, no gravity effects, and introduce in the energy-conservation equation an empirical term to recover the effect of wall heat transfer or the effect of the heat sources (neglected in the EE).
== Methodology ==
In all of these approaches the same basic procedure is followed.
During preprocessing
The geometry and physical bounds of the problem can be defined using computer aided design (CAD). From there, data can be suitably processed (cleaned-up) and the fluid volume (or fluid domain) is extracted.
The volume occupied by the fluid is divided into discrete cells (the mesh). The mesh may be uniform or non-uniform, structured or unstructured, consisting of a combination of hexahedral, tetrahedral, prismatic, pyramidal or polyhedral elements.
The physical modeling is defined – for example, the equations of fluid motion + enthalpy + radiation + species conservation
Boundary conditions are defined. This involves specifying the fluid behaviour and properties at all bounding surfaces of the fluid domain. For transient problems, the initial conditions are also defined.
The simulation is started and the equations are solved iteratively as a steady-state or transient.
Finally a postprocessor is used for the analysis and visualization of the resulting solution.
=== Discretization methods ===
The stability of the selected discretisation is generally established numerically rather than analytically as with simple linear problems. Special care must also be taken to ensure that the discretisation handles discontinuous solutions gracefully. The Euler equations and Navier–Stokes equations both admit shocks and contact surfaces.
Some of the discretization methods being used are:
==== Finite volume method ====
The finite volume method (FVM) is a common approach used in CFD codes, as it has an advantage in memory usage and solution speed, especially for large problems, high Reynolds number turbulent flows, and source term dominated flows (like combustion).
In the finite volume method, the governing partial differential equations (typically the Navier-Stokes equations, the mass and energy conservation equations, and the turbulence equations) are recast in a conservative form, and then solved over discrete control volumes. This discretization guarantees the conservation of fluxes through a particular control volume. The finite volume equation yields governing equations in the form,
$$\frac{\partial}{\partial t}\iiint Q\,dV + \iint F\,d\mathbf{A} = 0,$$
where $Q$ is the vector of conserved variables, $F$ is the vector of fluxes (see Euler equations or Navier–Stokes equations), $V$ is the volume of the control volume element, and $\mathbf{A}$ is the surface area of the control volume element.
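As a concrete, if simplified, illustration of the finite volume idea, the following sketch (not taken from any particular CFD code; the grid size, time step and boundary treatment are arbitrary assumptions) advances the 1D linear advection equation in conservative form, updating each cell average from the fluxes through its two faces with a first-order upwind flux:

```c
/* Minimal 1D finite-volume solver for linear advection q_t + a q_x = 0.
 * Each cell average is updated from the difference of the fluxes through
 * its faces, which is what guarantees discrete conservation. */
#include <stdio.h>

#define N 100                       /* number of control volumes */

int main(void)
{
    const double a  = 1.0;          /* advection speed                */
    const double dx = 1.0 / N;      /* cell width                     */
    const double dt = 0.5 * dx / a; /* time step (CFL number of 0.5)  */
    double q[N], flux[N + 1];       /* cell averages and face fluxes  */

    /* Initial condition: a square pulse in the left half of the domain. */
    for (int i = 0; i < N; i++)
        q[i] = (i < N / 2) ? 1.0 : 0.0;

    for (int step = 0; step < 100; step++) {
        /* Upwind numerical flux at each face (a > 0: take the left cell). */
        for (int f = 1; f < N; f++)
            flux[f] = a * q[f - 1];
        flux[0] = a * q[N - 1];     /* periodic boundary */
        flux[N] = a * q[N - 1];

        /* Conservative update: cell average changes by the net flux through its faces. */
        for (int i = 0; i < N; i++)
            q[i] -= dt / dx * (flux[i + 1] - flux[i]);
    }

    for (int i = 0; i < N; i++)
        printf("%g\n", q[i]);
    return 0;
}
```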
==== Finite element method ====
The finite element method (FEM) is used in structural analysis of solids, but is also applicable to fluids. However, the FEM formulation requires special care to ensure a conservative solution. The FEM formulation has been adapted for use with fluid dynamics governing equations. Although FEM must be carefully formulated to be conservative, it is much more stable than the finite volume approach. FEM also provides more accurate solutions for smooth problems comparing to FVM. Another advantage of FEM is that it can handle complex geometries and boundary conditions. However, FEM can require more memory and has slower solution times than the FVM.
In this method, a weighted residual equation is formed:
$$R_i = \iiint W_i Q \, dV^{e}$$
where $R_i$ is the equation residual at an element vertex $i$, $Q$ is the conservation equation expressed on an element basis, $W_i$ is the weight factor, and $V^{e}$ is the volume of the element.
==== Finite difference method ====
The finite difference method (FDM) has historical importance and is simple to program. It is currently only used in few specialized codes, which handle complex geometry with high accuracy and efficiency by using embedded boundaries or overlapping grids (with the solution interpolated across each grid).
$$\frac{\partial Q}{\partial t} + \frac{\partial F}{\partial x} + \frac{\partial G}{\partial y} + \frac{\partial H}{\partial z} = 0$$
where $Q$ is the vector of conserved variables, and $F$, $G$, and $H$ are the fluxes in the $x$, $y$, and $z$ directions respectively.
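A minimal illustration of the finite difference approach, again not drawn from any particular code (grid size, time step and boundary values are arbitrary assumptions), is an explicit scheme for the 1D heat equation on a uniform grid:

```c
/* Minimal explicit finite-difference scheme for the 1D heat equation
 * u_t = alpha * u_xx on a uniform grid, using central differences in
 * space and forward Euler in time. */
#include <stdio.h>

#define N 101                                   /* number of grid points */

int main(void)
{
    const double alpha = 1.0;
    const double dx = 1.0 / (N - 1);
    const double dt = 0.25 * dx * dx / alpha;   /* stable: dt <= dx^2/(2*alpha) */
    double u[N], unew[N];

    /* Initial condition: a hot spot in the middle, fixed ends at 0. */
    for (int i = 0; i < N; i++)
        u[i] = (i == N / 2) ? 1.0 : 0.0;

    for (int step = 0; step < 1000; step++) {
        unew[0] = 0.0;                          /* Dirichlet boundaries */
        unew[N - 1] = 0.0;
        for (int i = 1; i < N - 1; i++)
            unew[i] = u[i] + alpha * dt / (dx * dx)
                      * (u[i + 1] - 2.0 * u[i] + u[i - 1]);
        for (int i = 0; i < N; i++)
            u[i] = unew[i];
    }

    for (int i = 0; i < N; i++)
        printf("%g\n", u[i]);
    return 0;
}
```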
==== Spectral element method ====
Spectral element method is a finite element type method. It requires the mathematical problem (the partial differential equation) to be cast in a weak formulation. This is typically done by multiplying the differential equation by an arbitrary test function and integrating over the whole domain. Purely mathematically, the test functions are completely arbitrary - they belong to an infinite-dimensional function space. Clearly an infinite-dimensional function space cannot be represented on a discrete spectral element mesh; this is where the spectral element discretization begins. The most crucial thing is the choice of interpolating and testing functions. In a standard, low order FEM in 2D, for quadrilateral elements the most typical choice is the bilinear test or interpolating function of the form
$v(x,y) = ax + by + cxy + d$. In a spectral element method however, the interpolating and test functions are chosen to be polynomials of a very high order (typically e.g. of the 10th order in CFD applications). This guarantees the rapid convergence of the method. Furthermore, very efficient integration procedures must be used, since the number of integrations to be performed in numerical codes is big. Thus, high order Gauss integration quadratures are employed, since they achieve the highest accuracy with the smallest number of computations to be carried out.
At present there are some academic CFD codes based on the spectral element method, and more are under development, as new time-stepping schemes continue to arise in the scientific world.
==== Lattice Boltzmann method ====
The lattice Boltzmann method (LBM) with its simplified kinetic picture on a lattice provides a computationally efficient description of hydrodynamics.
Unlike the traditional CFD methods, which solve the conservation equations of macroscopic properties (i.e., mass, momentum, and energy) numerically, LBM models the fluid consisting of fictive particles, and such particles perform consecutive propagation and collision processes over a discrete lattice mesh. In this method, one works with the discrete in space and time version of the kinetic evolution equation in the Boltzmann Bhatnagar-Gross-Krook (BGK) form.
==== Vortex method ====
The vortex method, also Lagrangian Vortex Particle Method, is a meshfree technique for the simulation of incompressible turbulent flows. In it, vorticity is discretized onto Lagrangian particles, these computational elements being called vortices, vortons, or vortex particles. Vortex methods were developed as a grid-free methodology that would not be limited by the fundamental smoothing effects associated with grid-based methods. To be practical, however, vortex methods require means for rapidly computing velocities from the vortex elements – in other words they require the solution to a particular form of the N-body problem (in which the motion of N objects is tied to their mutual influences). This breakthrough came in the 1980s with the development of the Barnes-Hut and fast multipole method (FMM) algorithms. These paved the way to practical computation of the velocities from the vortex elements.
Software based on the vortex method offer a new means for solving tough fluid dynamics problems with minimal user intervention. All that is required is specification of problem geometry and setting of boundary and initial conditions. Among the significant advantages of this modern technology;
It is practically grid-free, thus eliminating numerous iterations associated with RANS and LES.
All problems are treated identically. No modeling or calibration inputs are required.
Time-series simulations, which are crucial for correct analysis of acoustics, are possible.
The small scale and large scale are accurately simulated at the same time.
==== Boundary element method ====
In the boundary element method, the boundary occupied by the fluid is divided into a surface mesh.
==== High-resolution discretization schemes ====
High-resolution schemes are used where shocks or discontinuities are present. Capturing sharp changes in the solution requires the use of second or higher-order numerical schemes that do not introduce spurious oscillations. This usually necessitates the application of flux limiters to ensure that the solution is total variation diminishing.
=== Turbulence models ===
In computational modeling of turbulent flows, one common objective is to obtain a model that can predict quantities of interest, such as fluid velocity, for use in engineering designs of the system being modeled. For turbulent flows, the range of length scales and complexity of phenomena involved in turbulence make most modeling approaches prohibitively expensive; the resolution required to resolve all scales involved in turbulence is beyond what is computationally possible. The primary approach in such cases is to create numerical models to approximate unresolved phenomena. This section lists some commonly used computational models for turbulent flows.
Turbulence models can be classified based on computational expense, which corresponds to the range of scales that are modeled versus resolved (the more turbulent scales that are resolved, the finer the resolution of the simulation, and therefore the higher the computational cost). If a majority or all of the turbulent scales are not modeled, the computational cost is very low, but the tradeoff comes in the form of decreased accuracy.
In addition to the wide range of length and time scales and the associated computational cost, the governing equations of fluid dynamics contain a non-linear convection term and a non-linear and non-local pressure gradient term. These nonlinear equations must be solved numerically with the appropriate boundary and initial conditions.
==== Reynolds-averaged Navier–Stokes ====
Reynolds-averaged Navier–Stokes (RANS) equations are the oldest approach to turbulence modeling. An ensemble version of the governing equations is solved, which introduces new apparent stresses known as Reynolds stresses. This adds a second-order tensor of unknowns for which various models can provide different levels of closure. It is a common misconception that the RANS equations do not apply to flows with a time-varying mean flow because these equations are 'time-averaged'. In fact, statistically unsteady (or non-stationary) flows can equally be treated. This is sometimes referred to as URANS. There is nothing inherent in Reynolds averaging to preclude this, but the turbulence models used to close the equations are valid only as long as the time over which these changes in the mean occur is large compared to the time scales of the turbulent motion containing most of the energy.
RANS models can be divided into two broad approaches:
Boussinesq hypothesis
This method involves using an algebraic equation for the Reynolds stresses which include determining the turbulent viscosity, and depending on the level of sophistication of the model, solving transport equations for determining the turbulent kinetic energy and dissipation. Models include k-ε (Launder and Spalding), Mixing Length Model (Prandtl), and Zero Equation Model (Cebeci and Smith). The models available in this approach are often referred to by the number of transport equations associated with the method. For example, the Mixing Length model is a "Zero Equation" model because no transport equations are solved; the
$k$–$\epsilon$ model is a "Two Equation" model because two transport equations (one for $k$ and one for $\epsilon$) are solved.
Reynolds stress model (RSM)
This approach attempts to actually solve transport equations for the Reynolds stresses. This means introduction of several transport equations for all the Reynolds stresses and hence this approach is much more costly in CPU effort.
==== Large eddy simulation ====
Large eddy simulation (LES) is a technique in which the smallest scales of the flow are removed through a filtering operation, and their effect modeled using subgrid scale models. This allows the largest and most important scales of the turbulence to be resolved, while greatly reducing the computational cost incurred by the smallest scales. This method requires greater computational resources than RANS methods, but is far cheaper than DNS.
==== Detached eddy simulation ====
Detached eddy simulations (DES) is a modification of a RANS model in which the model switches to a subgrid scale formulation in regions fine enough for LES calculations. Regions near solid boundaries and where the turbulent length scale is less than the maximum grid dimension are assigned the RANS mode of solution. As the turbulent length scale exceeds the grid dimension, the regions are solved using the LES mode. Therefore, the grid resolution for DES is not as demanding as pure LES, thereby considerably cutting down the cost of the computation. Though DES was initially formulated for the Spalart-Allmaras model (Philippe R. Spalart et al., 1997), it can be implemented with other RANS models (Strelets, 2001), by appropriately modifying the length scale which is explicitly or implicitly involved in the RANS model. So while Spalart–Allmaras model based DES acts as LES with a wall model, DES based on other models (like two equation models) behave as a hybrid RANS-LES model. Grid generation is more complicated than for a simple RANS or LES case due to the RANS-LES switch. DES is a non-zonal approach and provides a single smooth velocity field across the RANS and the LES regions of the solutions.
==== Direct numerical simulation ====
Direct numerical simulation (DNS) resolves the entire range of turbulent length scales. This marginalizes the effect of models, but is extremely expensive. The computational cost is proportional to
$Re^{3}$. DNS is intractable for flows with complex geometries or flow configurations.
==== Coherent vortex simulation ====
The coherent vortex simulation approach decomposes the turbulent flow field into a coherent part, consisting of organized vortical motion, and the incoherent part, which is the random background flow. This decomposition is done using wavelet filtering. The approach has much in common with LES, since it uses decomposition and resolves only the filtered portion, but different in that it does not use a linear, low-pass filter. Instead, the filtering operation is based on wavelets, and the filter can be adapted as the flow field evolves. Farge and Schneider tested the CVS method with two flow configurations and showed that the coherent portion of the flow exhibited the
$-\tfrac{40}{39}$ energy spectrum exhibited by the total flow, and corresponded to coherent structures (vortex tubes), while the incoherent parts of the flow composed homogeneous background noise, which exhibited no organized structures. Goldstein and Vasilyev applied the FDV model to large eddy simulation, but did not assume that the wavelet filter eliminated all coherent motions from the subfilter scales. By employing both LES and CVS filtering, they showed that the SFS dissipation was dominated by the SFS flow field's coherent portion.
==== PDF methods ====
Probability density function (PDF) methods for turbulence, first introduced by Lundgren, are based on tracking the one-point PDF of the velocity,
$f_V(\boldsymbol{v};\boldsymbol{x},t)\,d\boldsymbol{v}$, which gives the probability of the velocity at point $\boldsymbol{x}$ being between $\boldsymbol{v}$ and $\boldsymbol{v}+d\boldsymbol{v}$. This approach is analogous to the kinetic theory of gases, in which the macroscopic properties of a gas are described by a large number of particles. PDF methods are unique in that they can be applied in the framework of a number of different turbulence models; the main differences occur in the form of the PDF transport equation. For example, in the context of large eddy simulation, the PDF becomes the filtered PDF. PDF methods can also be used to describe chemical reactions, and are particularly useful for simulating chemically reacting flows because the chemical source term is closed and does not require a model. The PDF is commonly tracked by using Lagrangian particle methods; when combined with large eddy simulation, this leads to a Langevin equation for subfilter particle evolution.
==== Vorticity confinement method ====
The vorticity confinement (VC) method is an Eulerian technique used in the simulation of turbulent wakes. It uses a solitary-wave like approach to produce a stable solution with no numerical spreading. VC can capture the small-scale features to within as few as 2 grid cells. Within these features, a nonlinear difference equation is solved as opposed to the finite difference equation. VC is similar to shock capturing methods, where conservation laws are satisfied, so that the essential integral quantities are accurately computed.
==== Linear eddy model ====
The Linear eddy model is a technique used to simulate the convective mixing that takes place in turbulent flow. Specifically, it provides a mathematical way to describe the interactions of a scalar variable within the vector flow field. It is primarily used in one-dimensional representations of turbulent flow, since it can be applied across a wide range of length scales and Reynolds numbers. This model is generally used as a building block for more complicated flow representations, as it provides high resolution predictions that hold across a large range of flow conditions.
=== Two-phase flow ===
The modeling of two-phase flow is still under development. Different methods have been proposed, including the Volume of fluid method, the level-set method and front tracking. These methods often involve a tradeoff between maintaining a sharp interface and conserving mass. This is crucial since the evaluation of the density, viscosity and surface tension is based on the values averaged over the interface.
=== Solution algorithms ===
Discretization in space produces a system of ordinary differential equations for unsteady problems and algebraic equations for steady problems. Implicit or semi-implicit methods are generally used to integrate the ordinary differential equations, producing a system of (usually) nonlinear algebraic equations. Applying a Newton or Picard iteration produces a system of linear equations which is nonsymmetric in the presence of advection and indefinite in the presence of incompressibility. Such systems, particularly in 3D, are frequently too large for direct solvers, so iterative methods are used, either stationary methods such as successive overrelaxation or Krylov subspace methods. Krylov methods such as GMRES, typically used with preconditioning, operate by minimizing the residual over successive subspaces generated by the preconditioned operator.
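As a sketch of the Krylov-plus-preconditioner pattern described above, the following uses SciPy's GMRES with an incomplete-LU preconditioner on a small nonsymmetric tridiagonal system; the toy upwinded convection–diffusion matrix stands in for a real discretized operator and is an assumption of this example.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Toy nonsymmetric system: 1D convection-diffusion discretized with upwinding.
n, h, peclet = 200, 1.0 / 201, 25.0
main  = (2.0 + peclet * h) * np.ones(n)
lower = (-1.0 - peclet * h) * np.ones(n - 1)
upper = -1.0 * np.ones(n - 1)
A = sp.diags([lower, main, upper], [-1, 0, 1], format="csr")
b = h * h * np.ones(n)

# Incomplete LU factorization wrapped as a preconditioning operator M ~ A^{-1}.
ilu = spla.spilu(A.tocsc(), drop_tol=1e-4)
M = spla.LinearOperator(A.shape, matvec=ilu.solve)

x, info = spla.gmres(A, b, M=M)
print("GMRES converged" if info == 0 else f"GMRES info={info}",
      "| residual:", np.linalg.norm(b - A @ x))
```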
Multigrid has the advantage of asymptotically optimal performance on a number of problems. Traditional solvers and preconditioners are effective at reducing high-frequency components of the residual, but low-frequency components typically require a number of iterations to reduce. By operating on multiple scales, multigrid reduces all components of the residual by similar factors, leading to a mesh-independent number of iterations.
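The smoothing behaviour that motivates multigrid can be demonstrated directly: a few weighted-Jacobi sweeps on a 1D Poisson matrix damp an oscillatory error mode far faster than a smooth one. The matrix, weight ω = 2/3, and mode choices below are standard illustrative values, not taken from the text.

```python
import numpy as np

# 1D Poisson matrix (Dirichlet boundaries) and a weighted Jacobi smoother, omega = 2/3.
n = 63
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
D_inv = np.diag(1.0 / np.diag(A))
omega = 2.0 / 3.0

def smooth(e, sweeps):
    """Apply weighted-Jacobi sweeps to the error of A e = 0 (exact solution is zero)."""
    for _ in range(sweeps):
        e = e - omega * D_inv @ (A @ e)
    return e

x = np.arange(1, n + 1) / (n + 1)
low  = np.sin(np.pi * x)          # smooth (low-frequency) error mode
high = np.sin(40 * np.pi * x)     # oscillatory (high-frequency) error mode

for name, e0 in [("low frequency", low), ("high frequency", high)]:
    e5 = smooth(e0.copy(), 5)
    print(f"{name:14s}: error reduced by factor {np.linalg.norm(e0) / np.linalg.norm(e5):.3g}")
# The high-frequency mode is damped orders of magnitude faster; multigrid hands
# the remaining smooth error to a coarser grid where it looks oscillatory again.
```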
For indefinite systems, preconditioners such as incomplete LU factorization, additive Schwarz, and multigrid perform poorly or fail entirely, so the problem structure must be used for effective preconditioning. Methods commonly used in CFD are the SIMPLE and Uzawa algorithms which exhibit mesh-dependent convergence rates, but recent advances based on block LU factorization combined with multigrid for the resulting definite systems have led to preconditioners that deliver mesh-independent convergence rates.
=== Unsteady aerodynamics ===
CFD made a major breakthrough in the late 1970s with the introduction of LTRAN2, a 2-D code for modeling oscillating airfoils based on transonic small perturbation theory, developed by Ballhaus and associates. It uses a Murman–Cole switch algorithm to model moving shock waves. It was later extended to 3-D by AFWAL/Boeing using a rotated difference scheme, resulting in LTRAN3.
=== Biomedical engineering ===
CFD investigations are used to clarify the characteristics of aortic flow in detail beyond the capabilities of experimental measurements. To analyze these conditions, CAD models of the human vascular system are extracted using modern imaging techniques such as MRI or computed tomography. A 3D model is reconstructed from this data and the fluid flow can be computed. Blood properties such as density and viscosity, and realistic boundary conditions (e.g. systemic pressure), have to be taken into consideration. This makes it possible to analyze and optimize the flow in the cardiovascular system for different applications.
=== CPU versus GPU ===
Traditionally, CFD simulations are performed on CPUs.
In a more recent trend, simulations are also performed on GPUs. These typically contain a larger number of slower processors. For CFD algorithms that feature good parallel performance (i.e. good speed-up from adding more cores), this can greatly reduce simulation times. Fluid-implicit particle and lattice-Boltzmann methods are typical examples of codes that scale well on GPUs.
== See also ==
== References ==
== Notes ==
Anderson, John D. (1995). Computational Fluid Dynamics: The Basics With Applications. Science/Engineering/Math. McGraw-Hill Science. ISBN 978-0-07-001685-9.
Patankar, Suhas (1980). Numerical Heat Transfer and Fluid Flow. Hemisphere Series on Computational Methods in Mechanics and Thermal Science. Taylor & Francis. ISBN 978-0-89116-522-4.
== External links ==
Course: Computational Fluid Dynamics – Suman Chakraborty (Indian Institute of Technology Kharagpur)
Course: Numerical PDE Techniques for Scientists and Engineers, Open access Lectures and Codes for Numerical PDEs, including a modern view of Compressible CFD | Wikipedia/Computational_Fluid_Dynamics |
In computer science, an algorithm is called non-blocking if failure or suspension of any thread cannot cause failure or suspension of another thread; for some operations, these algorithms provide a useful alternative to traditional blocking implementations. A non-blocking algorithm is lock-free if there is guaranteed system-wide progress, and wait-free if there is also guaranteed per-thread progress. "Non-blocking" was used as a synonym for "lock-free" in the literature until the introduction of obstruction-freedom in 2003.
The word "non-blocking" was traditionally used to describe telecommunications networks that could route a connection through a set of relays "without having to re-arrange existing calls" (see Clos network). Also, if the telephone exchange "is not defective, it can always make the connection" (see nonblocking minimal spanning switch).
== Motivation ==
The traditional approach to multi-threaded programming is to use locks to synchronize access to shared resources. Synchronization primitives such as mutexes, semaphores, and critical sections are all mechanisms by which a programmer can ensure that certain sections of code do not execute concurrently, if doing so would corrupt shared memory structures. If one thread attempts to acquire a lock that is already held by another thread, the thread will block until the lock is free.
Blocking a thread can be undesirable for many reasons. An obvious reason is that while the thread is blocked, it cannot accomplish anything: if the blocked thread had been performing a high-priority or real-time task, it would be highly undesirable to halt its progress.
Other problems are less obvious. For example, certain interactions between locks can lead to error conditions such as deadlock, livelock, and priority inversion. Using locks also involves a trade-off between coarse-grained locking, which can significantly reduce opportunities for parallelism, and fine-grained locking, which requires more careful design, increases locking overhead and is more prone to bugs.
Unlike blocking algorithms, non-blocking algorithms do not suffer from these downsides, and in addition are safe for use in interrupt handlers: even though the preempted thread cannot be resumed, progress is still possible without it. In contrast, global data structures protected by mutual exclusion cannot safely be accessed in an interrupt handler, as the preempted thread may be the one holding the lock. While this can be rectified by masking interrupt requests during the critical section, this requires the code in the critical section to have bounded (and preferably short) running time, or excessive interrupt latency may be observed.
A lock-free data structure can be used to improve performance: it increases the amount of time spent in parallel rather than serial execution, improving performance on a multi-core processor, because access to the shared data structure does not need to be serialized to stay coherent.
== Implementation ==
With few exceptions, non-blocking algorithms use atomic read-modify-write primitives that the hardware must provide, the most notable of which is compare and swap (CAS). Critical sections are almost always implemented using standard interfaces over these primitives (in the general case, critical sections will be blocking, even when implemented with these primitives). In the 1990s all non-blocking algorithms had to be written "natively" with the underlying primitives to achieve acceptable performance. However, the emerging field of software transactional memory promises standard abstractions for writing efficient non-blocking code.
Much research has also been done in providing basic data structures such as stacks, queues, sets, and hash tables. These allow programs to easily exchange data between threads asynchronously.
Additionally, some non-blocking data structures are weak enough to be implemented without special atomic primitives. These exceptions include:
a single-reader single-writer ring buffer FIFO, with a size that evenly divides the overflow (wrap-around) of one of the available unsigned integer types, can unconditionally be implemented safely using only a memory barrier (a sketch of this structure appears after this list)
Read-copy-update with a single writer and any number of readers. (The readers are wait-free; the writer is usually lock-free, until it needs to reclaim memory).
Read-copy-update with multiple writers and any number of readers. (The readers are wait-free; multiple writers generally serialize with a lock and are not obstruction-free).
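A minimal sketch of the single-reader, single-writer ring buffer from the first item above: the producer only ever advances `head`, the consumer only ever advances `tail`, and indices are reduced modulo a power-of-two capacity. In CPython the interpreter provides the ordering that a C or C++ version would obtain from explicit memory barriers, so this illustrates the structure of the algorithm rather than a production lock-free queue.

```python
class SpscRingBuffer:
    """Single-producer single-consumer FIFO; capacity must be a power of two."""

    def __init__(self, capacity=16):
        assert capacity & (capacity - 1) == 0, "capacity must be a power of two"
        self._buf = [None] * capacity
        self._mask = capacity - 1
        self._head = 0   # written only by the producer
        self._tail = 0   # written only by the consumer

    def push(self, item):
        """Producer side: returns False if the buffer is full."""
        if self._head - self._tail == len(self._buf):
            return False
        self._buf[self._head & self._mask] = item
        # In C/C++ a release barrier would go here before publishing head.
        self._head += 1
        return True

    def pop(self):
        """Consumer side: returns None if the buffer is empty."""
        if self._tail == self._head:
            return None
        item = self._buf[self._tail & self._mask]
        # In C/C++ an acquire barrier would pair with the producer's release.
        self._tail += 1
        return item


buf = SpscRingBuffer(8)
for i in range(5):
    buf.push(i)
print([buf.pop() for _ in range(5)])   # [0, 1, 2, 3, 4]
```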
Several libraries internally use lock-free techniques, but it is difficult to write lock-free code that is correct. Non-blocking algorithms generally involve a series of read, read-modify-write, and write instructions in a carefully designed order. Optimizing compilers can aggressively re-arrange operations. Even when they don't, many modern CPUs often re-arrange such operations (they have a "weak consistency model"), unless a memory barrier is used to tell the CPU not to reorder. C++11 programmers can use std::atomic in <atomic>, and C11 programmers can use <stdatomic.h>, both of which supply types and functions that tell the compiler not to re-arrange such instructions and to insert the appropriate memory barriers.
== Wait-freedom ==
Wait-freedom is the strongest non-blocking guarantee of progress, combining guaranteed system-wide throughput with starvation-freedom. An algorithm is wait-free if every operation has a bound on the number of steps the algorithm will take before the operation completes.
This property is critical for real-time systems and is always nice to have as long as the performance cost is not too high.
It was shown in the 1980s that all algorithms can be implemented wait-free, and many transformations from serial code, called universal constructions, have been demonstrated. However, the resulting performance does not in general match even naïve blocking designs. Several papers have since improved the performance of universal constructions, but still, their performance is far below blocking designs.
Several papers have investigated the difficulty of creating wait-free algorithms. For example, it has been shown that the widely available atomic conditional primitives, CAS and LL/SC, cannot provide starvation-free implementations of many common data structures without memory costs growing linearly in the number of threads.
However, these lower bounds do not present a real barrier in practice, as spending a cache line or exclusive reservation granule (up to 2 KB on ARM) of store per thread in the shared memory is not considered too costly for practical systems. Typically, the amount of store logically required is a word, but physically CAS operations on the same cache line will collide, and LL/SC operations in the same exclusive reservation granule will collide, so the amount of store physically required is greater.
Wait-free algorithms were rare until 2011, both in research and in practice. However, in 2011 Kogan and Petrank presented a wait-free queue building on the CAS primitive, generally available on common hardware. Their construction expanded the lock-free queue of Michael and Scott, which is an efficient queue often used in practice. A follow-up paper by Kogan and Petrank provided a method for making wait-free algorithms fast and used this method to make the wait-free queue practically as fast as its lock-free counterpart. A subsequent paper by Timnat and Petrank provided an automatic mechanism for generating wait-free data structures from lock-free ones. Thus, wait-free implementations are now available for many data-structures.
Under reasonable assumptions, Alistarh, Censor-Hillel, and Shavit showed that lock-free algorithms are practically wait-free. Thus, in the absence of hard deadlines, wait-free algorithms may not be worth the additional complexity that they introduce.
== Lock-freedom ==
Lock-freedom allows individual threads to starve but guarantees system-wide throughput. An algorithm is lock-free if, when the program threads are run for a sufficiently long time, at least one of the threads makes progress (for some sensible definition of progress). All wait-free algorithms are lock-free.
In particular, if one thread is suspended, then a lock-free algorithm guarantees that the remaining threads can still make progress. Hence, if two threads can contend for the same mutex lock or spinlock, then the algorithm is not lock-free. (If we suspend one thread that holds the lock, then the second thread will block.)
An algorithm is lock-free if, infinitely often, operations by some processors succeed in a finite number of steps. For instance, if N processors are trying to execute an operation, some of the N processes will succeed in finishing the operation in a finite number of steps, while others might fail and retry on failure. The difference between wait-free and lock-free is that with a wait-free algorithm, each process is guaranteed to complete its operation in a finite number of steps, regardless of the other processors.
In general, a lock-free algorithm can run in four phases: completing one's own operation, assisting an obstructing operation, aborting an obstructing operation, and waiting. Completing one's own operation is complicated by the possibility of concurrent assistance and abortion, but is invariably the fastest path to completion.
The decision about when to assist, abort or wait when an obstruction is met is the responsibility of a contention manager. This may be very simple (assist higher priority operations, abort lower priority ones), or may be more optimized to achieve better throughput, or lower the latency of prioritized operations.
Correct concurrent assistance is typically the most complex part of a lock-free algorithm, and often very costly to execute: not only does the assisting thread slow down, but thanks to the mechanics of shared memory, the thread being assisted will be slowed, too, if it is still running.
== Obstruction-freedom ==
Obstruction-freedom is the weakest natural non-blocking progress guarantee. An algorithm is obstruction-free if at any point, a single thread executed in isolation (i.e., with all obstructing threads suspended) for a bounded number of steps will complete its operation. All lock-free algorithms are obstruction-free.
Obstruction-freedom demands only that any partially completed operation can be aborted and the changes made rolled back. Dropping concurrent assistance can often result in much simpler algorithms that are easier to validate. Preventing the system from continually live-locking is the task of a contention manager.
Some obstruction-free algorithms use a pair of "consistency markers" in the data structure. Processes reading the data structure first read one consistency marker, then read the relevant data into an internal buffer, then read the other marker, and then compare the markers. The data is consistent if the two markers are identical. Markers may be non-identical when the read is interrupted by another process updating the data structure. In such a case, the process discards the data in the internal buffer and tries again.
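A sketch of the consistency-marker pattern just described, with a single writer and retrying readers. Memory-ordering details are glossed over (CPython's interpreter lock stands in for the barriers a native implementation would need), so this is an illustration of the control flow only.

```python
import threading

class MarkerProtectedPair:
    """Obstruction-free-style read of a two-field record guarded by consistency markers."""

    def __init__(self):
        self._marker_a = 0
        self._marker_b = 0
        self._x = 0
        self._y = 0

    def write(self, x, y):
        # Writer: bump the first marker, update the data, bump the second marker.
        self._marker_a += 1
        self._x, self._y = x, y
        self._marker_b += 1

    def read(self):
        # Reader: read one marker, copy data into a local buffer, read the other marker.
        while True:
            before = self._marker_b
            x, y = self._x, self._y
            after = self._marker_a
            if before == after:        # markers identical -> consistent snapshot
                return x, y
            # Otherwise a writer interfered; discard the buffer and try again.

record = MarkerProtectedPair()
record.write(1, 1)

writer = threading.Thread(target=lambda: [record.write(i, i) for i in range(10_000)])
writer.start()
pairs = [record.read() for _ in range(10_000)]
writer.join()
print("all snapshots consistent:", all(x == y for x, y in pairs))
```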
== See also ==
Deadlock
Java ConcurrentMap#Lock-free atomicity
Liveness
Lock (computer science)
Mutual exclusion
Priority inversion
Resource starvation
== References ==
== External links ==
An Introduction to Lock-Free Programming
Non-blocking Algorithms | Wikipedia/Non-blocking_algorithm |
In computer science, comparator networks are abstract devices built up of a fixed number of "wires", carrying values, and comparator modules that connect pairs of wires, swapping the values on the wires if they are not in a desired order. Such networks are typically designed to perform sorting on fixed numbers of values, in which case they are called sorting networks.
Sorting networks differ from general comparison sorts in that they are not capable of handling arbitrarily large inputs, and in that their sequence of comparisons is set in advance, regardless of the outcome of previous comparisons. To sort larger numbers of inputs, new sorting networks must be constructed. This independence of comparison sequences is useful for parallel execution and for implementation in hardware. Despite the simplicity of sorting nets, their theory is surprisingly deep and complex. Sorting networks were first studied circa 1954 by Armstrong, Nelson and O'Connor, who subsequently patented the idea.
Sorting networks can be implemented either in hardware or in software. Donald Knuth describes how the comparators for binary integers can be implemented as simple, three-state electronic devices. Batcher, in 1968, suggested using them to construct switching networks for computer hardware, replacing both buses and the faster, but more expensive, crossbar switches. Since the 2000s, sorting nets (especially bitonic mergesort) are used by the GPGPU community for constructing sorting algorithms to run on graphics processing units.
== Introduction ==
A sorting network consists of two types of items: comparators and wires. The wires are thought of as running from left to right, carrying values (one per wire) that traverse the network all at the same time. Each comparator connects two wires. When a pair of values, traveling through a pair of wires, encounter a comparator, the comparator swaps the values if and only if the top wire's value is greater than or equal to the bottom wire's value.
In a formula, if the top wire carries x and the bottom wire carries y, then after hitting a comparator the wires carry {\displaystyle x'=\min(x,y)} and {\displaystyle y'=\max(x,y)}, respectively, so the pair of values is sorted.: 635 A network of wires and comparators that will correctly sort all possible inputs into ascending order is called a sorting network or Kruskal hub. By reflecting the network, it is also possible to sort all inputs into descending order.
The full operation of a simple sorting network is shown below. It is evident why this sorting network will correctly sort the inputs; note that the first four comparators will "sink" the largest value to the bottom and "float" the smallest value to the top. The final comparator sorts out the middle two wires.
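In software, such a fixed network is simply a list of wire pairs applied in order. The sketch below uses the standard five-comparator network for four wires as an assumed stand-in for the figure, which is not reproduced here.

```python
def apply_network(values, comparators):
    """Run a fixed comparator sequence over a list of wire values."""
    v = list(values)
    for i, j in comparators:           # i is the "top" wire, j the "bottom" wire
        if v[i] > v[j]:                # a comparator swaps out-of-order values
            v[i], v[j] = v[j], v[i]
    return v

# Five-comparator network for four wires: the first four comparators sink the
# largest value and float the smallest, the last one sorts the middle two wires.
four_wire_network = [(0, 1), (2, 3), (0, 2), (1, 3), (1, 2)]

print(apply_network([3, 1, 4, 2], four_wire_network))   # [1, 2, 3, 4]
```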
=== Depth and efficiency ===
The efficiency of a sorting network can be measured by its total size, meaning the number of comparators in the network, or by its depth, defined (informally) as the largest number of comparators that any input value can encounter on its way through the network. Noting that sorting networks can perform certain comparisons in parallel (represented in the graphical notation by comparators that lie on the same vertical line), and assuming all comparisons to take unit time, it can be seen that the depth of the network is equal to the number of time steps required to execute it.: 636–637
=== Insertion and Bubble networks ===
We can easily construct a network of any size recursively using the principles of insertion and selection. Assuming we have a sorting network of size n, we can construct a network of size n + 1 by "inserting" an additional number into the already sorted subnet (using the principle underlying insertion sort). We can also accomplish the same thing by first "selecting" the lowest value from the inputs and then sorting the remaining values recursively (using the principle underlying bubble sort).
The structure of these two sorting networks is very similar. A construction of the two different variants, which collapses together comparators that can be performed simultaneously, shows that, in fact, they are identical.
The insertion network (or equivalently, bubble network) has a depth of 2n − 3, where n is the number of values. This is better than the O(n log n) time needed by random-access machines, but it turns out that there are much more efficient sorting networks with a depth of just O(log² n), as described below.
=== Zero-one principle ===
While it is easy to prove the validity of some sorting networks (like the insertion/bubble sorter), it is not always so easy. There are n! permutations of numbers in an n-wire network, and to test all of them would take a significant amount of time, especially when n is large. The number of test cases can be reduced significantly, to 2n, using the so-called zero-one principle. While still exponential, this is smaller than n! for all n ≥ 4, and the difference grows quite quickly with increasing n.
The zero-one principle states that, if a sorting network can correctly sort all 2n sequences of zeros and ones, then it is also valid for arbitrary ordered inputs. This not only drastically cuts down on the number of tests needed to ascertain the validity of a network, it is of great use in creating many constructions of sorting networks as well.
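The principle turns validity checking into a finite enumeration. The sketch below tests a candidate four-wire network (the same illustrative one used earlier) against all 2^4 binary inputs, and shows that removing its last comparator breaks it.

```python
from itertools import product

def apply_network(values, comparators):
    v = list(values)
    for i, j in comparators:
        if v[i] > v[j]:
            v[i], v[j] = v[j], v[i]
    return v

def is_sorting_network(comparators, n):
    """Zero-one principle: checking all 2**n binary inputs suffices."""
    return all(
        apply_network(bits, comparators) == sorted(bits)
        for bits in product((0, 1), repeat=n)
    )

candidate = [(0, 1), (2, 3), (0, 2), (1, 3), (1, 2)]
print(is_sorting_network(candidate, 4))        # True
print(is_sorting_network(candidate[:-1], 4))   # False: the middle wires stay unsorted
```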
The principle can be proven by first observing the following fact about comparators: when a monotonically increasing function f is applied to the inputs, i.e., x and y are replaced by f(x) and f(y), then the comparator produces min(f(x), f(y)) = f(min(x, y)) and max(f(x), f(y)) = f(max(x, y)). By induction on the depth of the network, this result can be extended to a lemma stating that if the network transforms the sequence a1, ..., an into b1, ..., bn, it will transform f(a1), ..., f(an) into f(b1), ..., f(bn). Suppose that some input a1, ..., an contains two items ai < aj, and the network incorrectly swaps these in the output. Then it will also incorrectly sort f(a1), ..., f(an) for the function
{\displaystyle f(x)={\begin{cases}1&{\mbox{if }}x>a_{i}\\0&{\mbox{otherwise.}}\end{cases}}}
This function is monotonic, so we have the zero-one principle as the contrapositive.: 640–641
== Constructing sorting networks ==
Various algorithms exist to construct sorting networks of depth O(log² n) (hence size O(n log² n)) such as Batcher odd–even mergesort, bitonic sort, Shell sort, and the Pairwise sorting network. These networks are often used in practice.
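As a concrete illustration of one such construction, the following generates the comparator list of Batcher's odd–even mergesort for a power-of-two number of wires and checks it with the zero-one principle. This is a standard formulation of the construction written from memory, so treat the index arithmetic as illustrative rather than authoritative.

```python
from itertools import product

def batcher_oddeven_merge_sort(n):
    """Comparator list of Batcher's odd-even mergesort for n wires (n a power of two)."""
    assert n > 0 and n & (n - 1) == 0
    comparators = []
    p = 1
    while p < n:
        k = p
        while k >= 1:
            for j in range(k % p, n - k, 2 * k):
                for i in range(min(k, n - j - k)):
                    if (i + j) // (2 * p) == (i + j + k) // (2 * p):
                        comparators.append((i + j, i + j + k))
            k //= 2
        p *= 2
    return comparators

def sorts_all_binary_inputs(comps, n):
    """Validity check via the zero-one principle."""
    for bits in product((0, 1), repeat=n):
        v = list(bits)
        for a, b in comps:
            if v[a] > v[b]:
                v[a], v[b] = v[b], v[a]
        if v != sorted(bits):
            return False
    return True

net = batcher_oddeven_merge_sort(8)
print(len(net), "comparators")                 # 19 comparators for 8 wires
print(sorts_all_binary_inputs(net, 8))         # True
```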
It is also possible to construct networks of depth O(log n) (hence size O(n log n)) using a construction called the AKS network, after its discoverers Ajtai, Komlós, and Szemerédi. While an important theoretical discovery, the AKS network has very limited practical application because of the large linear constant hidden by the Big-O notation.: 653 This is partly due to the construction's reliance on expander graphs.
A simplified version of the AKS network was described by Paterson in 1990, who noted that "the constants obtained for the depth bound still prevent the construction being of practical value".
A more recent construction called the zig-zag sorting network of size O(n log n) was discovered by Goodrich in 2014. While its size is much smaller than that of AKS networks, its depth O(n log n) makes it unsuitable for a parallel implementation.
=== Optimal sorting networks ===
For small, fixed numbers of inputs n, optimal sorting networks can be constructed, with either minimal depth (for maximally parallel execution) or minimal size (number of comparators). These networks can be used to increase the performance of larger sorting networks resulting from the recursive constructions of, e.g., Batcher, by halting the recursion early and inserting optimal nets as base cases. The following table summarizes the optimality results for small networks for which the optimal depth is known:
For larger networks neither the optimal depth nor the optimal size are currently known. The bounds known so far are provided in the table below:
The first sixteen depth-optimal networks are listed in Knuth's Art of Computer Programming, and have been since the 1973 edition; however, while the optimality of the first eight was established by Floyd and Knuth in the 1960s, this property wasn't proven for the final six until 2014 (the cases nine and ten having been decided in 1991).
For one to twelve inputs, minimal (i.e. size-optimal) sorting networks are known, and for higher values, lower bounds on their sizes S(n) can be derived inductively using a lemma due to Van Voorhis (p. 240): S(n) ≥ S(n − 1) + ⌈log₂ n⌉. The first ten optimal networks have been known since 1969, with the first eight again being known as optimal since the work of Floyd and Knuth, but optimality of the cases n = 9 and n = 10 took until 2014 to be resolved.
The optimality of the smallest known sorting networks for n = 11 and n = 12 was resolved in 2020.
Some work in designing optimal sorting networks has been done using genetic algorithms: D. Knuth mentions that the smallest known sorting network for n = 13 was found by Hugues Juillé in 1995 "by simulating an evolutionary process of genetic breeding" (p. 226), and that the minimum depth sorting networks for n = 9 and n = 11 were found by Loren Schwiebert in 2001 "using genetic methods" (p. 229).
=== Complexity of testing sorting networks ===
Unless P=NP, the problem of testing whether a candidate network is a sorting network is likely to remain difficult for networks of large sizes, due to the problem being co-NP-complete.
== References ==
Angel, O.; Holroyd, A. E.; Romik, D.; Virág, B. (2007). "Random sorting networks". Advances in Mathematics. 215 (2): 839–868. arXiv:math/0609538. doi:10.1016/j.aim.2007.05.019.
== External links ==
List of smallest sorting networks for given number of inputs
Sorting Networks
CHAPTER 28: SORTING NETWORKS
Sorting Networks
Tool for generating and graphing sorting networks
Sorting networks and the END algorithm
Lipton, Richard J.; Regan, Ken (24 April 2014). "Galactic Sorting Networks". Gödel’s Lost Letter and P=NP.
Sorting Networks validity | Wikipedia/Sorting_networks |
The Level-set method (LSM) is a conceptual framework for using level sets as a tool for numerical analysis of surfaces and shapes. LSM can perform numerical computations involving curves and surfaces on a fixed Cartesian grid without having to parameterize these objects. LSM makes it easier to perform computations on shapes with sharp corners and shapes that change topology (such as by splitting in two or developing holes). These characteristics make LSM effective for modeling objects that vary in time, such as an airbag inflating or a drop of oil floating in water.
== Overview ==
The figure on the right illustrates several ideas about LSM. In the upper left corner is a bounded region with a well-behaved boundary. Below it, the red surface is the graph of a level set function {\displaystyle \varphi } determining this shape, and the flat blue region represents the X-Y plane. The boundary of the shape is then the zero-level set of {\displaystyle \varphi }, while the shape itself is the set of points in the plane for which {\displaystyle \varphi } is positive (interior of the shape) or zero (at the boundary).
In the top row, the shape's topology changes as it is split in two. Describing this transformation numerically by parameterizing the boundary of the shape and following its evolution is challenging: an algorithm must detect the moment the shape splits in two and then construct parameterizations for the two newly obtained curves. In the bottom row, however, the plane on which the level set function is sampled is translated upwards, and the shape's change in topology is captured without any special handling. It is therefore less challenging to work with a shape through its level-set function than with the shape directly, where a method would need to consider all the possible deformations the shape might undergo.
Thus, in two dimensions, the level-set method amounts to representing a closed curve {\displaystyle \Gamma } (such as the shape boundary in our example) using an auxiliary function {\displaystyle \varphi }, called the level-set function. The curve {\displaystyle \Gamma } is represented as the zero-level set of {\displaystyle \varphi } by

{\displaystyle \Gamma =\{(x,y)\mid \varphi (x,y)=0\},}

and the level-set method manipulates {\displaystyle \Gamma } implicitly through the function {\displaystyle \varphi }. This function {\displaystyle \varphi } is assumed to take positive values inside the region delimited by the curve {\displaystyle \Gamma } and negative values outside.
== The level-set equation ==
If the curve {\displaystyle \Gamma } moves in the normal direction with a speed {\displaystyle v}, then by chain rule and implicit differentiation, it can be determined that the level-set function {\displaystyle \varphi } satisfies the level-set equation

{\displaystyle {\frac {\partial \varphi }{\partial t}}=v|\nabla \varphi |.}

Here, {\displaystyle |\cdot |} is the Euclidean norm (denoted customarily by single bars in partial differential equations), and {\displaystyle t} is time. This is a partial differential equation, in particular a Hamilton–Jacobi equation, and can be solved numerically, for example, by using finite differences on a Cartesian grid.
However, the numerical solution of the level-set equation may require advanced techniques. Simple finite-difference methods fail quickly. Upwinding methods such as the Godunov method are considered better; however, the level-set method does not guarantee preservation of the volume and shape of the level set in an advection field that maintains shape and size, for example, a uniform or rotational velocity field. Instead, the shape of the level set may become distorted, and the level set may disappear over a few time steps. Therefore, high-order finite-difference schemes, such as high-order essentially non-oscillatory (ENO) schemes, are often required, and even then the feasibility of long-term simulations is questionable. More advanced methods have been developed to overcome this; for example, combinations of the level-set method with the tracking of marker particles advected by the velocity field.
== Example ==
Consider a unit circle in {\displaystyle \mathbb {R} ^{2}}, shrinking in on itself at a constant rate, i.e. each point on the boundary of the circle moves along its inward-pointing normal at some fixed speed. The circle will shrink and eventually collapse down to a point. If an initial distance field is constructed (i.e. a function whose value is the signed Euclidean distance to the boundary, positive in the interior, negative in the exterior) on the initial circle, the normalized gradient of this field will be the circle normal.
If the field has a constant value subtracted from it in time, the zero level (which was the initial boundary) of the new fields will also be circular and will similarly collapse to a point. This is due to this being effectively the temporal integration of the Eikonal equation with a fixed front velocity.
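A minimal sketch of this example on a grid: because the initial field is a signed distance function, |∇φ| = 1 and the level-set equation reduces to subtracting the front speed times time from φ, so the radius of the zero level set can be read off at any instant. The grid resolution, speed, and area-based radius estimate are illustrative choices.

```python
import numpy as np

# Signed distance to the unit circle, positive inside, negative outside,
# sampled on a uniform grid over [-1.5, 1.5]^2.
n = 401
xs = np.linspace(-1.5, 1.5, n)
X, Y = np.meshgrid(xs, xs)
phi0 = 1.0 - np.sqrt(X**2 + Y**2)

speed = 0.2          # inward front speed (the circle shrinks)
for t in (0.0, 1.0, 2.0, 4.0):
    # For a signed distance field |grad phi| = 1, so the level-set equation
    # reduces to phi(x, t) = phi0(x) - speed * t.
    phi = phi0 - speed * t
    inside = phi > 0.0
    # Estimate the radius of the zero level set from the enclosed area.
    area = inside.sum() * (xs[1] - xs[0]) ** 2
    exact = max(1.0 - speed * t, 0.0)
    print(f"t = {t:3.1f}  estimated radius = {np.sqrt(area / np.pi):.3f}  (exact {exact:.3f})")
```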
== Applications ==
In mathematical modeling of combustion, LSM is used to describe the instantaneous flame surface, known as the G equation.
Level-set data structures have been developed to facilitate the use of the level-set method in computer applications.
Computational fluid dynamics
Trajectory planning
Optimization
Image processing
Computational biophysics
Discrete complex dynamics (visualization of the parameter plane and the dynamic plane)
== History ==
The level-set method was developed in 1979 by Alain Dervieux, and subsequently popularized by Stanley Osher and James Sethian. It has since become popular in many disciplines, such as image processing, computer graphics, computational geometry, optimization, computational fluid dynamics, and computational biology.
== See also ==
== References ==
== External links ==
See Ronald Fedkiw's academic web page for many pictures and animations showing how the level-set method can be used to model real-life phenomena.
Multivac is a C++ library for front tracking in 2D with level-set methods.
James Sethian's web page on level-set method.
Stanley Osher's homepage.
The Level Set Method. MIT 16.920J / 2.097J / 6.339J. Numerical Methods for Partial Differential Equations by Per-Olof Persson. March 8, 2005
Lecture 11: The Level Set Method: MIT 18.086. Mathematical Methods for Engineers II by Gilbert Strang | Wikipedia/Level_set_methods |
The bitcoin protocol is the set of rules that govern the functioning of bitcoin. Its key components and principles are: a peer-to-peer decentralized network with no central oversight; the blockchain technology, a public ledger that records all bitcoin transactions; mining and proof of work, the process to create new bitcoins and verify transactions; and cryptographic security.
Users broadcast cryptographically signed messages to the network using bitcoin cryptocurrency wallet software. These messages are proposed transactions, changes to be made in the ledger. Each node has a copy of the ledger's entire transaction history. If a transaction violates the rules of the bitcoin protocol, it is ignored, as transactions only occur when the entire network reaches a consensus that they should take place. This "full network consensus" is achieved when each node on the network verifies the results of a proof-of-work operation called mining. Mining packages groups of transactions into blocks, and produces a hash code that follows the rules of the bitcoin protocol. Creating this hash requires a large amount of energy, but a network node can verify the hash is valid using very little energy. If a miner proposes a block to the network, and its hash is valid, the block and its ledger changes are added to the blockchain, and the network moves on to transactions that have not yet been processed. If there is a dispute, the longest chain is considered correct. A new block is created every 10 minutes, on average.
Changes to the bitcoin protocol require consensus among the network participants. The bitcoin protocol has inspired the creation of numerous other digital currencies and blockchain-based technologies, making it a foundational technology in the field of cryptocurrencies.
== Blockchain ==
Blockchain technology is a decentralized and secure digital ledger that records transactions across a network of computers. It ensures transparency, immutability, and tamper resistance, making data manipulation difficult. Blockchain is the underlying technology for cryptocurrencies like bitcoin and has applications beyond finance, such as supply chain management and smart contracts.
=== Transactions ===
The network requires minimal structure to share transactions. An ad hoc decentralized network of volunteers is sufficient. Messages are broadcast on a best-effort basis, and nodes can leave and rejoin the network at will. Upon reconnection, a node downloads and verifies new blocks from other nodes to complete its local copy of the blockchain.
== Mining ==
Bitcoin uses a proof-of-work system to form a distributed timestamp server as a peer-to-peer network. This work is often called bitcoin mining. During mining, practically all of the computing power of the bitcoin network is used to solve cryptographic tasks, which is the proof of work. Their purpose is to ensure that the generation of valid blocks involves a certain amount of effort, so that subsequent modification of the blockchain, such as in the 51% attack scenario, can be practically ruled out. Because of the difficulty, and the high power requirements and costly hardware deployments involved, miners form "mining pools" to obtain more regular payouts. As a result of the Chinese ban on bitcoin mining in 2021, the United States currently holds the largest share of bitcoin mining pools.
Requiring a proof of work to accept a new block to the blockchain was Satoshi Nakamoto's key innovation. The mining process involves identifying a block that, when hashed twice with SHA-256, yields a number smaller than the given difficulty target. While the average work required increases in inverse proportion to the difficulty target, a hash can always be verified by executing a single round of double SHA-256.
For the bitcoin timestamp network, a valid proof of work is found by incrementing a nonce until a value is found that gives the block's hash the required number of leading zero bits. Once the hashing has produced a valid result, the block cannot be changed without redoing the work. As later blocks are chained after it, the work to change the block would include redoing the work for each subsequent block. If there is a deviation in consensus then a blockchain fork can occur.
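A toy version of this nonce search is shown below: a candidate "header" is double-SHA-256 hashed with an incrementing nonce until the digest has a required number of leading zero bits. Real block headers have a fixed 80-byte binary layout and a vastly higher difficulty; the string header and 16-bit target here are simplifications.

```python
import hashlib

def double_sha256(data: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def leading_zero_bits(digest: bytes) -> int:
    value = int.from_bytes(digest, "big")
    return 256 - value.bit_length() if value else 256

def mine(header: bytes, target_bits: int):
    """Increment the nonce until the double-SHA-256 hash has enough leading zero bits."""
    nonce = 0
    while True:
        digest = double_sha256(header + nonce.to_bytes(8, "little"))
        if leading_zero_bits(digest) >= target_bits:
            return nonce, digest
        nonce += 1

header = b"toy block header: prev hash + merkle root + timestamp"
nonce, digest = mine(header, 16)
print("nonce:", nonce, "hash:", digest.hex())

# Anyone can check the result with a single double-SHA-256 evaluation.
print("valid:", leading_zero_bits(double_sha256(header + nonce.to_bytes(8, "little"))) >= 16)
```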
Majority consensus in bitcoin is represented by the longest chain, which required the greatest amount of effort to produce. If a majority of computing power is controlled by honest nodes, the honest chain will grow fastest and outpace any competing chains. To modify a past block, an attacker would have to redo the proof-of-work of that block and all blocks after it and then surpass the work of the honest nodes. The probability of a slower attacker catching up diminishes exponentially as subsequent blocks are added.
To compensate for increasing hardware speed and varying interest in running nodes over time, the difficulty of finding a valid hash is adjusted roughly every two weeks. If blocks are generated too quickly, the difficulty increases and more hashes are required to make a block and to generate new bitcoins.
=== Difficulty and mining pools ===
Bitcoin mining is a competitive endeavor. An "arms race" has been observed through the various hashing technologies that have been used to mine bitcoins: basic central processing units (CPUs), high-end graphics processing units (GPUs), field-programmable gate arrays (FPGAs) and application-specific integrated circuits (ASICs) all have been used, each reducing the profitability of the less-specialized technology. Bitcoin-specific ASICs are now the primary method of mining bitcoin and have surpassed GPU speed by as much as 300-fold. The difficulty of the mining process is periodically adjusted to the mining power active on the network. As bitcoins have become more difficult to mine, computer hardware manufacturing companies have seen an increase in sales of high-end ASIC products.
Computing power is often bundled together or "pooled" to reduce variance in miner income. Individual mining rigs often have to wait for long periods to confirm a block of transactions and receive payment. In a pool, all participating miners get paid every time a participating server solves a block. This payment depends on the amount of work an individual miner contributed to help find that block, and the payment system used by the pool.
=== Environmental effects ===
=== Mined bitcoins ===
By convention, the first transaction in a block is a special transaction that produces new bitcoins owned by the creator of the block. This is the incentive for nodes to support the network. It provides a way to move new bitcoins into circulation. The reward for mining halves every 210,000 blocks. It started at 50 bitcoin, dropped to 25 in late 2012, to 12.5 in mid-2016, and to 6.25 bitcoin in 2020. The most recent halving, which occurred on 20 April 2024 at 12:09am UTC (with block number 840,000), reduced the block reward to 3.125 bitcoins. The next halving is expected to occur in 2028, when the block reward will fall to 1.5625 bitcoins. This halving process is programmed to continue for a maximum of 64 times before new coin creation ceases.
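The subsidy schedule can be expressed as a short function of block height; the sketch below mimics the integer right-shift of a satoshi-denominated reward that implements the halving, with the 64-halving cutoff mentioned above.

```python
def block_subsidy(height: int) -> float:
    """New bitcoins created by the coinbase transaction of a block at the given height."""
    halvings = height // 210_000
    if halvings >= 64:                  # after 64 halvings, new coin creation ceases
        return 0.0
    subsidy_satoshi = (50 * 100_000_000) >> halvings   # halve by right-shifting satoshis
    return subsidy_satoshi / 100_000_000

for h in (0, 210_000, 420_000, 630_000, 840_000):
    print(h, block_subsidy(h))          # 50.0, 25.0, 12.5, 6.25, 3.125
```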
== Payment verification ==
Each miner can choose which transactions are included in or exempted from a block. A greater number of transactions in a block does not equate to greater computational power required to solve that block.
As noted in Nakamoto's whitepaper, it is possible to verify bitcoin payments without running a full network node (simplified payment verification, SPV). A user only needs a copy of the block headers of the longest chain, which are available by querying network nodes until it is apparent that the longest chain has been obtained, and the Merkle tree branch linking the transaction to its block. Linking the transaction to a place in the chain demonstrates that a network node has accepted it, and blocks added after it further establish the confirmation.
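The Merkle branch check amounts to repeatedly hashing the transaction hash together with the supplied sibling hashes until the root is reproduced. The sketch below does this with double SHA-256 on a tiny locally built tree; it ignores Bitcoin's byte-order conventions and the duplicate-last-node rule for odd-sized levels.

```python
import hashlib

def dsha(data: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def merkle_root_from_branch(tx_hash: bytes, branch):
    """Fold a Merkle branch: branch is a list of (sibling_hash, sibling_is_left) pairs."""
    node = tx_hash
    for sibling, sibling_is_left in branch:
        pair = sibling + node if sibling_is_left else node + sibling
        node = dsha(pair)
    return node

# Tiny four-leaf tree built locally so the proof can be checked end to end.
leaves = [dsha(bytes([i])) for i in range(4)]
level1 = [dsha(leaves[0] + leaves[1]), dsha(leaves[2] + leaves[3])]
root = dsha(level1[0] + level1[1])

# Branch proving that leaves[2] is in the tree: sibling leaf 3 (right), then level1[0] (left).
branch = [(leaves[3], False), (level1[0], True)]
print(merkle_root_from_branch(leaves[2], branch) == root)   # True
```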
== Protocol features ==
=== Security ===
Various potential attacks on the bitcoin network and its use as a payment system, real or theoretical, have been considered. The bitcoin protocol includes several features that protect it against some of those attacks, such as unauthorized spending, double spending, forging bitcoins, and tampering with the blockchain. Other attacks, such as theft of private keys, require due care by users.
==== Unauthorized spending ====
Unauthorized spending is mitigated by bitcoin's implementation of public-private key cryptography. For example, when Alice sends a bitcoin to Bob, Bob becomes the new owner of the bitcoin. Eve, observing the transaction, might want to spend the bitcoin Bob just received, but she cannot sign the transaction without the knowledge of Bob's private key.
==== Double spending ====
A specific problem that an internet payment system must solve is double-spending, whereby a user pays the same coin to two or more different recipients. An example of such a problem would be if Eve sent a bitcoin to Alice and later sent the same bitcoin to Bob. The bitcoin network guards against double-spending by recording all bitcoin transfers in a ledger (the blockchain) that is visible to all users, and ensuring for all transferred bitcoins that they have not been previously spent.: 4
==== Race attack ====
If Eve offers to pay Alice a bitcoin in exchange for goods and signs a corresponding transaction, it is still possible that she also creates a different transaction at the same time sending the same bitcoin to Bob. By the rules, the network accepts only one of the transactions. This is called a race attack, since there is a race between the recipients to accept the transaction first. Alice can reduce the risk of a race attack by stipulating that she will not deliver the goods until Eve's payment to Alice appears in the blockchain.
A variant race attack (which has been called a Finney attack by reference to Hal Finney) requires the participation of a miner. Instead of sending both payment requests (to pay Bob and Alice with the same coins) to the network, Eve issues only Alice's payment request to the network, while the accomplice tries to mine a block that includes the payment to Bob instead of Alice. There is a positive probability that the rogue miner will succeed before the network, in which case the payment to Alice will be rejected. As with the plain race attack, Alice can reduce the risk of a Finney attack by waiting for the payment to be included in the blockchain.
==== History modification ====
Each block that is added to the blockchain, starting with the block containing a given transaction, is called a confirmation of that transaction. Ideally, merchants and services that receive payment in bitcoin should wait for at least a few confirmations to be distributed over the network before assuming that the payment was done. The more confirmations that the merchant waits for, the more difficult it is for an attacker to successfully reverse the transaction—unless the attacker controls more than half the total network power, in which case it is called a 51% attack, or a majority attack.
Although more difficult for attackers of a smaller size, there may be financial incentives that make history modification attacks profitable.
=== Scalability ===
=== Privacy ===
==== Deanonymisation of clients ====
Deanonymisation is a strategy in data mining in which anonymous data is cross-referenced with other sources of data to re-identify the anonymous data source. Along with transaction graph analysis, which may reveal connections between bitcoin addresses (pseudonyms), there is a possible attack which links a user's pseudonym to its IP address. If the peer is using Tor, the attack includes a method to separate the peer from the Tor network, forcing them to use their real IP address for any further transactions. The cost of the attack on the full bitcoin network was estimated to be under €1500 per month, as of 2014.
== See also ==
Lists of network protocols
Web3
== References ==
=== Works cited ===
de Vries, Alex; Gallersdörfer, Ulrich; Klaaßen, Lena; Stoll, Christian (16 March 2022). "Revisiting Bitcoin's carbon footprint". Joule. 6 (3): 498–502. Bibcode:2022Joule...6..498D. doi:10.1016/j.joule.2022.02.005. ISSN 2542-4351. S2CID 247143939. | Wikipedia/Bitcoin_network |
Checkpointing is a technique that provides fault tolerance for computing systems. It involves saving a snapshot of an application's state, so that it can restart from that point in case of failure. This is particularly important for long-running applications that are executed in failure-prone computing systems.
== Checkpointing in distributed systems ==
In the distributed computing environment, checkpointing is a technique that helps tolerate failures that would otherwise force a long-running application to restart from the beginning. The most basic way to implement checkpointing is to stop the application, copy all the required data from the memory to reliable storage (e.g., parallel file system), then continue with execution. In the case of failure, when the application restarts, it does not need to start from scratch. Rather, it will read the latest state ("the checkpoint") from the stable storage and execute from that point. While there is ongoing debate on whether checkpointing is the dominant I/O workload on distributed computing systems, the general consensus is that checkpointing is one of the major I/O workloads.
There are two main approaches for checkpointing in the distributed computing systems: coordinated checkpointing and uncoordinated checkpointing. In the coordinated checkpointing approach, processes must ensure that their checkpoints are consistent. This is usually achieved by some kind of two-phase commit protocol algorithm. In the uncoordinated checkpointing, each process checkpoints its own state independently. It must be stressed that simply forcing processes to checkpoint their state at fixed time intervals is not sufficient to ensure global consistency. The need for establishing a consistent state (i.e., no missing messages or duplicated messages) may force other processes to roll back to their checkpoints, which in turn may cause other processes to roll back to even earlier checkpoints, which in the most extreme case may mean that the only consistent state found is the initial state (the so-called domino effect).
== Implementations for applications ==
=== Save State ===
One of the original and now most common means of application checkpointing was a "save state" feature in interactive applications, in which the user of the application could save the state of all variables and other data and either continue working or exit the application, then restart it and restore the saved state at a later time. This was implemented through a "save" command or menu option in the application. In many cases, it became standard practice to ask users exiting an application with unsaved work whether they wanted to save it before doing so.
This functionality became extremely important for usability in applications in which a particular task could not be completed in one sitting (such as playing a video game expected to take dozens of hours) or in which the work was being done over a long period of time (such as data entry into a document such as rows in a spreadsheet).
The problem with save state is that it requires the operator of a program to request the save. For non-interactive programs, including automated or batch-processed workloads, the ability to checkpoint such applications also had to be automated.
=== Checkpoint/Restart ===
As batch applications began to handle tens to hundreds of thousands of transactions, where each transaction might process one record from one file against several different files, the need for the application to be restartable at some point without the need to rerun the entire job from scratch became imperative. Thus the "checkpoint/restart" capability was born, in which after a number of transactions had been processed, a "snapshot" or "checkpoint" of the state of the application could be taken. If the application failed before the next checkpoint, it could be restarted by giving it the checkpoint information and the last place in the transaction file where a transaction had successfully completed. The application could then restart at that point.
Checkpointing tends to be expensive, so it was generally not done with every record, but at some reasonable compromise between the cost of a checkpoint vs. the value of the computer time needed to reprocess a batch of records. Thus the number of records processed for each checkpoint might range from 25 to 200, depending on cost factors, the relative complexity of the application and the resources needed to successfully restart the application.
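A minimal sketch of this pattern for a batch job is shown below: every N records the program atomically writes its position and running state to a checkpoint file, and on startup it resumes from the last checkpoint if one exists. The file name, interval, and record-processing step are placeholders.

```python
import json
import os

CHECKPOINT = "job.checkpoint.json"
INTERVAL = 100                        # records processed between checkpoints

def load_checkpoint():
    if os.path.exists(CHECKPOINT):
        with open(CHECKPOINT) as f:
            return json.load(f)
    return {"next_record": 0, "running_total": 0}

def save_checkpoint(state):
    tmp = CHECKPOINT + ".tmp"
    with open(tmp, "w") as f:
        json.dump(state, f)
    os.replace(tmp, CHECKPOINT)       # atomic rename so a crash never leaves a partial file

records = list(range(1000))           # placeholder transaction file
state = load_checkpoint()

for i in range(state["next_record"], len(records)):
    state["running_total"] += records[i]            # placeholder "transaction"
    if (i + 1) % INTERVAL == 0:
        state["next_record"] = i + 1
        save_checkpoint(state)

state["next_record"] = len(records)
save_checkpoint(state)
print("done, total =", state["running_total"])
```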
=== Fault Tolerance Interface (FTI) ===
FTI is a library that aims to provide computational scientists with an easy way to perform checkpoint/restart in a scalable fashion. FTI leverages local storage plus multiple replications and erasures techniques to provide several levels of reliability and performance. FTI provides application-level checkpointing that allows users to select which data needs to be protected, in order to improve efficiency and avoid space, time and energy waste. It offers a direct data interface so that users do not need to deal with files and/or directory names. All metadata is managed by FTI in a transparent fashion for the user. If desired, users can dedicate one process per node to overlap fault tolerance workload and scientific computation, so that post-checkpoint tasks are executed asynchronously.
=== Berkeley Lab Checkpoint/Restart (BLCR) ===
The Future Technologies Group at Lawrence Berkeley National Laboratory is developing a hybrid kernel/user implementation of checkpoint/restart called BLCR. Their goal is to provide a robust, production-quality implementation that checkpoints a wide range of applications without requiring changes to application code. BLCR focuses on checkpointing parallel applications that communicate through MPI, and on compatibility with the software suite produced by the SciDAC Scalable Systems Software ISIC. Its work is broken down into four main areas: Checkpoint/Restart for Linux (CR), checkpointable MPI libraries, a resource management interface to Checkpoint/Restart, and development of process management interfaces.
=== DMTCP ===
DMTCP (Distributed MultiThreaded Checkpointing) is a tool for transparently checkpointing the state of an arbitrary group of programs spread across many machines and connected by sockets. It does not modify the user's program or the operating system. Among the applications supported by DMTCP are Open MPI, Python, Perl, and many programming languages and shell scripting languages. With the use of TightVNC, it can also checkpoint and restart X Window applications, as long as they do not use extensions (e.g. no OpenGL or video). Among the Linux features supported by DMTCP are open file descriptors, pipes, sockets, signal handlers, process id and thread id virtualization (ensure old pids and tids continue to work upon restart), ptys, fifos, process group ids, session ids, terminal attributes, and mmap/mprotect (including mmap-based shared memory). DMTCP supports the OFED API for InfiniBand on an experimental basis.
=== Collaborative checkpointing ===
Some recent protocols perform collaborative checkpointing by storing fragments of the checkpoint in nearby nodes. This is helpful because it avoids the cost of storing to a parallel file system (which often becomes a bottleneck for large-scale systems) and it uses storage that is closer. This has found use particularly in large-scale supercomputing clusters. The challenge is to ensure that when the checkpoint is needed when recovering from a failure, the nearby nodes with fragments of the checkpoints are available.
=== Docker ===
Docker and the underlying technology contain a checkpoint and restore mechanism.
=== CRIU ===
CRIU is a user space checkpoint library.
== Implementation for embedded and ASIC devices ==
=== Mementos ===
Mementos is a software system that transforms general-purpose tasks into interruptible programs for platforms with frequent interruptions such as power outages. It was designed for batteryless embedded devices such as RFID tags and smart cards which rely on harvesting energy from ambient background sources. Mementos frequently senses the available energy in the system and decides whether to checkpoint the program due to impending power loss versus continuing computation. If checkpointing, data will be stored in a non-volatile memory. When the energy becomes sufficient for reboot, the data is retrieved from non-volatile memory and the program continues from the stored state. Mementos has been implemented on the MSP430 family of microcontrollers. Mementos is named after Christopher Nolan's Memento.
=== Idetic ===
Idetic is a set of automatic tools that helps application-specific integrated circuit (ASIC) developers automatically embed checkpoints in their designs. It targets high-level synthesis tools and adds the checkpoints at the register-transfer level (Verilog code). It uses a dynamic programming approach to locate low-overhead points in the state machine of the design. Since checkpointing at the hardware level involves sending the data of dependent registers to a non-volatile memory, the optimal points should require a minimal number of registers to be stored. Idetic has been deployed and evaluated on an energy-harvesting RFID tag device.
== See also ==
Process image
Save states, a similar concept provided by video game console emulators
== References ==
== Further reading ==
Yibei Ling, Jie Mi, Xiaola Lin: A Variational Calculus Approach to Optimal Checkpoint Placement. IEEE Trans. Computers 50(7): 699-708 (2001)
R.E. Ahmed, R.C. Frazier, and P.N. Marinos, " Cache-Aided Rollback Error Recovery (CARER) Algorithms for Shared-Memory Multiprocessor Systems", IEEE 20th International Symposium on Fault-Tolerant Computing (FTCS-20), Newcastle upon Tyne, UK, June 26–28, 1990, pp. 82–88.
== External links ==
LibCkpt
FTI
Berkeley Lab Checkpoint/Restart (BLCR)
Distributed MultiThreaded CheckPointing (DMTCP)
OpenVZ
CRIU
Cryopid2 | Wikipedia/Application_checkpointing |
Computer graphics is a sub-field of computer science which studies methods for digitally synthesizing and manipulating visual content. Although the term often refers to the study of three-dimensional computer graphics, it also encompasses two-dimensional graphics and image processing.
== Overview ==
Computer graphics studies manipulation of visual and geometric information using computational techniques. It focuses on the mathematical and computational foundations of image generation and processing rather than purely aesthetic issues. Computer graphics is often differentiated from the field of visualization, although the two fields have many similarities.
Connected studies include:
Applied mathematics
Computational geometry
Computational topology
Computer vision
Image processing
Information visualization
Scientific visualization
Applications of computer graphics include:
Print design
Digital art
Special effects
Video games
Visual effects
== History ==
There are several international conferences and journals where the most significant results in computer graphics are published. Among them are the SIGGRAPH and Eurographics conferences and the Association for Computing Machinery (ACM) Transactions on Graphics journal. The joint Eurographics and ACM SIGGRAPH symposium series features the major venues for the more specialized sub-fields: Symposium on Geometry Processing, Symposium on Rendering, Symposium on Computer Animation, and High Performance Graphics.
As in the rest of computer science, conference publications in computer graphics are generally more significant than journal publications (and subsequently have lower acceptance rates).
== Subfields ==
A broad classification of major subfields in computer graphics might be:
Geometry: ways to represent and process surfaces
Animation: ways to represent and manipulate motion
Rendering: algorithms to reproduce light transport
Imaging: image acquisition or image editing
=== Geometry ===
The subfield of geometry studies the representation of three-dimensional objects in a discrete digital setting. Because the appearance of an object depends largely on its exterior, boundary representations are most commonly used. Two dimensional surfaces are a good representation for most objects, though they may be non-manifold. Since surfaces are not finite, discrete digital approximations are used. Polygonal meshes (and to a lesser extent subdivision surfaces) are by far the most common representation, although point-based representations have become more popular recently (see for instance the Symposium on Point-Based Graphics). These representations are Lagrangian, meaning the spatial locations of the samples are independent. Recently, Eulerian surface descriptions (i.e., where spatial samples are fixed) such as level sets have been developed into a useful representation for deforming surfaces which undergo many topological changes (with fluids being the most notable example).
Geometry subfields include:
Implicit surface modeling – an older subfield which examines the use of algebraic surfaces, constructive solid geometry, etc., for surface representation.
Digital geometry processing – surface reconstruction, simplification, fairing, mesh repair, parameterization, remeshing, mesh generation, surface compression, and surface editing all fall under this heading.
Discrete differential geometry – a nascent field which defines geometric quantities for the discrete surfaces used in computer graphics.
Point-based graphics – a recent field which focuses on points as the fundamental representation of surfaces.
Subdivision surfaces
Out-of-core mesh processing – another recent field which focuses on mesh datasets that do not fit in main memory.
=== Animation ===
The subfield of animation studies descriptions for surfaces (and other phenomena) that move or deform over time. Historically, most work in this field has focused on parametric and data-driven models, but recently physical simulation has become more popular as computers have become more powerful.
Animation subfields include:
Performance capture
Character animation
Physical simulation (e.g. cloth modeling, animation of fluid dynamics, etc.)
=== Rendering ===
Rendering generates images from a model. Rendering may simulate light transport to create realistic images or it may create images that have a particular artistic style in non-photorealistic rendering. The two basic operations in realistic rendering are transport (how much light passes from one place to another) and scattering (how surfaces interact with light).
Rendering subfields include:
Transport describes how illumination in a scene gets from one place to another. Visibility is a major component of light transport.
Scattering: Models of scattering (how light interacts with the surface at a given point) and shading (how material properties vary across the surface) are used to describe the appearance of a surface. In graphics these problems are often studied within the context of rendering since they can substantially affect the design of rendering algorithms. Descriptions of scattering are usually given in terms of a bidirectional scattering distribution function (BSDF). The latter issue addresses how different types of scattering are distributed across the surface (i.e., which scattering function applies where). Descriptions of this kind are typically expressed with a program called a shader. (There is some confusion since the word "shader" is sometimes used for programs that describe local geometric variation.)
Non-photorealistic rendering
Physically based rendering – concerned with generating images according to the laws of geometric optics
Real-time rendering – focuses on rendering for interactive applications, typically using specialized hardware like GPUs
Relighting – recent area concerned with quickly re-rendering scenes
== Notable researchers ==
== Applications for their use ==
Bitmap Design / Image Editing
Adobe Photoshop
Corel Photo-Paint
GIMP
Krita
Vector drawing
Adobe Illustrator
CorelDRAW
Inkscape
Affinity Designer
Sketch
Architecture
VariCAD
FreeCAD
AutoCAD
QCAD
LibreCAD
DataCAD
Corel Designer
Video editing
Adobe Premiere Pro
Sony Vegas
Final Cut
DaVinci Resolve
Cinelerra
VirtualDub
Sculpting, Animation, and 3D Modeling
Blender 3D
Wings 3D
ZBrush
Sculptris
SolidWorks
Rhino3D
SketchUp
3ds Max
Cinema 4D
Maya
Houdini
Digital composition
Nuke
Blackmagic Fusion
Adobe After Effects
Natron
Rendering
V-Ray
RedShift
RenderMan
Octane Render
Mantra
Lumion (Architectural visualization)
Other applications examples
ACIS - geometric core
Autodesk Softimage
POV-Ray
Scribus
Silo
Hexagon
Lightwave
== See also ==
== References ==
== Further reading ==
Foley et al. Computer Graphics: Principles and Practice.
Shirley. Fundamentals of Computer Graphics.
Watt. 3D Computer Graphics.
== External links ==
A Critical History of Computer Graphics and Animation
History of Computer Graphics series of articles
=== Industry ===
Industrial labs doing "blue sky" graphics research include:
Adobe Advanced Technology Labs
MERL
Microsoft Research – Graphics
Nvidia Research
Major film studios notable for graphics research include:
ILM
PDI/Dreamworks Animation
Pixar | Wikipedia/Graphics_processing |
The GeForce 200 series is a series of Tesla-based GeForce graphics processing units developed by Nvidia.
== Architecture ==
The GeForce 200 series introduced Nvidia's second generation of the Tesla microarchitecture, Nvidia's unified shader architecture; it was the first major update since the architecture's introduction with the GeForce 8 series.
The GeForce GTX 280 and GTX 260 are based on the same processor core. During the manufacturing process, GTX chips were binned and separated through defect testing of the core's logic functionality. Those that failed to meet the GTX 280 hardware specification were re-tested and binned as GTX 260 (which is specified with fewer stream processors, fewer ROPs and a narrower memory bus).
In late 2008, Nvidia re-released the GTX 260 with 216 stream processors, up from 192. Effectively, there were two GTX 260 cards in production with non-trivial performance differences.
The GeForce 200 series GPUs (GT200a/b GPU), excluding GeForce GTS 250, GTS 240 GPUs (these are older G92b GPUs), have double precision support for use in GPGPU applications. GT200 GPUs also have improved performance in geometry shading.
As of August 2018, the GT200 is the seventh largest commercial GPU ever constructed, consisting of 1.4 billion transistors covering a 576 mm2 die surface area built on a 65 nm process. It is the fifth largest CMOS-logic chip that has been fabricated at the TSMC foundry. The GeForce 400 series have since superseded the GT200 chips in transistor count, but the original GT200 dies still exceed the GF100 die size. It is larger than even the Kepler-based GK210 GPU used in the Tesla K80, which has 7.1 billion transistors on a 561 mm2 die manufactured in 28 nm. The Ampere GA100 is currently the largest commercial GPU ever fabricated at 826 mm2 with 54.2 billion transistors.
Nvidia officially announced and released the retail version of the previously OEM only GeForce 210 (GT218 GPU) and GeForce GT 220 (GT216 GPU) on October 12, 2009. Nvidia officially announced and released the GeForce GT 240 (GT215 GPU) on November 17, 2009. The new 40nm GPUs feature the new PureVideo HD VP4 decoder hardware in them, as the older GeForce 8 and 9 GPUs only have PureVideo HD VP2 or VP3 (G98). They also support Compute Capability 1.2, whereas older GeForce 8 and 9 GPUs only supported Compute Capability 1.1. All GT21x GPUs also contain an audio processor inside and support eight-channel LPCM output through HDMI.
== Chipset table ==
=== GeForce 200 series ===
All models support Coverage Sample Anti-Aliasing, Angle-Independent Anisotropic Filtering, 240-bit OpenEXR HDR.
1 Unified Shaders : Texture mapping units : Render output units
==== Features ====
Compute Capability: 1.1 (G92 [GTS250] GPU)
Compute Capability: 1.2 (GT215, GT216, GT218 GPUs)
Compute Capability: 1.3 has double precision support for use in GPGPU applications. (GT200a/b GPUs only)
=== GeForce 200M (2xxM) Series ===
The GeForce 200M Series is a graphics processor architecture for notebooks.
1 Unified Shaders : Texture mapping units : Render output units
== Discontinued support ==
Nvidia ceased driver support for GeForce 200 series on April 1, 2016.
Windows XP 32-bit & Media Center Edition: version 340.52 released on July 29, 2014
Windows XP 64-bit: version 340.52 released on July 29, 2014
Windows Vista, 7, 8, 8.1 32-bit: version 342.01 (WHQL) released on December 14, 2016
Windows Vista, 7, 8, 8.1 64-bit: version 342.01 (WHQL) released on December 14, 2016
Windows 10, 32-bit: version 342.01 (WHQL) released on December 14, 2016
Windows 10, 64-bit: version 342.01 (WHQL) released on December 14, 2016
Linux, 64-bit: version 340.108 released on December 23, 2019
== Gallery ==
== See also ==
GeForce 8 series
GeForce 9 series
GeForce 100 series
GeForce 300 series
GeForce 400 series
GeForce 500 series
GeForce 600 series
GeForce 700 series
GeForce 800M series
GeForce 900 series
GeForce 10 series
Nvidia Quadro
Nvidia Tesla
List of Nvidia graphics processing units
== References ==
== External links == | Wikipedia/GeForce_200_series |
In computer science, resource starvation is a problem encountered in concurrent computing where a process is perpetually denied necessary resources to process its work. Starvation may be caused by errors in a scheduling or mutual exclusion algorithm, but can also be caused by resource leaks, and can be intentionally caused via a denial-of-service attack such as a fork bomb.
When starvation is impossible in a concurrent algorithm, the algorithm is called starvation-free, lockout-free, or said to have finite bypass. This property is an instance of liveness, and is one of the two requirements for any mutual exclusion algorithm; the other being correctness. The name "finite bypass" means that any process (concurrent part) of the algorithm is bypassed at most a finite number of times before being allowed access to the shared resource.
== Scheduling ==
Starvation is usually caused by an overly simplistic scheduling algorithm. For example, if a (poorly designed) multi-tasking system always switches between the first two tasks while a third never gets to run, then the third task is being starved of CPU time. The scheduling algorithm, which is part of the kernel, is supposed to allocate resources equitably; that is, the algorithm should allocate resources so that no process perpetually lacks necessary resources.
Many operating system schedulers employ the concept of process priority. A high priority process A will run before a low priority process B. If the high priority process (process A) blocks and never yields, the low priority process (B) will (in some systems) never be scheduled—it will experience starvation. If there is an even higher priority process X, which is dependent on a result from process B, then process X might never finish, even though it is the most important process in the system. This condition is called a priority inversion. Modern scheduling algorithms normally contain code to guarantee that all processes will receive a minimum amount of each important resource (most often CPU time) in order to prevent any process from being subjected to starvation.
In computer networks, especially wireless networks, scheduling algorithms may suffer from scheduling starvation. An example is maximum throughput scheduling.
Starvation is similar to deadlock in that both cause a process to freeze. Two or more processes become deadlocked when each of them is doing nothing while waiting for a resource occupied by another program in the same set. On the other hand, a process is in starvation when it is waiting for a resource that is continuously given to other processes. Starvation-freedom is a stronger guarantee than the absence of deadlock: a mutual exclusion algorithm that must choose to allow one of two processes into a critical section and picks one arbitrarily is deadlock-free, but not starvation-free.
A possible solution to starvation is to use a scheduling algorithm with priority queue that also uses the aging technique. Aging is a technique of gradually increasing the priority of processes that wait in the system for a long time.
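A minimal sketch of the aging idea follows (process names, priority values, and the aging step are illustrative, not taken from any particular operating system): every process that has to wait gains priority on each scheduling tick, so even the initially low-priority task is eventually selected and cannot starve.

def schedule(processes, ticks=12, aging_step=1, requeue_penalty=5):
    # processes: dict mapping name -> current priority (lower value runs first)
    for _ in range(ticks):
        current = min(processes, key=processes.get)    # pick the best effective priority
        print("running", current)
        for name in processes:
            if name != current:
                processes[name] -= aging_step          # age every process that had to wait
        processes[current] += requeue_penalty          # deprioritise the task that just ran

schedule({"A": 0, "B": 0, "C": 10})   # C is eventually scheduled despite its low priority

In this toy run, tasks A and B alternate at first, but C's priority improves each tick until it is chosen, which is exactly the guarantee aging is meant to provide.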
== See also ==
Dining philosophers problem
== References == | Wikipedia/Starvation_(computer_science) |
Methodical culturalism is a philosophical approach developed by Peter Janich and his pupils. Its core statement is that science is not developed from purely theoretical considerations, but as a development of everyday, proto-scientific human behavior—in other words, that science is a stylized form of everyday knowledge-forming practice.
Thus, from the viewpoint of methodical culturalism, science is understood as a continuation of the practical processes of the everyday world and must be analyzed from this aspect systematically and methodically.
Methodical culturalism is a development of the methodical constructivism of the Erlangen School of constructivism.
== See also ==
Action theory
Constructivist epistemology
== External links ==
Peter Janich: Kulturalismus (in German)
Peter Janich: Kultur des Wissens – natürlich begrenzt? (in German)
Rafael Capurro zum Informationsbegriff von Peter Janich (in German)
Dirk Hartmann: Willensfreiheit und die Autonomie der Kulturwissenschaften (pdf-Datei; 176 KB) (in German) | Wikipedia/Methodical_culturalism |
An adaptive system is a set of interacting or interdependent entities, real or abstract, forming an integrated whole that together are able to respond to environmental changes or changes in the interacting parts, in a way analogous to either continuous physiological homeostasis or evolutionary adaptation in biology. Feedback loops represent a key feature of adaptive systems, such as ecosystems and individual organisms; or in the human world, communities, organizations, and families. Adaptive systems can be organized into a hierarchy.
Artificial adaptive systems include robots with control systems that utilize negative feedback to maintain desired states.
== The law of adaptation ==
The law of adaptation may be stated informally as:
Every adaptive system converges to a state in which all kinds of stimulation cease.
Formally, the law can be defined as follows:
Given a system S, we say that a physical event E is a stimulus for the system S if and only if the probability P(S → S′|E) that the system suffers a change or is perturbed (in its elements or in its processes) when the event E occurs is strictly greater than the prior probability that S suffers a change independently of E:

{\displaystyle P(S\rightarrow S'|E)>P(S\rightarrow S')}

Let S be an arbitrary system subject to changes in time t and let E be an arbitrary event that is a stimulus for the system S: we say that S is an adaptive system if and only if, when t tends to infinity (t → ∞), the probability that the system S changes its behavior (S → S′) in a time step t0 given the event E is equal to the probability that the system changes its behavior independently of the occurrence of the event E. In mathematical terms:

{\displaystyle P_{t_{0}}(S\rightarrow S'|E)>P_{t_{0}}(S\rightarrow S')>0}
{\displaystyle \lim _{t\rightarrow \infty }P_{t}(S\rightarrow S'|E)=P_{t}(S\rightarrow S')}

Thus, for each instant t there will exist a temporal interval h such that:

{\displaystyle P_{t+h}(S\rightarrow S'|E)-P_{t+h}(S\rightarrow S')<P_{t}(S\rightarrow S'|E)-P_{t}(S\rightarrow S')}
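As a rough numerical illustration of this definition (a toy model chosen for exposition, not part of the formal statement), the sketch below simulates a system whose extra probability of changing state in response to a stimulus E decays over time, so that the conditional probability of change approaches the unconditional baseline, as the limit above requires.

import random

BASELINE = 0.05                      # assumed P(S -> S') without the stimulus

def p_change_given_stimulus(t):
    # Extra reactivity decays geometrically with time, modelling adaptation.
    return BASELINE + 0.9 * (0.99 ** t)

def estimate(p, trials=100_000):
    return sum(random.random() < p for _ in range(trials)) / trials

for t in (0, 100, 500, 2000):
    print(f"t={t:5d}  P(change|E)≈{estimate(p_change_given_stimulus(t)):.3f}"
          f"  P(change)≈{estimate(BASELINE):.3f}")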
== Benefit of self-adjusting systems ==
In an adaptive system, a parameter changes slowly and has no preferred value. In a self-adjusting system though, the parameter value "depends on the history of the system dynamics". One of the most important qualities of self-adjusting systems is their "adaptation to the edge of chaos", or ability to avoid chaos. Practically speaking, by heading to the edge of chaos without going further, a leader may act spontaneously yet without disaster. A March/April 2009 Complexity article further explains the self-adjusting systems used and the realistic implications. Physicists have shown that adaptation to the edge of chaos occurs in almost all systems with feedback.
== Practopoietic theory ==
According to practopoietic theory, the creation of adaptive behavior involves special, poietic interactions among different levels of system organization. These interactions are described on the basis of cybernetic theory, in particular the good regulator theorem. In practopoietic systems, lower levels of organization determine the properties of higher levels of organization, but not the other way around. This ensures that lower levels of organization (e.g., genes) always possess cybernetically more general knowledge than the higher levels of organization—knowledge at a higher level being a special case of the knowledge at the lower level. At the highest level of organization lies the overt behavior. Cognitive operations lie in the middle parts of that hierarchy, above genes and below behavior. For behavior to be adaptive, at least three adaptive traverses are needed.
== See also ==
== Notes ==
== References ==
Martin H., Jose Antonio; Javier de Lope; Darío Maravall (2009). "Adaptation, Anticipation and Rationality in Natural and Artificial Systems: Computational Paradigms Mimicking Nature". Natural Computing. 8 (4): 757–775. doi:10.1007/s11047-008-9096-6. S2CID 2723451.
== External links == | Wikipedia/Adaptive_systems |
A mental representation (or cognitive representation), in philosophy of mind, cognitive psychology, neuroscience, and cognitive science, is a hypothetical internal cognitive symbol that represents external reality or its abstractions.
Mental representation is the mental imagery of things that are not actually present to the senses. In contemporary philosophy, specifically in fields of metaphysics such as philosophy of mind and ontology, a mental representation is one of the prevailing ways of explaining and describing the nature of ideas and concepts.
Mental representations (or mental imagery) enable representing things that have never been experienced as well as things that do not exist. Our brains and mental imagery allow us to imagine things that have either never happened or are impossible and do not exist. Although visual imagery is more likely to be recalled, mental imagery may involve representations in any of the sensory modalities, such as hearing, smell, or taste. Stephen Kosslyn proposes that images are used to help solve certain types of problems. We are able to visualize the objects in question and mentally represent the images to solve them.
Mental representations also allow people to experience things right in front of them—however, the process of how the brain interprets and stores the representational content is debated.
== Representational theories of mind ==
Representationalism (also known as indirect realism) is the view that representations are the main way we access external reality.
The representational theory of mind attempts to explain the nature of ideas, concepts and other mental content in contemporary philosophy of mind, cognitive science and experimental psychology. In contrast to theories of naïve or direct realism, the representational theory of mind postulates the actual existence of mental representations which act as intermediaries between the observing subject and the objects, processes or other entities observed in the external world. These intermediaries stand for or represent to the mind the objects of that world.
The original or "classical" representational theory probably can be traced back to Thomas Hobbes and was a dominant theme in classical empiricism in general. According to this version of the theory, the mental representations were images (often called "ideas") of the objects or states of affairs represented. For modern adherents, such as Jerry Fodor and Steven Pinker, the representational system consists rather of an internal language of thought (i.e., mentalese). The contents of thoughts are represented in symbolic structures (the formulas of mentalese) which, analogously to natural languages but on a much more abstract level, possess a syntax and semantics very much like those of natural languages. For the Portuguese logician and cognitive scientist Luis M. Augusto, at this abstract, formal level, the syntax of thought is the set of symbol rules (i.e., operations, processes, etc. on and with symbol structures) and the semantics of thought is the set of symbol structures (concepts and propositions). Content (i.e., thought) emerges from the meaningful co-occurrence of both sets of symbols. For instance, "8 x 9" is a meaningful co-occurrence, whereas "CAT x §" is not; "x" is a symbol rule called for by symbol structures such as "8" and "9", but not by "CAT" and "§".
Canadian philosopher P. Thagard noted in his work "Introduction to Cognitive Science" that "most cognitive scientists agree that knowledge in the human mind consists of mental representations" and that "cognitive science asserts: that people have mental procedures that operate by means of mental representations for the implementation of thinking and action".
=== Strong vs weak, restricted vs unrestricted ===
There are two types of representationalism, strong and weak. Strong representationalism attempts to reduce phenomenal character to intentional content. On the other hand, weak representationalism claims only that phenomenal character supervenes on intentional content. Strong representationalism aims to provide a theory about the nature of phenomenal character, and offers a solution to the hard problem of consciousness. In contrast to this, weak representationalism does not aim to provide a theory of consciousness, nor does it offer a solution to the hard problem of consciousness.
Strong representationalism can be further broken down into restricted and unrestricted versions. The restricted version deals only with certain kinds of phenomenal states e.g. visual perception. Most representationalists endorse an unrestricted version of representationalism. According to the unrestricted version, for any state with phenomenal character that state's phenomenal character reduces to its intentional content. Only this unrestricted version of representationalism is able to provide a general theory about the nature of phenomenal character, as well as offer a potential solution to the hard problem of consciousness. The successful reduction of the phenomenal character of a state to its intentional content would provide a solution to the hard problem of consciousness once a physicalist account of intentionality is worked out.
=== Problems for the unrestricted version ===
When arguing against the unrestricted version of representationalism people will often bring up phenomenal mental states that appear to lack intentional content. The unrestricted version seeks to account for all phenomenal states. Thus, for it to be true, all states with phenomenal character must have intentional content to which that character is reduced. Phenomenal states without intentional content therefore serve as a counterexample to the unrestricted version. If the state has no intentional content its phenomenal character will not be reducible to that state's intentional content, for it has none to begin with.
A common example of this kind of state are moods. Moods are states with phenomenal character that are generally thought to not be directed at anything in particular. Moods are thought to lack directedness, unlike emotions, which are typically thought to be directed at particular things. People conclude that because moods are undirected they are also nonintentional i.e. they lack intentionality or aboutness. Because they are not directed at anything they are not about anything. Because they lack intentionality they will lack any intentional content. Lacking intentional content their phenomenal character will not be reducible to intentional content, refuting the representational doctrine.
Though emotions are typically considered as having directedness and intentionality this idea has also been called into question. One might point to emotions a person all of a sudden experiences that do not appear to be directed at or about anything in particular. Emotions elicited by listening to music are another potential example of undirected, nonintentional emotions. Emotions aroused in this way do not seem to necessarily be about anything, including the music that arouses them.
=== Responses ===
In response to this objection, a proponent of representationalism might reject the undirected non-intentionality of moods, and attempt to identify some intentional content they might plausibly be thought to possess. The proponent of representationalism might also reject the narrow conception of intentionality as being directed at a particular thing, arguing instead for a broader kind of intentionality.
There are three alternative kinds of directedness/intentionality one might posit for moods.
Outward directedness: What it is like to be in mood M is to have a certain kind of outwardly focused representational content.
Inward directedness: What it is like to be in mood M is to have a certain kind of inwardly focused representational content.
Hybrid directedness: What it is like to be in mood M is to have both a certain kind of outwardly focused representational content and a certain kind of inwardly focused representational content.
In the case of outward directedness, moods might be directed at either the world as a whole, a changing series of objects in the world, or unbound emotion properties projected by people onto things in the world. In the case of inward directedness, moods are directed at the overall state of a person's body. In the case of hybrid, directedness moods are directed at some combination of inward and outward things.
=== Further objections ===
Even if one can identify some possible intentional content for moods we might still question whether that content is able to sufficiently capture the phenomenal character of the mood states they are a part of. Amy Kind contends that in the case of all the previously mentioned kinds of directedness (outward, inward, and hybrid) the intentional content supplied to the mood state is not capable of sufficiently capturing the phenomenal aspects of the mood states. In the case of inward directedness, the phenomenology of the mood does not seem tied to the state of one's body, and even if one's mood is reflected by the overall state of one's body that person will not necessarily be aware of it, demonstrating the insufficiency of the intentional content to adequately capture the phenomenal aspects of the mood. In the case of outward directedness, the phenomenology of the mood and its intentional content does not seem to share the corresponding relation they should given that the phenomenal character is supposed to reduce to the intentional content. Hybrid directedness, if it can even get off the ground, faces the same objection.
== Philosophers ==
There is a wide debate on what kinds of representations exist. There are several philosophers who address differing aspects of the debate. Such philosophers include Alex Morgan, Gualtiero Piccinini, and Uriah Kriegel.
=== Alex Morgan ===
There are "job description" representations. That is representations that represent something—have intentionality, have a special relation—the represented object does not need to exist, and content plays a causal role in what gets represented:.
Structural representations are also important. These types of representations are basically mental maps that we have in our minds that correspond exactly to those objects in the world (the intentional content). According to Morgan, structural representations are not the same as mental representations—there is nothing mental about them: plants can have structural representations.
There are also internal representations. These types of representations include those that involve future decisions, episodic memories, or any type of projection into the future.
=== Gualtiero Piccinini ===
In Gualtiero Piccinini's forthcoming work, he discusses topics on natural and nonnatural mental representations. He relies on the notion of natural representation given by Grice (1957), where "x means P" entails P: e.g., "those spots mean measles" entails that the patient has measles. Then there are nonnatural representations, where "x means P" does not entail P: e.g., three rings on the bell of a bus mean the bus is full—the rings on the bell are independent of the fullness of the bus—we could have assigned something else (just as arbitrary) to signify that the bus is full.
=== Uriah Kriegel ===
There are also objective and subjective mental representations. Objective representations are closest to tracking theories—where the brain simply tracks what is in the environment. Subjective representations can vary person-to-person. The relationship between these two types of representation can vary.
Objective varies, but the subjective does not: e.g. brain-in-a-vat
Subjective varies, but the objective does not: e.g. color-inverted world
All representations found in objective and none in the subjective: e.g. thermometer
All representations found in subjective and none in the objective: e.g. an agent that experiences in a void.
Eliminativists think that subjective representations do not exist. Reductivists think subjective representations are reducible to objective. Non-reductivists think that subjective representations are real and distinct.
== Decoding mental representation in cognitive psychology ==
In the field of cognitive psychology, mental representations refer to patterns of neural activity that encode abstract concepts or representational “copies” of sensory information from the outside world. For example, our iconic memory can store a brief sensory copy of visual information, lasting a fraction of a second. This allows the brain to process visual details about a brief visual event, like another car driving past on the highway. Other mental representations are more abstract, like goals, conceptual representations, or verbal labels (“car”).
=== fMRI ===
Functional Magnetic Resonance Imaging (fMRI) is a powerful tool in cognitive science for exploring the neural correlates of mental representations. “A powerful feature of event-related fMRI is that the experimenter can choose to combine the data from completed scans in many different ways.” For instance, if participants are instructed to visualize a certain object or scene, fMRI can determine the engaged brain regions (primary visual cortex for visual imagery; hippocampus for episodic memory). By recording patterns of brain activity, functional magnetic resonance imaging (fMRI) can be used to quantify and decode different kinds of mental representations. Certain ideas, perceptions, or mental images may be associated with these patterns, which are a reflection of underlying neurological processes. For example, one study tested whether fMRI could accurately measure the mental representations that are triggered when viewing a simple image. Participants were shown 1,200 images of natural objects and printed letters while brain activity was recorded from multiple regions of visual cortex (V1–V4, lateral occipital complex). Using deep neural networks (DNNs), the authors were then able to “recreate” the original images based only on the brain data. These reconstructed images were remarkably similar to the originals, preserving important elements like texture, shape, and color. A new group of participants was able to correctly identify the original image based on the reconstructed image 95 percent of the time.
Such patterns provide a glimpse into neural encoding of mental states, and act as bridges between neural activity and subjective experience. Advocates for cognitive science consider fMRI research critical to exposing how mental representations are spread and overlapped. These methods have demonstrated that conceptual representations, such as "tools" versus "animals", are not limited to discrete brain regions but rather span networks encompassing associative, motor, and sensory regions. This illustrates how mental models combine semantic and perceptual aspects to provide a more complex and dynamic view of cognition. Furthermore, by showing how experiences gradually alter mental representations, fMRI research has advanced our understanding of brain plasticity. fMRI offers a glimpse into the brain underpinnings of thought and organization by mapping these processes.
=== Multi-Voxel Pattern Analysis ===
Multi-Voxel Pattern Analysis is a data processing method used to analyze multiple sets of patterns simultaneously. This analysis is commonly used in cognitive psychology to examine brain imaging data when paired with fMRI. It essentially allows researchers to analyze whether a particular mental representation is active within a particular brain region. With fMRI activation, the visual perception of the brain can be analyzed and decoded. In certain regions of the brain, such as the retinotopic region, researchers have the ability to predict features of the visual perception, such as lines or patterns, the awareness of the individual, features that weren't originally analyzed, as well as identifying the perceived images of an individual. After thorough research, studies have shown that patterns of imagery and perception are seen more in the ventral temporal cortex than in the retinotopic region of the brain. These results show that without new information entering the brain, it has the ability to reactivate certain patterns of neural activity that have been active before. With this analysis, researchers are able to understand the process in which the brain decodes information and identify ways in which this information is represented.
=== Restricted vs. unrestricted decoding of mental representations ===
When scientists study the brain, they want to understand how our thoughts, feelings, and perceptions are represented in brain activity. One way they do this is through something called neural decoding, where they try to figure out what's going on in the brain by analyzing patterns of brain activity. There are two main ways to approach this: restricted decoding and unrestricted decoding. Here's how they differ:
=== Restricted decoding ===
Restricted decoding is when scientists focus on brain activity tied to a specific task or stimulus. Basically, this relates to activities like recognizing an object, solving a problem, or looking at a picture and scientists track the brain activity related to that task. For example, if a person looks at a picture of a face, certain areas of their brain will light up in a predictable way. Researchers can then study these patterns and "decode" the brain activity to figure out what they are seeing or thinking.
So, restricted decoding is pretty focused. The brain activity is tied to one thing, like a specific object or task. The goal is to figure out how the brain represents specific things (like seeing a face or recognizing a word) when a person is actively engaging with something.
For example, with fMRI scans, researchers can track brain activity while people look at different objects or images. They can use this data to predict what the person is seeing based on the neural patterns, since those patterns are relatively consistent when someone is exposed to the same thing (like a particular image or object).
=== Unrestricted decoding ===
Unrestricted decoding is a bit more relaxed. Instead of focusing on a task, researchers look at brain activity when people are not doing anything specific, for example when they are resting or thinking freely. This approach is more about understanding general mental states or abstract thoughts which are not linked to a specific task or stimulus.
For example, someone might just be asked to relax and think about whatever comes to mind. Researchers would then try to decode the brain patterns to figure out what's going on in their head, whether they're feeling happy, sad, or even daydreaming. Since the brain is in a more free-flowing state, the patterns are a lot less predictable, and scientists often use tools like machine learning to help interpret the data.
In other words, unrestricted decoding is about trying to figure out what is happening in the brain when it is not responding to a clear task, including identifying what kinds of emotions, memories, or random thoughts are popping up.
== See also ==
== References ==
== Further reading ==
Augusto, Luis M. (2013). 'Unconscious Representations 1: Belying the Traditional Model of Human Cognition.' Axiomathes 23.4, 645–663. Preprint
Goldman, Alvin I (2014). 'The Bodily Formats Approach to Embodied Cognition.' Current Controversies in Philosophy of Mind. ed. Uriah Kriegel. New York, NY: Routledge, 91–108.
Henrich, J. & Boyd, R. (2002). Culture and cognition: Why cultural evolution does not require replication of representations. Culture and Cognition, 2, 87–112. Full text
Kind, Amy (2014). 'The Case against Representationalism about Moods.' Current Controversies in Philosophy of Mind. ed. Uriah Kriegel. New York, NY: Routledge, 113–34.
Kriegel, Uriah (2014). 'Two Notions of Mental Representation.' Current Controversies in Philosophy of Mind. ed. Uriah Kriegel. New York, NY: Routledge, 161–79.
Rupert, Robert D.(2014). 'The Sufficiency of Objective Representation.' Current Controversies in Philosophy of Mind. ed. Uriah Kriegel. New York, NY: Routledge, 180–95.
Shapiro, Lawrence (2014). 'When Is Cognition Embodied.' Current Controversies in Philosophy of Mind. ed. Uriah Kriegel. New York, NY: Routledge, 73–90.
== External links ==
Mental Representation – Stanford Encyclopedia of Philosophy | Wikipedia/Representational_theory_of_mind |
In computer science, a data structure is a data organization and storage format that is usually chosen for efficient access to data. More precisely, a data structure is a collection of data values, the relationships among them, and the functions or operations that can be applied to the data, i.e., it is an algebraic structure about data.
== Usage ==
Data structures serve as the basis for abstract data types (ADT). The ADT defines the logical form of the data type. The data structure implements the physical form of the data type.
Different types of data structures are suited to different kinds of applications, and some are highly specialized to specific tasks. For example, relational databases commonly use B-tree indexes for data retrieval, while compiler implementations usually use hash tables to look up identifiers.
Data structures provide a means to manage large amounts of data efficiently for uses such as large databases and internet indexing services. Usually, efficient data structures are key to designing efficient algorithms. Some formal design methods and programming languages emphasize data structures, rather than algorithms, as the key organizing factor in software design. Data structures can be used to organize the storage and retrieval of information stored in both main memory and secondary memory.
== Implementation ==
Data structures can be implemented using a variety of programming languages and techniques, but they all share the common goal of efficiently organizing and storing data. Data structures are generally based on the ability of a computer to fetch and store data at any place in its memory, specified by a pointer—a bit string, representing a memory address, that can be itself stored in memory and manipulated by the program. Thus, the array and record data structures are based on computing the addresses of data items with arithmetic operations, while the linked data structures are based on storing addresses of data items within the structure itself. This approach to data structuring has profound implications for the efficiency and scalability of algorithms. For instance, the contiguous memory allocation in arrays facilitates rapid access and modification operations, leading to optimized performance in sequential data processing scenarios.
The implementation of a data structure usually requires writing a set of procedures that create and manipulate instances of that structure. The efficiency of a data structure cannot be analyzed separately from those operations. This observation motivates the theoretical concept of an abstract data type, a data structure that is defined indirectly by the operations that may be performed on it, and the mathematical properties of those operations (including their space and time cost).
== Examples ==
There are numerous types of data structures, generally built upon simpler primitive data types. Well known examples are:
An array is a number of elements in a specific order, typically all of the same type (depending on the language, individual elements may either all be forced to be the same type, or may be of almost any type). Elements are accessed using an integer index to specify which element is required. Typical implementations allocate contiguous memory words for the elements of arrays (but this is not always a necessity). Arrays may be fixed-length or resizable.
A linked list (also just called list) is a linear collection of data elements of any type, called nodes, where each node has itself a value, and points to the next node in the linked list. The principal advantage of a linked list over an array is that values can always be efficiently inserted and removed without relocating the rest of the list. Certain other operations, such as random access to a certain element, are however slower on lists than on arrays.
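As a brief illustration (a hand-rolled sketch, not a standard library class), a singly linked node stores a value and a reference to its successor, so splicing in a new node after a known one requires no relocation of the rest of the list:

class Node:
    def __init__(self, value, next=None):
        self.value = value
        self.next = next

def insert_after(node, value):
    node.next = Node(value, node.next)   # splice in without moving other nodes

def to_list(head):
    out = []
    while head is not None:
        out.append(head.value)
        head = head.next
    return out

head = Node(1, Node(2, Node(4)))
insert_after(head.next, 3)               # insert 3 between 2 and 4
print(to_list(head))                     # [1, 2, 3, 4]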
A record (also called tuple or struct) is an aggregate data structure. A record is a value that contains other values, typically in fixed number and sequence and typically indexed by names. The elements of records are usually called fields or members. In the context of object-oriented programming, records are known as plain old data structures to distinguish them from objects.
Hash tables, also known as hash maps, are data structures that provide fast retrieval of values based on keys. They use a hashing function to map keys to indexes in an array, allowing for constant-time access in the average case. Hash tables are commonly used in dictionaries, caches, and database indexing. However, hash collisions can occur, which can impact their performance. Techniques like chaining and open addressing are employed to handle collisions.
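The chaining strategy mentioned above can be sketched as follows (a toy map for illustration, not how any production dictionary is implemented): each key hashes to a bucket index, and colliding entries share a short per-bucket list.

class ChainedHashMap:
    def __init__(self, buckets=8):
        self.buckets = [[] for _ in range(buckets)]

    def _bucket(self, key):
        return self.buckets[hash(key) % len(self.buckets)]

    def put(self, key, value):
        bucket = self._bucket(key)
        for i, (k, _) in enumerate(bucket):
            if k == key:
                bucket[i] = (key, value)      # overwrite an existing key
                return
        bucket.append((key, value))           # new key: append to the chain

    def get(self, key, default=None):
        for k, v in self._bucket(key):
            if k == key:
                return v
        return default

m = ChainedHashMap()
m.put("cat", 1)
m.put("dog", 2)
print(m.get("cat"), m.get("bird", "missing"))   # 1 missing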
Graphs are collections of nodes connected by edges, representing relationships between entities. Graphs can be used to model social networks, computer networks, and transportation networks, among other things. They consist of vertices (nodes) and edges (connections between nodes). Graphs can be directed or undirected, and they can have cycles or be acyclic. Graph traversal algorithms include breadth-first search and depth-first search.
Stacks and queues are abstract data types that can be implemented using arrays or linked lists. A stack has two primary operations: push (adds an element to the top of the stack) and pop (removes the topmost element from the stack), that follow the Last In, First Out (LIFO) principle. Queues have two main operations: enqueue (adds an element to the rear of the queue) and dequeue (removes an element from the front of the queue) that follow the First In, First Out (FIFO) principle.
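A short sketch of the LIFO/FIFO contrast (using a Python list and collections.deque purely as illustrations of the two disciplines):

from collections import deque

stack = []
stack.append("a"); stack.append("b"); stack.append("c")   # push
print(stack.pop())        # "c": last in, first out

queue = deque()
queue.append("a"); queue.append("b"); queue.append("c")   # enqueue
print(queue.popleft())    # "a": first in, first out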
Trees represent a hierarchical organization of elements. A tree consists of nodes connected by edges, with one node being the root and all other nodes forming subtrees. Trees are widely used in various algorithms and data storage scenarios. Binary trees (particularly heaps), AVL trees, and B-trees are some popular types of trees. They enable efficient and optimal searching, sorting, and hierarchical representation of data.
A trie, or prefix tree, is a special type of tree used to efficiently retrieve strings. In a trie, each node represents a character of a string, and the edges between nodes represent the characters that connect them. This structure is especially useful for tasks like autocomplete, spell-checking, and creating dictionaries. Tries allow for quick searches and operations based on string prefixes.
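A minimal trie sketch (illustrative class names, not a canonical implementation) showing insertion and prefix lookup: each node maps a character to a child node, and a flag marks where a stored word ends.

class TrieNode:
    def __init__(self):
        self.children = {}
        self.is_word = False

class Trie:
    def __init__(self):
        self.root = TrieNode()

    def insert(self, word):
        node = self.root
        for ch in word:
            node = node.children.setdefault(ch, TrieNode())
        node.is_word = True

    def starts_with(self, prefix):
        node = self.root
        for ch in prefix:
            if ch not in node.children:
                return False
            node = node.children[ch]
        return True

t = Trie()
t.insert("card")
t.insert("care")
print(t.starts_with("car"), t.starts_with("cat"))   # True False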
== Language support ==
Most assembly languages and some low-level languages, such as BCPL (Basic Combined Programming Language), lack built-in support for data structures. On the other hand, many high-level programming languages and some higher-level assembly languages, such as MASM, have special syntax or other built-in support for certain data structures, such as records and arrays. For example, the C (a direct descendant of BCPL) and Pascal languages support structs and records, respectively, in addition to vectors (one-dimensional arrays) and multi-dimensional arrays.
Most programming languages feature some sort of library mechanism that allows data structure implementations to be reused by different programs. Modern languages usually come with standard libraries that implement the most common data structures. Examples are the C++ Standard Template Library, the Java Collections Framework, and the Microsoft .NET Framework.
Modern languages also generally support modular programming, the separation between the interface of a library module and its implementation. Some provide opaque data types that allow clients to hide implementation details. Object-oriented programming languages, such as C++, Java, and Smalltalk, typically use classes for this purpose.
Many known data structures have concurrent versions which allow multiple computing threads to access a single concrete instance of a data structure simultaneously.
== See also ==
== References ==
== Bibliography ==
Peter Brass, Advanced Data Structures, Cambridge University Press, 2008, ISBN 978-0521880374
Donald Knuth, The Art of Computer Programming, vol. 1. Addison-Wesley, 3rd edition, 1997, ISBN 978-0201896831
Dinesh Mehta and Sartaj Sahni, Handbook of Data Structures and Applications, Chapman and Hall/CRC Press, 2004, ISBN 1584884355
Niklaus Wirth, Algorithms and Data Structures, Prentice Hall, 1985, ISBN 978-0130220059
== Further reading ==
Open Data Structures by Pat Morin
G. H. Gonnet and R. Baeza-Yates, Handbook of Algorithms and Data Structures - in Pascal and C, second edition, Addison-Wesley, 1991, ISBN 0-201-41607-7
Ellis Horowitz and Sartaj Sahni, Fundamentals of Data Structures in Pascal, Computer Science Press, 1984, ISBN 0-914894-94-3
== External links ==
Descriptions from the Dictionary of Algorithms and Data Structures
Data structures course
An Examination of Data Structures from .NET perspective
Schaffer, C. Data Structures and Algorithm Analysis | Wikipedia/data_structure |
W. H. Freeman and Company is an imprint of Macmillan Higher Education, a division of Macmillan Publishers. Macmillan publishes monographs and textbooks for the sciences under the imprint.
== History ==
W. H. Freeman and Company was founded in 1946 by William H. Freeman Jr., who had been a salesman and editor at Macmillan Publishing.
Freeman later founded Freeman, Cooper and Company in San Francisco.
== Works ==
Titles published by W. H. Freeman include James Watson's Recombinant DNA (1983), William J. Kaufmann III's The Universe (1985), Jon Rogawski's Calculus (2007), and Peter Atkins’ Physical Chemistry (2014).
== References ==
== External links ==
Official website
Official W. H. Freeman and Company website (archived 5 February 2009) | Wikipedia/Computer_Science_Press |
The Smith–Waterman algorithm performs local sequence alignment; that is, it determines similar regions between two strings of nucleic acid sequences or protein sequences. Instead of looking at the entire sequence, the Smith–Waterman algorithm compares segments of all possible lengths and optimizes the similarity measure.
The algorithm was first proposed by Temple F. Smith and Michael S. Waterman in 1981. Like the Needleman–Wunsch algorithm, of which it is a variation, Smith–Waterman is a dynamic programming algorithm. As such, it has the desirable property that it is guaranteed to find the optimal local alignment with respect to the scoring system being used (which includes the substitution matrix and the gap-scoring scheme). The main difference to the Needleman–Wunsch algorithm is that negative scoring matrix cells are set to zero. The traceback procedure starts at the highest scoring matrix cell and proceeds until a cell with score zero is encountered, yielding the highest scoring local alignment. Because of its quadratic time complexity, it often cannot be practically applied to large-scale problems and has been replaced in favor of computationally more efficient alternatives such as (Gotoh, 1982), (Altschul and Erickson, 1986), and (Myers and Miller, 1988).
== History ==
In 1970, Saul B. Needleman and Christian D. Wunsch proposed a heuristic homology algorithm for sequence alignment, also referred to as the Needleman–Wunsch algorithm. It is a global alignment algorithm that requires O(mn) calculation steps (m and n are the lengths of the two sequences being aligned). It uses the iterative calculation of a matrix for the purpose of showing global alignment. In the following decade, Sankoff, Reichert, Beyer and others formulated alternative heuristic algorithms for analyzing gene sequences. Sellers introduced a system for measuring sequence distances. In 1976, Waterman et al. added the concept of gaps into the original measurement system. In 1981, Smith and Waterman published their Smith–Waterman algorithm for calculating local alignment.
The Smith–Waterman algorithm is fairly demanding of time: to align two sequences of lengths m and n, O(m^2n + n^2m) time is required. Gotoh and Altschul optimized the algorithm to O(mn) steps. The space complexity was optimized by Myers and Miller from O(mn) to O(n) (linear), where n is the length of the shorter sequence, for the case where only one of the many possible optimal alignments is desired. Chowdhury, Le, and Ramachandran later optimized the cache performance of the algorithm while keeping the space usage linear in the total length of the input sequences.
== Motivation ==
In recent years, genome projects conducted on a variety of organisms generated massive amounts of sequence data for genes and proteins, which requires computational analysis. Sequence alignment shows the relations between genes or between proteins, leading to a better understanding of their homology and functionality. Sequence alignment can also reveal conserved domains and motifs.
One motivation for local alignment is the difficulty of obtaining correct alignments in regions of low similarity between distantly related biological sequences, because mutations have added too much 'noise' over evolutionary time to allow for a meaningful comparison of those regions. Local alignment avoids such regions altogether and focuses on those with a positive score, i.e. those with an evolutionarily conserved signal of similarity. A prerequisite for local alignment is a negative expectation score. The expectation score is defined as the average score that the scoring system (substitution matrix and gap penalties) would yield for a random sequence.
Another motivation for using local alignments is that there is a reliable statistical model (developed by Karlin and Altschul) for optimal local alignments. The alignment of unrelated sequences tends to produce optimal local alignment scores which follow an extreme value distribution. This property allows programs to produce an expectation value for the optimal local alignment of two sequences, which is a measure of how often two unrelated sequences would produce an optimal local alignment whose score is greater than or equal to the observed score. Very low expectation values indicate that the two sequences in question might be homologous, meaning they might share a common ancestor.
== Algorithm ==
Let A = a_1a_2...a_n and B = b_1b_2...b_m be the sequences to be aligned, where n and m are the lengths of A and B respectively.
Determine the substitution matrix and the gap penalty scheme.
s(a, b) – similarity score of the elements that constituted the two sequences
W_k – the penalty of a gap that has length k
Construct a scoring matrix H and initialize its first row and first column. The size of the scoring matrix is (n+1)*(m+1). The matrix uses 0-based indexing.
{\displaystyle H_{k0}=H_{0l}=0\quad for\quad 0\leq k\leq n\quad and\quad 0\leq l\leq m}
Fill the scoring matrix using the equation below.
{\displaystyle H_{ij}=\max {\begin{cases}H_{i-1,j-1}+s(a_{i},b_{j}),\\\max _{k\geq 1}\{H_{i-k,j}-W_{k}\},\\\max _{l\geq 1}\{H_{i,j-l}-W_{l}\},\\0\end{cases}}\qquad (1\leq i\leq n,1\leq j\leq m)}
where
H_{i-1,j-1} + s(a_i, b_j) is the score of aligning a_i and b_j,
H_{i-k,j} - W_k is the score if a_i is at the end of a gap of length k,
H_{i,j-l} - W_l is the score if b_j is at the end of a gap of length l,
0 means there is no similarity up to a_i and b_j.
Traceback. Starting at the highest score in the scoring matrix H and ending at a matrix cell that has a score of 0, trace back based on the source of each score recursively to generate the best local alignment.
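To make the recurrence and traceback concrete, here is a compact sketch. The match/mismatch scores and the linear penalty W_k = k * gap are illustrative choices rather than part of the algorithm itself, and with a general gap function the traceback would also have to consider jumps longer than one cell.

def smith_waterman(a, b, match=3, mismatch=-3, gap=2):
    # Fill the scoring matrix H for the recurrence above, then trace back
    # the best local alignment. Uses the linear penalty W_k = k * gap.
    n, m = len(a), len(b)
    H = [[0] * (m + 1) for _ in range(n + 1)]
    best, best_pos = 0, (0, 0)
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            s = match if a[i - 1] == b[j - 1] else mismatch
            H[i][j] = max(
                H[i - 1][j - 1] + s,                                   # align a_i with b_j
                max(H[i - k][j] - k * gap for k in range(1, i + 1)),   # a_i ends a gap (gap in b)
                max(H[i][j - l] - l * gap for l in range(1, j + 1)),   # b_j ends a gap (gap in a)
                0,                                                     # no similarity so far
            )
            if H[i][j] > best:
                best, best_pos = H[i][j], (i, j)
    # Traceback from the highest-scoring cell until a zero cell is reached.
    out_a, out_b = [], []
    i, j = best_pos
    while H[i][j] > 0:
        s = match if a[i - 1] == b[j - 1] else mismatch
        if H[i][j] == H[i - 1][j - 1] + s:
            out_a.append(a[i - 1]); out_b.append(b[j - 1]); i, j = i - 1, j - 1
        elif H[i][j] == H[i - 1][j] - gap:
            out_a.append(a[i - 1]); out_b.append("-"); i -= 1
        else:
            out_a.append("-"); out_b.append(b[j - 1]); j -= 1
    return best, "".join(reversed(out_a)), "".join(reversed(out_b))

print(smith_waterman("TGTTACGG", "GGTTGACTA"))   # example input; aligns the GTT..AC region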
== Explanation ==
Smith–Waterman algorithm aligns two sequences by matches/mismatches (also known as substitutions), insertions, and deletions. Both insertions and deletions are the operations that introduce gaps, which are represented by dashes. The Smith–Waterman algorithm has several steps:
Determine the substitution matrix and the gap penalty scheme. A substitution matrix assigns each pair of bases or amino acids a score for match or mismatch. Usually matches get positive scores, whereas mismatches get relatively lower scores. A gap penalty function determines the score cost for opening or extending gaps. It is suggested that users choose the appropriate scoring system based on the goals. In addition, it is also a good practice to try different combinations of substitution matrices and gap penalties.
Initialize the scoring matrix. The dimensions of the scoring matrix are 1+length of each sequence respectively. All the elements of the first row and the first column are set to 0. The extra first row and first column make it possible to align one sequence to another at any position, and setting them to 0 makes the terminal gap free from penalty.
Scoring. Score each element from left to right, top to bottom in the matrix, considering the outcomes of substitutions (diagonal scores) or adding gaps (horizontal and vertical scores). If none of the scores are positive, this element gets a 0. Otherwise the highest score is used and the source of that score is recorded.
Traceback. Starting at the element with the highest score, traceback based on the source of each score recursively, until 0 is encountered. The segments that have the highest similarity score based on the given scoring system is generated in this process. To obtain the second best local alignment, apply the traceback process starting at the second highest score outside the trace of the best alignment.
=== Comparison with the Needleman–Wunsch algorithm ===
The Smith–Waterman algorithm finds the segments in two sequences that have similarities while the Needleman–Wunsch algorithm aligns two complete sequences. Therefore, they serve different purposes. Both algorithms use the concepts of a substitution matrix, a gap penalty function, a scoring matrix, and a traceback process. Three main differences are:
One of the most important distinctions is that no negative score is assigned in the scoring system of the Smith–Waterman algorithm, which enables local alignment. When any element has a score lower than zero, it means that the sequences up to this position have no similarities; this element will then be set to zero to eliminate influence from previous alignment. In this way, calculation can continue to find alignment in any position afterwards.
The initial scoring matrix of the Smith–Waterman algorithm enables the alignment of any segment of one sequence to an arbitrary position in the other sequence. In the Needleman–Wunsch algorithm, however, end gap penalties also need to be considered in order to align the full sequences.
=== Substitution matrix ===
Each base substitution or amino acid substitution is assigned a score. In general, matches are assigned positive scores, and mismatches are assigned relatively lower scores. Take DNA sequences as an example: if matches score +1 and mismatches score -1, then the substitution matrix is:
This substitution matrix can be described as:
{\displaystyle s(a_{i},b_{j})={\begin{cases}+1,\quad a_{i}=b_{j}\\-1,\quad a_{i}\neq b_{j}\end{cases}}}
Different base substitutions or amino acid substitutions can have different scores. The substitution matrix of amino acids is usually more complicated than that of the bases. See PAM, BLOSUM.
=== Gap penalty ===
Gap penalty designates scores for insertion or deletion. A simple gap penalty strategy is to use a fixed score for each gap. In biology, however, the score needs to be counted differently for practical reasons. On one hand, partial similarity between two sequences is a common phenomenon; on the other hand, a single gene mutation event can result in the insertion of a single long gap. Therefore, connected gaps forming one long gap are usually favored over multiple scattered, short gaps. In order to take this difference into consideration, the concepts of gap opening and gap extension have been added to the scoring system. The gap opening score is usually higher than the gap extension score. For instance, the default parameters in EMBOSS Water are: gap opening = 10, gap extension = 0.5.
Here we discuss two common strategies for gap penalty. See Gap penalty for more strategies.
Let {\displaystyle W_{k}} be the gap penalty function for a gap of length {\displaystyle k}:
==== Linear ====
A linear gap penalty has the same scores for opening and extending a gap:
{\displaystyle W_{k}=kW_{1}}, where {\displaystyle W_{1}} is the cost of a single gap.
The gap penalty is directly proportional to the gap length. When linear gap penalty is used, the Smith–Waterman algorithm can be simplified to:
{\displaystyle H_{ij}=\max {\begin{cases}H_{i-1,j-1}+s(a_{i},b_{j}),\\H_{i-1,j}-W_{1},\\H_{i,j-1}-W_{1},\\0\end{cases}}}
The simplified algorithm uses {\displaystyle O(mn)} steps. When an element is being scored, only the gap penalties from the elements that are directly adjacent to this element need to be considered.
==== Affine ====
An affine gap penalty considers gap opening and extension separately:
{\displaystyle W_{k}=uk+v\quad (u>0,v>0)}, where {\displaystyle v} is the gap opening penalty, and {\displaystyle u} is the gap extension penalty. For example, the penalty for a gap of length 2 is {\displaystyle 2u+v}.
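As a quick illustration (hypothetical values, with u = 1 and v = 5 chosen only for this sketch), the following shows why an affine penalty favors one connected gap over several scattered gaps of the same total length, whereas a linear penalty cannot distinguish them:

def linear_penalty(k, w1=1):
    return k * w1

def affine_penalty(k, u=1, v=5):
    return u * k + v                 # extension cost u per gap position, plus one opening cost v

# One gap of length 4 versus four gaps of length 1 (same total gap length):
print(linear_penalty(4), 4 * linear_penalty(1))   # 4 4  -> linear scoring cannot tell them apart
print(affine_penalty(4), 4 * affine_penalty(1))   # 9 24 -> affine scoring strongly prefers one long gap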
An arbitrary gap penalty was used in the original Smith–Waterman algorithm paper. It uses {\displaystyle O(m^{2}n)} steps and is therefore quite demanding of time. Gotoh optimized the steps for an affine gap penalty to {\displaystyle O(mn)}, but the optimized algorithm only attempts to find one optimal alignment, and the optimal alignment is not guaranteed to be found. Altschul modified Gotoh's algorithm to find all optimal alignments while maintaining the computational complexity. Later, Myers and Miller pointed out that Gotoh's and Altschul's algorithms can be further modified based on the method published by Hirschberg in 1975, and applied this method. Myers and Miller's algorithm can align two sequences using {\displaystyle O(n)} space, with {\displaystyle n} being the length of the shorter sequence. Chowdhury, Le, and Ramachandran later showed how to run Gotoh's algorithm cache-efficiently in linear space using a different recursive divide-and-conquer strategy than the one used by Hirschberg. The resulting algorithm runs faster than Myers and Miller's algorithm in practice due to its superior cache performance.
==== Gap penalty example ====
Take the alignment of sequences TACGGGCCCGCTAC and TAGCCCTATCGGTCA as an example.
When linear gap penalty function is used, the result is (Alignments performed by EMBOSS Water. Substitution matrix is DNAfull (similarity score: +5 for matching characters otherwise -4). Gap opening and extension are 0.0 and 1.0 respectively):
TACGGGCCCGCTA-C
TA---G-CC-CTATC
When affine gap penalty is used, the result is (Gap opening and extension are 5.0 and 1.0 respectively):
TACGGGCCCGCTA
TA---GCC--CTA
This example shows that an affine gap penalty can help avoid scattered small gaps.
=== Scoring matrix ===
The function of the scoring matrix is to conduct one-to-one comparisons between all components in two sequences and record the optimal alignment results. The scoring process reflects the concept of dynamic programming. The final optimal alignment is found by iteratively expanding the growing optimal alignment. In other words, the current optimal alignment is generated by deciding which path (match/mismatch or inserting gap) gives the highest score from the previous optimal alignment. The size of the matrix is the length of one sequence plus 1 by the length of the other sequence plus 1. The additional first row and first column serve the purpose of aligning one sequence to any positions in the other sequence. Both the first line and the first column are set to 0 so that end gap is not penalized. The initial scoring matrix is:
== Example ==
Take the alignment of DNA sequences TGTTACGG and GGTTGACTA as an example. Use the following scheme:
Substitution matrix:
{\displaystyle s(a_{i},b_{j})={\begin{cases}+3,\quad a_{i}=b_{j}\\-3,\quad a_{i}\neq b_{j}\end{cases}}}
Gap penalty:
{\displaystyle W_{k}=2k} (a linear gap penalty of {\displaystyle W_{1}=2})
Initialize and fill the scoring matrix, shown as below. This figure shows the scoring process of the first three elements. The yellow color indicates the bases that are being considered. The red color indicates the highest possible score for the cell being scored.
The finished scoring matrix is shown below on the left. The blue color shows the highest score. An element can receive a score from more than one element; each will form a different path if this element is traced back. In case of multiple highest scores, traceback should be done starting with each highest score. The traceback process is shown below on the right. The best local alignment is generated in the reverse direction.
The alignment result is:
G T T - A C
G T T G A C
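As a sanity check (an illustrative snippet, not part of the original worked example), the score of this alignment under the scheme above (+3 match, -3 mismatch, W1 = 2) can be recomputed directly:

top, bottom = "GTT-AC", "GTTGAC"
score = 0
for x, y in zip(top, bottom):
    if x == '-' or y == '-':
        score -= 2            # linear gap penalty W1 = 2
    elif x == y:
        score += 3            # match
    else:
        score -= 3            # mismatch
print(score)                  # 13, the score of the best local alignment in this example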
== Implementation ==
An implementation of the Smith–Waterman Algorithm, SSEARCH, is available in the FASTA sequence analysis package from UVA FASTA Downloads. This implementation includes Altivec accelerated code for PowerPC G4 and G5 processors that speeds up comparisons 10–20-fold, using a modification of the Wozniak, 1997 approach, and an SSE2 vectorization developed by Farrar making optimal protein sequence database searches quite practical. A library, SSW, extends Farrar's implementation to return alignment information in addition to the optimal Smith–Waterman score.
== Accelerated versions ==
=== FPGA ===
Cray demonstrated acceleration of the Smith–Waterman algorithm using a reconfigurable computing platform based on FPGA chips, with results showing up to 28x speed-up over standard microprocessor-based solutions. Another FPGA-based version of the Smith–Waterman algorithm shows FPGA (Virtex-4) speedups up to 100x over a 2.2 GHz Opteron processor. The TimeLogic DeCypher and CodeQuest systems also accelerate Smith–Waterman and Framesearch using PCIe FPGA cards.
A 2011 Master's thesis includes an analysis of FPGA-based Smith–Waterman acceleration.
A 2016 publication presented a very efficient implementation in which OpenCL code compiled with Xilinx SDAccel accelerates genome sequencing. Using one PCIe FPGA card equipped with a Xilinx Virtex-7 2000T FPGA, the performance per watt was 12-21x better than CPU/GPU implementations.
=== GPU ===
Lawrence Livermore National Laboratory and the United States (US) Department of Energy's Joint Genome Institute implemented an accelerated version of Smith–Waterman local sequence alignment searches using graphics processing units (GPUs) with preliminary results showing a 2x speed-up over software implementations. A similar method has already been implemented in the Biofacet software since 1997, with the same speed-up factor.
Several GPU implementations of the algorithm in NVIDIA's CUDA C platform are also available. When compared to the best known CPU implementation (using SIMD instructions on the x86 architecture), by Farrar, the performance tests of this solution using a single NVidia GeForce 8800 GTX card show a slight increase in performance for smaller sequences, but a slight decrease in performance for larger ones. However, the same tests running on dual NVidia GeForce 8800 GTX cards are almost twice as fast as the Farrar implementation for all sequence sizes tested.
A newer GPU CUDA implementation of SW is now available that is faster than previous versions and also removes limitations on query lengths. See CUDASW++.
Eleven different SW implementations on CUDA have been reported, three of which report speedups of 30X.
Finally, other GPU-accelerated implementations of the Smith–Waterman algorithm can be found in NVIDIA Parabricks, NVIDIA's software suite for genome analysis.
=== SIMD ===
In 2000, a fast implementation of the Smith–Waterman algorithm using the single instruction, multiple data (SIMD) technology available in Intel Pentium MMX processors and similar technology was described in a publication by Rognes and Seeberg. In contrast to the Wozniak (1997) approach, the new implementation was based on vectors parallel with the query sequence, not diagonal vectors. The company Sencel Bioinformatics has applied for a patent covering this approach. Sencel is developing the software further and provides executables for academic use free of charge.
An SSE2 vectorization of the algorithm (Farrar, 2007) is now available, providing an 8-16-fold speedup on Intel/AMD processors with SSE2 extensions. When running on an Intel processor using the Core microarchitecture, the SSE2 implementation achieves a 20-fold increase. Farrar's SSE2 implementation is available as the SSEARCH program in the FASTA sequence comparison package. SSEARCH is included in the European Bioinformatics Institute's suite of similarity searching programs.
Danish bioinformatics company CLC bio has achieved speed-ups of close to 200 over standard software implementations with SSE2 on an Intel 2.17 GHz Core 2 Duo CPU, according to a publicly available white paper.
An accelerated version of the Smith–Waterman algorithm on Intel and Advanced Micro Devices (AMD) based Linux servers is supported by the GenCore 6 package, offered by Biocceleration. Performance benchmarks of this software package show up to 10-fold acceleration relative to a standard software implementation on the same processor.
Currently the only company in bioinformatics to offer both SSE and FPGA solutions accelerating Smith–Waterman, CLC bio has achieved speed-ups of more than 110 over standard software implementations with CLC Bioinformatics Cube.
The fastest implementation of the algorithm on CPUs with SSSE3 can be found in the SWIPE software (Rognes, 2011), which is available under the GNU Affero General Public License. In parallel, this software compares residues from sixteen different database sequences to one query residue. Using a 375-residue query sequence, a speed of 106 billion cell updates per second (GCUPS) was achieved on a dual Intel Xeon X5650 six-core processor system, which is over six times more rapid than software based on Farrar's 'striped' approach. It is faster than BLAST when using the BLOSUM50 matrix.
An implementation of Smith–Waterman named diagonalsw, in C and C++, uses SIMD instruction sets (SSE4.1 for the x86 platform and AltiVec for the PowerPC platform). It is released under an open-source MIT License.
=== Cell Broadband Engine ===
In 2008, Farrar described a port of the Striped Smith–Waterman to the Cell Broadband Engine and reported speeds of 32 and 12 GCUPS on an IBM QS20 blade and a Sony PlayStation 3, respectively.
== Limitations ==
The rapid expansion of genetic data challenges the speed of current DNA sequence alignment algorithms. The need for efficient and accurate methods of DNA variant discovery demands innovative approaches to parallel processing in real time.
== See also ==
Bioinformatics
Sequence alignment
Sequence mining
Needleman–Wunsch algorithm
Levenshtein distance
BLAST
FASTA
== References ==
== External links ==
JAligner — an open source Java implementation of the Smith–Waterman algorithm
B.A.B.A. — an applet (with source) which visually explains the algorithm
FASTA/SSEARCH — services page at the EBI
UGENE Smith–Waterman plugin — an open source SSEARCH compatible implementation of the algorithm with graphical interface written in C++
OPAL — an SIMD C/C++ library for massive optimal sequence alignment
diagonalsw — an open-source C/C++ implementation with SIMD instruction sets (notably SSE4.1) under the MIT license
SSW — an open-source C++ library providing an API to an SIMD implementation of the Smith–Waterman algorithm under the MIT license
melodic sequence alignment — a javascript implementation for melodic sequence alignment
DRAGMAP A C++ port of the Illumina DRAGEN FPGA implementation
In computer science, the Rabin–Karp algorithm or Karp–Rabin algorithm is a string-searching algorithm created by Richard M. Karp and Michael O. Rabin (1987) that uses hashing to find an exact match of a pattern string in a text. It uses a rolling hash to quickly filter out positions of the text that cannot match the pattern, and then checks for a match at the remaining positions. Generalizations of the same idea can be used to find more than one match of a single pattern, or to find matches for more than one pattern.
To find a single match of a single pattern, the expected time of the algorithm is linear in the combined length of the pattern and text,
although its worst-case time complexity is the product of the two lengths. To find multiple matches, the expected time is linear in the input lengths, plus the combined length of all the matches, which could be greater than linear. In contrast, the Aho–Corasick algorithm can find all matches of multiple patterns in worst-case time and space linear in the input length and the number of matches (instead of the total length of the matches).
A practical application of the algorithm is detecting plagiarism. Given source material, the algorithm can rapidly search through a paper for instances of sentences from the source material, ignoring details such as case and punctuation. Because of the abundance of the sought strings, single-string searching algorithms are impractical.
== Overview ==
A naive string matching algorithm compares the given pattern against all positions in the given text. Each comparison takes time proportional to the length of the pattern,
and the number of positions is proportional to the length of the text. Therefore, the worst-case time for such a method is proportional to the product of the two lengths.
In many practical cases, this time can be significantly reduced by cutting short the comparison at each position as soon as a mismatch is found, but this idea cannot guarantee any speedup.
Several string-matching algorithms, including the Knuth–Morris–Pratt algorithm and the Boyer–Moore string-search algorithm, reduce the worst-case time for string matching by extracting more information from each mismatch, allowing them to skip over positions of the text that are guaranteed not to match the pattern. The Rabin–Karp algorithm instead achieves its speedup by using a hash function to quickly perform an approximate check for each position, and then only performing an exact comparison at the positions that pass this approximate check.
A hash function is a function which converts every string into a numeric value, called its hash value; for example, we might have hash("hello")=5. If two strings are equal, their hash values are also equal. For a well-designed hash function, the inverse is true, in an approximate sense: strings that are unequal are very unlikely to have equal hash values. The Rabin–Karp algorithm proceeds by computing, at each position of the text, the hash value of a string starting at that position with the same length as the pattern. If this hash value equals the hash value of the pattern, it performs a full comparison at that position.
In order for this to work well, the hash function should be selected randomly from a family of hash functions that are unlikely to produce many false positives, that is, positions of the text which have the same hash value as the pattern but do not actually match the pattern. These positions contribute to the running time of the algorithm unnecessarily, without producing a match. Additionally, the hash function used should be a rolling hash, a hash function whose value can be quickly updated from each position of the text to the next. Recomputing the hash function from scratch at each position would be too slow.
== The algorithm ==
The algorithm is as shown:
Lines 2, 4, and 6 each require O(m) time. However, line 2 is only executed once, and line 6 is only executed if the hash values match, which is unlikely to happen more than a few times. Line 5 is executed O(n) times, but each comparison only requires constant time, so its impact is O(n). The issue is line 4.
Naively computing the hash value for the substring s[i+1..i+m] requires O(m) time because each character is examined. Since the hash computation is done on each loop, the algorithm with a naive hash computation requires O(mn) time, the same complexity as a straightforward string matching algorithm. For speed, the hash must be computed in constant time. The trick is the variable hs already contains the previous hash value of s[i..i+m-1]. If that value can be used to compute the next hash value in constant time, then computing successive hash values will be fast.
The trick can be exploited using a rolling hash. A rolling hash is a hash function specially designed to enable this operation. A trivial (but not very good) rolling hash function just adds the values of each character in the substring. This rolling hash formula can compute the next hash value from the previous value in constant time:
s[i+1..i+m] = s[i..i+m-1] - s[i] + s[i+m]
This simple function works, but will result in statement 5 being executed more often than other more sophisticated rolling hash functions such as those discussed in the next section.
Good performance requires a good hashing function for the encountered data. If the hashing is poor (such as producing the same hash value for every input), then line 6 would be executed O(n) times (i.e. on every iteration of the loop). Because character-by-character comparison of strings with length m takes O(m) time, the whole algorithm then takes a worst-case O(mn) time.
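The exact listing referred to above is not reproduced here; the following is a minimal Python sketch of the same idea, combining a simple polynomial rolling hash (of the kind described in the next section) with a verification step at positions where the hashes agree. The base and modulus values are illustrative.

def rabin_karp(text, pattern, base=256, mod=1_000_000_007):
    n, m = len(text), len(pattern)
    if m == 0 or m > n:
        return -1
    # Precompute base^(m-1) mod mod, used to remove the leading character when rolling.
    high = pow(base, m - 1, mod)
    hp = ht = 0
    for i in range(m):
        hp = (hp * base + ord(pattern[i])) % mod
        ht = (ht * base + ord(text[i])) % mod
    for i in range(n - m + 1):
        # Only compare character by character when the hash values agree.
        if hp == ht and text[i:i + m] == pattern:
            return i
        if i < n - m:
            # Roll the hash: drop text[i], append text[i+m].
            ht = ((ht - ord(text[i]) * high) * base + ord(text[i + m])) % mod
    return -1

print(rabin_karp("abracadabra", "bra"))  # 1 (index of the first occurrence)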
== Hash function used ==
The key to the Rabin–Karp algorithm's performance is the efficient computation of hash values of the successive substrings of the text. The Rabin fingerprint is a popular and effective rolling hash function. The hash function described here is not a Rabin fingerprint, but it works equally well. It treats every substring as a number in some base, the base being usually the size of the character set.
For example, if the substring is "hi", the base is 256, and prime modulus is 101, then the hash value would be
[(104 × 256 ) % 101 + 105] % 101 = 65
(ASCII of 'h' is 104 and of 'i' is 105)
Technically, this algorithm is only similar to the true number in a non-decimal system representation, since for example we could have the "base" less than one of the "digits". See hash function for a much more detailed discussion. The essential benefit achieved by using a rolling hash such as the Rabin fingerprint is that it is possible to compute the hash value of the next substring from the previous one by doing only a constant number of operations, independent of the substrings' lengths.
For example, if we have text "abracadabra" and we are searching for a pattern of length 3, the hash of the first substring, "abr", using 256 as the base, and 101 as the prime modulus is:
// ASCII a = 97, b = 98, r = 114.
hash("abr") = [ ( [ ( [ (97 × 256) % 101 + 98 ] % 101 ) × 256 ] % 101 ) + 114 ] % 101 = 4
We can then compute the hash of the next substring, "bra", from the hash of "abr" by subtracting the number added for the first 'a' of "abr", i.e. 97 × 256^2, multiplying by the base and adding for the last 'a' of "bra", i.e. 97 × 256^0. Like so:
// old hash (-ve avoider) old 'a' left base offset base shift new 'a' prime modulus
hash("bra") = [ ( 4 + 101 - 97 * [(256%101)*256] % 101 ) * 256 + 97 ] % 101 = 30
If we are matching the search string "bra", using similar calculation of hash("abr"),
hash'("bra") = [ ( [ ( [ ( 98 × 256) %101 + 114] % 101 ) × 256 ] % 101) + 97 ] % 101 = 30
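A short snippet (illustrative only) reproduces these numbers with the same base 256 and modulus 101:

B, Q = 256, 101

def poly_hash(s):
    h = 0
    for ch in s:
        h = (h * B + ord(ch)) % Q
    return h

h_abr = poly_hash("abr")                    # 4
high = pow(B, 2, Q)                         # weight of the leading character, B^(m-1) mod Q
h_bra = ((h_abr + Q - ord('a') * high % Q) * B + ord('a')) % Q
print(h_abr, h_bra, poly_hash("bra"))       # 4 30 30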
If the substrings in question are long, this algorithm achieves great savings compared with many other hashing schemes.
Theoretically, there exist other algorithms that could provide convenient recomputation, e.g. multiplying together ASCII values of all characters so that shifting substring would only entail dividing the previous hash by the first character value, then multiplying by the new last character's value. The limitation, however, is the limited size of the integer data type and the necessity of using modular arithmetic to scale down the hash results. Meanwhile, naive hash functions do not produce large numbers quickly, but, just like adding ASCII values, are likely to cause many hash collisions and hence slow down the algorithm. Hence the described hash function is typically the preferred one in the Rabin–Karp algorithm.
== Multiple pattern search ==
The Rabin–Karp algorithm is inferior to the Knuth–Morris–Pratt algorithm, the Boyer–Moore string-search algorithm, and other faster single-pattern string searching algorithms for single pattern search because of its slow worst-case behavior. However, it is a useful algorithm for multiple pattern search.
To find any of a large number, say k, fixed length patterns in a text, a simple variant of the Rabin–Karp algorithm uses a Bloom filter or a set data structure to check whether the hash of a given string belongs to a set of hash values of patterns we are looking for:
We assume all the substrings have a fixed length m.
A naïve way to search for k patterns is to repeat a single-pattern search taking O(n+m) time, for a total of O((n+m)k) time. In contrast, the above algorithm can find all k patterns in O(n+km) expected time, assuming that a hash table check works in O(1) expected time.
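A minimal sketch of this set-based variant (assuming, as above, that all patterns share the same fixed length m; all names are illustrative):

def rabin_karp_multi(text, patterns, base=256, mod=1_000_000_007):
    # All patterns are assumed to share the same length m.
    patterns = set(patterns)
    m = len(next(iter(patterns)))
    n = len(text)
    if n < m:
        return []
    def h(s):
        v = 0
        for ch in s:
            v = (v * base + ord(ch)) % mod
        return v
    wanted = {h(p) for p in patterns}       # set of pattern hash values
    high = pow(base, m - 1, mod)
    ht = h(text[:m])
    matches = []
    for i in range(n - m + 1):
        # Hash-set membership filters positions; verify to rule out collisions.
        if ht in wanted and text[i:i + m] in patterns:
            matches.append((i, text[i:i + m]))
        if i < n - m:
            ht = ((ht - ord(text[i]) * high) * base + ord(text[i + m])) % mod
    return matches

print(rabin_karp_multi("abracadabra", {"bra", "cad"}))  # [(1, 'bra'), (4, 'cad'), (8, 'bra')]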
== Notes ==
== References ==
=== Sources ===
Candan, K. Selçuk; Sapino, Maria Luisa (2010). Data Management for Multimedia Retrieval. Cambridge University Press. pp. 205–206. ISBN 978-0-521-88739-7. (for the Bloom filter extension)
Cormen, Thomas H.; Leiserson, Charles E.; Rivest, Ronald L.; Stein, Clifford (2001-09-01) [1990]. "The Rabin–Karp algorithm". Introduction to Algorithms (2nd ed.). Cambridge, Massachusetts: MIT Press. pp. 911–916. ISBN 978-0-262-03293-3.
Karp, Richard M.; Rabin, Michael O. (March 1987). "Efficient randomized pattern-matching algorithms". IBM Journal of Research and Development. 31 (2): 249–260. CiteSeerX 10.1.1.86.9502. doi:10.1147/rd.312.0249.
== External links ==
"Rabin–Karp Algorithm/Rolling Hash" (PDF). MIT 6.006: Introduction to Algorithms 2011- Lecture Notes. MIT. | Wikipedia/Rabin–Karp_algorithm |
In computer science, the Raita algorithm is a string searching algorithm which improves the performance of the Boyer–Moore–Horspool algorithm. Like the Boyer–Moore string-search algorithm, it preprocesses the pattern being searched for. The order in which characters of the pattern are compared against the text window, however, differs from the Boyer–Moore–Horspool algorithm. This algorithm was published by Timo Raita in 1991.
== Description ==
The Raita algorithm searches for a pattern "P" in a given text "T" by comparing characters of the pattern against a window of the text. The window of text "T" has the same length as "P". Searching is done as follows.
First, the last character of the pattern is compared with the rightmost character of the window.
If there is a match, the first character of the pattern is compared with the leftmost character of the window.
If they match again, the middle character of the pattern is compared with the middle character of the window.
If everything in the pre-check is successful, then the original comparison starts from the second character up to the last but one. If there is a mismatch at any stage in the algorithm, it performs the bad character shift function which was computed in the pre-processing phase. The bad character shift function is identical to the one proposed in the Boyer–Moore–Horspool algorithm.
A modern formulation of a similar pre-check is found in std::string::find, a linear/quadratic string-matcher, in libc++ and libstdc++. Assuming a well-optimized version of memcmp, not skipping characters in the "original comparison" tends to be more efficient as the pattern is likely to be aligned.
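A minimal Python sketch of the search just described (separate from the C implementation referenced in the next section, which is not reproduced here); the bad-character table and the last-first-middle pre-check follow the description above, while names and structure are illustrative:

def raita_search(text, pattern):
    n, m = len(text), len(pattern)
    if m == 0 or m > n:
        return -1
    # Bad-character shift table, as in Boyer-Moore-Horspool.
    shift = {}
    for k in range(m - 1):
        shift[pattern[k]] = m - 1 - k
    first, middle, last = pattern[0], pattern[m // 2], pattern[m - 1]
    i = 0
    while i <= n - m:
        window_last = text[i + m - 1]
        # Pre-check in the order last, first, middle, then compare the remaining characters.
        if (window_last == last and text[i] == first
                and text[i + m // 2] == middle
                and text[i + 1:i + m - 1] == pattern[1:m - 1]):
            return i
        i += shift.get(window_last, m)
    return -1

print(raita_search("abbaabaabddbabadbb", "abddb"))  # 7 (first occurrence of the pattern)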
== C Code for Raita algorithm ==
== Example ==
Pattern: abddb
Text:abbaabaabddbabadbb
Pre-processing stage:
a b d
4 3 1
Attempt 1:
abbaabaabddbabadbb
....b
Shift by 4 (bmBc[a])
The last character of the pattern is compared to the rightmost character in the window. It is a mismatch, and the pattern is shifted by 4 according to the value from the pre-processing stage.
Attempt 2:
abbaabaabddbabadbb
A.d.B
Shift by 3 (bmBc[b])
Here the last and first characters of the pattern match, but the middle character is a mismatch. So the pattern is shifted according to the pre-processing stage.
Attempt 3:
abbaabaabddbabadbb
ABDDB
Shift by 3 (bmBc[b])
An exact match is found here, but the algorithm continues until it cannot move further.
Attempt 4:
abbaabaABDDBabadbb
....b
Shift by 4 (bmBc[a])
At this stage, the required shift of 4 would move the pattern past the end of the text, so the algorithm terminates. The capitalized letters mark the exact match of the pattern in the text.
== Complexity ==
The pre-processing stage takes O(m) time, where "m" is the length of the pattern "P".
The searching stage has O(mn) time complexity, where "n" is the length of the text "T".
== See also ==
Boyer–Moore string-search algorithm
Boyer–Moore–Horspool algorithm
== References ==
== External links ==
Applet animation and Description for Raita Algorithm
In computer science, the Boyer–Moore–Horspool algorithm or Horspool's algorithm is an algorithm for finding substrings in strings. It was published by Nigel Horspool in 1980 as SBM.
It is a simplification of the Boyer–Moore string-search algorithm which is related to the Knuth–Morris–Pratt algorithm. The algorithm trades space for time in order to obtain an average-case complexity of O(n) on random text, although it has O(nm) in the worst case, where the length of the pattern is m and the length of the search string is n.
== Description ==
Like Boyer–Moore, Boyer–Moore–Horspool preprocesses the pattern to produce a table containing, for each symbol in the alphabet, the number of characters that can safely be skipped. The preprocessing phase, in pseudocode, is as follows (for an alphabet of 256 symbols, i.e., bytes):
Pattern search proceeds as follows. The procedure search reports the index of the first occurrence of needle in haystack.
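The pseudocode listings themselves are not reproduced here; the following is an illustrative Python sketch of both phases, using a 256-entry skip table indexed by byte value and a simple slice comparison in place of an explicit character loop (so it assumes single-byte characters):

def bmh_preprocess(needle):
    m = len(needle)
    # Any symbol not in the needle allows skipping the full needle length.
    table = [m] * 256
    for k in range(m - 1):                    # the final character keeps the default skip
        table[ord(needle[k])] = m - 1 - k
    return table

def bmh_search(haystack, needle):
    n, m = len(haystack), len(needle)
    if m == 0:
        return 0
    table = bmh_preprocess(needle)
    i = 0
    while i <= n - m:
        if haystack[i:i + m] == needle:       # compare the current window against the needle
            return i
        i += table[ord(haystack[i + m - 1])]  # skip based on the last character of the window
    return -1

print(bmh_search("here is a simple example", "example"))  # 17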
== Performance ==
The algorithm performs best with long needle strings, when it consistently hits a non-matching character at or near the final byte of the current position in the haystack and the final byte of the needle does not occur elsewhere within the needle. For instance a 32 byte needle ending in "z" searching through a 255 byte haystack which does not have a 'z' byte in it would take up to 224 byte comparisons.
The best case is the same as for the Boyer–Moore string-search algorithm in big O notation, although the constant overhead of initialization and for each loop is less.
The worst case behavior happens when the bad character skip is consistently low (with the lower limit of 1 byte movement) and a large portion of the needle matches the haystack. The bad character skip is only low, on a partial match, when the final character of the needle also occurs elsewhere within the needle, with 1 byte movement happening when the same byte is in both of the last two positions.
The canonical degenerate case similar to the above "best" case is a needle of an 'a' byte followed by 31 'z' bytes in a haystack consisting of 255 'z' bytes. This will do 31 successful byte comparisons, a 1 byte comparison that fails and then move forward 1 byte. This process will repeat 223 more times (255 − 32), bringing the total byte comparisons to 7,168 (32 × 224). (A different byte-comparison loop will have a different behavior.)
The worst case is significantly higher than for the Boyer–Moore string-search algorithm, although obviously this is hard to achieve in normal use cases. It is also worth noting that this worst case is also the worst case for the naive (but usual) memcmp() algorithm, although the implementation of that tends to be significantly optimized (and is more cache friendly).
=== Tuning the comparison loop ===
The original algorithm had a more sophisticated same() loop. It uses an extra pre-check before proceeding in the positive direction:
function same_orig(str1, str2, len)
    i ← 0
    if str1[len - 1] = str2[len - 1]
        while str1[i] = str2[i]
            if i = len - 2
                return true
            i ← i + 1
    return false
A tuned version of the BMH algorithm is the Raita algorithm. It adds an additional precheck for the middle character, in the order of last-first-middle. The algorithm enters the full loop only when the check passes:
function same_raita(str1, str2, len)
    i ← 0
    mid ← len / 2
    // Three prechecks.
    if len ≥ 3
        if str1[mid] != str2[mid]
            return false
    if len ≥ 1
        if str1[0] != str2[0]
            return false
    if len ≥ 2
        if str1[len - 1] != str2[len - 1]
            return false
    // Any old comparison loop.
    return len < 3 or SAME(&str1[1], &str2[1], len - 2)
It is unclear whether this 1992 tuning still holds its performance advantage on modern machines. The rationale by the authors is that actual text usually contains some patterns which can be effectively prefiltered by these three characters. It appears that Raita is not aware of the old last-character precheck (he believed that the backward-only same routine is the Horspool implementation), so readers are advised to take the results with a grain of salt.
On modern machines, library functions like memcmp tend to provide better throughput than hand-written comparison loops. The behavior of an "SFC" loop (Horspool's terminology) both in libstdc++ and libc++ seems to suggest that a modern Raita implementation should not include any of the one-character shifts, since they have detrimental effects on data alignment. Also see String-searching algorithm, which has detailed analysis of other string searching algorithms.
== References ==
== External links ==
Description of the algorithm
An implementation from the V8 JavaScript engine written in C++
In computer science, the Aho–Corasick algorithm is a string-searching algorithm invented by Alfred V. Aho and Margaret J. Corasick in 1975. It is a kind of dictionary-matching algorithm that locates elements of a finite set of strings (the "dictionary") within an input text. It matches all strings simultaneously. The complexity of the algorithm is linear in the length of the strings plus the length of the searched text plus the number of output matches. Because all matches are found, multiple matches will be returned for one string location if multiple strings from the dictionary match at that location (e.g. dictionary = a, aa, aaa, aaaa and input string is aaaa).
Informally, the algorithm constructs a finite-state machine that resembles a trie with additional links between the various internal nodes. These extra internal links allow fast transitions between failed string matches (e.g. a search for cart in a trie that does not contain cart, but contains art, and thus would fail at the node prefixed by car), to other branches of the trie that share a common suffix (e.g., in the previous case, a branch for attribute might be the best lateral transition). This allows the automaton to transition between string matches without the need for backtracking.
When the string dictionary is known in advance (e.g. a computer virus database), the construction of the automaton can be performed once off-line and the compiled automaton stored for later use. In this case, its run time is linear in the length of the input plus the number of matched entries.
The Aho–Corasick string-matching algorithm formed the basis of the original Unix command fgrep.
== History ==
Like many inventions at Bell Labs at the time, the Aho–Corasick algorithm was created serendipitously with a conversation between the two after a seminar by Aho. Corasick was an information scientist who got her PhD a year earlier at Lehigh University. There, she did her dissertation on securing proprietary data within open systems, through the lens of both the commercial, legal, and government structures and the technical tools that were emerging at the time. In a similar realm, at Bell Labs, she was building a tool for researchers to learn about current work being done under government contractors by searching government-provided tapes of publications.
For this, she wrote a primitive keyword-by-keyword search program to find chosen keywords within the tapes. Such an algorithm scaled poorly with many keywords, and one of the bibliographers using her algorithm hit the $600 usage limit on the Bell Labs machines before their lengthy search even finished.
She ended up attending a seminar on algorithm design by Aho, and afterwards they got to speaking about her work and this problem. Aho suggested improving the efficiency of the program using the approach of the now Aho–Corasick algorithm, and Corasick designed a new program based on those insights. This lowered the running cost of that bibliographer's search from over $600 to just $25, and Aho–Corasick was born.
== Example ==
In this example, we will consider a dictionary consisting of the following words: {a, ab, bab, bc, bca, c, caa}.
The graph below is the Aho–Corasick data structure constructed from the specified dictionary, with each row in the table representing a node in the trie, with the column path indicating the (unique) sequence of characters from the root to the node.
The data structure has one node for every prefix of every string in the dictionary. So if (bca) is in the dictionary, then there will be nodes for (bca), (bc), (b), and (). If a node is in the dictionary then it is a blue node. Otherwise it is a grey node.
There is a black directed "child" arc from each node to a node whose name is found by appending one character. So there is a black arc from (bc) to (bca).
There is a blue directed "suffix" arc from each node to the node that is the longest possible strict suffix of it in the graph. For example, for node (caa), its strict suffixes are (aa) and (a) and (). The longest of these that exists in the graph is (a). So there is a blue arc from (caa) to (a). The blue arcs can be computed in linear time by performing a breadth-first search [potential suffix node will always be at lower level] starting from the root. The target for the blue arc of a visited node can be found by following its parent's blue arc to its longest suffix node and searching for a child of the suffix node whose character matches that of the visited node. If the character does not exist as a child, we can find the next longest suffix (following the blue arc again) and then search for the character. We can do this until we either find the character (as child of a node) or we reach the root (which will always be a suffix of every string).
There is a green "dictionary suffix" arc from each node to the next node in the dictionary that can be reached by following blue arcs. For example, there is a green arc from (bca) to (a) because (a) is the first node in the dictionary (i.e. a blue node) that is reached when following the blue arcs to (ca) and then on to (a). The green arcs can be computed in linear time by repeatedly traversing blue arcs until a blue node is found, and memoizing this information.
At each step, the current node is extended by finding its child, and if that doesn't exist, finding its suffix's child, and if that doesn't work, finding its suffix's suffix's child, and so on, finally ending in the root node if nothing's seen before.
When the algorithm reaches a node, it outputs all the dictionary entries that end at the current character position in the input text. This is done by printing every node reached by following the dictionary suffix links, starting from that node, and continuing until it reaches a node with no dictionary suffix link. In addition, the node itself is printed, if it is a dictionary entry.
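The construction and matching just described can be sketched in Python as follows (dictionary-based trie, breadth-first computation of the suffix and dictionary-suffix links); the node layout and names are illustrative:

from collections import deque

def build_aho_corasick(words):
    # Node: children, suffix ("blue") link, dictionary-suffix ("green") link, stored word.
    nodes = [{"next": {}, "fail": 0, "dict": 0, "word": None}]
    for w in words:
        cur = 0
        for ch in w:
            nxt = nodes[cur]["next"].get(ch)
            if nxt is None:
                nxt = len(nodes)
                nodes.append({"next": {}, "fail": 0, "dict": 0, "word": None})
                nodes[cur]["next"][ch] = nxt
            cur = nxt
        nodes[cur]["word"] = w
    # Breadth-first search assigns suffix links level by level (root's children keep fail = root).
    queue = deque(nodes[0]["next"].values())
    while queue:
        u = queue.popleft()
        for ch, v in nodes[u]["next"].items():
            # Follow suffix links of u until a node with an outgoing ch edge (or the root) is found.
            f = nodes[u]["fail"]
            while f and ch not in nodes[f]["next"]:
                f = nodes[f]["fail"]
            nodes[v]["fail"] = nodes[f]["next"].get(ch, 0)
            fv = nodes[v]["fail"]
            nodes[v]["dict"] = fv if nodes[fv]["word"] else nodes[fv]["dict"]
            queue.append(v)
    return nodes

def find_matches(nodes, text):
    matches, cur = [], 0
    for i, ch in enumerate(text):
        while cur and ch not in nodes[cur]["next"]:
            cur = nodes[cur]["fail"]
        cur = nodes[cur]["next"].get(ch, 0)
        # Report the node itself (if it is a dictionary entry) and every dictionary suffix reachable from it.
        u = cur if nodes[cur]["word"] else nodes[cur]["dict"]
        while u:
            matches.append((i - len(nodes[u]["word"]) + 1, nodes[u]["word"]))
            u = nodes[u]["dict"]
    return matches

nodes = build_aho_corasick(["a", "ab", "bab", "bc", "bca", "c", "caa"])
print(find_matches(nodes, "abccab"))  # [(0, 'a'), (0, 'ab'), (1, 'bc'), (2, 'c'), (3, 'c'), (4, 'a'), (4, 'ab')]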
Execution on input string abccab yields the following steps:
== Dynamic search list ==
The original Aho–Corasick algorithm assumes that the set of search strings is fixed. It does not directly apply to applications in which new search strings are added during application of the algorithm. An example is an interactive indexing program, in which the user goes through the text and highlights new words or phrases to index as they see them. Bertrand Meyer introduced an incremental version of the algorithm in which the search string set can be incrementally extended during the search, retaining the algorithmic complexity of the original.
== See also ==
Commentz-Walter algorithm
== References ==
== External links ==
Aho–Corasick in NIST's Dictionary of Algorithms and Data Structures (2019-07-15)
Aho-Corasick Algorithm Visualizer
The bitap algorithm (also known as the shift-or, shift-and or Baeza-Yates–Gonnet algorithm) is an approximate string matching algorithm. The algorithm tells whether a given text contains a substring which is "approximately equal" to a given pattern, where approximate equality is defined in terms of Levenshtein distance – if the substring and pattern are within a given distance k of each other, then the algorithm considers them equal. The algorithm begins by precomputing a set of bitmasks containing one bit for each element of the pattern. Then it is able to do most of the work with bitwise operations, which are extremely fast.
The bitap algorithm is perhaps best known as one of the underlying algorithms of the Unix utility agrep, written by Udi Manber, Sun Wu, and Burra Gopal. Manber and Wu's original paper gives extensions of the algorithm to deal with fuzzy matching of general regular expressions.
Due to the data structures required by the algorithm, it performs best on patterns less than a constant length (typically the word length of the machine in question), and also prefers inputs over a small alphabet. Once it has been implemented for a given alphabet and word length m, however, its running time is completely predictable – it runs in O(mn) operations, no matter the structure of the text or the pattern.
The bitap algorithm for exact string searching was invented by Bálint Dömölki in 1964[1][2] and extended by R. K. Shyamasundar in 1977[3], before being reinvented by Ricardo Baeza-Yates and Gaston Gonnet[4] in 1989 (one chapter of the first author's PhD thesis[5]), which also extended it to handle classes of characters, wildcards, and mismatches. In 1991, it was extended by Manber and Wu[6][7] to also handle insertions and deletions (full fuzzy string searching). This algorithm was later improved by Baeza-Yates and Navarro in 1996.[8]
== Exact searching ==
The bitap algorithm for exact string searching, in full generality, looks like this in pseudocode:
algorithm bitap_search is
    input: text as a string.
           pattern as a string.
    output: string

    m := length(pattern)
    if m = 0 then
        return text

    /* Initialize the bit array R. */
    R := new array[m+1] of bit, initially all 0
    R[0] := 1

    for i := 0; i < length(text); i += 1 do
        /* Update the bit array. */
        for k := m; k ≥ 1; k -= 1 do
            R[k] := R[k - 1] & (text[i] = pattern[k - 1])
        if R[m] then
            return (text + i - m) + 1

    return null
Bitap distinguishes itself from other well-known string searching algorithms in its natural mapping onto simple bitwise operations, as in the following modification of the above program. Notice that in this implementation, counterintuitively, each bit with value zero indicates a match, and each bit with value 1 indicates a non-match. The same algorithm can be written with the intuitive semantics for 0 and 1, but in that case we must introduce another instruction into the inner loop to set R |= 1. In this implementation, we take advantage of the fact that left-shifting a value shifts in zeros on the right, which is precisely the behavior we need.
Notice also that we require CHAR_MAX additional bitmasks in order to convert the (text[i] == pattern[k-1]) condition in the general implementation into bitwise operations. Therefore, the bitap algorithm performs better when applied to inputs over smaller alphabets.
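An illustrative Python sketch of the bitwise exact search is shown below. Unlike the modification discussed above, it uses the intuitive semantics in which a set bit means the corresponding pattern prefix currently matches, so the extra "| 1" appears explicitly:

def bitap_exact(text, pattern):
    m = len(pattern)
    if m == 0:
        return 0
    # One bitmask per character: bit j is set if pattern[j] equals that character.
    masks = {}
    for j, ch in enumerate(pattern):
        masks[ch] = masks.get(ch, 0) | (1 << j)
    R = 0                                   # bit j set: pattern[0..j] matches the text ending here
    for i, ch in enumerate(text):
        R = ((R << 1) | 1) & masks.get(ch, 0)
        if R & (1 << (m - 1)):
            return i - m + 1                # start index of the first exact occurrence
    return -1

print(bitap_exact("abracadabra", "cad"))    # 4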
== Fuzzy searching ==
To perform fuzzy string searching using the bitap algorithm, it is necessary to extend the bit array R into a second dimension. Instead of having a single array R that changes over the length of the text, we now have k distinct arrays R1..k. Array Ri holds a representation of the prefixes of pattern that match any suffix of the current string with i or fewer errors. In this context, an "error" may be an insertion, deletion, or substitution; see Levenshtein distance for more information on these operations.
The implementation below performs fuzzy matching (returning the first match with up to k errors) using the fuzzy bitap algorithm. However, it only pays attention to substitutions, not to insertions or deletions – in other words, a Hamming distance of k. As before, the semantics of 0 and 1 are reversed from their conventional meanings.
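The implementation referred to above is not reproduced here; the following is an illustrative Python sketch of the substitutions-only (Hamming distance) variant, again written with the intuitive bit semantics rather than the inverted convention used in the text:

def bitap_hamming(text, pattern, k):
    m = len(pattern)
    if m == 0:
        return 0
    masks = {}
    for j, ch in enumerate(pattern):
        masks[ch] = masks.get(ch, 0) | (1 << j)
    # R[d]: bit j set means pattern[0..j] matches the text ending here with <= d substitutions.
    R = [0] * (k + 1)
    for i, ch in enumerate(text):
        old = R[:]                          # values from the previous text position
        cm = masks.get(ch, 0)
        R[0] = ((old[0] << 1) | 1) & cm
        for d in range(1, k + 1):
            # Either extend an alignment exactly, or spend one substitution on this character.
            R[d] = (((old[d] << 1) | 1) & cm) | ((old[d - 1] << 1) | 1)
        if R[k] & (1 << (m - 1)):
            return i - m + 1                # start of a match with at most k substitutions
    return -1

print(bitap_hamming("abracadabra", "cyd", 1))  # 4 ("cad" matches "cyd" with one substitution)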
== See also ==
agrep
TRE (computing)
== External links and references ==
^ Bálint Dömölki, An algorithm for syntactical analysis, Computational Linguistics 3, Hungarian Academy of Science pp. 29–46, 1964.
^ Bálint Dömölki, A universal compiler system based on production rules, BIT Numerical Mathematics, 8(4), pp 262–275, 1968. doi:10.1007/BF01933436
^ R. K. Shyamasundar, Precedence parsing using Dömölki's algorithm, International Journal of Computer Mathematics, 6(2)pp 105–114, 1977.
^ Ricardo Baeza-Yates. "Efficient Text Searching." PhD Thesis, University of Waterloo, Canada, May 1989.
^ Udi Manber, Sun Wu. "Fast text searching with errors." Technical Report TR-91-11. Department of Computer Science, University of Arizona, Tucson, June 1991. (gzipped PostScript)
^ Ricardo Baeza-Yates, Gastón H. Gonnet. "A New Approach to Text Searching." Communications of the ACM, 35(10): pp. 74–82, October 1992.
^ Udi Manber, Sun Wu. "Fast text search allowing errors." Communications of the ACM, 35(10): pp. 83–91, October 1992, doi:10.1145/135239.135244.
^ R. Baeza-Yates and G. Navarro. A faster algorithm for approximate string matching. In Dan Hirchsberg and Gene Myers, editors, Combinatorial Pattern Matching (CPM'96), LNCS 1075, pages 1–23, Irvine, CA, June 1996.
^ G. Myers. "A fast bit-vector algorithm for approximate string matching based on dynamic programming." Journal of the ACM 46 (3), May 1999, 395–415.
libbitap, a free implementation that shows how the algorithm can easily be extended for most regular expressions. Unlike the code above, it places no limit on the pattern length.
Ricardo Baeza-Yates, Berthier Ribeiro-Neto. Modern Information Retrieval. 1999. ISBN 0-201-39829-X.
bitap.py - Python implementation of Bitap algorithm with Wu-Manber modifications.
The Needleman–Wunsch algorithm is an algorithm used in bioinformatics to align protein or nucleotide sequences. It was one of the first applications of dynamic programming to compare biological sequences. The algorithm was developed by Saul B. Needleman and Christian D. Wunsch and published in 1970. The algorithm essentially divides a large problem (e.g. the full sequence) into a series of smaller problems, and it uses the solutions to the smaller problems to find an optimal solution to the larger problem. It is also sometimes referred to as the optimal matching algorithm and the global alignment technique. The Needleman–Wunsch algorithm is still widely used for optimal global alignment, particularly when the quality of the global alignment is of the utmost importance. The algorithm assigns a score to every possible alignment, and the purpose of the algorithm is to find all possible alignments having the highest score.
== Introduction ==
This algorithm can be used for any two strings. This guide will use two small DNA sequences as examples as shown in Figure 1:
GCATGCG
GATTACA
=== Constructing the grid ===
First construct a grid such as one shown in Figure 1 above. Start the first string in the top of the third column and start the other string at the start of the third row. Fill out the rest of the column and row headers as in Figure 1. There should be no numbers in the grid yet.
=== Choosing a scoring system ===
Next, decide how to score each individual pair of letters. Using the example above, one possible alignment candidate might be:
12345678
GCATG-CG
G-ATTACA
The letters may match, mismatch, or be matched to a gap (a deletion or insertion (indel)):
Match: The two letters at the current index are the same.
Mismatch: The two letters at the current index are different.
Indel (Insertion or Deletion): The best alignment involves one letter aligning to a gap in the other string.
Each of these scenarios is assigned a score and the sum of the scores of all the pairings is the score of the whole alignment candidate. Different systems exist for assigning scores; some have been outlined in the Scoring systems section below. For now, the system used by Needleman and Wunsch will be used:
Match: +1
Mismatch or Indel: −1
For the Example above, the score of the alignment would be 0:
GCATG-CG
G-ATTACA
+−++−−+− −> 1*4 + (−1)*4 = 0
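As a quick check (an illustrative snippet), this score can be recomputed directly:

top, bottom = "GCATG-CG", "G-ATTACA"
score = sum(1 if x == y else -1 for x, y in zip(top, bottom))  # gaps score -1, the same as mismatches
print(score)  # 0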
=== Filling in the table ===
Start with a zero in the first row, first column (not including the cells containing nucleotides). Move through the cells row by row, calculating the score for each cell. The score is calculated by comparing the scores of the cells neighboring to the left, top or top-left (diagonal) of the cell and adding the appropriate score for match, mismatch or indel. Take the maximum of the candidate scores for each of the three possibilities:
The path from the top or left cell represents an indel pairing, so take the scores of the left and the top cell, and add the score for indel to each of them.
The diagonal path represents a match/mismatch, so take the score of the top-left diagonal cell and add the score for match if the corresponding bases (letters) in the row and column are matching or the score for mismatch if they do not.
The resulting score for the cell is the highest of the three candidate scores.
Given there are no 'top' or 'top-left' cells for the first row, only the existing cell to the left can be used to calculate the score of each cell. Hence −1 is added for each shift to the right, as this represents an indel from the previous score. This results in the first row being 0, −1, −2, −3, −4, −5, −6, −7. The same applies to the first column, as only the existing score above each cell can be used. Thus the resulting table is:
The first case with existing scores in all 3 directions is the intersection of our first letters (in this case G and G). The surrounding cells are below:
This cell has three possible candidate sums:
The diagonal top-left neighbor has score 0. The pairing of G and G is a match, so add the score for match: 0+1 = 1
The top neighbor has score −1 and moving from there represents an indel, so add the score for indel: (−1) + (−1) = (−2)
The left neighbor also has score −1, represents an indel and also produces (−2).
The highest candidate is 1 and is entered into the cell:
The cell which gave the highest candidate score must also be recorded. In the completed diagram in figure 1 above, this is represented as an arrow from the cell in row and column 2 to the cell in row and column 1.
In the next example, the diagonal step for both X and Y represents a mismatch:
X:
Top: (−2)+(−1) = (−3)
Left: (+1)+(−1) = (0)
Top-Left: (−1)+(−1) = (−2)
Y:
Top: (1)+(−1) = (0)
Left: (−2)+(−1) = (−3)
Top-Left: (−1)+(−1) = (−2)
For both X and Y, the highest score is zero:
The highest candidate score may be reached by two of the neighboring cells:
Top: (1)+(−1) = (0)
Top-Left: (1)+(−1) = (0)
Left: (0)+(−1) = (−1)
In this case, all directions reaching the highest candidate score must be noted as possible origin cells in the finished diagram in figure 1, e.g. in the cell in row and column 6.
Filling in the table in this manner gives the scores of all possible alignment candidates, the score in the cell on the bottom right represents the alignment score for the best alignment.
=== Tracing arrows back to origin ===
Mark a path from the cell on the bottom right back to the cell on the top left by following the direction of the arrows. From this path, the sequence is constructed by these rules:
A diagonal arrow represents a match or mismatch, so the letter of the column and the letter of the row of the origin cell will align.
A horizontal or vertical arrow represents an indel. Vertical arrows will align a gap ("-") to the letter of the row (the "side" sequence), horizontal arrows will align a gap to the letter of the column (the "top" sequence).
If there are multiple arrows to choose from, they represent a branching of the alignments. If two or more branches all belong to paths from the bottom right to the top left cell, they are equally viable alignments. In this case, note the paths as separate alignment candidates.
Following these rules, the steps for one possible alignment candidate in figure 1 are:
G → CG → GCG → -GCG → T-GCG → AT-GCG → CAT-GCG → GCAT-GCG
A → CA → ACA → TACA → TTACA → ATTACA → -ATTACA → G-ATTACA
↓
(branch) → TGCG → -TGCG → ...
→ TACA → TTACA → ...
== Scoring systems ==
=== Basic scoring schemes ===
The simplest scoring schemes simply give a value for each match, mismatch and indel. The step-by-step guide above uses match = 1, mismatch = −1, indel = −1. Thus the lower the alignment score, the larger the edit distance; for this scoring system one wants a high score. Another scoring system might be:
Match = 0
Indel = -1
Mismatch = -1
For this system the alignment score will represent the edit distance between the two strings.
Different scoring systems can be devised for different situations, for example if gaps are considered very bad for your alignment you may use a scoring system that penalises gaps heavily, such as:
Match = 1
Indel = -10
Mismatch = -1
=== Similarity matrix ===
More complicated scoring systems attribute values not only for the type of alteration, but also for the letters that are involved. For example, a match between A and A may be given 1, but a match between T and T may be given 4. Here (assuming the first scoring system) more importance is given to the Ts matching than the As, i.e. the Ts matching is assumed to be more significant to the alignment. This weighting based on letters also applies to mismatches.
In order to represent all the possible combinations of letters and their resulting scores a similarity matrix is used. The similarity matrix for the most basic system is represented as:
Each score represents a switch from one of the letters the cell matches to the other. Hence this represents all possible matches and mismatches (for an alphabet of ACGT). Note that all the matches lie along the diagonal; also, not all of the table needs to be filled, only this triangle, because the scores are reciprocal (Score for A → C = Score for C → A). If implementing the T-T = 4 rule from above, the following similarity matrix is produced:
Different scoring matrices have been statistically constructed which give weight to different actions appropriate to a particular scenario. Having weighted scoring matrices is particularly important in protein sequence alignment due to the varying frequency of the different amino acids. There are two broad families of scoring matrices, each with further alterations for specific scenarios:
PAM
BLOSUM
=== Gap penalty ===
When aligning sequences there are often gaps (i.e. indels), sometimes large ones. Biologically, a large gap is more likely to occur as one large deletion as opposed to multiple single deletions. Hence two small indels should have a worse score than one large one. The simple and common way to do this is via a large gap-start score for a new indel and a smaller gap-extension score for every letter which extends the indel. For example, new-indel may cost -5 and extend-indel may cost -1. In this way an alignment such as:
GAAAAAAT
G--A-A-T
which has multiple equally scoring alignments, some with multiple small gaps, will now align as:
GAAAAAAT
GAA----T
or any alignment with a 4 long gap in preference over multiple small gaps.
== Advanced presentation of algorithm ==
Scores for aligned characters are specified by a similarity matrix. Here, S(a, b) is the similarity of characters a and b. It uses a linear gap penalty, here called d.
For example, if the similarity matrix was
then the alignment:
AGACTAGTTAC
CGA---GACGT
with a gap penalty of −5, would have the following score:
S(A,C) + S(G,G) + S(A,A) + (3 × d) + S(G,G) + S(T,A) + S(T,C) + S(A,G) + S(C,T)
= −3 + 7 + 10 − (3 × 5) + 7 + (−4) + 0 + (−1) + 0 = 1
To find the alignment with the highest score, a two-dimensional array (or matrix) F is allocated. The entry in row i and column j is denoted here by
{\displaystyle F_{ij}}. There is one row for each character in sequence A, and one column for each character in sequence B. Thus, if aligning sequences of sizes n and m, the amount of memory used is in {\displaystyle O(nm)}. Hirschberg's algorithm only holds a subset of the array in memory and uses {\displaystyle \Theta (\min\{n,m\})} space, but is otherwise similar to Needleman–Wunsch (and still requires {\displaystyle O(nm)} time).
As the algorithm progresses, the
{\displaystyle F_{ij}} will be assigned to be the optimal score for the alignment of the first {\displaystyle i=0,\dotsc ,n} characters in A and the first {\displaystyle j=0,\dotsc ,m} characters in B. The principle of optimality is then applied as follows:
Basis:
{\displaystyle F_{0j}=d*j}
{\displaystyle F_{i0}=d*i}
Recursion, based on the principle of optimality:
{\displaystyle F_{ij}=\max(F_{i-1,j-1}+S(A_{i},B_{j}),\;F_{i,j-1}+d,\;F_{i-1,j}+d)}
The pseudo-code for the algorithm to compute the F matrix therefore looks like this:
d ← Gap penalty score
for i = 0 to length(A)
    F(i,0) ← d * i
for j = 0 to length(B)
    F(0,j) ← d * j
for i = 1 to length(A)
    for j = 1 to length(B)
    {
        Match ← F(i−1, j−1) + S(Ai, Bj)
        Delete ← F(i−1, j) + d
        Insert ← F(i, j−1) + d
        F(i,j) ← max(Match, Insert, Delete)
    }
Once the F matrix is computed, the entry
{\displaystyle F_{nm}} gives the maximum score among all possible alignments. To compute an alignment that actually gives this score, you start from the bottom right cell, and compare the value with the three possible sources (Match, Insert, and Delete above) to see which it came from. If Match, then {\displaystyle A_{i}} and {\displaystyle B_{j}} are aligned, if Delete, then {\displaystyle A_{i}} is aligned with a gap, and if Insert, then {\displaystyle B_{j}} is aligned with a gap. (In general, more than one choice may have the same value, leading to alternative optimal alignments.)
AlignmentA ← ""
AlignmentB ← ""
i ← length(A)
j ← length(B)
while (i > 0 or j > 0)
{
if (i > 0 and j > 0 and F(i, j) == F(i−1, j−1) + S(Ai, Bj))
{
AlignmentA ← Ai + AlignmentA
AlignmentB ← Bj + AlignmentB
i ← i − 1
j ← j − 1
}
else if (i > 0 and F(i, j) == F(i−1, j) + d)
{
AlignmentA ← Ai + AlignmentA
AlignmentB ← "−" + AlignmentB
i ← i − 1
}
else
{
AlignmentA ← "−" + AlignmentA
AlignmentB ← Bj + AlignmentB
j ← j − 1
}
}
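The traceback loop above can be rendered in Python as follows, reusing the matrix produced by the sketch in the previous section; ties are broken in the order Match, Delete, Insert, which selects one of the possibly many optimal alignments.

def needleman_wunsch_traceback(A, B, S, d, F):
    alignment_a, alignment_b = "", ""
    i, j = len(A), len(B)
    while i > 0 or j > 0:
        if i > 0 and j > 0 and F[i][j] == F[i - 1][j - 1] + S(A[i - 1], B[j - 1]):
            alignment_a = A[i - 1] + alignment_a   # Ai and Bj are aligned
            alignment_b = B[j - 1] + alignment_b
            i, j = i - 1, j - 1
        elif i > 0 and F[i][j] == F[i - 1][j] + d:
            alignment_a = A[i - 1] + alignment_a   # Ai is aligned with a gap
            alignment_b = "-" + alignment_b
            i -= 1
        else:
            alignment_a = "-" + alignment_a        # Bj is aligned with a gap
            alignment_b = B[j - 1] + alignment_b
            j -= 1
    return alignment_a, alignment_b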
== Complexity ==
Computing the score F_{ij} for each cell in the table is an O(1) operation. Thus the time complexity of the algorithm for two sequences of length n and m is O(mn). It has been shown that it is possible to improve the running time to O(mn / log n) using the Method of Four Russians. Since the algorithm fills an n × m table, the space complexity is O(mn).
== Historical notes and algorithm development ==
The original purpose of the algorithm described by Needleman and Wunsch was to find similarities in the amino acid sequences of two proteins.
Needleman and Wunsch describe their algorithm explicitly for the case when the alignment is penalized solely by the matches and mismatches, and gaps have no penalty (d=0). The original publication from 1970 suggests the recursion
F_{ij} = max_{h<i, k<j} { F_{h,j−1} + S(A_i, B_j), F_{i−1,k} + S(A_i, B_j) }.
The corresponding dynamic programming algorithm takes cubic time. The paper also points out that the recursion can accommodate arbitrary gap penalization formulas:
A penalty factor, a number subtracted for every gap made, may be assessed as a barrier to allowing the gap. The penalty factor could be a function of the size and/or direction of the gap. [page 444]
A better dynamic programming algorithm with quadratic running time for the same problem (no gap penalty) was introduced later by David Sankoff in 1972.
Similar quadratic-time algorithms were discovered independently
by T. K. Vintsyuk in 1968 for speech processing
("time warping"), and by Robert A. Wagner and Michael J. Fischer in 1974 for string matching.
Needleman and Wunsch formulated their problem in terms of maximizing similarity. Another possibility is to minimize the edit distance between sequences, introduced by Vladimir Levenshtein. Peter H. Sellers showed in 1974 that the two problems are equivalent.
The Needleman–Wunsch algorithm is still widely used for optimal global alignment, particularly when the quality of the global alignment is of the utmost importance. However, the algorithm is expensive with respect to time and space, proportional to the product of the length of two sequences and hence is not suitable for long sequences.
Recent development has focused on improving the time and space cost of the algorithm while maintaining quality. For example, in 2013, the Fast Optimal Global Sequence Alignment Algorithm (FOGSAA) was suggested, which aligns nucleotide/protein sequences faster than other optimal global alignment methods, including the Needleman–Wunsch algorithm. The paper claims that when compared to the Needleman–Wunsch algorithm, FOGSAA achieves a time gain of 70–90% for highly similar nucleotide sequences (with > 80% similarity), and 54–70% for sequences having 30–80% similarity.
== Applications outside bioinformatics ==
=== Computer stereo vision ===
Stereo matching is an essential step in the process of 3D reconstruction from a pair of stereo images. When images have been rectified, an analogy can be drawn between aligning nucleotide and protein sequences and matching pixels belonging to scan lines, since both tasks aim at establishing optimal correspondence between two strings of characters.
Although in many applications image rectification can be performed, e.g. by camera resectioning or calibration, it is sometimes impossible or impractical since the computational cost of accurate rectification models prohibit their usage in real-time applications. Moreover, none of these models is suitable when a camera lens displays unexpected distortions, such as those generated by raindrops, weatherproof covers or dust. By extending the Needleman–Wunsch algorithm, a line in the 'left' image can be associated to a curve in the 'right' image by finding the alignment with the highest score in a three-dimensional array (or matrix). Experiments demonstrated that such extension allows dense pixel matching between unrectified or distorted images.
== See also ==
Wagner–Fischer algorithm
Smith–Waterman algorithm
Sequence mining
Levenshtein distance
Dynamic time warping
Sequence alignment
== References ==
== External links ==
NW-align: A protein sequence-to-sequence alignment program by Needleman-Wunsch algorithm (online server and source code)
A live Javascript-based demo of Needleman–Wunsch
An interactive Javascript-based visual explanation of Needleman-Wunsch Algorithm
Sequence Alignment Techniques at Technology Blog
Biostrings R package implementing Needleman–Wunsch algorithm among others | Wikipedia/Needleman–Wunsch_algorithm |
In computer science, the Wagner–Fischer algorithm is a dynamic programming algorithm that computes the edit distance between two strings of characters.
== History ==
The Wagner–Fischer algorithm has a history of multiple invention. Navarro lists the following inventors of it, with date of publication, and acknowledges that the list is incomplete:
Vintsyuk, 1968
Needleman and Wunsch, 1970
Sankoff, 1972
Sellers, 1974
Wagner and Fischer, 1974
Lowrance and Wagner, 1975
P. Pletyuhin, 1996
== Calculating distance ==
The Wagner–Fischer algorithm computes edit distance based on the observation that if we reserve a matrix to hold the edit distances between all prefixes of the first string and all prefixes of the second, then we can compute the values in the matrix by flood filling the matrix, and thus find the distance between the two full strings as the last value computed.
A straightforward implementation, as pseudocode for a function Distance that takes two strings, s of length m, and t of length n, and returns the Levenshtein distance between them, looks as follows. The input strings are one-indexed, while the matrix d is zero-indexed, and [i..k] is a closed range.
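The pseudocode itself is not reproduced in this extract; the following is a minimal Python sketch of the same full-matrix computation, assuming unit costs for insertion, deletion and substitution (zero-indexed strings, as is natural in Python).

def distance(s, t):
    m, n = len(s), len(t)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        d[i][0] = i                          # delete i characters of s
    for j in range(1, n + 1):
        d[0][j] = j                          # insert j characters of t
    for j in range(1, n + 1):
        for i in range(1, m + 1):
            cost = 0 if s[i - 1] == t[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution (or match)
    return d[m][n]

print(distance("kitten", "sitting"))  # 3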
The invariant maintained throughout the algorithm is that we can transform the initial segment s[1..i] into t[1..j] using a minimum of d[i,j] operations. At the end, the bottom-right element of the array contains the answer.
=== Proof of correctness ===
As mentioned earlier, the invariant is that we can transform the initial segment s[1..i] into t[1..j] using a minimum of d[i,j] operations. This invariant holds since:
It is initially true on row and column 0 because s[1..i] can be transformed into the empty string t[1..0] by simply dropping all i characters. Similarly, we can transform s[1..0] to t[1..j] by simply adding all j characters.
If s[i] = t[j], and we can transform s[1..i-1] to t[1..j-1] in k operations, then we can do the same to s[1..i] and just leave the last character alone, giving k operations.
Otherwise, the distance is the minimum of the three possible ways to do the transformation:
If we can transform s[1..i] to t[1..j-1] in k operations, then we can simply add t[j] afterwards to get t[1..j] in k+1 operations (insertion).
If we can transform s[1..i-1] to t[1..j] in k operations, then we can remove s[i] and then do the same transformation, for a total of k+1 operations (deletion).
If we can transform s[1..i-1] to t[1..j-1] in k operations, then we can do the same to s[1..i], and exchange the original s[i] for t[j] afterwards, for a total of k+1 operations (substitution).
The operations required to transform s[1..n] into t[1..m] is of course the number required to transform all of s into all of t, and so d[n,m] holds our result.
This proof fails to validate that the number placed in d[i,j] is in fact minimal; this is more difficult to show, and involves an argument by contradiction in which we assume d[i,j] is smaller than the minimum of the three, and use this to show one of the three is not minimal.
=== Possible modifications ===
Possible modifications to this algorithm include:
We can adapt the algorithm to use less space, O(m) instead of O(mn), since it only requires that the previous row and current row be stored at any one time (a sketch of this variant appears after this list).
We can store the number of insertions, deletions, and substitutions separately, or even the positions at which they occur, which is always j.
We can normalize the distance to the interval [0,1].
If we are only interested in the distance if it is smaller than a threshold k, then it suffices to compute a diagonal stripe of width 2k + 1 in the matrix. In this way, the algorithm can be run in O(kl) time, where l is the length of the shortest string.
We can give different penalty costs to insertion, deletion and substitution. We can also give penalty costs that depend on which characters are inserted, deleted or substituted.
This algorithm parallelizes poorly, due to a large number of data dependencies. However, all the cost values can be computed in parallel, and the algorithm can be adapted to perform the minimum function in phases to eliminate dependencies.
By examining diagonals instead of rows, and by using lazy evaluation, we can find the Levenshtein distance in O(m (1 + d)) time (where d is the Levenshtein distance), which is much faster than the regular dynamic programming algorithm if the distance is small.
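A minimal Python sketch of the O(m)-space variant mentioned in the first item of the list above, again assuming unit costs: only the previous and the current row are kept.

def distance_two_rows(s, t):
    previous = list(range(len(t) + 1))
    for i, cs in enumerate(s, start=1):
        current = [i] + [0] * len(t)
        for j, ct in enumerate(t, start=1):
            cost = 0 if cs == ct else 1
            current[j] = min(previous[j] + 1,        # deletion
                             current[j - 1] + 1,     # insertion
                             previous[j - 1] + cost) # substitution
        previous = current
    return previous[-1]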
== Sellers' variant for string search ==
By initializing the first row of the matrix with zeros, we obtain a variant of the Wagner–Fischer algorithm that can be used for fuzzy string search of a string in a text. This modification gives the end-position of matching substrings of the text. To determine the start-position of the matching substrings, the number of insertions and deletions can be stored separately and used to compute the start-position from the end-position.
The resulting algorithm is by no means efficient, but was at the time of its publication (1980) one of the first algorithms that performed approximate search.
== References == | Wikipedia/Wagner–Fischer_algorithm |
In computer science, Hirschberg's algorithm, named after its inventor, Dan Hirschberg, is a dynamic programming algorithm that finds the optimal sequence alignment between two strings. Optimality is measured with the Levenshtein distance, defined to be the sum of the costs of insertions, replacements, deletions, and null actions needed to change one string into the other. Hirschberg's algorithm is simply described as a more space-efficient version of the Needleman–Wunsch algorithm that uses dynamic programming. Hirschberg's algorithm is commonly used in computational biology to find maximal global alignments of DNA and protein sequences.
== Algorithm information ==
Hirschberg's algorithm is a generally applicable algorithm for optimal sequence alignment. BLAST and FASTA are suboptimal heuristics. If X and Y are strings, where length(X) = n and length(Y) = m, the Needleman–Wunsch algorithm finds an optimal alignment in O(nm) time, using O(nm) space. Hirschberg's algorithm is a clever modification of the Needleman–Wunsch algorithm, which still takes O(nm) time, but needs only O(min{n, m}) space and is much faster in practice.
One application of the algorithm is finding sequence alignments of DNA or protein sequences. It is also a space-efficient way to calculate the longest common subsequence between two sets of data such as with the common diff tool.
The Hirschberg algorithm can be derived from the Needleman–Wunsch algorithm by observing that:
one can compute the optimal alignment score by only storing the current and previous row of the Needleman–Wunsch score matrix;
if (Z, W) = NW(X, Y) is the optimal alignment of (X, Y), and X = X^l + X^r is an arbitrary partition of X, there exists a partition Y^l + Y^r of Y such that NW(X, Y) = NW(X^l, Y^l) + NW(X^r, Y^r).
== Algorithm description ==
X_i denotes the i-th character of X, where 1 ⩽ i ⩽ length(X).
X_{i:j} denotes a substring of size j − i + 1, ranging from the i-th to the j-th character of X.
rev(X) is the reversed version of X.
X and Y are sequences to be aligned. Let x be a character from X, and y be a character from Y. We assume that Del(x), Ins(y) and Sub(x, y) are well-defined integer-valued functions. These functions represent the cost of deleting x, inserting y, and replacing x with y, respectively.
We define NWScore(X, Y), which returns the last line of the Needleman–Wunsch score matrix Score(i, j):
function NWScore(X, Y)
Score(0, 0) = 0 // 2 * (length(Y) + 1) array
for j = 1 to length(Y)
Score(0, j) = Score(0, j - 1) + Ins(Yj)
for i = 1 to length(X) // Init array
Score(1, 0) = Score(0, 0) + Del(Xi)
for j = 1 to length(Y)
scoreSub = Score(0, j - 1) + Sub(Xi, Yj)
scoreDel = Score(0, j) + Del(Xi)
scoreIns = Score(1, j - 1) + Ins(Yj)
Score(1, j) = max(scoreSub, scoreDel, scoreIns)
end
// Copy Score[1] to Score[0]
Score(0, :) = Score(1, :)
end
for j = 0 to length(Y)
LastLine(j) = Score(1, j)
return LastLine
Note that at any point, NWScore only requires the two most recent rows of the score matrix. Thus, NWScore is implemented in O(min{length(X), length(Y)}) space.
The Hirschberg algorithm follows:
function Hirschberg(X, Y)
Z = ""
W = ""
if length(X) == 0
for i = 1 to length(Y)
Z = Z + '-'
W = W + Yi
end
else if length(Y) == 0
for i = 1 to length(X)
Z = Z + Xi
W = W + '-'
end
else if length(X) == 1 or length(Y) == 1
(Z, W) = NeedlemanWunsch(X, Y)
else
xlen = length(X)
xmid = length(X) / 2
ylen = length(Y)
ScoreL = NWScore(X1:xmid, Y)
ScoreR = NWScore(rev(Xxmid+1:xlen), rev(Y))
ymid = arg max ScoreL + rev(ScoreR)
(Z, W) = Hirschberg(X1:xmid, Y1:ymid) + Hirschberg(Xxmid+1:xlen, Yymid+1:ylen)
end
return (Z, W)
In the context of observation (2), assume that X^l + X^r is a partition of X. Index ymid is computed such that Y^l = Y_{1:ymid} and Y^r = Y_{ymid+1:length(Y)}.
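A minimal Python sketch of the procedure above follows. The cost functions delete, insert and substitute are parameters supplied by the caller; the length-0 and length-1 base cases are handled as in the pseudocode, with the latter falling back to a tiny full Needleman–Wunsch alignment.

def nw_score(x, y, delete, insert, substitute):
    # Last line of the Needleman–Wunsch score matrix, kept in two rows.
    prev = [0] * (len(y) + 1)
    for j in range(1, len(y) + 1):
        prev[j] = prev[j - 1] + insert(y[j - 1])
    for i in range(1, len(x) + 1):
        curr = [prev[0] + delete(x[i - 1])] + [0] * len(y)
        for j in range(1, len(y) + 1):
            curr[j] = max(prev[j - 1] + substitute(x[i - 1], y[j - 1]),
                          prev[j] + delete(x[i - 1]),
                          curr[j - 1] + insert(y[j - 1]))
        prev = curr
    return prev

def needleman_wunsch(x, y, delete, insert, substitute):
    # Quadratic-space alignment with traceback, used only for the tiny base cases.
    F = [[0] * (len(y) + 1) for _ in range(len(x) + 1)]
    for i in range(1, len(x) + 1):
        F[i][0] = F[i - 1][0] + delete(x[i - 1])
    for j in range(1, len(y) + 1):
        F[0][j] = F[0][j - 1] + insert(y[j - 1])
    for i in range(1, len(x) + 1):
        for j in range(1, len(y) + 1):
            F[i][j] = max(F[i - 1][j - 1] + substitute(x[i - 1], y[j - 1]),
                          F[i - 1][j] + delete(x[i - 1]),
                          F[i][j - 1] + insert(y[j - 1]))
    z, w, i, j = "", "", len(x), len(y)
    while i > 0 or j > 0:
        if i > 0 and j > 0 and F[i][j] == F[i - 1][j - 1] + substitute(x[i - 1], y[j - 1]):
            z, w, i, j = x[i - 1] + z, y[j - 1] + w, i - 1, j - 1
        elif i > 0 and F[i][j] == F[i - 1][j] + delete(x[i - 1]):
            z, w, i = x[i - 1] + z, "-" + w, i - 1
        else:
            z, w, j = "-" + z, y[j - 1] + w, j - 1
    return z, w

def hirschberg(x, y, delete, insert, substitute):
    # Returns (Z, W): Z aligns x and W aligns y, following the pseudocode above.
    if len(x) == 0:
        return "-" * len(y), y
    if len(y) == 0:
        return x, "-" * len(x)
    if len(x) == 1 or len(y) == 1:
        return needleman_wunsch(x, y, delete, insert, substitute)
    xmid = len(x) // 2
    score_l = nw_score(x[:xmid], y, delete, insert, substitute)
    score_r = nw_score(x[xmid:][::-1], y[::-1], delete, insert, substitute)
    rev_r = score_r[::-1]
    ymid = max(range(len(y) + 1), key=lambda j: score_l[j] + rev_r[j])
    z_l, w_l = hirschberg(x[:xmid], y[:ymid], delete, insert, substitute)
    z_r, w_r = hirschberg(x[xmid:], y[ymid:], delete, insert, substitute)
    return z_l + z_r, w_l + w_r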
== Example ==
Let X = AGTACGCA, Y = TATGC, Del(x) = −2, Ins(y) = −2, and Sub(x, y) = +2 if x = y, −1 if x ≠ y.
The optimal alignment is given by
Z = AGTACGCA
W = --TATGC-
Indeed, this can be verified by backtracking its corresponding Needleman–Wunsch matrix:
T A T G C
0 -2 -4 -6 -8 -10
A -2 -1 0 -2 -4 -6
G -4 -3 -2 -1 0 -2
T -6 -2 -4 0 -2 -1
A -8 -4 0 -2 -1 -3
C -10 -6 -2 -1 -3 1
G -12 -8 -4 -3 1 -1
C -14 -10 -6 -5 -1 3
A -16 -12 -8 -7 -3 1
One starts with the top-level call to Hirschberg(AGTACGCA, TATGC), which splits the first argument in half: X = AGTA + CGCA. The call to NWScore(AGTA, Y) produces the following matrix:
T A T G C
0 -2 -4 -6 -8 -10
A -2 -1 0 -2 -4 -6
G -4 -3 -2 -1 0 -2
T -6 -2 -4 0 -2 -1
A -8 -4 0 -2 -1 -3
Likewise, NWScore(rev(CGCA), rev(Y)) generates the following matrix:
C G T A T
0 -2 -4 -6 -8 -10
A -2 -1 -3 -5 -4 -6
C -4 0 -2 -4 -6 -5
G -6 -2 2 0 -2 -4
C -8 -4 0 1 -1 -3
Their last lines (after reversing the latter) and their element-wise sum are, respectively:
ScoreL = [ -8 -4 0 -2 -1 -3 ]
rev(ScoreR) = [ -3 -1 1 0 -4 -8 ]
Sum = [-11 -5 1 -2 -5 -11]
The maximum value, 1, appears at ymid = 2, producing the partition Y = TA + TGC.
The entire Hirschberg recursion (which we omit for brevity) produces the following tree:
(AGTACGCA,TATGC)
/ \
(AGTA,TA) (CGCA,TGC)
/ \ / \
(AG, ) (TA,TA) (CG,TG) (CA,C)
/ \ / \
(T,T) (A,A) (C,T) (G,G)
The leaves of the tree contain the optimal alignment.
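Calling the Python sketch from the previous section with this example's cost values reproduces the alignment shown above; the lambda expressions below simply package those values.

z, w = hirschberg("AGTACGCA", "TATGC",
                  delete=lambda x: -2,
                  insert=lambda y: -2,
                  substitute=lambda a, b: 2 if a == b else -1)
print(z)  # AGTACGCA
print(w)  # --TATGC-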
== See also ==
Longest common subsequence
== References == | Wikipedia/Hirschberg's_algorithm |
In object-oriented programming, a class defines the shared aspects of objects created from the class. The capabilities of a class differ between programming languages, but generally the shared aspects consist of state (variables) and behavior (methods) that are each either associated with a particular object or with all objects of that class.
Object state can differ between each instance of the class whereas the class state is shared by all of them. The object methods include access to the object state (via an implicit or explicit parameter that references the object) whereas class methods do not.
If the language supports inheritance, a class can be defined based on another class with all of its state and behavior plus additional state and behavior that further specializes the class. The specialized class is a sub-class, and the class it is based on is its superclass.
== Attributes ==
=== Object lifecycle ===
As an instance of a class, an object is constructed from a class via instantiation. Memory is allocated and initialized for the object state and a reference to the object is provided to consuming code. The object is usable until it is destroyed – its state memory is de-allocated.
Most languages allow for custom logic at lifecycle events via a constructor and a destructor.
=== Type ===
An object expresses data type as an interface – the type of each member variable and the signature of each member function (method). A class defines an implementation of an interface, and instantiating the class results in an object that exposes the implementation via the interface. In the terms of type theory, a class is an implementation—a concrete data structure and collection of subroutines—while a type is an interface. Different (concrete) classes can produce objects of the same (abstract) type (depending on type system). For example, the type (interface) Stack might be implemented by SmallStack that is fast for small stacks but scales poorly and ScalableStack that scales well but has high overhead for small stacks.
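As a rough illustration of this separation in Python, the sketch below defines one abstract interface with two interchangeable implementations. The class names SmallStack and ScalableStack come from the example above, but the method set and internal representations are assumptions made here.

from abc import ABC, abstractmethod
from collections import deque

class Stack(ABC):
    # The abstract type (interface): what every stack can do.
    @abstractmethod
    def push(self, item): ...
    @abstractmethod
    def pop(self): ...

class SmallStack(Stack):
    # One concrete implementation, backed by a plain list.
    def __init__(self):
        self._items = []
    def push(self, item):
        self._items.append(item)
    def pop(self):
        return self._items.pop()

class ScalableStack(Stack):
    # A different implementation of the same interface; callers cannot tell them apart.
    def __init__(self):
        self._items = deque()
    def push(self, item):
        self._items.append(item)
    def pop(self):
        return self._items.pop()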
=== Structure ===
A class contains data field descriptions (or properties, fields, data members, or attributes). These are usually field types and names that will be associated with state variables at program run time; these state variables either belong to the class or specific instances of the class. In most languages, the structure defined by the class determines the layout of the memory used by its instances. Other implementations are possible: for example, objects in Python use associative key-value containers.
Some programming languages such as Eiffel support specification of invariants as part of the definition of the class, and enforce them through the type system. Encapsulation of state is necessary for being able to enforce the invariants of the class.
=== Behavior ===
The behavior of a class or its instances is defined using methods. Methods are subroutines with the ability to operate on objects or classes. These operations may alter the state of an object or simply provide ways of accessing it. Many kinds of methods exist, but support for them varies across languages. Some types of methods are created and called by programmer code, while other special methods—such as constructors, destructors, and conversion operators—are created and called by compiler-generated code. A language may also allow the programmer to define and call these special methods.
=== Class interface ===
Every class implements (or realizes) an interface by providing structure and behavior. Structure consists of data and state, and behavior consists of code that specifies how methods are implemented. There is a distinction between the definition of an interface and the implementation of that interface; however, this line is blurred in many programming languages because class declarations both define and implement an interface. Some languages, however, provide features that separate interface and implementation. For example, an abstract class can define an interface without providing an implementation.
Languages that support class inheritance also allow classes to inherit interfaces from the classes that they are derived from.
For example, if "class A" inherits from "class B" and if "class B" implements the interface "interface B", then "class A" also inherits the functionality (constant and method declarations) provided by "interface B".
In languages that support access specifiers, the interface of a class is considered to be the set of public members of the class, including both methods and attributes (via implicit getter and setter methods); any private members or internal data structures are not intended to be depended on by external code and thus are not part of the interface.
Object-oriented programming methodology dictates that the operations of any interface of a class are to be independent of each other. It results in a layered design where clients of an interface use the methods declared in the interface. An interface places no requirements for clients to invoke the operations of one interface in any particular order. This approach has the benefit that client code can assume that the operations of an interface are available for use whenever the client has access to the object.
Class interface example
The buttons on the front of your television set are the interface between you and the electrical wiring on the other side of its plastic casing. You press the "power" button to toggle the television on and off. In this example, your particular television is the instance, each method is represented by a button, and all the buttons together compose the interface (other television sets that are the same model as yours would have the same interface). In its most common form, an interface is a specification of a group of related methods without any associated implementation of the methods.
A television set also has a myriad of attributes, such as size and whether it supports color, which together comprise its structure. A class represents the full description of a television, including its attributes (structure) and buttons (interface).
Getting the total number of televisions manufactured could be a static method of the television class. This method is associated with the class, yet is outside the domain of each instance of the class. A static method that finds a particular instance out of the set of all television objects is another example.
=== Member accessibility ===
The following is a common set of access specifiers:
Private (or class-private) restricts access to the class itself. Only methods that are part of the same class can access private members.
Protected (or class-protected) allows the class itself and all its subclasses to access the member.
Public means that any code can access the member by its name.
Although many object-oriented languages support the above access specifiers, their semantics may differ.
Object-oriented design uses the access specifiers in conjunction with careful design of public method implementations to enforce class invariants—constraints on the state of the objects. A common usage of access specifiers is to separate the internal data of a class from its interface: the internal structure is made private, while public accessor methods can be used to inspect or alter such private data.
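Python does not enforce access specifiers, but its double-underscore name-mangling convention gives the flavor of the idea: the internal state below is private by convention, and the public methods are the only intended way to inspect or alter it. The class and member names are made up for illustration.

class Account:
    def __init__(self, balance=0):
        self.__balance = balance          # internal state, name-mangled

    def deposit(self, amount):            # public method enforcing an invariant
        if amount <= 0:
            raise ValueError("deposit must be positive")
        self.__balance += amount

    @property
    def balance(self):                    # read-only accessor for the private data
        return self.__balance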
Access specifiers do not necessarily control visibility, in that even private members may be visible to client external code. In some languages, an inaccessible but visible member may be referred to at runtime (for example, by a pointer returned from a member function), but an attempt to use it by referring to the name of the member from the client code will be prevented by the type checker.
The various object-oriented programming languages enforce member accessibility and visibility to various degrees, and depending on the language's type system and compilation policies, enforced at either compile time or runtime. For example, the Java language does not allow client code that accesses the private data of a class to compile. In the C++ language, private methods are visible, but not accessible in the interface; however, they may be made invisible by explicitly declaring fully abstract classes that represent the interfaces of the class.
Some languages feature other accessibility schemes:
Instance vs. class accessibility: Ruby supports instance-private and instance-protected access specifiers in lieu of class-private and class-protected, respectively. They differ in that they restrict access based on the instance itself, rather than the instance's class.
Friend: C++ supports a mechanism where a function explicitly declared as a friend function of the class may access the members designated as private or protected.
Path-based: Java supports restricting access to a member within a Java package, which is the logical path of the file. However, it is a common practice when extending a Java framework to implement classes in the same package as a framework class to access protected members. The source file may exist in a completely different location, and may be deployed to a different .jar file, yet still be in the same logical path as far as the JVM is concerned.
==== Inheritance ====
Conceptually, a superclass is a superset of its subclasses. For example, GraphicObject could be a superclass of Rectangle and Ellipse, while Square would be a subclass of Rectangle. These are all subset relations in set theory as well, i.e., all squares are rectangles but not all rectangles are squares.
A common conceptual error is to mistake a part of relation with a subclass. For example, a car and truck are both kinds of vehicles and it would be appropriate to model them as subclasses of a vehicle class. However, it would be an error to model the parts of the car as subclass relations. For example, a car is composed of an engine and body, but it would not be appropriate to model an engine or body as a subclass of a car.
In object-oriented modeling these kinds of relations are typically modeled as object properties. In this example, the Car class would have a property called parts. parts would be typed to hold a collection of objects, such as instances of Body, Engine, Tires, etc.
Object modeling languages such as UML include capabilities to model various aspects of "part of" and other kinds of relations – data such as the cardinality of the objects, constraints on input and output values, etc. This information can be utilized by developer tools to generate additional code besides the basic data definitions for the objects, such as error checking on get and set methods.
One important question when modeling and implementing a system of object classes is whether a class can have one or more superclasses. In the real world with actual sets, it would be rare to find sets that did not intersect with more than one other set. However, while some systems such as Flavors and CLOS provide a capability for more than one parent, doing so at run time introduces complexity that many in the object-oriented community consider antithetical to the goals of using object classes in the first place. Understanding which class will be responsible for handling a message can get complex when dealing with more than one superclass. If used carelessly, this feature can introduce some of the same system complexity and ambiguity that classes were designed to avoid.
Most modern object-oriented languages such as Smalltalk and Java require single inheritance at run time. For these languages, multiple inheritance may be useful for modeling but not for an implementation.
However, semantic web application objects do have multiple superclasses. The volatility of the Internet requires this level of flexibility and the technology standards such as the Web Ontology Language (OWL) are designed to support it.
A similar issue is whether or not the class hierarchy can be modified at run time. Languages such as Flavors, CLOS, and Smalltalk all support this feature as part of their meta-object protocols. Since classes are themselves first-class objects, it is possible to have them dynamically alter their structure by sending them the appropriate messages. Other languages that focus more on strong typing such as Java and C++ do not allow the class hierarchy to be modified at run time. Semantic web objects have the capability for run time changes to classes. The rationale is similar to the justification for allowing multiple superclasses, that the Internet is so dynamic and flexible that dynamic changes to the hierarchy are required to manage this volatility.
Although many class-based languages support inheritance, inheritance is not an intrinsic aspect of classes. An object-based language (i.e. Classic Visual Basic) supports classes yet does not support inheritance.
=== Local and inner ===
In some languages, classes can be declared in scopes other than the global scope. There are various types of such classes.
An inner class is a class defined within another class. The relationship between an inner class and its containing class can also be treated as another type of class association. An inner class is typically neither associated with instances of the enclosing class nor instantiated along with its enclosing class. Depending on the language, it may or may not be possible to refer to the class from outside the enclosing class. A related concept is inner types, also known as inner data type or nested type, which is a generalization of the concept of inner classes. C++ is an example of a language that supports both inner classes and inner types (via typedef declarations).
A local class is a class defined within a procedure or function. Such structure limits references to the class name to within the scope where the class is declared. Depending on the semantic rules of the language, there may be additional restrictions on local classes compared to non-local ones. One common restriction is to disallow local class methods to access local variables of the enclosing function. For example, in C++, a local class may refer to static variables declared within its enclosing function, but may not access the function's automatic variables.
=== Metaclass ===
A metaclass is a class whose instances are classes. A metaclass describes a common structure of a collection of classes and can implement a design pattern or describe particular kinds of classes. Metaclasses are often used to describe frameworks.
In some languages, such as Python, Ruby or Smalltalk, a class is also an object; thus each class is an instance of a unique metaclass that is built into the language.
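A small made-up Python example of a metaclass: instances of Registered are themselves classes, and the metaclass records each one as it is defined.

class Registered(type):
    registry = []
    def __new__(mcls, name, bases, namespace):
        cls = super().__new__(mcls, name, bases, namespace)
        Registered.registry.append(cls)   # the newly created class is an instance of this metaclass
        return cls

class Plugin(metaclass=Registered):
    pass

class AudioPlugin(Plugin):                # subclasses share the metaclass
    pass

print(Registered.registry)                # both Plugin and AudioPlugin appear here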
The Common Lisp Object System (CLOS) provides metaobject protocols (MOPs) to implement those classes and metaclasses.
=== Sealed ===
A sealed class cannot be subclassed. It is basically the opposite of an abstract class, which must be derived to be used. A sealed class is implicitly concrete.
A class is declared as sealed via the keyword sealed in C# or final in Java or PHP.
For example, Java's String class is marked as final.
Sealed classes may allow a compiler to perform optimizations that are not available for classes that can be subclassed.
=== Open ===
An open class can be changed. Typically, an executable program cannot be changed by customers. Developers can often change some classes, but typically cannot change standard or built-in ones. In Ruby, all classes are open. In Python, classes can be created at runtime, and all can be modified afterward. Objective-C categories permit the programmer to add methods to an existing class without the need to recompile that class or even have access to its source code.
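A brief made-up Python example of an open class: a method is attached after the class has been defined, without recompiling or editing the original definition.

class Greeter:
    def greet(self):
        return "hello"

def greet_loudly(self):
    return self.greet().upper()

Greeter.shout = greet_loudly              # the existing class is modified at runtime
print(Greeter().shout())                  # HELLO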
=== Mixin ===
Some languages have special support for mixins, though, in any language with multiple inheritance, a mixin is simply a class that does not represent an is-a-type-of relationship. Mixins are typically used to add the same methods to multiple classes; for example, a class UnicodeConversionMixin might provide a method called unicode_to_ascii when included in classes FileReader and WebPageScraper that do not share a common parent.
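In Python, a mixin is realized through (multiple) inheritance. The sketch below uses the class and method names from the example above; the body of unicode_to_ascii is an assumption, written here with the standard unicodedata module.

import unicodedata

class UnicodeConversionMixin:
    # Not a kind of anything on its own; it only contributes one method.
    def unicode_to_ascii(self, text):
        return unicodedata.normalize("NFKD", text).encode("ascii", "ignore").decode()

class FileReader(UnicodeConversionMixin):
    pass

class WebPageScraper(UnicodeConversionMixin):
    pass

print(FileReader().unicode_to_ascii("café"))   # cafe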
=== Partial ===
In languages supporting the feature, a partial class is a class whose definition may be split into multiple pieces, within a single source-code file or across multiple files. The pieces are merged at compile time, making compiler output the same as for a non-partial class.
The primary motivation for the introduction of partial classes is to facilitate the implementation of code generators, such as visual designers. It is otherwise a challenge or compromise to develop code generators that can manage the generated code when it is interleaved within developer-written code. Using partial classes, a code generator can process a separate file or coarse-grained partial class within a file, and is thus alleviated from intricately interjecting generated code via extensive parsing, increasing compiler efficiency and eliminating the potential risk of corrupting developer code. In a simple implementation of partial classes, the compiler can perform a phase of precompilation where it "unifies" all the parts of a partial class. Then, compilation can proceed as usual.
Other benefits and effects of the partial class feature include:
Enables separation of a class's interface and implementation code in a unique way.
Eases navigation through large classes within an editor.
Enables separation of concerns, in a way similar to aspect-oriented programming but without using any extra tools.
Enables multiple developers to work on a single class concurrently without the need to merge individual code into one file at a later time.
Partial classes have existed in Smalltalk under the name of Class Extensions for considerable time. With the arrival of the .NET framework 2, Microsoft introduced partial classes, supported in both C# 2.0 and Visual Basic 2005. WinRT also supports partial classes.
=== Uninstantiable ===
Uninstantiable classes allow programmers to group together per-class fields and methods that are accessible at runtime without an instance of the class. Indeed, instantiation is prohibited for this kind of class.
For example, in C#, a class marked "static" can not be instantiated, can only have static members (fields, methods, other), may not have instance constructors, and is sealed.
=== Unnamed ===
An unnamed class or anonymous class is not bound to a name or identifier upon definition. This is analogous to named versus unnamed functions.
== Benefits ==
The benefits of organizing software into object classes fall into three categories:
Rapid development
Ease of maintenance
Reuse of code and designs
Object classes facilitate rapid development because they lessen the semantic gap between the code and the users. System analysts can talk to both developers and users using essentially the same vocabulary, talking about accounts, customers, bills, etc. Object classes often facilitate rapid development because most object-oriented environments come with powerful debugging and testing tools. Instances of classes can be inspected at run time to verify that the system is performing as expected. Also, rather than get dumps of core memory, most object-oriented environments have interpreted debugging capabilities so that the developer can analyze exactly where in the program the error occurred and can see which methods were called and with what arguments.
Object classes facilitate ease of maintenance via encapsulation. When developers need to change the behavior of an object they can localize the change to just that object and its component parts. This reduces the potential for unwanted side effects from maintenance enhancements.
Software reuse is also a major benefit of using Object classes. Classes facilitate re-use via inheritance and interfaces. When a new behavior is required it can often be achieved by creating a new class and having that class inherit the default behaviors and data of its superclass and then tailoring some aspect of the behavior or data accordingly. Re-use via interfaces (also known as methods) occurs when another object wants to invoke (rather than create a new kind of) some object class. This method for re-use removes many of the common errors that can make their way into software when one program re-uses code from another.
== Runtime representation ==
As a data type, a class is usually considered as a compile time construct. A language or library may also support prototype or factory metaobjects that represent runtime information about classes, or even represent metadata that provides access to reflective programming (reflection) facilities and ability to manipulate data structure formats at runtime. Many languages distinguish this kind of run-time type information about classes from a class on the basis that the information is not needed at runtime. Some dynamic languages do not make strict distinctions between runtime and compile time constructs, and therefore may not distinguish between metaobjects and classes.
For example, if Human is a metaobject representing the class Person, then instances of class Person can be created by using the facilities of the Human metaobject.
== Prototype-based programming ==
In contrast to creating an object from a class, some programming contexts support object creation by copying (cloning) a prototype object.
== See also ==
Class diagram – Diagram that describes the static structure of a software system
Class variable – Variable defined in a class whose objects all possess the same copy
Instance variable – Member variable of a class that all its objects possess a copy of
List of object-oriented programming languages
Trait (computer programming) – Set of methods that extend the functionality of a class
== Notes ==
== References ==
== Further reading ==
Abadi; Cardelli: A Theory of Objects
ISO/IEC 14882:2003 Programming Language C++, International standard
Class Warfare: Classes vs. Prototypes, by Brian Foote
Meyer, B.: "Object-oriented software construction", 2nd edition, Prentice Hall, 1997, ISBN 0-13-629155-4
Rumbaugh et al.: "Object-oriented modeling and design", Prentice Hall, 1991, ISBN 0-13-630054-5 | Wikipedia/Classes_(computer_science) |
In computer science, the two-way string-matching algorithm is a string-searching algorithm, discovered by Maxime Crochemore and Dominique Perrin in 1991. It takes a pattern of size m, called a “needle”, preprocesses it in linear time O(m), producing information that can then be used to search for the needle in any “haystack” string, taking only linear time O(n) with n being the haystack's length.
The two-way algorithm can be viewed as a combination of the forward-going Knuth–Morris–Pratt algorithm (KMP) and the backward-running Boyer–Moore string-search algorithm (BM).
Like those two, the 2-way algorithm preprocesses the pattern to find partially repeating periods and computes “shifts” based on them, indicating what offset to “jump” to in the haystack when a given character is encountered.
Unlike BM and KMP, it uses only O(log m) additional space to store information about those partial repeats: the search pattern is split into two parts (its critical factorization), represented only by the position of that split. Being a number less than m, it can be represented in ⌈log₂ m⌉ bits. This is sometimes treated as "close enough to O(1) in practice", as the needle's size is limited by the size of addressable memory; the overhead is a number that can be stored in a single register, and treating it as O(1) is like treating the size of a loop counter as O(1) rather than log of the number of iterations.
The actual matching operation performs at most 2n − m comparisons.
Breslauer later published two improved variants performing fewer comparisons, at the cost of storing additional data about the preprocessed needle:
The first one performs at most n + ⌊(n − m)/2⌋ comparisons, ⌈(n − m)/2⌉ fewer than the original. It must however store ⌈log_φ m⌉ additional offsets in the needle, using O(log² m) space.
The second adapts it to only store a constant number of such offsets, denoted c, but must perform n + ⌊(1⁄2 + ε)·(n − m)⌋ comparisons, with ε = ½(F_{c+2} − 1)^{−1} = O(φ^{−c}) (φ being the golden ratio) going to zero exponentially quickly as c increases.
The algorithm is considered fairly efficient in practice, being cache-friendly and using several operations that can be implemented in well-optimized subroutines. It is used by the C standard libraries glibc, newlib, and musl, to implement the memmem and strstr family of substring functions. As with most advanced string-search algorithms, the naïve implementation may be more efficient on small-enough instances; this is especially so if the needle isn't searched in multiple haystacks, which would amortize the preprocessing cost.
== Critical factorization ==
Before we define critical factorization, we should define:
A factorization is a partition (u, v) of a string x. For example, ("Wiki", "pedia") is a factorization of "Wikipedia".
A period of a string x is an integer p such that all characters p-distance apart are equal. More precisely, x[i] = x[i + p] holds for any integer 0 < i ≤ len(x) − p. This definition is allowed to be vacuously true, so that any word of length n has a period of n. To illustrate, the 8-letter word "educated" has period 6 in addition to the trivial periods of 8 and above. The minimum period of x is denoted as p(x).
A repetition w in (u, v) is a non-empty string such that:
w is a suffix of u or u is a suffix of w;
w is a prefix of v or v is a prefix of w;
In other words, w occurs on both sides of the cut with a possible overflow on either side. Examples include "an" for ("ban","ana") and "voca" for ("a","vocado"). Each factorization trivially has at least one repetition: the string vu.
A local period is the length of a repetition in (u, v). The smallest local period in (u, v) is denoted as r(u, v). Because the trivial repetition vu is guaranteed to exist and has the same length as x, we see that 1 ≤ r(u, v) ≤ len(x).
A critical factorization is a factorization (u, v) of x such that r(u, v) = p(x). The existence of a critical factorization is provably guaranteed. For a needle of length m in an ordered alphabet, it can be computed in 2m comparisons, by computing the lexicographically larger of two ordered maximal suffixes, defined for order ≤ and ≥.
== The algorithm ==
The algorithm starts with the critical factorization of the needle as the preprocessing step. This step produces the index (starting point) of the periodic right half, and the period of this stretch. The suffix computation here follows the authors' formulation. It can alternatively be computed using Duval's algorithm, which is simpler and still linear time but slower in practice.
// Shorthand for inversion.
function cmp(a, b)
if a > b return 1
if a = b return 0
if a < b return -1
function maxsuf(n, rev)
l ← len(n)
p ← 1  // currently known period
k ← 1  // index for period testing, 0 < k <= p
j ← 0  // index for maxsuf testing; greater than maxs
i ← -1  // the proposed starting index of maxsuf
while j + k < l
cmpv ← cmp(n[j + k], n[i + k])
if rev
cmpv ← -cmpv  // invert the comparison
if cmpv < 0
// Suffix (j+k) is smaller. Period is the entire prefix so far.
j ← j + k
k ← 1
p ← j - i
else if cmpv = 0
// They are the same – we should go on.
if k = p
// We are done checking this stretch of p. Reset k.
j ← j + p
k ← 1
else
k ← k + 1
else
// Suffix is larger. Start over from here.
i ← j
j ← j + 1
p ← 1
k ← 1
return [i, p]
function crit_fact(n)
[idx1, per1] ← maxsuf(n, false)
[idx2, per2] ← maxsuf(n, true)
if idx1 > idx2
return [idx1, per1]
else
return [idx2, per2]
The comparison proceeds by first matching for the right-hand-side, and then for the left-hand-side if it matches. Linear-time skipping is done using the period.
function match(n, h)
nl ← len(n)
hl ← len(h)
[l, p] ← crit_fact(n)
P ← {}  // set of matches
// Match the suffix.
// Use a library function like memcmp, or write your own loop.
if n[0] ... n[l] = n[l+1] ... n[l+p]
P ← {}
pos ← 0
s ← 0
// TODO. At least put the skip in.
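The matching phase is left unfinished above ("TODO"). The following is a self-contained Python sketch of the complete search, added here for illustration: it follows the glibc-style formulation of the two-way algorithm (0-based indices, with a "memory" optimization in the periodic case) rather than the exact index conventions of the pseudocode above, and all names are made up.

def _maximal_suffix(needle, reverse):
    # Maximal suffix of needle under normal (reverse=False) or reversed alphabet order;
    # returns (start index of the suffix, its period).
    i, j, k, p = -1, 0, 1, 1
    while j + k < len(needle):
        a, b = needle[j + k], needle[i + k]
        smaller = a > b if reverse else a < b
        if smaller:
            j += k
            k = 1
            p = j - i
        elif a == b:
            if k == p:
                j += p
                k = 1
            else:
                k += 1
        else:                         # the suffix starting at j is larger; restart there
            i = j
            j += 1
            k = p = 1
    return i + 1, p

def two_way_search(haystack, needle):
    # Index of the first occurrence of needle in haystack, or -1 if absent.
    n, m = len(haystack), len(needle)
    if m == 0:
        return 0
    s1, p1 = _maximal_suffix(needle, False)
    s2, p2 = _maximal_suffix(needle, True)
    suf, per = (s1, p1) if s1 >= s2 else (s2, p2)    # critical factorization
    if needle[:suf] == needle[per:per + suf]:        # needle is periodic: keep memory
        j, memory = 0, 0
        while j <= n - m:
            i = max(suf, memory)
            while i < m and needle[i] == haystack[j + i]:
                i += 1                               # scan the right half forwards
            if i >= m:
                i = suf - 1
                while i >= memory and needle[i] == haystack[j + i]:
                    i -= 1                           # scan the left half backwards
                if i < memory:
                    return j
                j += per
                memory = m - per
            else:
                j += i - suf + 1
                memory = 0
    else:                                            # halves are distinct: larger shifts
        per = max(suf, m - suf) + 1
        j = 0
        while j <= n - m:
            i = suf
            while i < m and needle[i] == haystack[j + i]:
                i += 1
            if i >= m:
                i = suf - 1
                while i >= 0 and needle[i] == haystack[j + i]:
                    i -= 1
                if i < 0:
                    return j
                j += per
            else:
                j += i - suf + 1
    return -1

print(two_way_search("abaabab", "abab"))   # 3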
== References == | Wikipedia/Two-way_string-matching_algorithm |
In computer science, the Commentz-Walter algorithm is a string searching algorithm invented by Beate Commentz-Walter. Like the Aho–Corasick string matching algorithm, it can search for multiple patterns at once. It combines ideas from Aho–Corasick with the fast matching of the Boyer–Moore string-search algorithm. For a text of length n and maximum pattern length of m, its worst-case running time is O(mn), though the average case is often much better.
GNU grep once implemented a string matching algorithm very similar to Commentz-Walter.
== History ==
The paper on the algorithm was first published by Beate Commentz-Walter in 1979 through the Saarland University and typed by "R. Scherner". The paper detailed two differing algorithms she claimed combined the idea of the Aho-Corasick and Boyer-Moore algorithms, which she called algorithms B and B1. The paper mostly focuses on algorithm B, however.
== How the Algorithm Works ==
The Commentz-Walter algorithm combines two known algorithms in order to attempt to better address the multi-pattern matching problem. These two algorithms are the Boyer-Moore, which addresses single pattern matching using filtering, and the Aho-Corasick. To do this, the algorithm implements a suffix automaton to search through patterns within an input string, while also using reverse patterns, unlike in the Aho-Corasick.
The Commentz-Walter algorithm proceeds in two phases: a pre-computing phase and a matching phase. In the first, pre-computing phase, the algorithm uses a reversed pattern to build a pattern tree. The second phase, known as the matching phase, draws on the other two algorithms: using the Boyer–Moore technique of shifting and the Aho–Corasick technique of finite automata, the Commentz-Walter algorithm can begin matching.
The Commentz-Walter algorithm will scan backwards throughout an input string, checking for a mismatch. If and when the algorithm does find a mismatch, the algorithm will already know some of the characters that are matches, and then use this information as an index. Using the index, the algorithm checks the pre-computed table to find a distance that it must shift, after this, the algorithm once more begins another matching attempt.
== Time Complexity ==
Comparing the Aho–Corasick and Commentz-Walter algorithms in terms of time complexity shows a clear difference. Aho–Corasick is considered linear, O(m+n+k), where k is the number of matches. Commentz-Walter may be considered quadratic, O(mn). The reason for this lies in the fact that Commentz-Walter was developed by adding the shifts of the Boyer–Moore string-search algorithm to the Aho–Corasick, thus moving its complexity from linear to quadratic.
According to a study done in “The Journal of National Science Foundation of Sri Lanka 46”, Commentz-Walter seems to be generally faster than the Aho–Corasick string matching algorithm. According to the journal, however, this advantage appears only when long patterns are used; the journal also notes that there is no critical analysis of this claim and a lack of general agreement on the performance of the algorithm.
As seen in a visualization of the algorithm’s running time in a study by “The International Journal of Advanced Computer Science and Information Technology”, the performance of the algorithm increased linearly as the length of the shortest pattern in the pattern set increased.
== Alternative Algorithm ==
In the original Commentz-Walter paper, an alternative algorithm was also created. This algorithm, known as B1, operates similarly to the main Commentz-Walter algorithm with the only difference being in the way the pattern tree is used during the scanning phase.
The paper also claims this algorithm performs better at the cost of increasing the running time and space of both the preprocessing phase and search phase. This algorithm has not been formally tested in other studies however, so its actual performance is unknown.
== References == | Wikipedia/Commentz-Walter_algorithm |
In computer science, the Apostolico–Giancarlo algorithm is a variant of the Boyer–Moore string-search algorithm, the basic application of which is searching for occurrences of a pattern P in a text T. As with other comparison-based string searches, this is done by aligning P to a certain index of T and checking whether a match occurs at that index. P is then shifted relative to T according to the rules of the Boyer–Moore algorithm, and the process repeats until the end of T has been reached. Application of the Boyer–Moore shift rules often results in large chunks of the text being skipped entirely.
With regard to the shift operation, Apostolico–Giancarlo is exactly equivalent in functionality to Boyer–Moore. The utility of Apostolico–Giancarlo is to speed up the match-checking operation at any index. With Boyer–Moore, finding an occurrence of P in T requires that all n characters of P be explicitly matched. For certain patterns and texts, this is very inefficient – a simple example is when both pattern and text consist of the same repeated character, in which case Boyer–Moore runs in O(nm), where m is the length in characters of T. Apostolico–Giancarlo speeds this up by recording the number of characters matched at the alignments of T in a table, which is combined with data gathered during the pre-processing of P to avoid redundant equality checking for sequences of characters that are known to match. It can be seen as a generalization of the Galil rule.
== References ==
Apostolico, Alberto; Giancarlo, Raffaele (1986). "The Boyer–Moore–Galil String Searching Strategies Revisited". SIAM Journal on Computing. 15: 98–105. doi:10.1137/0215007.
Crochemore, Maxime; Lecroq, Thierry (1997). "Tight bounds on the complexity of the Apostolico-Giancarlo algorithm" (PDF). Information Processing Letters. 63 (4): 195–203. doi:10.1016/S0020-0190(97)00107-5.
Crochemore, M.; Rytter, W. (1994). Text Algorithms. Oxford University Press.
Gusfield, D. (1997). Algorithms on Strings, Trees, and Sequences: Computer Science and Computational Biology. Cambridge University Press. ISBN 0-521-58519-8.
Lecroq, T. (1992). Recherches de Mots (Ph. D. Thesis). University of Orléans.
Lecroq, Thierry (1995). "Experimental results on string matching algorithms". Software: Practice and Experience. 25 (7): 727–765. doi:10.1002/spe.4380250703. S2CID 15253073. | Wikipedia/Apostolico–Giancarlo_algorithm |
Relational models theory (RMT) is a theory of interpersonal relationships, authored by anthropologist Alan Fiske and initially developed from his fieldwork in Burkina Faso. RMT proposes that all human interactions can be described in terms of just four "relational models", or elementary forms of human relations: communal sharing, authority ranking, equality matching and market pricing (to these are added the limiting cases of asocial and null interactions, whereby people do not coordinate with reference to any shared principle).
RMT influenced Jonathan Haidt's moral foundations theory and Steven Pinker's theory of indirect speech.
== The theory ==
First proposed in Fiske's doctoral dissertation in 1985, relational models theory proposes four relational models which are each argued to be innate, intrinsically motivated, and culturally universal (though with culture-specific implementations) ways of cooperating and coordinating social interactions.
=== The four relational models ===
The four relational models are as follows:
Communal sharing (CS) relationships are the most basic form of relationship where some bounded group of people are conceived as equivalent, undifferentiated and interchangeable such that distinct individual identities are disregarded and commonalities are emphasized, with intimate and kinship relations being prototypical examples of CS relationship. Common indicators of CS relationships include body markings or modifications, synchronous movement, rituals, sharing of food, or physical intimacy.
Authority ranking (AR) relationships describe asymmetric relationships where people are linearly ordered along some hierarchical social dimension. The primary feature of an AR relationship is whether a person ranks above or below each other person. Those higher in rank hold greater authority, prestige and privileges, while subordinates are entitled to guidance and protection. Military ranks are a prototypical example of an AR relationship.
Equality matching (EM) relationships are those characterized by various forms of one-for-one correspondence, such as turn taking, in-kind reciprocity, tit-for-tat retaliation, or eye-for-an-eye revenge. Parties in EM relationships are primarily concerned with ensuring the relationship is in a balanced state. Non-intimate acquaintances are a prototypical example.
Market pricing (MP) relationships revolve around a model of proportionality where people attend to ratios and rates and relevant features are typically reduced to a single value or utility metric that allows the comparison (e.g., the price of a sale). Monetary transactions are a prototypical example of MP relationships.
=== Meta-relational models ===
The four elementary relationships can be combined to form more complex configurations of relationships called meta-relational models. Meta-relational models typically take the form of entailments or prohibitions, which imply certain obligations, behaviors or relationships between multiple dyads within a particular configuration (e.g., within a triad with members A, B and C, A being in a CS relationship with B prohibits B from being in a CS relationship with A's enemy, C). Examples of meta-relational models include the compadrazgo relationship, describing the entailment of relationships between the parents and godparents of a child, and the incest taboo, describing the prohibition of relationships among certain members of the same family.
=== Relational models as an explanation of interpersonal conflict ===
According to RMT, mis-matching of relational models is a common cause of interpersonal conflict, given that different relational models will often imply different behaviors in the same situation. Taking two housemates sharing dishwashing as a simple example, Fiske suggests that if housemate A assumes dishwashing is governed by a CS framework and housemate B assumes an EM framework, A will expect both of them to wash dishes whenever they can, and B will expect them to take turns. If A is busy and B is not, A will expect B to wash the dishes; but if B washed the dishes last, B will assume it is A's turn, and conflict will ensue because of A and B's mis-matched relational models.
=== Correspondence between relational models and Stevens's levels of measurement ===
Fiske proposed that the four discrete types of relationships correspond to Stevens's four levels of measurement. CS relationships resemble the categorical (nominal) scales of measurement in that all members of the relationship are equivalent. AR resembles an ordinal scale given that members of the relationship are placed in a linear ordering. EM relationships resemble interval measurement given that they are kept in balance by addition and subtraction. Finally, MP relationships resemble a ratio scale (whose origin corresponds, for example, to a price of zero) given that they involve proportions, multiplication and division and the distributive law.
== Influence ==
The two main, original publications on relational models theory have received over 5000 citations combined.
=== In moral psychology ===
Relational models theory has had wide-ranging influence throughout the field of moral psychology. This influence includes an extension of the original theory to explain moral judgments in the context of interpersonal relationships in the form of relationship regulation theory, which describes the way in which people will judge and react to similar actions differently, depending on the relational context in which the act occurs. Relational models theory has also been used to explain interpersonal violence in the form of virtuous violence theory, describing the moral motivations behind phenomena such as honor killing and blood feuds. The theory has also been used as a building block of one of the more prominent theories in moral psychology, moral foundations theory, and to provide insights into phenomena such as moral emotions, trust and ethical leadership.
=== In other areas ===
RMT has been influential in the development of Steven Pinker's theory of indirect speech, and folk-psychological studies of groups. Additionally, RMT has also been used to help explain the positive social emotion of "Kama Muta", typically described as the experience of "being moved" (also related to the emotion elevation, and the concept of empathic concern). According to this view, "Kama Muta" is triggered by witnessing the sudden intensification of a communal sharing relationship.
== References ==
== External links ==
Alan Page Fiske: Overview of Relational Models Theory. A lecture by Alan Fiske in 2015.
Relational Models Theory. An overview and bibliography of Relational Models Theory.
Language as a Window into Human Nature. A video explaining Steven Pinker's theory of indirect speech, drawing on Relational Models Theory. | Wikipedia/Relational_models_theory |