[SOURCE: https://en.wikipedia.org/wiki/Thirty-sixth_government_of_Israel#May_recommendations] | [TOKENS: 5624] |
Thirty-sixth government of Israel The thirty-sixth government of Israel, or the Bennett–Lapid government, was the cabinet of Israel that was formed on 13 June 2021 after the 2021 Knesset elections. On 2 June 2021 a coalition agreement was signed between Yesh Atid, Blue and White, Yamina, the Labor Party, Yisrael Beiteinu, New Hope, Meretz, and the United Arab List. The cabinet was succeeded by the thirty-seventh government of Israel, led by Benjamin Netanyahu, on 29 December 2022. The government had two prime ministers during its existence. Under a rotation agreement, Naftali Bennett of Yamina initially served as Prime Minister but ultimately ceded the position to Yair Lapid of Yesh Atid after the coalition fell on 30 June 2022. Lapid became Prime Minister on 1 July 2022. Due to the collapse of the government, Lapid served as caretaker Prime Minister until elections were held on 1 November 2022. Yamina and Yesh Atid became the fourth and fifth parties, respectively, to lead an Israeli government – following Mapai/Labor Party (1948–1977; 1984–1986; 1992–1996; 1999–2001), Herut/Likud (1977–1984; 1986–1992; 1996–1999; 2001–2005; 2009–2021), and Kadima (2005–2009). The government was the first to include an independent Arab Israeli party as an official member of the governing coalition. It was Israel's second government, after the Netanyahu–Gantz rotation government, to function under an automatic and legally binding system of rotation in the position of prime minister. The investiture vote in the Knesset was held on 13 June 2021. The Bennett–Lapid government was confirmed by a vote of 60 to 59, with one MK from the United Arab List abstaining. Bennett was thus sworn in as Israel's 13th prime minister, with Lapid serving as alternate prime minister. Background President Reuven Rivlin met with all elected parties and received their recommendations for prime minister on 5 April 2021, and gave Prime Minister Benjamin Netanyahu the mandate on 6 April. This was the first time in Israel's history in which there were recommendations for three candidates (Netanyahu, Yair Lapid, and Naftali Bennett), rather than two, after a legislative election. Netanyahu was given a mandate to form a new government by the end of 4 May, but he failed to form a government by the deadline; he remained prime minister ad interim while negotiations continued, and Yair Lapid (Yesh Atid) was then tasked with forming the government, with Naftali Bennett (Yamina) ultimately appointed Prime Minister. Government formation The 2021 election once more produced a hung Knesset, with neither the pro-Netanyahu right-wing bloc (Likud, Shas, United Torah Judaism, and Religious Zionist Party; with 52 seats) nor the ideologically diverse non-Arab anti-Netanyahu opposition (Yesh Atid, Blue and White, Labor Party, Yisrael Beiteinu, New Hope, and Meretz; with 51 seats) winning enough seats to form a new government outright (61 seats are needed for an absolute majority). This left the conservative Yamina and the two Arab political factions, the Joint List and United Arab List, as the potential kingmakers for a Knesset majority. Yamina leader Naftali Bennett stated that his first preference was the formation of a right-wing government, while his second choice would be the establishment of a broader "national unity government", to avoid a fifth election in just over two years' time. 
Even with the potential addition of Yamina, the pro-Netanyahu bloc would still have been short of an absolute majority by two seats. Namely, Gideon Sa'ar of the right-wing New Hope party had refused to enter into any coalition in which Netanyahu would remain prime minister, thus hindering the formation of a pure right-wing majority government. In addition, while Netanyahu had been open to limited forms of cooperation with the United Arab List to put an end to the deadlock, the Religious Zionist Party ruled out being a part of any right-wing majority which would rely on the UAL's outside support. On 6 April 2021, Benjamin Netanyahu was given the first mandate to establish a coalition by President Reuven Rivlin. During his efforts to form a government, he offered both his allies and his rivals several proposals for rotation agreements, which would see the prime ministership change hands between parties mid-term. In a bid to appease Bennett and Sa'ar – either of whom was foreseen to take part in a potential rotation – he offered the post to fellow Likud member and Knesset speaker Yariv Levin. Thereafter, Netanyahu also unsuccessfully tried to include Aryeh Deri of Shas, and then the incumbent Alternate Prime Minister, Benny Gantz of Blue and White, in similar agreements. Netanyahu's mandate to form a government formally expired at midnight on 4 May 2021. While Netanyahu held the formal mandate to lead talks on a new government, simultaneous discussions were taking place between the parties of the "change bloc". Lapid stated his willingness to enter into a rotation government with Naftali Bennett, and even offered him the opportunity to hold the post of prime minister for the first half of the Knesset term. These efforts to establish a broad anti-Netanyahu government, which would span the whole political spectrum, were challenged by questions relating to the distribution of ministerial portfolios between the three main blocs (right, centre, and left), as well as to the possibility of including a mutual veto in any coalition agreement among the parties. Another concern was that such an ideologically diverse, seven-party coalition would still not hold an absolute parliamentary majority – thus requiring outside support to govern effectively. Parties that were mentioned as hypothetical partners in such a cooperation agreement included the United Arab List and United Torah Judaism (especially its Degel HaTorah faction). On 5 May 2021, Yesh Atid leader Yair Lapid was officially tasked with the formation of a government by President Rivlin. He had been recommended for the prime ministership by 56 Knesset members from seven factions – including New Hope and almost all of the Joint List. On the same day, one of Yamina's Knesset members, Amichai Chikli, voiced his open opposition to any agreement on a new government which would include Meretz or the Joint List, thus potentially lowering the bloc's plurality from 58 to 57 seats. On 7 May 2021, several rounds of talks took place between different parties in the bloc, with Yisrael Beiteinu also issuing a formal list of conditions for its entrance into any potential coalition. 
They included the adoption of laws regarding the following issues: a two-term limit for the prime minister, compulsory voting in elections, compulsory conscription for male ultra-Orthodox Jews, putting more focus on secular studies in the ultra-Orthodox Jewish community, a penalty of life imprisonment for the rape of a minor, allowing local authorities to decide on the closure of businesses on Saturdays, the introduction of civil law marriage, as well as a new, two-year state budget. The party's chairman, Avigdor Lieberman, also promised to set up two state commissions of inquiry, to investigate the Netanyahu administration's possible culpability in the circumstances which led to a deadly stampede on Mount Meron on 30 April 2021, as well as its overall handling of the COVID-19 pandemic in the country. The proposal to allow civil marriages drew strong opposition from the United Arab List, which viewed it as a "violation of family values". On 8 May 2021, media sources reported that the parties had allegedly agreed on several general mechanisms and basic principles, which were to form the basis of their cooperation. The leaders of the parties still disagreed over the distribution of many ministerial positions within the potential government, specifically the portfolios of justice (demanded by both Yamina, for Ayelet Shaked, and New Hope, for Sa'ar), education (sought by Yamina; by New Hope, for Yifat Shasha-Biton; and by Meretz, for Nitzan Horowitz) and defense (desired by both Blue and White, for Benny Gantz, and New Hope, for Sa'ar). In addition to the justice and education ministries, Yamina had also asked for the portfolios of public security and of religious affairs, all of which are considered to be "ideological" in nature. Despite these disputes, several points of agreement were reached, including on the matter that Naftali Bennett and Yair Lapid would each serve as prime minister for two years. In such an arrangement Lapid would have served as foreign minister under Bennett, while Avigdor Lieberman would have held the position of finance minister. The parties of the potential coalition also discussed a rotation agreement regarding the position of Knesset speaker, which could have been held by Meir Cohen of Yesh Atid and Ze'ev Elkin of New Hope. In exchange for giving its outside support to a "change bloc" and Yamina coalition government, the United Arab List supposedly asked for the chairmanship of an important Knesset committee (either internal or economic affairs), as well as for several measures aimed at improving the living conditions of Israeli Arabs – a five-year economic plan for the community, a plan on how to decrease violent crime among Arabs, the annulment of a 2019 law banning illegal construction, and solutions to problems facing the Negev Bedouins. According to media sources, by 10 May 2021 the discussions between the six-party "change bloc", Yamina and the United Arab List were progressing toward a successful conclusion. Both the "change bloc" and Yamina had agreed to the UAL's demands – including the legal recognition of three Negev Bedouin communities and a budget to tackle violent crime in the Arab community – thereby gaining its crucial backing, which would have taken the form of a confidence and supply arrangement rather than that of a formal coalition deal. Therefore, if such a UAL-backed minority coalition had indeed been formed, it would have become the first government in Israel to have had the active support of an Israeli Arab political party. 
In addition to having made progress on an agreement with the UAL, the "change bloc" and Yamina had also reached several other mutual conclusions. These included a policy of avoiding major discussions on matters of religion – unless it was shown that there could be potential broad support for a certain decision in that area; a decision to continue Netanyahu's policies regarding the West Bank – that is, to maintain existing Israeli settlements there and allow further construction within them, but not to allow any new settlements or annex any new areas of the West Bank; and a possibility of setting up a state commission of inquiry to investigate the circumstances of the 30 April stampede at a Jewish pilgrimage site on Mount Meron. The parties were considering introducing legislation which would have limited the prime minister to two full terms in office (which, without a grandfather clause, could possibly have barred Benjamin Netanyahu from ever returning to the position) and which would also have prohibited anyone under indictment from receiving the mandate to form a government from the president, as well as from becoming either the prime minister or the leader of the opposition in the Knesset. The New Hope party sought the division of the post of attorney general into two separate offices, thereby creating the position of a general prosecutor as well. With regard to the distribution of cabinet seats, the justice portfolio likely would have gone to New Hope's Sa'ar, while Blue and White's Gantz would probably have continued in the role of defense minister. In addition, Meretz's Horowitz and Tamar Zandberg were tapped for health and environmental protection minister, respectively. Shaked, on the other hand, had been given the choice of serving as either interior, public security, education or transportation minister. The interior ministry was also being sought by Labor's Merav Michaeli, while education was being contested by both Meretz (for Horowitz) and New Hope (for Yifat Shasha-Biton). Blue and White and Yisrael Beiteinu were also said to be making claims for certain key portfolios. Had the demands of Meretz or the Labor Party regarding cabinet positions not been met, they had supposedly been offered the chairmanships of some Knesset committees as a form of compensation. Local sources thus reported that Lapid had intended to inform President Rivlin of his success in building a government by the end of the second week of May, with the swearing-in of Naftali Bennett as Israel's 13th prime minister then due to take place some time after the Shavuot festival (which fell between 16 and 18 May 2021). On the afternoon of 10 May 2021, the clashes in Jerusalem between Palestinian protesters and Israeli police – which had been ongoing since 6 May – escalated with the firing of over 400 rockets at Israel by both Hamas and the Islamic Jihad Movement in Palestine. This prompted a response by Israel, which initiated airstrikes in the Gaza Strip. The attacks on both sides were lethal and also caused hundreds of injuries. As a result of these incidents, the talks on the formation of a new government were frozen indefinitely, with the United Arab List refusing to continue the then-nearly finalized negotiations with the "change bloc" and Yamina until the security situation was resolved. On 12 May, UAL leader Mansour Abbas once more clearly expressed his commitment to the continuation of talks with the parties of the prospective new governing coalition once the widespread violence was brought under control. 
During the evening of 13 May and the morning of 14 May 2021, Yamina formally abandoned the coalition talks with the "change bloc", with Naftali Bennett stating that he no longer favoured establishing a government which would rely on the support of the United Arab List. In his announcement, Bennett cited the ongoing conflict between Palestinian militant groups and the Israeli military, as well as the riots and looting taking place in Israel, as the reasons for his decision. Bennett emphasized that he felt that a UAL-backed "change bloc" government would not have the capacity to deal with the security challenges facing the country, nor could it impose the necessary legal consequences on those who took part in the unrest. Bennett thus declared that he would once again initiate talks with Likud, and would aim to form a broad "unity government", which could possibly also include Yesh Atid, Blue and White, and New Hope. Bennett also agreed to a proposal which would turn the prime ministership into a directly elected position (as it was from 1996 until 2001), an idea that likewise had the support of Likud. The United Arab List's Mansour Abbas stated that he would be willing to support the decision as well, provided that several conditions – relating to the improvement of the quality of life of Israeli Arabs – were met. Meanwhile, government ministers from Blue and White rejected the possibility of entering a new government with Netanyahu, as did the representatives of New Hope. On the morning of 14 May, it was reported that Likud and Yamina had reached a formal agreement on a new government. According to the deal, the latter party would be given eight reserved places among the coalition's top 40 candidates in the next Knesset election, while Bennett and Shaked would also serve as Netanyahu's defense and foreign ministers, respectively. Lapid refused to give up on forming a "change bloc" government, stating that he would keep the presidential mandate until it expired, and also expressed a willingness to face a new election if all other options failed to resolve the deadlock. Bennett again changed his position, and indicated on 23 May that he would be open to serving with Lapid. On 25 May, Meretz agreed to a coalition government, and Lapid thus announced that he would hand it the ministries of health, environmental protection, and regional cooperation if the coalition government were formed, while the Finance Ministry would go to Avigdor Lieberman's Yisrael Beiteinu. Lapid and Bennett met on 27 May, in a move that was not made known to any other members of Yamina. On 28 May, Lapid secured the approval of a change government by the Labor Party. Two days later, Bennett announced he was in favor of a unity government, under which he would serve as prime minister until September 2023, at which point Lapid would take over. On the afternoon of 1 June 2021, as talks between the various parties were reported to be coming to a conclusion, Mansour Abbas announced that the United Arab List intended to be a full-fledged member of the "change government" coalition. The party was said to be asking for the position of Deputy Interior Minister, who would most likely serve under Ayelet Shaked of Yamina. Meanwhile, four out of six Knesset members from the Joint List – which had formally stayed outside of both the pro-Netanyahu bloc and the "change bloc" – stated that they would not support the new Bennett–Lapid government in a possible confidence vote. 
These MKs were from the Hadash (which includes the Israeli Communist Party and independent Ayman Odeh) and Balad factions of the alliance. On the other hand, the two MKs from the Ta'al faction remained undecided. During the evening of 1 June, the negotiations stalled once more. Yamina's Shaked demanded to be named to the Knesset's Judicial Appointments Committee, despite Lapid having already promised the seat to Labor's Michaeli. On the following morning, the negotiations with the United Arab List also encountered new challenges. Specifically, media sources reported that Prime Minister Netanyahu had agreed to repeal a controversial piece of legislation which addresses illegal housing construction – popularly dubbed the "Kaminitz law" – and had thereby indirectly encouraged the UAL's MKs to heighten their demands in their talks with the "change bloc". These new demands had the effect of alienating the pro-"change" right-wing parties, Yamina and New Hope, which openly opposed a repeal of the law. Negotiations on the possible "change government" also took place on the sidelines of a Knesset session called to elect the new president of Israel. Isaac Herzog was easily elected as the 11th president of the country, and achieved the support of a broad range of parties. He received 87 votes, to just 26 for his only rival, Miriam Peretz. On the night of 2 June, just hours before the expiration of Lapid's mandate to form a government, Yamina made a formal offer to Labor, with the aim of resolving the issue surrounding the membership of the Judicial Appointments Committee. The proposal consisted of a rotation agreement, according to which Shaked would serve on the body as the government's representative during Bennett's tenure as prime minister, while a Labor MK would represent the coalition parties themselves. Then, when Bennett ceded the prime ministership to Lapid in 2023, Shaked would in turn cede her seat to Michaeli, while the Labor MK would be replaced by an MK from Yamina.[citation needed] In response to this, Michaeli accepted the concept of a rotation, but insisted that she should serve during Bennett's tenure as prime minister, to ensure an ideological balance between the party blocs. Shaked rejected Michaeli's offer. Ultimately, the original proposal was adopted as a compromise solution. At the same time, Lapid, Bennett, and Abbas held a private meeting, with the aim of reaching a last-ditch consensus agreement. The "change bloc" released details of an agreement that had been reached between the various parties regarding the distribution of ministerial portfolios during Lapid's possible tenure as prime minister. Under the deal, Bennett would serve as interior minister (in addition to being Alternate Prime Minister), Shaked would become justice minister, and Sa'ar would take over as foreign minister. Only three hours before the deadline, the Southern Islamic Movement's shura council authorized the United Arab List to give its conditional support to the Bennett–Lapid government. The backing entailed a promise that negotiations would continue on the issues of the so-called "Kaminitz law" and the legalization of Bedouin villages in the Negev desert. Therefore, less than two hours before Lapid's mandate expired, the UAL formally joined the coalition. A new problem for the potential government arose when one of Yamina's MKs, Nir Orbach, stated that he was considering voting against the government in the Knesset. 
Such a move would deprive the "change government" of an absolute majority, and would tie it with the opposition at 60 seats each. During the negotiations at the Kfar Maccabiah Hotel, rival protests were being held outside by supporters of the Bennett–Lapid deal and by right-wing activists who opposed the "change government". One of the protesters who opposed the possible new government addressed the others and called for Orbach's house in Petah Tikva to be burned, should he vote in favor of the coalition deal. On the night of 2 June 2021, and less than an hour before his mandate was due to expire, Lapid informed outgoing president Reuven Rivlin that he could form a new government. Had Lapid not succeeded in assembling a coalition by 2 June, he could have been given a one-week extension, but only as long as he claimed to have a government ready to be voted on. If this had not been the case, the mandate would have passed back to the president, who would then in turn have passed the mandate on to the Knesset itself, which would then have had 21 days to nominate one of its members for prime minister. If the Knesset could not agree on a new nominee (by an absolute majority of 61 members), the Knesset would have automatically dissolved, and a new election would have been called. On 11 June 2021, Bennett's Yamina party became the last opposition faction to sign a coalition agreement with Lapid's Yesh Atid party. This set the stage for a new Israeli government to be sworn in on 13 June. On 12 June 2021, media sources released the content of the final coalition agreements signed between the "change bloc" parties in the preceding days. The basic guidelines of the new government's policy would be the following: a bill to set term limits for the prime minister and allow only one re-election (though the details of the bill were not yet laid out); the setting up of a commission of inquiry to investigate the April 2021 Mount Meron disaster; the abolition of four existing government ministries – Cyber and National Digital Matters; National Infrastructures, Energy and Water Resources; Community Empowerment and Advancement; and Strategic Affairs; the increasing of the old-age income supplement to 70% of the minimum wage; the adoption of a reform package to benefit disabled war veterans; the creation of more competition among the providers of kashrut services and the introduction of consistent standards in that area; the changing of the body which selects the chief rabbi (so as to allow for the election of a Zionist to the position); and the opening up of the possibility for gentiles to convert to Judaism by applying to municipal rabbinical authorities. The agreements also included the creation of binding regulations to reduce greenhouse gas (GHG) emissions. Members of government The following are the ministers of the 36th Israeli government in its final makeup. Investiture vote The outgoing speaker of the Knesset, Yariv Levin, a member of prime minister Netanyahu's Likud party, had indicated that he would attempt to delay the holding of an investiture vote for the proposed Bennett–Lapid government for as long as legally possible. Thus, on 3 June 2021, some of the parties of the "change bloc" initiated a motion of no-confidence in Levin, with the aim of replacing him with Mickey Levy of Yesh Atid and bringing the investiture vote forward. Immediately thereafter, Yamina MK Nir Orbach indicated that he would oppose the move, and it was then instead endorsed by all six MKs from the Joint List. 
Yesh Atid and Yamina rejected the Joint List's offer of support, and Yamina even indicated that it would only support the election of a new speaker once the new government itself had been sworn in. On 13 June 2021, the incoming government nominated Levy for the post of speaker, while Shas in turn nominated its MK Yaakov Margi. Levy defeated Margi by a margin of 67–52 and immediately took over chairing the Knesset session from Levin. The investiture vote for the government was also held on 13 June 2021. While the eight parties of the so-called "change bloc" coalition formally had an absolute majority consisting of 62 Knesset members, one Yamina MK, Amichai Chikli, had already ruled out supporting the incoming government. As of the proposed government's announcement on 2 June 2021, the only opposition MKs who had not yet declared how they would vote were the two members of the Arab Israeli Ta'al faction within the Joint List. On 13 June 2021, Ta'al's MKs indicated that they would be ready to abstain during the investiture vote if the proposed government lost its own majority through defections. This would likely have allowed the Bennett–Lapid government to be confirmed to office by a plurality of MKs. Ultimately, the new government was successfully voted in by a margin of 60 to 59, with the only abstention being that of United Arab List MK Said al-Harumi. Dissolution On 20 June 2022, following several legislative defeats for the governing coalition in the Knesset, Bennett and Lapid jointly announced the introduction of a bill to dissolve the Knesset, stating that Lapid would become the interim prime minister following the dissolution. The coalition collapsed when several mostly right-wing nationalist MKs withdrew their support, after the government failed to muster enough votes to renew legal protections for Jewish settlers in the West Bank. The bill to dissolve the Knesset passed its first reading on 28 June. It then passed its third reading on 29 June, and the date for elections was set for 1 November 2022. Bennett opted to retire from politics and not seek re-election; he resigned as the leader of Yamina on 29 June, and was succeeded by Ayelet Shaked. On 30 June, in accordance with the coalition agreement, Lapid succeeded Bennett as the caretaker Prime Minister until a new government was formed. Bennett turned down the position of acting Foreign Minister, although he formally remained alternate Prime Minister. Lapid thus continued to hold the position of Foreign Minister. |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/NGC_1762] | [TOKENS: 283] |
NGC 1762 NGC 1762 is a spiral galaxy in the constellation of Orion. Its velocity with respect to the cosmic microwave background is 4,739±2 km/s, which corresponds to a Hubble distance of 228.0 ± 15.9 Mly (69.90 ± 4.89 Mpc); a worked check of this figure appears at the end of this entry. However, 10 non-redshift measurements give a farther mean distance of 275.44 ± 22.19 Mly (84.450 ± 6.805 Mpc). It was discovered by the German-British astronomer William Herschel on 8 October 1785. NGC 1762 has a possible active galactic nucleus, i.e. a compact region at the center of the galaxy that emits a significant amount of energy across the electromagnetic spectrum, with characteristics indicating that this luminosity is not produced by the stars. NGC 1762 group NGC 1762 is a member of the NGC 1762 group (also known as LGG 120), which contains at least 27 galaxies, including NGC 1590, NGC 1633, NGC 1642, NGC 1691, NGC 1713, NGC 1719, and IC 392. Supernova One supernova has been observed in NGC 1762. |
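A quick consistency check of the Hubble distance quoted above (not from the source article; it assumes the Hubble constant implied by the quoted figures, roughly H0 ≈ 67.8 km/s/Mpc, and the conversion 1 Mpc ≈ 3.26 Mly):

$$d = \frac{v}{H_0} = \frac{4739\ \text{km/s}}{67.8\ \text{km/s/Mpc}} \approx 69.9\ \text{Mpc} \approx 228\ \text{Mly}$$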
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Negation] | [TOKENS: 2878] |
Negation In logic, negation, also called the logical not or logical complement, is an operation that takes a proposition P to another proposition "not P", written ¬P, ∼P, P′ or P̄. It is interpreted intuitively as being true when P is false, and false when P is true. For example, if P is "The dog runs", then "not P" is "The dog does not run". An operand of a negation is called a negand or negatum. Negation is a unary logical connective. It may furthermore be applied not only to propositions, but also to notions, truth values, or semantic values more generally. In classical logic, negation is normally identified with the truth function that takes truth to falsity (and vice versa). In intuitionistic logic, according to the Brouwer–Heyting–Kolmogorov interpretation, the negation of a proposition P is the proposition whose proofs are the refutations of P. Definition Classical negation is an operation on one logical value, typically the value of a proposition, that produces a value of true when its operand is false, and a value of false when its operand is true. Thus if statement P is true, then ¬P (pronounced "not P") would then be false; and conversely, if ¬P is true, then P would be false. The truth table of ¬P is as follows: when P is true, ¬P is false; when P is false, ¬P is true. Negation can be defined in terms of other logical operations. For example, ¬P can be defined as P → ⊥ (where → is logical consequence and ⊥ is absolute falsehood). Conversely, one can define ⊥ as Q ∧ ¬Q for any proposition Q (where ∧ is logical conjunction). The idea here is that any contradiction is false, and while these ideas work in both classical and intuitionistic logic, they do not work in paraconsistent logic, where contradictions are not necessarily false. As a further example, negation can be defined in terms of NAND and can also be defined in terms of NOR. Algebraically, classical negation corresponds to complementation in a Boolean algebra, and intuitionistic negation to pseudocomplementation in a Heyting algebra. These algebras provide a semantics for classical and intuitionistic logic. Notation The negation of a proposition p is notated in different ways, in various contexts of discussion and fields of application; common variants include ¬p, ∼p, p′, p̄, Np, and !p. The notation Np is Polish notation. In set theory, ∖ is also used to indicate "not in the set of": U ∖ A is the set of all members of U that are not members of A. Regardless of how it is notated or symbolized, the negation ¬P can be read as "it is not the case that P", "not that P", or usually more simply as "not P". As a way of reducing the number of necessary parentheses, one may introduce precedence rules: ¬ has higher precedence than ∧, ∧ higher than ∨, and ∨ higher than →. So for example, P ∨ Q ∧ ¬R → S is short for (P ∨ (Q ∧ (¬R))) → S. A commonly used precedence ordering of the logical operators, from highest to lowest, is: ¬, ∧, ∨, →, ↔. Properties Within a system of classical logic, double negation, that is, the negation of the negation of a proposition P, is logically equivalent to P. Expressed in symbolic terms, ¬¬P ≡ P. In intuitionistic logic, a proposition implies its double negation, but not conversely. This marks one important difference between classical and intuitionistic negation. Algebraically, classical negation is called an involution of period two. However, in intuitionistic logic, the weaker equivalence ¬¬¬P ≡ ¬P does hold. This is because in intuitionistic logic, ¬P is just a shorthand for P → ⊥, and we also have P → ¬¬P. Composing that last implication with triple negation ¬¬P → ⊥ implies that P → ⊥. As a result, in the propositional case, a sentence is classically provable if its double negation is intuitionistically provable. This result is known as Glivenko's theorem. De Morgan's laws provide a way of distributing negation over disjunction and conjunction: ¬(P ∨ Q) ≡ (¬P ∧ ¬Q) and ¬(P ∧ Q) ≡ (¬P ∨ ¬Q). Let ⊕ denote the logical xor operation. In Boolean algebra, a linear function is one for which there exist a0, a1, …, an ∈ {0, 1} such that f(b1, b2, …, bn) = a0 ⊕ (a1 ∧ b1) ⊕ ⋯ ⊕ (an ∧ bn) for all b1, b2, …, bn ∈ {0, 1}. Another way to express this is that each variable either always makes a difference to the truth-value of the operation, or never makes a difference. Negation is a linear logical operator, since ¬b = 1 ⊕ b (taking a0 = a1 = 1). In Boolean algebra, a self-dual function is a function such that f(a1, …, an) = ¬f(¬a1, …, ¬an) for all a1, …, an ∈ {0, 1}. Negation is a self-dual logical operator. In first-order logic, there are two quantifiers: the universal quantifier ∀ (meaning "for all") and the existential quantifier ∃ (meaning "there exists"). The negation of one quantifier is the other quantifier: ¬∀x P(x) ≡ ∃x ¬P(x) and ¬∃x P(x) ≡ ∀x ¬P(x). For example, with the predicate P as "x is mortal" and the domain of x as the collection of all humans, ∀x P(x) means "every person x among all humans is mortal", or "all humans are mortal". Its negation, ¬∀x P(x) ≡ ∃x ¬P(x), means "there exists a person x among all humans who is not mortal", or "there exists someone who lives forever". 
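Not from the article: a brief exhaustive check, sketched in C, of the double-negation and De Morgan equivalences above over the Boolean domain {0, 1} (classical logic only, since C's ! is classical negation):

```c
#include <assert.h>
#include <stdio.h>

int main(void) {
    /* Check the classical equivalences for every truth-value assignment. */
    for (int p = 0; p <= 1; p++) {
        assert(!!p == p);                    /* double negation: not(not P) == P */
        for (int q = 0; q <= 1; q++) {
            assert(!(p || q) == (!p && !q)); /* De Morgan: not(P or Q) == (not P) and (not Q) */
            assert(!(p && q) == (!p || !q)); /* De Morgan: not(P and Q) == (not P) or (not Q) */
        }
    }
    printf("double negation and De Morgan's laws hold over {0, 1}\n");
    return 0;
}
```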
Rules of inference There are a number of equivalent ways to formulate rules for negation. One usual way to formulate classical negation in a natural deduction setting is to take as primitive rules of inference negation introduction (from a derivation of P to both Q and ¬Q, infer ¬P; this rule also being called reductio ad absurdum), negation elimination (from P and ¬P infer Q; this rule also being called ex falso quodlibet), and double negation elimination (from ¬¬P infer P). One obtains the rules for intuitionistic negation the same way, but by excluding double negation elimination. Negation introduction states that if an absurdity can be drawn as conclusion from P, then P must not be the case (i.e. P is false (classically) or refutable (intuitionistically), etc.). Negation elimination states that anything follows from an absurdity. Sometimes negation elimination is formulated using a primitive absurdity sign ⊥. In this case the rule says that from P and ¬P follows an absurdity. Together with double negation elimination one may infer our originally formulated rule, namely that anything follows from an absurdity. Typically the intuitionistic negation ¬P of P is defined as P → ⊥. Then negation introduction and elimination are just special cases of implication introduction (conditional proof) and elimination (modus ponens). In this case one must also add as a primitive rule ex falso quodlibet. Programming language and ordinary language As in mathematics, negation is used in computer science to construct logical statements. The exclamation mark "!" signifies logical NOT in B, C, and languages with a C-inspired syntax such as C++, Java, JavaScript, Perl, and PHP. "NOT" is the operator used in ALGOL 60, BASIC, and languages with an ALGOL- or BASIC-inspired syntax such as Pascal, Ada, and Eiffel. Some languages (C++, Perl, etc.) provide more than one operator for negation. A few languages like PL/I and Ratfor use ¬ for negation. Most modern languages allow a negated test such as if (!(r == t)) to be shortened to if (r != t), which sometimes produces faster programs when the compiler/interpreter is not able to optimize the longer form. In computer science there is also bitwise negation. This takes the value given and switches all the binary 1s to 0s and 0s to 1s. This is often used to create ones' complement ("~" in C or C++) and two's complement (simplified to just "-", the negative sign, as this is equivalent to taking the arithmetic negation of the number). To get the absolute (positive equivalent) value of a given integer, one can negate it when it is negative: the condition "x < 0" yields true exactly for negative values, and the "-" then changes the value from negative to positive. More generally, inverting a condition and reversing the outcomes produces code that is logically equivalent to the original code, i.e. it will have identical results for any input (though, depending on the compiler used, the actual instructions performed by the computer may differ). A reconstruction of these idioms in C appears after this paragraph. In C (and some other languages descended from C), double negation (!!x) is used as an idiom to convert x to a canonical Boolean, i.e. an integer with a value of either 0 or 1 and no other. 
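The code samples this passage originally referred to were lost in extraction; the following is a minimal reconstruction of the described idioms (the function name abs_value and the variables are illustrative, not from the original):

```c
#include <stdio.h>

/* Absolute value via arithmetic negation: "-" flips the sign
   when "x < 0" yields true (ignoring INT_MIN overflow). */
unsigned int abs_value(int x) {
    if (x < 0)
        return -x;
    else
        return x;
}

int main(void) {
    int r = 3, t = 5;

    /* Logical negation: the two tests below are logically equivalent;
       the second is the idiomatic shortening of the first. */
    if (!(r == t))
        printf("r differs from t\n");
    if (r != t)
        printf("r differs from t\n");

    /* Double negation as canonicalization: !!x is 1 for any nonzero x, else 0. */
    int x = 42;
    printf("abs_value(-7) = %u, !!x = %d\n", abs_value(-7), !!x);
    return 0;
}
```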
Although any integer other than 0 is logically true in C and 1 is not special in this regard, it is sometimes important to ensure that a canonical value is used, for example for printing or if the number is subsequently used for arithmetic operations. The convention of using ! to signify negation occasionally surfaces in colloquial language, as computer-related slang for not. For example, the phrase !clue is used as a synonym for "no-clue" or "clueless". Another example is the expression !vote, which means "not a vote". In this context, the exclamation mark is used at Wikipedia to survey opinions while negating "majority rule", in order "to have a consensus-building discussion, where the proper course is determined by the strength of the respective arguments." Kripke semantics In Kripke semantics, where the semantic values of formulae are sets of possible worlds, negation can be taken to mean set-theoretic complementation[citation needed] (see also possible world semantics for more). |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Ministry_of_Education_(Israel)#cite_ref-3] | [TOKENS: 543] |
Ministry of Education (Israel) The Ministry of Education (Hebrew: מִשְׂרָד הַחִנּוּךְ, translit. Misrad HaHinukh; Arabic: وزارة التربية والتعليم) is the branch of the Israeli government charged with overseeing public education institutions in Israel. The department is headed by the Minister of Education, who is a member of the cabinet. The ministry has previously included culture and sport, although these are now covered by the Ministry of Culture and Sport. History In the first decade of statehood, the education system was faced with the task of establishing a network of kindergartens and schools for a rapidly growing student population. In 1949, there were 80,000 elementary school students. By 1950, there were 120,000 – an increase of 50 percent within the span of one year. Israel also took over responsibility for the education of Arab schoolchildren. The first minister of education was Zalman Shazar, later president of the State of Israel. Since 2002, the Ministry of Education has awarded a National Education Award to the five top localities, in recognition of excellence in investing substantial resources in the educational system. In 2012, first place was awarded to the Shomron Regional Council, followed by Or Yehuda, Tiberias, Eilat and Beersheba. The prize has been awarded to a variety of educational institutions, including kindergartens and elementary schools. In 2013–2014, the Ministry of Education promoted the regulation of the activities of external parties within the state schools, in a dialogue between the Ministry, the local government, parents' representatives, the business sector and philanthropic parties, as part of what was called "the intersectoral round table in the Ministry of Education". As part of the regulation, the Ministry compiled a database of external programs that have some kind of partnership with a representative from the Ministry of Education's headquarters. In 2019, a petition was filed by pluralist Jewish organizations against the Ministry of Education over a procedure that reduced, by tens of thousands of shekels, the support for the activities of these organizations in schools. In April 2021, the High Court invalidated the procedure in question, and even emphasized the importance of implementing the principles of the Shenhar Committee report on the teaching of Judaism in state education. In November 2021 it was announced that the Ministry of Education was not implementing the High Court ruling and that the damage to those organizations continued. |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Judea] | [TOKENS: 4603] |
Judea Judea or Judaea (/dʒuːˈdiːə, dʒuːˈdeɪə/; Hebrew: יהודה, Modern: Yəhūda, Tiberian: Yehūḏā; Arabic: يهودا, Yahūdā; Greek: Ἰουδαία, Ioudaía; Latin: Iudaea) is a mountainous region of the Levant. Traditionally dominated by the city of Jerusalem, it is now part of Israel and the West Bank. The name is derived from the Hebrew name Yehudah, and was used during the Babylonian, Persian, Hellenistic, and Roman periods. Under the Hasmoneans, the Herodians, and the Romans, the term was applied to an area larger than the Judea of earlier periods. In the aftermath of the Bar Kokhba revolt (c. 132–136 CE), the Roman province of Judaea was renamed Syria Palaestina. The term Judea was used by English speakers for the hilly internal part of Mandatory Palestine. Judea roughly corresponds to the southern part of the West Bank (Arabic: الضِفَّة الغَرْبِيَّة, romanized: aḍ-ḍiffa al-gharbiya), a territory Israel has occupied since 1967 and administered as the "Judea and Samaria Area" (מחוז יהודה ושומרון, Makhoz Yehuda VeShomron). Usage of the term "Judea and Samaria" is associated with the right wing in Israeli politics. Etymology The name Judea is a Greek and Roman adaptation of the Hebrew name Yehudah (Hebrew יהודה), one of the Twelve Tribes of Israel,[citation needed] and was later used as the name for the ancient Kingdom of Judah. Nimrud Tablet K.3751, dated c. 733 BCE, is the earliest known extra-biblical record of the name Judah (written in Assyrian cuneiform as Yaudaya or KUR.ia-ú-da-a-a). Related nomenclature continued to be used under the rule of the Babylonians (the Yehud province), the Persians (the Yehud province), during the Hellenistic period (Hasmonean Judea), and under the Romans (Provincia Iudaea, or Province of Judaea). There is no consensus among linguists and historians as to the origins of the name Judah (Yehudah). The Book of Genesis presents a folk etymology deriving it from a legendary founder Judah, the son of Jacob, offering a wordplay connecting the name to the Hebrew verb for "praise". This etymology is generally regarded as dubious by modern scholars because of its strong similarity to other eponymic legends from antiquity that were later proven to be false, and because Judah does not appear as a given name until late in the post-Exilic period. Several scholars have suggested Judah may be a theophoric name referencing the tetragrammaton (y-h-w), a shortening of Yehuda-el ("praise be to El"), or an otherwise-unattested god y-h-w-d. However, there is little to no agreement on the specifics of such an origin, and these theories have fallen out of favor. A more likely explanation suggests Yehuda is cognate with the Arabic wahda ("ravine" or "gorge"), though such a term is not directly attested in the Old Testament. Judea was sometimes used as the name for the entire region, including parts beyond the river Jordan. In 200 CE, Sextus Julius Africanus, cited by Eusebius (Church History 1.7.14), described "Nazara" (Nazareth) as a village in Judea. The King James Version of the Bible refers to the region as "Jewry". "Judean" was not exclusively used as an ethnic identifier; Ptolemy of Ascalon, for example, quoted in a work by Ammonius of Alexandria, distinguishes between Judeans originating within the land of Judea, "and forcefully circumcised Idumeans (of Syrian or Phoenician origin, in his view) who could likewise be designated 'Judeans.'" Under the Hasmoneans, the Herodians, and the Romans, the term was applied to an area larger than the Judea of earlier periods. In the aftermath of the Bar Kokhba revolt (c. 
132–136 CE), the Roman province of Judaea was renamed Syria Palaestina. 'Judea' was a name used by English speakers for the hilly internal part of Mandatory Palestine until the Jordanian rule of the area in 1948. For example, the borders of the two states to be established under the UN's 1947 partition scheme were officially described using the terms 'Judea' and 'Samaria'; likewise, in the Mandatory administration's reports to the League of Nations Mandatory Committee, as in 1937, the geographical terms employed were 'Samaria and Judea.' Jordan called the area aḍ-ḍiffa al-gharbiya (الضِفَّة الغَرْبِيَّة, translated into English as 'the West Bank'). 'Yehuda' (יהודה) is the Hebrew term used for the area in modern Israel since the region was captured and occupied by Israel in the 1967 Six Day War. According to Britannica, referring to this region as 'Judea and Samaria' (יהודה ושומרון, Yehuda VeShomron) has been associated with the right wing in Israeli politics, which does not support a two-state solution to the Israeli–Palestinian conflict. The term 'West Bank' is what appears in international treaties such as the Oslo Accords established between the Palestine Liberation Organization and the Israeli government. The names "West Bank" (הַגָּדָה הַמַּעֲרָבִית, HaGadah HaMaʽaravit) or, alternatively, "the Territories" (השטחים, HaShtahim) are also current in Israeli usage. Generally, preference for one term over the other indicates the speaker's position on the Israeli political spectrum. Historical boundaries The first-century Roman-Jewish historian Josephus wrote (The Jewish War 3.3.5): In the limits of Samaria and Judea lies the village Anuath, which is also named Borceos. This is the northern boundary of Judea. The southern parts of Judea, if they be measured lengthways, are bounded by a village adjoining to the confines of Arabia; the Jews that dwell there call it Jordan. However, its breadth is extended from the river Jordan to Joppa. The city Jerusalem is situated in the very middle; on which account some have, with sagacity enough, called that city the Navel of the country. Nor indeed is Judea destitute of such delights as come from the sea, since its maritime places extend as far as Ptolemais: it was parted into eleven portions, of which the royal city Jerusalem was the supreme, and presided over all the neighboring country, as the head does over the body. As to the other cities that were inferior to it, they presided over their several toparchies; Gophna was the second of those cities, and next to that Acrabatta, after them Thamna, and Lydda, and Emmaus, and Pella, and Idumea, and Engaddi, and Herodium, and Jericho; and after them came Jamnia and Joppa, as presiding over the neighboring people; and besides these there was the region of Gamala, and Gaulonitis, and Batanea, and Trachonitis, which are also parts of the kingdom of Agrippa. This [last] country begins at Mount Libanus, and the fountains of Jordan, and reaches breadthways to Lake Tiberias; and in length is extended from a village called Arpha, as far as Julias. Its inhabitants are a mixture of Jews and Syrians. And thus have I, with all possible brevity, described the country of Judea, and those that lie round about it. Elsewhere, Josephus wrote that "Arabia is a country that borders on Judea." The first-century Roman historian Tacitus defined Judaea as bordered by Arabia to the east, Egypt to the south, Phoenicia and the Mediterranean Sea to the west, and Syria to the north. 
His conception, presented in Histories 5.6, mirrors a conventional understanding of Judaea as the territory where Jews predominated from the Hasmonaean era onward, standing apart from the provincial borders of the province of Judaea in his own period. Geography Judea is a mountainous region, part of which is considered a desert. It varies greatly in height, rising to an altitude of 1,020 metres (3,350 ft) in the south at the Hebron Hills, 30 km (19 mi) southwest of Jerusalem, and descending to as much as 400 metres (1,300 ft) below sea level in the east of the region. It also varies in rainfall, starting with about 400–500 millimetres (16–20 in) in the western hills, rising to 600 millimetres (24 in) around western Jerusalem (in central Judea), falling back to 400 millimetres (16 in) in eastern Jerusalem and dropping to around 100 millimetres (3.9 in) in the eastern parts, due to a rain shadow: this is the Judaean Desert. The climate, accordingly, moves between Mediterranean in the west and desert climate in the east, with a strip of semi-arid climate in the middle. Major urban areas in the region include Jerusalem, Bethlehem, Gush Etzion, Jericho and Hebron. Geographers divide Judea into several regions: the Hebron hills, the Jerusalem saddle, the Bethel hills and the Judaean Desert east of Jerusalem, which descends in a series of steps to the Dead Sea. The hills are distinct for their anticline structure. In ancient times the hills were forested, and the Bible records agriculture and sheep farming being practiced in the area. Animals are still grazed today, with shepherds moving them from the low ground to the hilltops as summer approaches, while the slopes are still layered with centuries-old stone terracing. The Jewish Revolt against the Romans ended in the devastation of vast areas of the Judean countryside. Mount Hazor marks the geographical boundary between Samaria to its north and Judea to its south. History According to the biblical story of the Patriarchs, Abraham came to the Land of Canaan as commanded by God and moved around in the hill country (Judaea and Samaria) and the Negev. The country is described as populated by Canaanites, Hittites, Jebusites and other population groups. This pattern continued with his son Isaac, his son Jacob and his 12 sons and daughter, Dina, and their families. According to Genesis and Exodus, the patriarchs and matriarchs Sarai, Abraham, Isaac, Rebecca and Jacob were buried at Hebron in the Tomb of the Patriarchs. After the conquest under Joshua, the Israelite tribes conquered and lived in most of the land west of the river Jordan, and in the northern part east of that river, for close to 400 years. The biblical account in the Books of Kings describes how King Saul and later King David and his son Solomon (Shlomo) succeeded in fighting the last remnants of non-Israelite populations and unified the tribes into one united monarchy. According to our[who?] understanding of the text as well as recent archeological findings, this was to a large degree made possible through the Israelite adoption of Iron Age technologies. Scholarship has been divided as to the historical veracity of the existence and extension of a kingdom that unified Judea and Samaria, but archeological excavations of the last 30 years[when?] have time and again found solid evidence that confirms the biblical descriptions. Regardless, the Northern Kingdom was conquered by the Neo-Assyrian Empire in 720 BCE and parts of the population of the ten northern tribes were exiled. 
The southern Kingdom of Judah remained nominally independent, but paid tribute to the Assyrian Empire from 715 and throughout the first half of the 7th century BCE, regaining its independence as the Assyrian Empire declined after 640 BCE; but after 609 it again fell under the sway of imperial rule, this time paying tribute at first to the Egyptians and after 601 BCE to the Neo-Babylonian Empire, until 586 BCE, when it was finally conquered by Babylonia, the temple in Jerusalem was destroyed, and many of the inhabitants of Judea were exiled to Babylonia. The Babylonian Empire fell to the conquests of Cyrus the Great in 539 BCE. Judea remained under Persian rule until the conquest of Alexander the Great in 332 BCE, eventually falling under the rule of the Hellenistic Seleucid Empire, until the revolt of Judas Maccabeus resulted in the Hasmonean dynasty of kings who ruled in Judea for over a century. Judea lost its independence to the Romans in the 1st century BCE, becoming first a tributary kingdom, then a province, of the Roman Empire. The Romans had allied themselves to the Maccabees and intervened in 63 BCE, at the end of the Third Mithridatic War, when the proconsul Pompey ("Pompey the Great") stayed behind to make the area secure for Rome, including his siege of Jerusalem in 63 BCE. Queen Salome Alexandra had recently died, and a civil war broke out between her sons, Hyrcanus II and Aristobulus II. Pompey restored Hyrcanus, but political rule soon passed to the Herodian dynasty, who ruled as client kings. In 6 CE, Judea came under direct Roman rule as the southern part of the province of Judaea, although Jews living there still maintained some form of independence and could judge offenders by their own laws, including capital offences, until c. 28 CE. The Hasmonean kingdom, after Pompey's conquest, was divided in 57 BCE by Gabinius, the governor of Syria, into five administrative districts (synedria or toparchies), as mentioned by Josephus, with the region of historical Judaea proper later being further divided; the exact number of Judaean districts (in the end ten or eleven according to Josephus and Pliny) and their location is disputed, Schürer amending the ancient authors' list as follows: Jerusalem in the centre, later becoming the district of Orine ("Orine Judaea", 'mountainous [region of] Judaea'); Gophna and Akrabatta north of it; Thamna and Lydda to the northwest; Emmaus (possibly the future Nicopolis/Imwas, although other towns in the region also bore that name) to the west; Bethleptepha (rather than Josephus' Pella) to the southwest; Idumaea to the south; Engaddi and Herodeion to the southeast; and Jericho to the east. Schürer dismisses Pliny's listing of "Jopica" (Joppa) and Josephus' of Pella, as these were, in his opinion, independent cities not included in Judaea proper. Other regions outside Judaea proper, which had belonged to the Hasmonean and Herodian kingdoms and came under Roman dominance and then direct rule, remained or became also split into districts with regional capitals, these being Galilee (with the capital at Sepphoris and later Tiberias) and Perea in Transjordan (with Amathus); however, a district administered from a certain Gadara is also mentioned, which may be in three different locations – either in Perea (at or near Al-Salt), in the Decapolis at Umm Qais, or – which is relevant for Judaea – at biblical Gezer in the foothills of the Judaean Mountains (mentioned by Josephus under a Hellenised form of its Semitic name, Gadara, emended to "Gazara" in the Loeb edition). 
In 66 CE, the Jewish population rose against Roman rule in a revolt that proved unsuccessful. Jerusalem was besieged in 70 CE. The city was razed, the Second Temple was destroyed, and much of the population was killed or enslaved. In 132 CE, the Bar Kokhba revolt (132–136 CE) broke out. After an initial string of victories, rebel leader Simeon Bar Kokhba was able to form an independent Jewish state that lasted several years and included most of the district of Judea, including the Judean Mountains, the Judean Desert, and the northern Negev desert, but probably not other sections of the country. When the Romans finally put an end to the uprising, most of the Jews in Judea were killed or displaced, and a sizable number of captives were sold into slavery, leaving the district mostly depopulated. Jews were expelled from the area surrounding Jerusalem. Every village in the district of Judea whose remains have been excavated so far was destroyed during the revolt. Roman emperor Hadrian, determined to root out Jewish nationalism, changed the name of the province from Judaea to Syria Palaestina. The province's Jewish population was now mainly concentrated in Galilee and the coastal plain (especially in Lydda, Joppa, and Caesarea), while smaller Jewish communities continued to live in the Beit She'an Valley, the Carmel, and Judea's northern and southern frontiers, including the southern Hebron Hills and along the shores of the Dead Sea. The suppression of the Bar Kokhba revolt led to widespread destruction and displacement throughout Judea, and the district saw a decline in population. The Roman colony of Aelia Capitolina, which was built on the ruins of Jerusalem, remained a backwater for the duration of its existence. The villages around the city were depopulated, and arable lands in the region were confiscated by the Romans. With no alternative population to fill the empty villages, the authorities established imperial or legionary estates and monasteries on confiscated village lands to benefit the elites and, later, the church. This also initiated a process of romanization during the Late Roman period, with pagan populations penetrating the region and settling alongside Roman veterans. Village settlement revived only on the eastern edges of Jerusalem's hinterland, on the transition between the arable highlands and the Judaean Desert. Those settlements grew on marginal lands with vague ownership and unenforced state land dominion. Judea's decline only came to an end in the fifth century CE, when it developed into a monastic center and Jerusalem became a major Christian pilgrimage and ecclesiastical hub. Under Byzantine rule, the regional population, composed of pagan groups who had migrated there after Jews were driven out following the Bar Kokhba revolt, gradually converted to Christianity. The Byzantines redrew the borders of the land of Palestine. The various Roman territories (Syria Palaestina, Samaria, Galilee, and Peraea) were reorganized into three provinces of Palaestina, reverting to the name first used by the Greek historian Herodotus in the mid-5th century BCE: Palaestina Prima, Secunda, and Tertia or Salutaris (First, Second, and Third Palestine), part of the Diocese of the East. Palaestina Prima consisted of Judea, Samaria, the Paralia, and Peraea, with the governor residing in Caesarea.
Palaestina Secunda consisted of Galilee, the lower Jezreel Valley, the regions east of Galilee, and the western part of the former Decapolis, with the seat of government at Scythopolis. Palaestina Tertia included the Negev, southern Jordan (once part of Arabia) and most of Sinai, with Petra as the usual residence of the governor. Palaestina Tertia was also known as Palaestina Salutaris. According to historian H.H. Ben-Sasson, this reorganisation took place under Diocletian (284–305), although other scholars suggest the change occurred later, in 390. The mostly French army of the First Crusade conquered Jerusalem from the Seljuks in 1099 and expanded the territory it held in the following years. According to Ellenblum, the Franks tended to settle in the southern half of the region between Jerusalem and Nablus, since there was a sizable Christian population there. Most of the people living in the northern portion of Judea in the late 16th century were Muslims; some of them resided in towns that today have significant Christian populations. According to the 1596–1597 Ottoman census, Birzeit and Jifna, for instance, were wholly Muslim villages, while Taybeh had 63 Muslim families and 23 Christian families. There were 71 Christian families and 9 Muslim families in Ramallah, although the Christians there were recent arrivals who had moved from the Kerak area only a few years previously. According to Ehrlich, the region's Christian population decreased as a result of a combination of factors including impoverishment, oppression, marginalization, and persecution. Sufi activity in Jerusalem and the surrounding area most likely also pushed Christian villagers in the region to convert to Islam. Timeline Selected towns and cities Judea, in the generic sense, also incorporates places in Galilee and in Samaria. See also References External links |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Monster_High] | [TOKENS: 4236] |
Contents Monster High Monster High is an American multimedia-supported fashion doll franchise created by toy designer Garrett Sander and launched by Mattel in 2010. Aimed at children ages 7–16, the franchise features characters inspired by monster movies, sci-fi horror, thriller fiction, cryptids, folklore, myths, fairy tales, and popular culture, centering around the adventures of the teenage children of monsters and other mythical creatures attending a high school of the same name. Though the fashion dolls are the main focus of the franchise, a 2D-animated web series and 15 animated TV specials/films were released to accompany them, as well as video games, a series of young adult novels written by Lisi Harrison, and other forms of merchandise. The franchise quickly became very popular among children and was extremely successful in terms of earnings for Mattel; it was worth $1 billion in its third year of existence, with more than $500 million in sales annually, and was at one point the second best-selling doll brand in North America. Two spin-off toy lines were launched as companions to Monster High: Ever After High in 2013, based on fairy tales and fables, and Enchantimals in 2017, featuring human-animal hybrids. However, sales declined in 2016, prompting Mattel to reboot the franchise with a revamped aesthetic and a new fictional universe. The reboot was a commercial failure, eventually leading to the discontinuation of the franchise in 2018. Monster High relaunched a second time in 2020 with the release of new collector dolls inspired by cult horror and goth films, culminating in the 2021 announcement of an animated TV series and a live-action musical film, both produced by Mattel Television, which premiered on Nickelodeon in October 2022. Premise In the fictional American town of New Salem, the teenage children of famous monsters (and other mythical creatures) attend a high school called Monster High. The school is renowned for allowing all species of monsters to enroll: this is in contrast with other schools in the franchise's fantasy world, which are reserved for one type of monster only (for example, a vampire-exclusive school). The characters' stories were told through the TV series, web series, films, and the official website, as well as through diaries (booklets) included with the dolls. Since the franchise's beginnings in the late 2000s and early 2010s, Monster High has valued diversity among its characters in their visual appearance, personalities, abilities, and cultural backgrounds. Monster High features a variety of fictional characters, many of them students at the titular high school. The female characters are called "ghouls", and the male characters are called "mansters". When the franchise was first introduced, the characters were generally the sons and daughters of monsters that had been popularized in fiction; in later years, it expanded to also feature characters inspired by various other types of mythical creatures, such as figures from folklore, mythology, and pop culture. The franchise's official website at the time listed characters in four categories: "original" – the main characters who were introduced the earliest; "ghouls" – the female characters; "mansters" – the male characters; and "Frightmares" – characters who are half-centaur and half-monster. The original characters are Frankie Stein, Draculaura, Clawdeen Wolf, Lagoona Blue, Cleo de Nile, and Deuce Gorgon. Conception and development Mattel began conceptualizing the Monster High franchise in 2007; the company filed for a trademark of the name "Monster High" in October of that year.
Garrett Sander, then a packaging designer at Mattel, and his twin brother Darren went shopping with young girls one day and noticed that the girls were into goth fashion. This served as inspiration for creating a toy brand with a dark aesthetic. Darren was involved with the early concepts for the brand; he came up with the slogan "(Where) Freaky Just Got Fabulous!". He also remarked that because the characters were monsters, they had more freedom to do things that ordinary kids could not do. Other inspirations for the brand include children's interest in Tim Burton and Lady Gaga. Merchandise Fashion dolls were the first franchise product to be released, with the media and other merchandise following soon after. The first line, which included the original six characters, was released in 2010. Mattel was experimenting with a new business strategy that consisted of launching a new franchise by releasing the toy first, without a "traditional entertainment property first", and then following up with the media and entertainment. The original packaging boxes were designed by Garrett Sander himself. According to a social media post made by Sander in 2020, the first prototypes of the dolls were made during development using head molds from another Mattel doll line that was never officially released, bodies from Barbie collector dolls, and some accessories from My Scene dolls. A good amount of the initial design survived, but the production dolls ended up looking drastically different from these prototypes. Over 750 different dolls have been released since the 2010 launch. They vary in size, features, materials used, type of packaging, types of accessories they come with, country of manufacture, etc. Most of them are about 10.5 in (270 mm) tall. Some dolls, particularly ones released long ago or in limited quantities, are rare, collectible, and therefore expensive. Most Monster High dolls were marketed to children as toys to play with, but some "collector's edition" dolls, priced higher and aimed at an older audience, were also made. In 2016, Monster High underwent a reboot, likely an attempt to make the brand appeal to a younger age category. Sales were low that year, and the line was eventually quietly discontinued in 2018. In 2020, however, the franchise made its comeback when two new premium-priced collector dolls, dubbed "Skullector" and inspired by characters from the horror movies It and The Shining, were made available for purchase just in time for that year's Halloween. In 2021, a new set of two Skullector dolls inspired by characters from the movie Beetlejuice was launched exclusively through the "Mattel Creations" section of Mattel's website, alongside a doll inspired by the film Gremlins 2: The New Batch. In 2022, Mattel presented a new Monster High line called "Haunt Couture" (wordplay on "haute couture") which consisted of five new collector dolls: the five main characters of Frankie Stein, Clawdeen Wolf, Draculaura, Cleo de Nile and Lagoona Blue. They featured details such as rooted eyelashes, were priced at $75, and were similarly available only through the website. On Friday, May 13, 2022, Mattel released a new "Booriginal Creeproductions" line of Monster High dolls as a tribute to the original 2010 line. It featured four of the main characters dressed in their original outfits and packaged in boxes that took heavy inspiration from the original packaging.
They were priced at $25 each and were at first available exclusively at Walmart outlets in the United States, and then also worldwide through the "Mattel Creations" section of the Mattel website. They were aimed at older consumers who grew up with the original dolls prior to their 2018 discontinuation. Since then, Mattel has continued to periodically release new Monster High dolls, aimed at consumers and collectors alike. Various other Monster-High-branded products have been released: they include collectible vinyl figurines, Halloween costumes, plushies, stationery, children's clothing, accessories, makeup, perfume, and more. In February 2022, American fashion designer Maisie Wilen collaborated with Mattel to create a pair of earrings inspired by the style of one of the main Monster High characters; they were available for $50 exclusively through the "Mattel Creations" section of the Mattel website. In April 2022, Mattel collaborated with Hot Topic on a clothing collection inspired by the aesthetics of the franchise. Media Launched in the digital media era, Monster High was adapted into a web series that debuted on YouTube on 5 May 2010, followed on October 30 of that year by a 23-minute TV special, Monster High: New Ghoul at School, which premiered on Nickelodeon in the United States. New Ghoul at School and the next TV special, Fright On!, were 2D-animated, with the following films animated in computer-generated imagery: "Why Do Ghouls Fall in Love", "Escape from Skull Shores", "Friday Night Frights", "Scaris: City of Frights", "Ghouls Rule", "13 Wishes", "Freaky Fusion", "Haunted", "Boo York, Boo York", "Great Scarrier Reef", "Welcome to Monster High" and "Electrified". Other films were reported to be in development until the first franchise reboot and the discontinuation of Ever After High in 2016. Starting with Fright On! in 2011, the specials and films were released in direct-to-video home video formats by Universal Pictures Home Entertainment. The films ranked Monster High second among children's direct-to-video franchises that year, according to online magazines and publications. The films and specials have also appeared on streaming services/platforms such as Netflix and Amazon Prime Video. In the 2015 film "Boo York, Boo York", a character known as Astranova makes contact with Apple White and Raven Queen from Ever After High, suggesting a crossover in the future. However, the first franchise reboot and the discontinuation of Ever After High derailed and cancelled those plans (which also included more films based on the franchise than the 16 indicated); brief storyboard animatics were instead released on the official Monster High YouTube channel under the title The Lost Movie, and early designs for the EAH characters intended for the crossover have been released online. In 2021, it was announced that Mattel Television would produce a live-action musical film and an animated TV series for Nickelodeon, which premiered in October 2022. Both projects feature more gender diversity and LGBT characters. Monster High: Kowa Ike Girls (Japanese: モンスター・ハイ こわイケガールズ, romanized: Monsutā Hai Kowa Ike Gāruzu; Monsutā Hai and Gāruzu being transliterations of "Monster High" and "Girls", respectively) is an 8-episode series of 3-minute Japanese animated shorts produced by Shougakukan Music & Digital Entertainment [ja], and animated at Picona Creative Studio.
The shorts were broadcast as part of TXN's morning children's television programming block Oha Suta beginning on October 22, 2014. Mattel Japan's official YouTube account later released the shorts online. The theme song, simply titled "Monster High" (Japanese: モンスター・ハイ, romanized: Monsutā Hai), was sung by Japanese teen idol girl band Amorecarina, featuring Kaede (from another idol girl band, Chu-Z) as a rapper. It was included in Amorecarina's debut single of the same name, along with an instrumental version. The Kowa Ike Girls shorts were released in Japanese only. Video games based on the franchise were released to accompany the audiovisual media. The first game released was Monster High: Ghoul Spirit, available for the Nintendo DS and Wii consoles on 25 October 2011. This release featured a special "Ghoulify" function for the Nintendo DSi. The game revolves around the player being the new 'ghoul' in school, who must work their way through activities and social situations to finally be crowned 'Scream Queen'. Another video game for the Nintendo DS, Nintendo 3DS and Wii, titled Monster High: Skultimate Roller Maze, was released in November 2012. This game allowed players to experience the Monster High sport, Skultimate Roller Maze, in which teams race through a hazardous maze of obstacles. The third video game for the Wii, Wii U, Nintendo DS, and Nintendo 3DS, named after Monster High: 13 Wishes, was released in October 2013. In this game, players take on the role of Frankie Stein, who must free her friends from a magical lantern by collecting thirteen shards of a magic mirror. The mobile apps Ghoul Box and Sweet 1600 are available on iTunes for iPad and iPhone devices. The Monster High website has also released a series of catacomb-themed web games: "trick or trance", "phantom roller" and "scary sweet memories". In November 2015, Monster High: New Ghoul in School was released for the Xbox 360, PC, PlayStation 3, Wii, 3DS, and Wii U. The PC version was de-listed on Steam in 2017. Lisi Harrison, a Canadian author known for writing the popular book series The Clique and The Alphas, wrote several young adult novels based on the franchise; they are set in a different fictional universe from the web series and deal with the Regular-Attribute Dodgers (RADs) and their struggles with love, social life, school, and not being outed as monsters to humans. Mattel released Harrison's first Monster High novel on 26 September 2010. The book revolves around Frankie Stein and Melody Carver. The second book in the series, The Ghoul Next Door, was released at the end of March 2011 and features chapters on Cleo de Nile. The third book, featuring Clawdeen Wolf, is titled Where There's a Wolf, There's a Way and was released on 29 September 2011. The fourth novel, titled Back and Deader Than Ever, was released on May 1, 2012, and features Draculaura. Another Monster High book, Drop Dead Diary, was released on January 19, 2011; it was written by the pseudonymous author Abaghoul Harris. Author Gitty Daneshvari has written a Ghoulfriends series focusing on the Monster High characters Venus McFlytrap, Robecca Steam, and Rochelle Goyle. The four books are: Ghoulfriends Forever, Ghoulfriends Just Want To Have Fun, Who's That Ghoulfriend? and Ghoulfriends 'Til the End. A book series by Nessi Monstrata was released covering five of the main franchise characters, and a book series by Misty Von Spooks featured the Generation 2 franchise characters.
In 2024, IDW Publishing began releasing a line of Monster High comics, including the limited series Monster High: New Scaremester and the one-shots Monster High Pride 2024, Monster High: Halloween Special, Monster High: Howliday Haunt, and Monster High: Bull's Eye. Two songs, "Fright Song" (2010) by Windy Wagner and "We Are Monster High" (2013) by Madison Beer, were released digitally along with live-action music videos on YouTube. Ewa Farna additionally released Polish and Czech versions of "Fright Song" called "Monster High". In 2025, the girl group Katseye covered "Fright Song". Numerous soundtracks have also been released. Spin-offs With the popularity of Monster High, companion doll lines were launched. Ever After High (abbreviated EAH) launched in July 2013 and features the children of characters from well-known fairy tales and fables. The franchise mainly focuses on Apple White, daughter of Snow White, and Raven Queen, daughter of the Evil Queen (also from Snow White), in lead roles. Both represent the main conflict of its associated web series, originally released on YouTube: the Royals, composed of students like Apple White who "want to follow their predetermined fairy tale story", versus the Rebels, composed of students like Raven Queen who "wish to 'rewrite' their story/tale". The C. A. Cupid character from Monster High appears in the corresponding series from the fourth webisode onward as an exchange student. The second companion line was launched on July 18, 2017, as Enchantimals, featuring animal-inspired humanoid characters, each with a corresponding animal companion as a pet. This was reportedly in response to the growth of the My Little Pony: Friendship Is Magic fandom. It was originally tied to Ever After High, but fully branched off amid EAH's declining sales. Reception Monster High was a massive financial success for Mattel, becoming a billion-dollar brand in just three years and surpassing executives' expectations. During the first few years, the dolls' quickly rising popularity caused the sales of Mattel's own Barbie dolls to decline; in 2013, while Barbie remained the best-selling doll brand, Monster High became the second best-selling doll brand, with more than $500 million in annual sales. In 2010, shortly after the dolls first launched, they were so popular that they quickly sold out and were sometimes hard to find in stores. The line's success was partially thanks to its appeal to younger children who were choosing to play with toys that were "a little bit edgier" than traditional fashion dolls like Barbie, its "anti-bullying message" which encouraged children to be themselves and embrace their own flaws and differences, and the "deep engagement" of fans with the franchise, which was maintained through media and merchandise. It was built on a "trans-media storytelling [business] model, since it did not start with a traditional entertainment property first", which also contributed to its success. Even though the franchise experienced a lot of growth in its first few years, especially during 2012 and 2013, sales started declining in 2014 and were weak by 2016. Ultimately, the line was discontinued in 2018, then brought back two years later. On May 16, 2022, when a new doll line featuring reproductions of the original 2010 dolls was made available online through the "Mattel Creations" section of the Mattel website, demand was high: the dolls sold out in less than one day.
The franchise has received positive recognition for its promotion of diversity among the characters, especially in comparison with other toy brands of similar popularity. This diversity continues to be a major selling point in Mattel's marketing of the franchise. In 2022, during the rollout of a new doll line, Lisa McKnight, Executive Vice President of Barbie & Dolls at Mattel, said: "We've been waiting for the right moment to reignite the Monster High brand to connect with [...] issues that are core to our purpose, like inclusion, diversity and community [...] with the updated franchise focused on being authentic, true to yourself and celebrating differences." Monster High has also attracted controversy and criticism; critics argued that the dolls' unrealistic bodies, often revealing outfits, and the characters' focus on romantic relationships were a bad influence on young children. The dolls were criticized for being "hyper-sexualized" and reinforcing gender stereotypes about women; some even suggested that children could develop low self-esteem and eating disorders due to the presentation of unattainable body types. Peggy Orenstein similarly criticized Monster High in her 2011 book Cinderella Ate My Daughter. Inspired by the commercial success of Monster High, other toy manufacturers, namely some of Mattel's biggest competitors in the toy industry, launched their own toy lines with a similar premise and/or aesthetic. In 2012, MGA Entertainment launched Bratzillaz (House of Witchez), a spin-off of the Bratz brand; it featured a similar theme centered around the paranormal and was seen as MGA's attempt at capitalizing on the success of Monster High. The same year, MGA also launched Novi Stars, a sci-fi-themed line of fashion dolls that featured extraterrestrial humanoids. In 2013, The Bridge Direct launched Pinkie Cooper, which featured a humanoid Cocker Spaniel of the same name; in an interview with CNN Money, analyst Gerrick Johnson named both Monster High and Novi Stars as the "competitors that come closest" to the dog-headed fashion doll. Also in 2013, Hasbro launched My Little Pony: Equestria Girls as an anthropomorphized spin-off of the 2010 incarnation of the main My Little Pony franchise; it featured counterparts of My Little Pony characters in human-like silhouettes with non-human skin colors and was regarded as Hasbro's take on Monster High. Notes Bibliography References External links |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Social_network#cite_ref-Introduction_for_the_French_Reader_26-0] | [TOKENS: 5247] |
Contents Social network A social network is a social structure consisting of a set of social actors (such as individuals or organizations), networks of dyadic ties, and other social interactions between actors. The social network perspective provides a set of methods for analyzing the structure of whole social entities, along with a variety of theories explaining the patterns observed in these structures. The study of these structures uses social network analysis to identify local and global patterns, locate influential entities, and examine the dynamics of networks. For instance, social network analysis has been used to study the spread of misinformation on social media platforms and to analyze the influence of key figures in social networks. Social networks and their analysis form an inherently interdisciplinary academic field which emerged from social psychology, sociology, statistics, and graph theory. Georg Simmel authored early structural theories in sociology emphasizing the dynamics of triads and the "web of group affiliations". Jacob Moreno is credited with developing the first sociograms in the 1930s to study interpersonal relationships. These approaches were mathematically formalized in the 1950s, and theories and methods of social networks became pervasive in the social and behavioral sciences by the 1980s. Social network analysis is now one of the major paradigms in contemporary sociology, and is also employed in a number of other social and formal sciences. Together with other complex networks, it forms part of the nascent field of network science. Overview The social network is a theoretical construct useful in the social sciences to study relationships between individuals, groups, organizations, or even entire societies (social units; see differentiation). The term is used to describe a social structure determined by such interactions. The ties through which any given social unit connects represent the convergence of the various social contacts of that unit. This theoretical approach is, necessarily, relational. An axiom of the social network approach to understanding social interaction is that social phenomena should be primarily conceived and investigated through the properties of relations between and within units, instead of the properties of these units themselves. Thus, one common criticism of social network theory is that individual agency is often ignored, although this may not be the case in practice (see agent-based modeling). Precisely because many different types of relations, singular or in combination, form these network configurations, network analytics are useful to a broad range of research enterprises. In social science, these fields of study include, but are not limited to, anthropology, biology, communication studies, economics, geography, information science, organizational studies, social psychology, sociology, and sociolinguistics. History In the late 1890s, both Émile Durkheim and Ferdinand Tönnies foreshadowed the idea of social networks in their theories and research of social groups. Tönnies argued that social groups can exist as personal and direct social ties that either link individuals who share values and belief (Gemeinschaft, German, commonly translated as "community") or impersonal, formal, and instrumental social links (Gesellschaft, German, commonly translated as "society").
Durkheim gave a non-individualistic explanation of social facts, arguing that social phenomena arise when interacting individuals constitute a reality that can no longer be accounted for in terms of the properties of individual actors. Georg Simmel, writing at the turn of the twentieth century, pointed to the nature of networks and the effect of network size on interaction, and examined the likelihood of interaction in loosely knit networks rather than groups. Major developments in the field came in the 1930s from several groups in psychology, anthropology, and mathematics working independently. In psychology, in the 1930s, Jacob L. Moreno began systematic recording and analysis of social interaction in small groups, especially classrooms and work groups (see sociometry). In anthropology, the foundation for social network theory is the theoretical and ethnographic work of Bronislaw Malinowski, Alfred Radcliffe-Brown, and Claude Lévi-Strauss. A group of social anthropologists associated with Max Gluckman and the Manchester School, including John A. Barnes, J. Clyde Mitchell and Elizabeth Bott Spillius, are often credited with performing some of the first fieldwork from which network analyses were performed, investigating community networks in southern Africa, India and the United Kingdom. Concomitantly, British anthropologist S. F. Nadel codified a theory of social structure that was influential in later network analysis. In sociology, the early (1930s) work of Talcott Parsons set the stage for taking a relational approach to understanding social structure. Later, drawing upon Parsons' theory, the work of sociologist Peter Blau provided a strong impetus for analyzing the relational ties of social units with his work on social exchange theory. By the 1970s, a growing number of scholars worked to combine the different tracks and traditions. One group consisted of sociologist Harrison White and his students at the Harvard University Department of Social Relations. Also independently active in the Harvard Social Relations department at the time were Charles Tilly, who focused on networks in political and community sociology and social movements, and Stanley Milgram, who developed the "six degrees of separation" thesis. Mark Granovetter and Barry Wellman are among the former students of White who elaborated and championed the analysis of social networks. Beginning in the late 1990s, social network analysis was advanced by sociologists, political scientists, and physicists such as Duncan J. Watts, Albert-László Barabási, Peter Bearman, Nicholas A. Christakis, James H. Fowler, and others, who developed and applied new models and methods both to emerging data about online social networks and to "digital traces" of face-to-face networks. Levels of analysis In general, social networks are self-organizing, emergent, and complex, such that a globally coherent pattern appears from the local interaction of the elements that make up the system. These patterns become more apparent as network size increases. However, a global network analysis of, for example, all interpersonal relationships in the world is not feasible and would likely contain so much information as to be uninformative. Practical limitations of computing power, ethics, and participant recruitment and payment also limit the scope of a social network analysis.
The nuances of a local system may be lost in a large network analysis; hence, the quality of information may be more important than its scale for understanding network properties. Thus, social networks are analyzed at the scale relevant to the researcher's theoretical question. Although levels of analysis are not necessarily mutually exclusive, there are three general levels into which networks may fall: micro-level, meso-level, and macro-level. At the micro-level, social network research typically begins with an individual, snowballing as social relationships are traced, or may begin with a small group of individuals in a particular social context. Dyadic level: A dyad is a social relationship between two individuals. Network research on dyads may concentrate on the structure of the relationship (e.g. multiplexity, strength), social equality, and tendencies toward reciprocity/mutuality. Triadic level: Add one individual to a dyad, and you have a triad. Research at this level may concentrate on factors such as balance and transitivity, as well as social equality and tendencies toward reciprocity/mutuality. In the balance theory of Fritz Heider, the triad is the key to social dynamics. The discord in a rivalrous love triangle is an example of an unbalanced triad, likely to change to a balanced triad through a change in one of the relations. The dynamics of social friendships in society have been modeled by balancing triads. The study is carried forward with the theory of signed graphs. Actor level: The smallest unit of analysis in a social network is an individual in their social setting, i.e., an "actor" or "ego". Ego-network analysis focuses on network characteristics such as size, relationship strength, density, centrality, prestige, and roles such as isolates, liaisons, and bridges. Such analyses are most commonly used in the fields of psychology or social psychology, ethnographic kinship analysis, or other genealogical studies of relationships between individuals. Subset level: Subset levels of network research problems begin at the micro-level, but may cross over into the meso-level of analysis. Subset-level research may focus on distance and reachability, cliques, cohesive subgroups, or other group actions or behavior. In general, meso-level theories begin with a population size that falls between the micro- and macro-levels. However, meso-level may also refer to analyses that are specifically designed to reveal connections between micro- and macro-levels. Meso-level networks are low density and may exhibit causal processes distinct from interpersonal micro-level networks. Organizations: Formal organizations are social groups that distribute tasks for a collective goal. Network research on organizations may focus on either intra-organizational or inter-organizational ties in terms of formal or informal relationships. Intra-organizational networks themselves often contain multiple levels of analysis, especially in larger organizations with multiple branches, franchises or semi-autonomous departments. In these cases, research is often conducted at a work group level and organization level, focusing on the interplay between the two structures. Experiments with networked groups online have documented ways to optimize group-level coordination through diverse interventions, including the addition of autonomous agents to the groups. Randomly distributed networks: Exponential random graph models of social networks became state-of-the-art methods of social network analysis in the 1980s.
This framework has the capacity to represent social-structural effects commonly observed in many human social networks, including general degree-based structural effects, as well as reciprocity and transitivity and, at the node level, homophily and attribute-based activity and popularity effects, as derived from explicit hypotheses about dependencies among network ties. Parameters are given in terms of the prevalence of small subgraph configurations in the network and can be interpreted as describing the combinations of local social processes from which a given network emerges. These probability models for networks on a given set of actors allow generalization beyond the restrictive dyadic independence assumption of micro-networks, allowing models to be built from theoretical structural foundations of social behavior. Scale-free networks: A scale-free network is a network whose degree distribution follows a power law, at least asymptotically. In network theory, a scale-free ideal network is a random network with a degree distribution that follows the size distribution of social groups. Specific characteristics of scale-free networks vary with the theories and analytical tools used to create them; in general, however, scale-free networks share some common characteristics. One notable characteristic of a scale-free network is the relative commonness of vertices with a degree that greatly exceeds the average. The highest-degree nodes are often called "hubs", and may serve specific purposes in their networks, although this depends greatly on the social context. Another general characteristic of scale-free networks is the clustering coefficient distribution, which decreases as the node degree increases. This distribution also follows a power law. The Barabási model of network evolution is an example of a scale-free network. Rather than tracing interpersonal interactions, macro-level analyses generally trace the outcomes of interactions, such as economic or other resource transfer interactions, over a large population. Large-scale networks: Large-scale network is a term somewhat synonymous with "macro-level". It is primarily used in the social and behavioral sciences and in economics. Originally, the term was used extensively in the computer sciences (see large-scale network mapping). Complex networks: Most larger social networks display features of social complexity, which involves substantial non-trivial features of network topology, with patterns of complex connections between elements that are neither purely regular nor purely random (see complexity science, dynamical systems and chaos theory), as do biological and technological networks. Such complex network features include a heavy tail in the degree distribution, a high clustering coefficient, assortativity or disassortativity among vertices, community structure (see stochastic block model), and hierarchical structure. In the case of agency-directed networks, these features also include reciprocity, triad significance profile (TSP; see network motif), and other features. In contrast, many of the mathematical models of networks that have been studied in the past, such as lattices and random graphs, do not show these features. Theoretical links Various theoretical frameworks have been imported for the use of social network analysis. The most prominent of these are graph theory, balance theory, social comparison theory, and, more recently, the social identity approach.
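The scale-free structure described above is easy to see in a generated network. The following is a minimal sketch, assuming the networkx library (not named in the text); the Barabási–Albert generator and the node counts and parameters below are illustrative choices, not anything prescribed by the sources.

```python
# Sketch: generate a Barabasi-Albert preferential-attachment graph
# and inspect the hub-dominated degree distribution described above.
import networkx as nx

# 1,000 nodes; each new node attaches to 3 existing nodes,
# preferentially choosing high-degree targets.
G = nx.barabasi_albert_graph(n=1000, m=3, seed=42)

degrees = sorted((d for _, d in G.degree()), reverse=True)
avg = sum(degrees) / len(degrees)

# In a scale-free network, a few hubs greatly exceed the average degree.
print(f"average degree: {avg:.1f}, top hubs: {degrees[:5]}")

# Clustering tends to fall as degree rises, as the text notes.
print(f"average clustering coefficient: {nx.average_clustering(G):.3f}")
```

Plotting the resulting degree histogram on log-log axes would show the approximately straight line characteristic of a power-law tail.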
Few complete theories have been produced from social network analysis. Two that have are structural role theory and heterophily theory. The basis of heterophily theory was the finding in one study that more numerous weak ties can be important in seeking information and innovation, as cliques have a tendency to have more homogeneous opinions as well as to share many common traits. This homophilic tendency was the reason the members of the cliques were attracted together in the first place. However, being similar, each member of the clique would also know more or less what the other members knew. To find new information or insights, members of the clique will have to look beyond the clique to their other friends and acquaintances. This is what Granovetter called "the strength of weak ties". Structural holes In the context of networks, social capital exists where people have an advantage because of their location in a network. Contacts in a network provide information, opportunities and perspectives that can be beneficial to the central player in the network. Most social structures tend to be characterized by dense clusters of strong connections. Information within these clusters tends to be rather homogeneous and redundant. Non-redundant information is most often obtained through contacts in different clusters. When two separate clusters possess non-redundant information, there is said to be a structural hole between them. Thus, a network that bridges structural holes will provide network benefits that are to some degree additive, rather than overlapping. An ideal network has a vine-and-cluster structure, providing access to many different clusters and structural holes. Networks rich in structural holes are a form of social capital in that they offer information benefits. The main player in a network that bridges structural holes is able to access information from diverse sources and clusters. For example, in business networks, this is beneficial to an individual's career because he is more likely to hear of job openings and opportunities if his network spans a wide range of contacts in different industries/sectors. This concept is similar to Mark Granovetter's theory of weak ties, which rests on the basis that having a broad range of contacts is most effective for job attainment. Structural holes have been widely applied in social network analysis, resulting in applications in a wide range of practical scenarios as well as machine learning-based social prediction. Research clusters Research has used network analysis to examine networks created when artists are exhibited together in museum exhibitions. Such networks have been shown to affect an artist's recognition in history and historical narratives, even when controlling for the artist's individual accomplishments. Other work examines how network grouping of artists can affect an individual artist's auction performance. An artist's status has been shown to increase when associated with higher-status networks, though this association has diminishing returns over an artist's career. In J.A. Barnes' day, a "community" referred to a specific geographic location, and studies of community ties had to do with who talked, associated, traded, and attended church with whom. Today, however, there are extended "online" communities developed through telecommunications devices and social network services. Such devices and services require extensive and ongoing maintenance and analysis, often using network science methods.
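Among those network-science methods are Burt's structural-hole measures, constraint and effective size, which quantify the bridging position described earlier. A minimal sketch, assuming the networkx library; the toy graph and its hypothetical broker node "b" are invented for illustration:

```python
# Sketch: two dense clusters joined only through a broker node "b".
import networkx as nx

G = nx.Graph()
G.add_edges_from([("a1", "a2"), ("a2", "a3"), ("a1", "a3"),   # cluster A
                  ("c1", "c2"), ("c2", "c3"), ("c1", "c3"),   # cluster C
                  ("b", "a1"), ("b", "c1")])                  # broker ties

constraint = nx.constraint(G)       # low constraint = spans structural holes
effective = nx.effective_size(G)    # non-redundant contacts per node

for node in sorted(G, key=constraint.get):
    print(f"{node}: constraint={constraint[node]:.2f}, "
          f"effective size={effective[node]:.2f}")
# "b" shows the lowest constraint: its two contacts are not connected
# to each other, so it bridges a structural hole between the clusters.
```

Clique members, by contrast, show high constraint, matching the redundancy argument above.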
Community development studies today also make extensive use of such methods. Complex networks require methods specific to modelling and interpreting social complexity and complex adaptive systems, including techniques of dynamic network analysis. Mechanisms such as dual-phase evolution explain how temporal changes in connectivity contribute to the formation of structure in social networks. The study of social networks is being used to examine the nature of interdependencies between actors and the ways in which these are related to outcomes of conflict and cooperation. Areas of study include cooperative behavior among participants in collective actions such as protests; promotion of peaceful behavior, social norms, and public goods within communities through networks of informal governance; the role of social networks in both intrastate and interstate conflict; and social networking among politicians, constituents, and bureaucrats. In criminology and urban sociology, much attention has been paid to the social networks among criminal actors. For example, murders can be seen as a series of exchanges between gangs. Murders can be seen to diffuse outwards from a single source, because weaker gangs cannot afford to kill members of stronger gangs in retaliation, but must commit other violent acts to maintain their reputation for strength. Diffusion of ideas and innovations studies focus on the spread and use of ideas from one actor to another or from one culture to another. This line of research seeks to explain why some become "early adopters" of ideas and innovations, and links social network structure with facilitating or impeding the spread of an innovation. A case in point is the social diffusion of linguistic innovation such as neologisms. Experiments and large-scale field trials (e.g., by Nicholas Christakis and collaborators) have shown that cascades of desirable behaviors can be induced in social groups, in settings as diverse as Honduras villages, Indian slums, and the lab. Still other experiments have documented the experimental induction of social contagion of voting behavior, emotions, risk perception, and commercial products. In demography, the study of social networks has led to new sampling methods for estimating and reaching populations that are hard to enumerate (for example, homeless people or intravenous drug users). For example, respondent-driven sampling is a network-based sampling technique that relies on respondents to a survey recommending further respondents. The field of sociology focuses almost entirely on networks of social interactions and their outcomes. More narrowly, economic sociology considers behavioral interactions of individuals and groups through social capital and social "markets". Sociologists, such as Mark Granovetter, have developed core principles about the interactions of social structure, information, ability to punish or reward, and trust that frequently recur in their analyses of political, economic and other institutions. Granovetter examines how social structures and social networks can affect economic outcomes like hiring, price, productivity and innovation, and describes sociologists' contributions to analyzing the impact of social structure and networks on the economy.
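The diffusion-of-innovations research described above is often formalized with simple spreading models. The following is a minimal independent-cascade simulation, assuming the networkx library; the small-world substrate, seed node, and adoption probability are arbitrary illustrative choices rather than anything from the sources.

```python
# Sketch: independent cascade model of innovation diffusion.
import random
import networkx as nx

def independent_cascade(G, seeds, p=0.2, seed=1):
    """One cascade run; each new adopter gets a single chance to
    convert each neighbour with probability p. Returns adopter set."""
    rng = random.Random(seed)
    adopted, frontier = set(seeds), list(seeds)
    while frontier:
        new = []
        for u in frontier:
            for v in G.neighbors(u):
                if v not in adopted and rng.random() < p:
                    adopted.add(v)
                    new.append(v)
        frontier = new
    return adopted

G = nx.watts_strogatz_graph(200, k=6, p=0.1, seed=3)  # small-world substrate
adopters = independent_cascade(G, seeds=[0])
print(f"{len(adopters)} of {G.number_of_nodes()} nodes adopted")
```

Running the cascade from different seed nodes, or on differently structured networks, illustrates how an actor's position and the topology facilitate or impede spread.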
Analysis of social networks is increasingly incorporated into health care analytics, not only in epidemiological studies but also in models of patient communication and education, disease prevention, mental health diagnosis and treatment, and in the study of health care organizations and systems. Human ecology is an interdisciplinary and transdisciplinary study of the relationship between humans and their natural, social, and built environments. The scientific philosophy of human ecology has a diffuse history with connections to geography, sociology, psychology, anthropology, zoology, and natural ecology. In the study of literary systems, network analysis has been applied by Anheier, Gerhards and Romo, De Nooy, Senekal, and Lotker to study various aspects of how literature functions. The basic premise is that polysystem theory, which has been around since the writings of Even-Zohar, can be integrated with network theory, and the relationships between different actors in the literary network, e.g. writers, critics, publishers, literary histories, etc., can be mapped using visualization from SNA. Research also studies formal and informal organizational relationships, organizational communication, economics, economic sociology, and other resource transfers. Social networks have also been used to examine how organizations interact with each other, characterizing the many informal connections that link executives together, as well as associations and connections between individual employees at different organizations. Many organizational social network studies focus on teams. Within team network studies, research assesses, for example, the predictors and outcomes of centrality and power, density and centralization of team instrumental and expressive ties, and the role of between-team networks. Intra-organizational networks have been found to affect organizational commitment, organizational identification, and interpersonal citizenship behaviour. Social capital is a form of economic and cultural capital in which social networks are central, transactions are marked by reciprocity, trust, and cooperation, and market agents produce goods and services not mainly for themselves, but for a common good. Social capital is split into three dimensions: the structural, the relational and the cognitive dimension. The structural dimension describes how partners interact with each other and which specific partners meet in a social network. Also, the structural dimension of social capital indicates the level of ties among organizations. This dimension is highly connected to the relational dimension, which refers to trustworthiness, norms, expectations and identifications of the bonds between partners. The relational dimension explains the nature of these ties, which is mainly illustrated by the level of trust accorded to the network of organizations. The cognitive dimension analyses the extent to which organizations share common goals and objectives as a result of their ties and interactions. Social capital is a sociological concept about the value of social relations and the role of cooperation and confidence in achieving positive outcomes. The term refers to the value one can get from their social ties. For example, newly arrived immigrants can make use of their social ties to established migrants to acquire jobs they may otherwise have trouble getting (e.g., because of unfamiliarity with the local language). A positive relationship exists between social capital and the intensity of social network use.
In a dynamic framework, higher activity in a network feeds into higher social capital, which itself encourages more activity. This particular cluster focuses on brand image and promotional strategy effectiveness, taking into account the impact of customer participation on sales and brand image. This is gauged through techniques such as sentiment analysis, which rely on mathematical areas of study such as data mining and analytics. This area of research produces vast numbers of commercial applications, as the main goal of any study is to understand consumer behaviour and drive sales. In many organizations, members tend to focus their activities inside their own groups, which stifles creativity and restricts opportunities. A player whose network bridges structural holes has an advantage in detecting and developing rewarding opportunities. Such a player can mobilize social capital by acting as a "broker" of information between two clusters that otherwise would not have been in contact, thus providing access to new ideas, opinions and opportunities. British philosopher and political economist John Stuart Mill wrote, "it is hardly possible to overrate the value of placing human beings in contact with persons dissimilar to themselves.... Such communication [is] one of the primary sources of progress." Thus, a player with a network rich in structural holes can add value to an organization through new ideas and opportunities. This, in turn, helps an individual's career development and advancement. A social capital broker also reaps control benefits from being the facilitator of information flow between contacts. Full communication with exploratory mindsets and information exchange generated by dynamically alternating positions in a social network promotes creative and deep thinking. In the case of the consulting firm Eden McCallum, the founders were able to advance their careers by bridging their connections with former big-three consulting firm consultants and mid-size industry firms. By bridging structural holes and mobilizing social capital, players can advance their careers by executing new opportunities between contacts. There has been research that both substantiates and refutes the benefits of information brokerage. A study of high-tech Chinese firms by Zhixing Xiao found that the control benefits of structural holes are "dissonant to the dominant firm-wide spirit of cooperation and the information benefits cannot materialize due to the communal sharing values" of such organizations. However, this study analyzed only Chinese firms, which tend to have strong communal sharing values. Information and control benefits of structural holes are still valuable in firms that are not quite as inclusive and cooperative on the firm-wide level. In 2004, Ronald Burt studied 673 managers who ran the supply chain for one of America's largest electronics companies. He found that managers who often discussed issues with other groups were better paid, received more positive job evaluations, and were more likely to be promoted. Thus, bridging structural holes can be beneficial to an organization and, in turn, to an individual's career. Computer networks combined with social networking software produce a new medium for social interaction. A relationship over a computerized social networking service can be characterized by context, direction, and strength. The content of a relation refers to the resource that is exchanged.
In a computer-mediated communication context, social pairs exchange different kinds of information, including sending a data file or a computer program as well as providing emotional support or arranging a meeting. With the rise of electronic commerce, information exchanged may also correspond to exchanges of money, goods or services in the "real" world. Social network analysis methods have become essential to examining these types of computer-mediated communication. In addition, the sheer size and the volatile nature of social media have given rise to new network metrics. A key concern with networks extracted from social media is the lack of robustness of network metrics given missing data. Based on the pattern of homophily, ties between people are most likely to occur between nodes that are most similar to each other; likewise, under neighbourhood segregation, individuals are most likely to inhabit the same regional areas as other individuals who are like them. Therefore, social networks can be used as a tool to measure the degree of segregation or homophily within a social network. Social networks can be used to simulate the process of homophily, and they can also serve as a measure of the level of exposure of different groups to each other within a current social network of individuals in a certain area. See also References Further reading External links |
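As a concrete footnote to the homophily measurement discussed just above, one standard statistic is attribute assortativity, the correlation of a categorical attribute across ties. A minimal sketch, assuming the networkx library; the toy graph and its "group" attribute are hypothetical:

```python
# Sketch: measuring homophily as attribute assortativity.
import networkx as nx

G = nx.Graph()
# A toy network where ties mostly form within the same group.
G.add_nodes_from([1, 2, 3], group="red")
G.add_nodes_from([4, 5, 6], group="blue")
G.add_edges_from([(1, 2), (2, 3), (1, 3),    # within red
                  (4, 5), (5, 6), (4, 6),    # within blue
                  (3, 4)])                   # one cross-group tie

# +1 = perfectly homophilous mixing, 0 = random mixing,
# negative values = heterophilous mixing.
r = nx.attribute_assortativity_coefficient(G, "group")
print(f"group assortativity: {r:.2f}")
```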
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Python_(programming_language)#cite_ref-26] | [TOKENS: 4314] |
Contents Python (programming language) Python is a high-level, general-purpose programming language. Its design philosophy emphasizes code readability with the use of significant indentation. Python is dynamically type-checked and garbage-collected. It supports multiple programming paradigms, including structured (particularly procedural), object-oriented and functional programming. Guido van Rossum began working on Python in the late 1980s as a successor to the ABC programming language. Python 3.0, released in 2008, was a major revision and not completely backward-compatible with earlier versions. Beginning with Python 3.5, capabilities and keywords for typing were added to the language, allowing optional static typing. As of 2026, the Python Software Foundation supports Python 3.10, 3.11, 3.12, 3.13, and 3.14, following the project's annual release cycle and five-year support policy. Python 3.15 is currently in the alpha development phase, and the stable release is expected to come out in October 2026. Earlier versions in the 3.x series have reached end-of-life and no longer receive security updates. Python has gained widespread use in the machine learning community. It is widely taught as an introductory programming language. Since 2003, Python has consistently ranked in the top ten of the most popular programming languages in the TIOBE Programming Community Index, which ranks languages based on searches across 24 platforms. History Python was conceived in the late 1980s by Guido van Rossum at Centrum Wiskunde & Informatica (CWI) in the Netherlands. It was designed as a successor to the ABC programming language (itself inspired by SETL) that would be capable of exception handling and of interfacing with the Amoeba operating system. Python implementation began in December 1989. Van Rossum first released it in 1991 as Python 0.9.0. Van Rossum assumed sole responsibility for the project, as the lead developer, until 12 July 2018, when he announced his "permanent vacation" from responsibilities as Python's "benevolent dictator for life" (BDFL); this title was bestowed on him by the Python community to reflect his long-term commitment as the project's chief decision-maker. (He has since come out of retirement and is self-titled "BDFL-emeritus".) In January 2019, active Python core developers elected a five-member Steering Council to lead the project. The name Python derives from the British comedy series Monty Python's Flying Circus. (See § Naming.) Python 2.0 was released on 16 October 2000, introducing features such as list comprehensions, cycle-detecting garbage collection, reference counting, and Unicode support. Python 2.7's end-of-life was initially set for 2015, then postponed to 2020 out of concern that a large body of existing code could not easily be forward-ported to Python 3. It no longer receives security patches or updates. While Python 2.7 and older versions are officially unsupported, a different, unofficial Python implementation, PyPy, continues to support Python 2, i.e., "2.7.18+" (plus 3.11), with the plus signifying (at least some) "backported security updates". Python 3.0 was released on 3 December 2008; it was a major revision and not completely backward-compatible with earlier versions, with some new semantics and changed syntax. Python 2.7.18, released in 2020, was the last release of Python 2. Several releases in the Python 3.x series have added new syntax to the language and made a few (considered very minor) backward-incompatible changes.
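The optional static typing mentioned above can be sketched briefly. Annotations are not enforced at run time, but external checkers such as mypy can verify them; the function and variable names below are illustrative only.

```python
# Sketch: optional static typing via annotations (Python 3.5+/3.6+).
def greet(name: str, times: int = 1) -> str:
    return ", ".join([f"Hello, {name}"] * times)

greeting: str = greet("world", 2)   # variable annotation (Python 3.6+)
print(greeting)                     # Hello, world, Hello, world

# Dynamic typing still applies: this line runs and prints "Hello, 123",
# although a static type checker would flag the int argument.
print(greet(123))
```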
As of January 2026, Python 3.14.3 is the latest stable release. Older 3.x branches continued to receive security updates through releases such as Python 3.9.24 and then 3.9.25, the final version in the 3.9 series. Python 3.10 has been the oldest supported branch since November 2025. Python 3.15 has had an alpha release, and an official downloadable Python 3.14 executable is available for Android. Releases receive two years of full support followed by three years of security support. Design philosophy and features Python is a multi-paradigm programming language. Object-oriented programming and structured programming are fully supported, and many of their features support functional programming and aspect-oriented programming – including metaprogramming and metaobjects. Many other paradigms are supported via extensions, including design by contract and logic programming. Python is often referred to as a 'glue language' because it is purposely designed to be able to integrate components written in other languages. Python uses dynamic typing and a combination of reference counting and a cycle-detecting garbage collector for memory management. It uses dynamic name resolution (late binding), which binds method and variable names during program execution. Python's design offers some support for functional programming in the "Lisp tradition". It has filter, map, and reduce functions; list comprehensions, dictionaries, sets, and generator expressions. The standard library has two modules (itertools and functools) that implement functional tools borrowed from Haskell and Standard ML. Python's core philosophy is summarized in the Zen of Python (PEP 20), written by Tim Peters, which includes aphorisms such as these: "Beautiful is better than ugly"; "Explicit is better than implicit"; "Simple is better than complex"; "Readability counts". However, Python has received criticism for violating these principles and adding unnecessary language bloat. Responses to these criticisms note that the Zen of Python is a guideline rather than a rule. The addition of some new features has been controversial: Guido van Rossum resigned as Benevolent Dictator for Life after conflict about adding the assignment expression operator in Python 3.8. Nevertheless, rather than building all functionality into its core, Python was designed to be highly extensible via modules. This compact modularity has made it particularly popular as a means of adding programmable interfaces to existing applications. Van Rossum's vision of a small core language with a large standard library and easily extensible interpreter stemmed from his frustrations with ABC, which represented the opposite approach. Python claims to strive for a simpler, less-cluttered syntax and grammar, while giving developers a choice in their coding methodology. Python lacks do .. while loops, which Van Rossum considered harmful. In contrast to Perl's motto "there is more than one way to do it", Python advocates an approach where "there should be one – and preferably only one – obvious way to do it". In practice, however, Python provides many ways to achieve a given goal. There are at least three ways to format a string literal, with no certainty as to which one a programmer should use; the functional tools and all three formatting styles appear in the sketch below.
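The functional tools and string-formatting options just described can be made concrete with a minimal sketch; it uses only the standard library, and names such as nums are illustrative rather than drawn from any particular source:

from functools import reduce
from itertools import count, islice

nums = [1, 2, 3, 4, 5]

evens = list(filter(lambda n: n % 2 == 0, nums))    # [2, 4]
doubles = list(map(lambda n: 2 * n, nums))          # [2, 4, 6, 8, 10]
total = reduce(lambda acc, n: acc + n, nums)        # 15

squares = [n * n for n in nums]         # list comprehension (eager)
lazy_squares = (n * n for n in nums)    # generator expression (lazy)
odds = list(islice((n for n in count(1) if n % 2), 3))   # itertools: [1, 3, 5]

# At least three ways to format the same string:
name = "world"
s1 = "Hello, %s!" % name          # printf-style formatting
s2 = "Hello, {}!".format(name)    # str.format method
s3 = f"Hello, {name}!"            # f-string (Python 3.6+)
assert s1 == s2 == s3

The generator expression above is evaluated lazily, so it can describe large or even unbounded sequences without materializing them.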
Alex Martelli, a Fellow at the Python Software Foundation and a Python book author, wrote that "To describe something as 'clever' is not considered a compliment in the Python culture." Python's developers typically prioritize readability over performance. For example, they reject patches to non-critical parts of the CPython reference implementation that would offer increases in speed that do not justify the cost of clarity and readability. Execution speed can be improved by moving speed-critical functions to extension modules written in languages such as C, or by using a just-in-time compiler like PyPy. It is also possible to transpile Python to other languages, but this approach either fails to achieve the expected speed-up, since Python is a very dynamic language, or compiles only a restricted subset of Python (with potential minor semantic changes). Python is meant to be a fun language to use. This goal is reflected in the name – a tribute to the British comedy group Monty Python – and in playful approaches to some tutorials and reference materials. For instance, some code examples use the terms "spam" and "eggs" (in reference to a Monty Python sketch), rather than the typical terms "foo" and "bar". A common neologism in the Python community is pythonic, which has a broad range of meanings related to program style: Pythonic code may use Python idioms well; be natural or show fluency in the language; or conform with Python's minimalist philosophy and emphasis on readability. Syntax and semantics Python is meant to be an easily readable language. Its formatting is visually uncluttered and often uses English keywords where other languages use punctuation. Unlike many other languages, it does not use curly brackets to delimit blocks, and semicolons after statements are allowed but rarely used. It has fewer syntactic exceptions and special cases than C or Pascal. Python uses whitespace indentation, rather than curly brackets or keywords, to delimit blocks. An increase in indentation comes after certain statements; a decrease in indentation signifies the end of the current block. Thus, the program's visual structure accurately represents its semantic structure. This feature is sometimes termed the off-side rule. Some other languages use indentation this way, but in most languages indentation has no semantic meaning. The recommended indent size is four spaces. Python's statements include the following: The assignment statement (=) binds a name as a reference to a separate, dynamically allocated object. Variables may subsequently be rebound at any time to any object. In Python, a variable name is a generic reference holder without a fixed data type; however, it always refers to some object with a type. This is called dynamic typing – in contrast to statically-typed languages, where each variable may contain only a value of a certain type. Python does not support tail call optimization or first-class continuations; according to Van Rossum, the language never will. However, better support for coroutine-like functionality is provided by extending Python's generators. Before 2.5, generators were lazy iterators; data was passed unidirectionally out of the generator. From Python 2.5 on, it is possible to pass data back into a generator function; and from version 3.3, data can be passed through multiple stack levels. The sketch below illustrates indentation-delimited blocks, name rebinding, and generator delegation.
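A short, self-contained sketch of the behavior described above, under the stated semantics; the function names are hypothetical:

# Indentation delimits blocks (the off-side rule); no braces are used.
def classify(n):
    if n % 2 == 0:
        kind = "even"    # this block belongs to the if branch
    else:
        kind = "odd"     # dedenting ends each block
    return kind

# Dynamic typing: a name may be rebound to an object of any type.
x = 42            # x refers to an int
x = "forty-two"   # now x refers to a str

# Since 2.5, data can be passed back into a generator with send();
# since 3.3, yield from passes it through multiple stack levels.
def accumulator():
    total = 0
    while True:
        value = yield total   # receives the argument of send()
        total += value

def delegator():
    yield from accumulator()  # delegates through one more stack level

gen = delegator()
next(gen)                     # prime the generator; it yields 0 first
assert gen.send(10) == 10
assert gen.send(5) == 15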
Python also has many kinds of expressions. A distinction between expressions and statements is rigidly enforced, in contrast to languages such as Common Lisp, Scheme, or Ruby. This distinction leads to duplicating some functionality, for example: list comprehensions versus for-loops, conditional expressions versus if blocks, and the eval() versus exec() built-in functions (the former accepts expressions, the latter statements). A statement cannot be part of an expression; because of this restriction, expressions such as list and dict comprehensions (and lambda expressions) cannot contain statements. As a particular case, an assignment statement such as a = 1 cannot be part of the conditional expression of a conditional statement. Python uses duck typing, and it has typed objects but untyped variable names. Type constraints are not checked at definition time; rather, operations on an object may fail at usage time, indicating that the object is not of an appropriate type. Despite being dynamically typed, Python is strongly typed, forbidding operations that are poorly defined (e.g., adding a number and a string) rather than quietly attempting to interpret them. Python allows programmers to define their own types using classes, most often for object-oriented programming. New instances of classes are constructed by calling the class, for example, SpamClass() or EggsClass(); the classes are instances of the metaclass type (which is an instance of itself), thereby allowing metaprogramming and reflection. Before version 3.0, Python had two kinds of classes, both using the same syntax: old-style and new-style. Current Python versions support the semantics of only the new style. Python supports optional type annotations. These annotations are not enforced by the language, but may be used by external tools such as mypy to catch errors. The standard library also provides the typing module, which supplies several names for use in type annotations. In addition, mypy ships with a compiler called mypyc, which leverages type annotations for optimization. Python includes conventional symbols for arithmetic operators (+, -, *, /), the floor-division operator //, and the modulo operator %. (With the modulo operator, a remainder can be negative, e.g., 4 % -3 == -2.) Python also offers the ** symbol for exponentiation, e.g. 5**3 == 125 and 9**0.5 == 3.0, as well as the matrix-multiplication operator @. These operators work as in traditional mathematics; with the same precedence rules, the infix operators + and - can also be unary, to represent positive and negative numbers respectively. Division between integers produces floating-point results. The behavior of division has changed significantly over time: in Python terms, the / operator represents true division (or simply division), and the // operator represents floor division; before version 3.0, the / operator represented classic division. Rounding towards negative infinity, though a different method than in most languages, adds consistency to Python. For instance, this rounding implies that the equation (a + b)//b == a//b + 1 is always true. Also, the rounding implies that the equation b*(a//b) + a%b == a is valid for both positive and negative values of a. As expected, the result of a%b lies in the half-open interval [0, b), where b is a positive integer; however, maintaining the validity of the equation requires that the result must lie in the interval (b, 0] when b is negative. Python provides a round function for rounding a float to the nearest integer. For tie-breaking, Python 3 uses the round-to-even method: round(1.5) and round(2.5) both produce 2. Python versions before 3 used the round-away-from-zero method: round(0.5) is 1.0, and round(-0.5) is −1.0. A brief sketch of these typing and arithmetic behaviors follows below.
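A minimal sketch of these behaviors, assuming Python 3.9 or later for the list[float] annotation; class and function names are illustrative:

# Duck typing: any object with the required method works; no shared
# base class or declared interface is needed.
class Duck:
    def speak(self):
        return "quack"

class Robot:
    def speak(self):
        return "beep"

voices = [thing.speak() for thing in (Duck(), Robot())]

# Strong typing: ill-defined mixed-type operations raise TypeError
# at usage time instead of being silently coerced.
try:
    "1" + 1
except TypeError:
    pass

# Optional annotations; ignored at run time, checked by tools like mypy.
def mean(values: list[float]) -> float:
    return sum(values) / len(values)

# Floor division rounds toward negative infinity, preserving the
# identities discussed above even for negative operands.
a, b = 7, -3
assert a // b == -3 and a % b == -2          # remainder takes b's sign
assert (a + b) // b == a // b + 1
assert b * (a // b) + a % b == a
assert 5 ** 3 == 125 and 9 ** 0.5 == 3.0     # exponentiation
assert round(1.5) == 2 and round(2.5) == 2   # ties round to even in Python 3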
Python allows Boolean expressions that contain multiple relations in a manner consistent with general usage in mathematics; for example, the expression a < b < c tests whether a is less than b and b is less than c. C-derived languages interpret this expression differently: in C, the expression would first evaluate a < b, resulting in 0 or 1, and that result would then be compared with c. Python uses arbitrary-precision arithmetic for all integer operations. The Decimal type/class in the decimal module provides decimal floating-point numbers to a pre-defined arbitrary precision with several rounding modes. The Fraction class in the fractions module provides arbitrary precision for rational numbers. Due to Python's extensive mathematics library and the third-party library NumPy, the language is frequently used for scientific scripting in tasks such as numerical data processing and manipulation. Functions are created in Python by using the def keyword. A function is defined similarly to how it is called, by first providing the function name and then the required parameters; an example of a function that prints its inputs appears in the sketch below. To assign a default value to a function parameter in case no actual value is provided at run time, variable-definition syntax can be used inside the function header, as the sketch below also shows. Code examples Typical first examples are a "Hello, World!" program and a program to calculate the factorial of a non-negative integer; both are included in the sketch below.
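A minimal sketch of the examples named above, in their canonical forms; the function names (print_inputs, greet, factorial) are illustrative:

# A function that prints its inputs; *args collects any number of them.
def print_inputs(*args):
    for arg in args:
        print(arg)

print_inputs("spam", "eggs", 42)

# A default value in the function header is used when no argument is given.
def greet(name, greeting="Hello"):
    print(f"{greeting}, {name}!")

greet("world")              # prints: Hello, world!
greet("world", "Goodbye")   # prints: Goodbye, world!

# "Hello, World!" program:
print("Hello, World!")

# Factorial of a non-negative integer:
def factorial(n):
    if n < 0:
        raise ValueError("n must be a non-negative integer")
    result = 1
    for i in range(2, n + 1):
        result *= i
    return result

assert factorial(0) == 1 and factorial(5) == 120

# Chained comparison, arbitrary-precision integers, and exact arithmetic:
from decimal import Decimal
from fractions import Fraction

a, b, c = 1, 2, 3
assert a < b < c                                     # a < b and b < c
assert 2 ** 100 == 1267650600228229401496703205376   # ints never overflow
assert Decimal("0.1") + Decimal("0.2") == Decimal("0.3")
assert Fraction(1, 3) + Fraction(1, 6) == Fraction(1, 2)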
Libraries Python's large standard library is commonly cited as one of its greatest strengths. For Internet-facing applications, many standard formats and protocols such as MIME and HTTP are supported. The language includes modules for creating graphical user interfaces, connecting to relational databases, generating pseudorandom numbers, arithmetic with arbitrary-precision decimals, manipulating regular expressions, and unit testing. Some parts of the standard library are covered by specifications – for example, the Web Server Gateway Interface (WSGI) implementation wsgiref follows PEP 333 – but most parts are specified by their code, internal documentation, and test suites. However, because most of the standard library is cross-platform Python code, only a few modules must be altered or rewritten for variant implementations. As of 13 March 2025, the Python Package Index (PyPI), the official repository for third-party Python software, contains over 614,339 packages. Development environments Most Python implementations (including CPython) include a read–eval–print loop (REPL); this permits the environment to function as a command line interpreter, with which users enter statements sequentially and receive results immediately. CPython is also bundled with an integrated development environment (IDE) called IDLE, which is oriented toward beginners. Other shells, including IDLE and IPython, offer further capabilities such as improved auto-completion, session-state retention, and syntax highlighting. Standard desktop IDEs include PyCharm, Spyder, and Visual Studio Code, and there are also web browser-based IDEs. Implementations CPython is the reference implementation of Python. It is written in C, meeting the C11 standard since version 3.11. Older versions use the C89 standard with several select C99 features, but third-party extensions are not limited to older C versions – e.g., they can be implemented using C11 or C++. CPython compiles Python programs into an intermediate bytecode, which is then executed by a virtual machine, and it is distributed with a large standard library written in a combination of C and native Python. CPython is available for many platforms, including Windows and most modern Unix-like systems, including macOS (and Apple M1 Macs, since Python 3.9.1, using an experimental installer). Starting with Python 3.9, the Python installer deliberately fails to install on Windows 7 and 8; Windows XP was supported until Python 3.5, and there has been unofficial support for platforms such as VMS. Platform portability was one of Python's earliest priorities. During the development of Python 1 and 2, even OS/2 and Solaris were supported, but support has since been dropped for many platforms. All current Python versions (since 3.7) support only operating systems that feature multithreading, and Python now supports far fewer operating systems than in the past, many outdated platforms having been dropped. All alternative implementations have at least slightly different semantics; for example, an alternative implementation may have unordered dictionaries, whereas dictionaries preserve insertion order in current CPython. As another example in the larger Python ecosystem, PyPy does not support the full C Python API. Creating an executable with Python is often done by bundling an entire Python interpreter into the executable, which makes binaries massive even for small programs, yet there are implementations capable of truly compiling Python. Alternative implementations include the following: Stackless Python is a significant fork of CPython that implements microthreads; this implementation uses the call stack differently, thus allowing massively concurrent programs. PyPy also offers a stackless version. Just-in-time Python compilers have been developed, but are now unsupported. There are several compilers/transpilers to high-level object languages, for which the source language is unrestricted Python, a subset of Python, or a language similar to Python, as well as specialized compilers; some older projects existed as well, along with compilers not designed for use with Python 3.x and related syntax. A performance comparison among various Python implementations, using a non-numerical (combinatorial) workload, was presented at EuroSciPy '13. In addition, Python's performance relative to other programming languages is benchmarked by The Computer Language Benchmarks Game. There are several approaches to optimizing Python performance despite the inherent slowness of an interpreted language; one simple way to measure the effect of such a strategy is sketched below.
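As a rough illustration of one such strategy – delegating a hot loop to a routine implemented in C, here CPython's built-in sum – the standard-library timeit module can compare a pure-Python loop against the built-in. This is only a sketch; absolute timings vary by machine and Python version:

import timeit

setup = "data = list(range(10_000))"

loop_stmt = """
total = 0
for n in data:
    total += n
"""

pure_python = timeit.timeit(loop_stmt, setup=setup, number=1_000)
builtin_sum = timeit.timeit("sum(data)", setup=setup, number=1_000)

# On a typical CPython build the C-implemented built-in is several times
# faster than the interpreted loop performing the same additions.
print(f"loop: {pure_python:.3f}s  sum(): {builtin_sum:.3f}s")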
Language development Python's development is conducted mostly through the Python Enhancement Proposal (PEP) process; this process is the primary mechanism for proposing major new features, collecting community input on issues, and documenting Python design decisions. Python coding style is covered in PEP 8. Outstanding PEPs are reviewed and commented on by the Python community and the steering council. Enhancement of the language corresponds with development of the CPython reference implementation. The mailing list python-dev is the primary forum for the language's development. Specific issues were originally discussed in the Roundup bug tracker hosted by the foundation; in 2022, all issues and discussions were migrated to GitHub. Development originally took place on a self-hosted source-code repository running Mercurial, until Python moved to GitHub in January 2017. CPython's public releases come in three types, distinguished by which part of the version number is incremented: backward-incompatible versions, which increment the first part and where code is expected to break and must be manually ported; feature releases, which increment the second part, are largely compatible with earlier versions, and introduce new features; and bugfix releases, which increment the third part and fix bugs without adding new features. Many alpha, beta, and release candidates are also released as previews and for testing before final releases. Although there is a rough schedule for releases, they are often delayed if the code is not ready. Python's development team monitors the state of the code by running a large unit test suite during development. The major academic conference on Python is PyCon. Also, there are special Python mentoring programs, such as PyLadies. Naming Python's name is inspired by the British comedy group Monty Python, whom Python creator Guido van Rossum enjoyed while developing the language. Monty Python references appear frequently in Python code and culture; for example, the metasyntactic variables often used in Python literature are spam and eggs, rather than the traditional foo and bar. Also, the official Python documentation contains various references to Monty Python routines. Python users are sometimes referred to as "Pythonistas".
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Ever_After_High] | [TOKENS: 990] |
Ever After High Ever After High is a fashion doll franchise released by Mattel in July 2013. It is a companion line to the Monster High dolls, with the characters being based upon characters from well-known fairy tales and fantasy stories instead of monsters and mythical creatures. As with Monster High and Barbie: Life in the Dreamhouse, the line varies across countries and languages. It has spawned a web series, a film, and a five-book series. Premise Ever After High is a boarding school located in the Fairy Tale World. It is attended by the teenage children of fairy tale characters. The main characters are Raven Queen, who does not want to be evil like her mother the Evil Queen, and Apple White, the daughter of Snow White who wishes to live "happily ever after". Raven prefers to be free to create her own destiny, while Apple, to protect her own destiny and those of others, believes that Raven should become the next Evil Queen. The students are generally divided into two groups. The "Royals" are the students who side with Apple in embracing their destinies and following in their parents' footsteps. The "Rebels" are the students who side with Raven in wanting to create their own destinies. Many of the stories are about the students' regular interactions as teens, but there is an underlying story arc where, according to the school's headmaster, if the students do not follow their individual destinies, their stories will cease to exist and they will disappear forever. Characters Ever After High has a number of characters from its various media. The characters listed below are profiled at the franchise's website and most have featured dolls. These students are content with following their destinies as listed in their fairy tale: Some of the following students do not agree with their destinies and want their own destiny: The following students have website profiles, but it is not clear what faction they are part of: The following students are supporting characters that have yet to have a doll or a website profile. Some are not tied to either faction. The following are the faculty members of Ever After High: Development and release Building upon the success of Monster High, in July 2013, Mattel announced plans to launch a spin-off line of dolls. The company estimated that only about US$10–20 million was budgeted for the development. Mattel's 2013 annual report noted that visitors to the website had spent an average of 20 minutes viewing content, and that it had contributed to gross sales. In October 2013, Mattel launched Ever After High globally, reaching 14 countries, with plans to reach 30 territories in 2014. Six fashion dolls were initially released, along with related social media – a website, a YouTube channel, and a global Facebook presence – and an interactive music video directed by Wayne Isham. Media Ever After High has a series of animated shorts on YouTube. In June 2014, Netflix announced it was developing a series of episodes based on the webisodes, which was released on February 6, 2015. The theme song was composed by Gabriel Mann and Allison Bloom, and was performed by Keeley Bumford. A live-action music video was released on October 15, 2013, featuring Stevie Dore as a high school senior and four younger girls dancing at a campus. Little, Brown Books for Young Readers (LBYR) has developed a book series for Ever After High. The company had worked on the Monster High book series, which had sold over two million copies.
LBYR vice-president Erin Stein said in an interview for Publishers Weekly that when she was first exposed to the Ever After High franchise, "immediately dozens of ideas were spinning in our heads...How could we resist publishing this? It was a no-brainer." The first book, The Storybook of Legends, was written by Shannon Hale, who had worked on other titles such as Princess Academy and The Goose Girl. It debuted in October 2013, with a first printing of 300,000 copies. Hale has remarked that she likes that the Ever After High books can reach kids who are not regular readers. The first book reached number 7 on the New York Times bestseller list for Children's Middle Grade. In addition to the trilogy, Hale wrote a collection of short stories that were compiled in Once Upon a Time: A Story Collection. LBYR has also released a second Ever After High book series by Suzanne Selfors. The first novel, Next Top Villain, was on the Publishers Weekly best-sellers list of Children's Frontlist Fiction for ten weeks. LBYR released a third book series called The Secret Diaries, and a fourth called Once Upon a Twist. Parragon Books has licensed Ever After High for activity packs, novelty-and-book sets, and gift box sets.
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Dionysus] | [TOKENS: 21385] |
Dionysus In ancient Greek religion and myth, Dionysus (/daɪ.əˈnaɪ.səs/; Ancient Greek: Διόνυσος Diónysos) is the god of wine-making, orchards and fruit, vegetation, fertility, festivity, insanity, ritual madness, religious ecstasy, and theatre. He was also known as Bacchus (/ˈbækəs/ or /ˈbɑːkəs/; Ancient Greek: Βάκχος Bacchos) by the Greeks (a name later adopted by the Romans) for a frenzy he is said to induce called baccheia. His wine, music, and ecstatic dance were considered to free his followers from self-conscious fear and care, and subvert the oppressive restraints of the powerful. His thyrsus, a fennel-stem sceptre, sometimes wound with ivy and dripping with honey, is both a beneficent wand and a weapon used to destroy those who oppose his cult and the freedoms he represents. Those who partook of his mysteries were believed to become possessed and empowered by the god himself. His origins are uncertain, and his cults took many forms. Traditionally, the cult of Dionysus has been said to have originated in Asia Minor and to have been introduced to the Greeks via Thrace. However, his name is mentioned in Linear B tablets from Pylos, so he might be of Mycenaean origin. In Orphism, he was variously a son of Zeus and Persephone; a chthonic or underworld aspect of Zeus; or the twice-born son of Zeus and the mortal Semele. The Eleusinian Mysteries identify him with Iacchus, the son or husband of Demeter. Most accounts say he was born in Thrace, traveled abroad, and arrived in Greece as a foreigner. His attribute of "foreignness" as an arriving outsider-god may be inherent and essential to his cults, as he is a god of epiphany, sometimes called "the god who comes". Wine was a religious focus in the cult of Dionysus and was his earthly incarnation. Wine could ease suffering, bring joy, and inspire divine madness. Festivals of Dionysus included the performance of sacred dramas enacting his myths, the initial driving force behind the development of theatre in Western culture. The cult of Dionysus is also a "cult of the souls"; his maenads feed the dead through blood-offerings, and he acts as a divine communicant between the living and the dead. He is sometimes categorised as a dying-and-rising god. Scholars note parallels between Dionysus and Jesus as dying-and-rising gods, though key differences and contexts complicate direct comparisons. Romans identified Bacchus with their own Liber Pater, "the free Father" of the Liberalia festival, patron of viniculture, wine and male fertility, and guardian of the traditions, rituals and freedoms attached to coming of age and citizenship. However, the Roman state treated independent, popular festivals of Bacchus (Bacchanalia) as subversive, partly because their free mixing of classes and genders transgressed traditional social and moral constraints. Celebration of the Bacchanalia was made a capital offence, except in the toned-down forms and greatly diminished congregations approved and supervised by the State. Festivals of Bacchus were merged with those of Liber and Dionysus. Name The dio- prefix in Ancient Greek Διόνυσος (Diónūsos; [di.ó.nyː.sos]) has been associated since antiquity with Zeus (genitive Dios), and the variants of the name seem to point to an original *Dios-nysos. The earliest attestation is the Mycenaean Greek dative form 𐀇𐀺𐀝𐀰 (di-wo-nu-so), featured on two tablets that had been found at Mycenaean Pylos and dated to the twelfth or thirteenth century BC.
At that time, there was no certainty as to whether this was indeed a theonym, but the 1989–90 Greek-Swedish Excavations at Kastelli Hill, Chania, unearthed, inter alia, four artefacts bearing Linear B inscriptions; among them, the inscription on item KH Gq 5 is thought to confirm Dionysus's early worship. In Mycenaean Greek the form of Zeus is di-wo. The second element -nūsos is of unknown origin. It is perhaps associated with Mount Nysa, the birthplace of the god in Greek mythology, where he was nursed by nymphs (the Nysiads), although Pherecydes of Syros had postulated nũsa as an archaic word for "tree" by the sixth century BC. On a vase of Sophilos the Nysiads are named νύσαι (nusae). Kretschmer asserted that νύση (nusē) is a Thracian word that has the same meaning as νύμφη (nýmphē), a word similar to νυός (nuos) (daughter-in-law, or bride, I-E *snusós, Sanskr. snusā). He suggested that the male form is νῦσος (nūsos) and this would make Dionysus the "son of Zeus". Jane Ellen Harrison believed that the name Dionysus means "young Zeus". Robert S. P. Beekes has suggested a Pre-Greek origin of the name, since all attempts to find an Indo-European etymology are doubtful. Later variants include Dionūsos and Diōnūsos in Boeotia; Dien(n)ūsos in Thessaly; Deonūsos and Deunūsos in Ionia; and Dinnūsos in Aeolia, besides other variants. A Dio- prefix is found in other names, such as that of the Dioscures, and may derive from Dios, the genitive of the name of Zeus. Nonnus, in his Dionysiaca, writes that the name Dionysus means "Zeus-limp" and that Hermes named the newborn Dionysus this, "because Zeus while he carried his burden lifted one foot with a limp from the weight of his thigh, and nysos in Syracusan language means limping". In his note to these lines, W. H. D. Rouse writes "It need hardly be said that these etymologies are wrong". The Suda, a Byzantine encyclopedia based on classical sources, states that Dionysus was so named "from accomplishing [διανύειν] for each of those who live the wild life. Or from providing [διανοεῖν] everything for those who live the wild life." Origins Academics in the nineteenth century, using the study of philology and comparative mythology, often regarded Dionysus as a foreign deity who was only reluctantly accepted into the standard Greek pantheon at a relatively late date, based on his myths, which often involve this theme – a god who spends much of his time on earth abroad, and struggles for acceptance when he returns to Greece. However, more recent evidence has shown that Dionysus was in fact one of the earliest gods attested in mainland Greek culture. The earliest written records of Dionysus worship come from Mycenaean Greece, specifically in and around the Palace of Nestor in Pylos, dated to around 1300 BC. The details of any religion surrounding Dionysus in this period are scant, and most evidence comes in the form only of his name, written as di-wo-nu-su-jo ("Dionysoio" = 'of Dionysus') in Linear B, preserved on fragments of clay tablets that indicate a connection to offerings or payments of wine, which was described as being "of Dionysus". References have also been uncovered to "women of Oinoa", the "place of wine", who may correspond to the Dionysian women of later periods. Some 19th-century classicists such as Matthew Arnold identified Phanes as a prototype for Dionysus, and argued that he was originally venerated not as a bringer of wine and revelry but as a "custodian of borrowed identities".
According to this theory, early Dionysian rites were believed to allow worshippers to temporarily take on another persona – ancestor, animal, or even enemy – supposedly explaining why Dionysus is uniquely associated with theatre, madness, and foreigners. Other Mycenaean records from Pylos record the worship of a god named Eleuther, who was the son of Zeus, and to whom oxen were sacrificed. The link to both Zeus and oxen, as well as etymological links between the name Eleuther or Eleutheros and the Latin name Liber Pater, indicates that this may have been another name for Dionysus. According to Károly Kerényi, these clues suggest that even in the thirteenth century BC, the core religion of Dionysus was in place, as were his important myths. At Knossos in Minoan Crete, men were often given the name "Pentheus", which belongs to a figure in later Dionysian myth and also means "suffering". Kerényi argued that to give such a name to one's child implies a strong religious connection, potentially not to the separate character of Pentheus, who suffers at the hands of Dionysus's followers in later myths, but to an epithet of Dionysus himself, whose mythology describes a god who must endure suffering before triumphing over it. According to Kerényi, the title of "man who suffers" likely originally referred to the god himself, only being applied to distinct characters as the myth developed. The oldest known image of Dionysus accompanied by his name is found on a dinos by the Attic potter Sophilos around 570 BC and is located in the British Museum. By the seventh century BC, iconography found on pottery shows that Dionysus was already worshiped as more than just a god associated with wine. He was associated with weddings, death, sacrifice, and sexuality, and his retinue of satyrs and dancers was already established. A common theme in these early depictions was the metamorphosis, at the hand of the god, of his followers into hybrid creatures, usually represented by both tame and wild satyrs, representing the transition from civilised life back to nature as a means of escape. A Mycenaean variant of Bacchus was thought to have been "a divine child" abandoned by his mother and eventually raised by "nymphs, goddesses, or even animals." Epithets Dionysus was known by a variety of epithets, including: Acratophorus, Ἀκρατοφόρος ("giver of unmixed wine"), at Phigaleia in Arcadia. Acroreites at Sicyon. Adoneus, a rare archaism in Roman literature, a Latinised form of Adonis, used as an epithet for Bacchus. Aegobolus Αἰγοβόλος ("goat-shooter") at Potniae, in Boeotia. Aesymnetes Αἰσυμνήτης ("ruler" or "lord") at Aroë and Patrae in Achaea. Agrios Ἄγριος ("wild"), in Macedonia. Androgynos Ἀνδρόγυνος ("androgynous"), referring to the god assuming both the active, masculine and the passive, feminine role during intercourse with male lovers. Anthroporraistes, Ἀνθρωπορραίστης ("man-destroyer"), a title of Dionysus at Tenedos. Bassareus, Βασσαρεύς, a Thracian name for Dionysus, which derives from bassaris or "fox-skin", an item worn by his cultists in their mysteries. Bougenes, Βουγενής or Βοηγενής ("borne by a cow"), in the Mysteries of Lerna. Braetes, Βραίτης ("related to beer") at Thrace. Brisaeus, Βρισαῖος, a surname of Dionysus, derived either from Mount Brisa in Lesbos or from a nymph Brisa, who was said to have brought up the god. Briseus, Βρῑσεύς ("he who prevails") in Smyrna.
Bromios Βρόμιος ("roaring", as of the wind, primarily relating to the central death/resurrection element of the myth, but also to the god's transformations into lion and bull, and to the boisterousness of those who drink alcohol; also cognate with the "roar of thunder", which refers to Dionysus's father, Zeus "the thunderer"). Choiropsalas χοιροψάλας ("pig-plucker": Greek χοῖρος = "pig", also used as a slang term for the female genitalia), a reference to Dionysus's role as a fertility deity. Chthonios Χθόνιος ("the subterranean"). Cistophorus Κιστοφόρος ("basket-bearer, ivy-bearer"), alluding to baskets being sacred to the god. Dasyllius Δασύλλιος ("frequenting the woods") at Megara. Dimetor Διμήτωρ ("twice-born"), referring to Dionysus's two births. Dendrites Δενδρίτης ("of the trees"), as a fertility god. Dithyrambos, Διθύραμβος, used at his festivals, referring to his premature birth. Eleuthereus Ἐλευθερεύς ("of Eleutherae"). Endendros ("he in the tree"). Enorches ("with balls"), with reference to his fertility, or "in the testicles", in reference to Zeus's sewing the baby Dionysus "into his thigh" (understood to mean his testicles); used at Samos according to Hesychius, or Lesbos according to the scholiast on Lycophron's Alexandra. Eridromos ("good-running"), in Nonnus's Dionysiaca. Erikryptos Ἐρίκρυπτος ("completely hidden"), in Macedonia. Euaster (Εὐαστήρ), from the cry "euae". Euius (Euios), from the cry "euae" in lyric passages, and in Euripides's play The Bacchae. Iacchus, Ἴακχος, a possible epithet of Dionysus, associated with the Eleusinian Mysteries; in Eleusis, he is known as a son of Zeus and Demeter. The name "Iacchus" may come from the Ιακχος (Iakchos), a hymn sung in honor of Dionysus. Indoletes, Ἰνδολέτης, meaning slayer/killer of Indians, due to his campaign against the Indians. Isodaetes, Ισοδαίτης, meaning "he who distributes equal portions", a cult epithet also shared with Helios. Kemilius, Κεμήλιος (kemas: "young deer, pricket"). Liknites ("he of the winnowing fan"), as a fertility god connected with mystery religions; a winnowing fan was used to separate the chaff from the grain. Lenaius, Ληναῖος ("god of the wine-press"). Lyaeus, or Lyaios (Λυαῖος, "deliverer", literally "loosener"), one who releases from care and anxiety. Lysius, Λύσιος ("delivering, releasing"); at Thebes there was a temple of Dionysus Lysius. Melanaigis Μελάναιγις ("of the black goatskin") at the Apaturia festival. Morychus Μόρυχος ("smeared"), in Sicily, because his icon was smeared with wine lees at the vintage. Mystes Μύστης ("of the mysteries") at Korythio in Arcadia. Nysian Nύσιος; according to Philostratus, he was called this by the ancient Indians, most probably because, according to legend, he founded the city of Nysa. Oeneus, Οἰνεύς ("wine-dark"), as god of the wine press. Omadios, Ωμάδιος ("eating raw flesh"); Eusebius writes in Preparation for the Gospel that Euelpis of Carystus states that in Chios and Tenedos they did human sacrifice to Dionysus Omadios. Patroos, Πατρῷος ("paternal") at Megara. Phallen, Φαλλήν (probably "related to the phallus"), at Lesbos. Phleus ("related to the bloom of a plant"). Pseudanor, Ψευδάνωρ (literally "false man", referring to his feminine qualities), in Macedonia. Psilax, an epithet of Dionysus in Amyclae, derived from "psila" (ψίλα), the Doric word for wings, since wine lifts men's hearts as wings lift birds. Pericionius, Περικιόνιος ("climbing the column (ivy)"), a name of Dionysus at Thebes.
Semeleios (Semeleius or Semeleus), an obscure epithet meaning 'He of the Earth', 'son of Semele'; it also appears in the expression Semeleios Iakchus plutodotas ("Son of Semele, Iakchus, wealth-giver"). Skyllitas, Σκυλλίτας ("related to the vine-branch") at Kos. Sykites, Συκίτης ("related to figs"), at Laconia. Taurophagus, Ταυροφάγος ("bull eating"). Tauros Ταῦρος ("a bull"), occurs as a surname of Dionysus. Theoinus, Θέοινος (wine-god of a festival in Attica). Thyion, Θυίων ("from the festival of Dionysus 'Thyia' (Θυῐα) at Elis"). Thyllophorus, Θυλλοφόρος ("bearing leaves"), at Kos. In the Greek pantheon, Dionysus (along with Zeus) absorbs the role of Sabazios, a Thracian/Phrygian deity. In the Roman pantheon, Sabazius became an alternative name for Bacchus. Worship and festivals in Greece The worship of Dionysus had become firmly established by the seventh century BC. He may have been worshiped as early as c. 1500–1100 BC by Mycenaean Greeks, and traces of Dionysian-type cult have also been found in ancient Minoan Crete. The Dionysia, Haloa, Ascolia and Lenaia festivals were dedicated to Dionysus. The Rural Dionysia (or Lesser Dionysia) was one of the oldest festivals dedicated to Dionysus, begun in Attica, and probably celebrated the cultivation of vines. It was held during the winter month of Poseideon (the time surrounding the winter solstice, modern December or January). The Rural Dionysia centered on a procession, during which participants carried phalluses, long loaves of bread, jars of water and wine, and other offerings, and young girls carried baskets. The procession was followed by a series of dramatic performances and drama competitions. The City Dionysia (or Greater Dionysia) took place in urban centers such as Athens and Eleusis, and was a later development, probably beginning during the sixth century BC. Held three months after the Rural Dionysia, the Greater festival fell near the spring equinox in the month of Elaphebolion (modern March or April). The procession of the City Dionysia was similar to that of the rural celebrations, but more elaborate, and led by participants carrying a wooden statue of Dionysus, and including sacrificial bulls and ornately dressed choruses. The dramatic competitions of the Greater Dionysia also featured more noteworthy poets and playwrights, and prizes for both dramatists and actors in multiple categories. The Anthesteria (Ἀνθεστήρια) was an Athenian festival that celebrated the beginning of spring. It spanned three days: Pithoigia (Πιθοίγια, "Jar-Opening"), Choes (Χοαί, "The Pouring") and Chythroi (Χύτροι, "The Pots"). It was said the dead arose from the underworld during the span of the festival. Along with the souls of the dead, the Keres also wandered through the city and had to be banished when the festival ended. On the first day, the wine vats were opened and the wine was mixed in honour of the god. The rooms and the drinking vessels were adorned with flowers, as were children over three years of age. On the second day, a solemn ritual for Dionysus occurred, along with drinking. People dressed up, sometimes as members of Dionysus's entourage, and visited others. Choes was also the occasion of a solemn and secret ceremony in one of the sanctuaries of Dionysus in the Lenaeum, which was closed for the rest of the year. The basilissa (or basilinna), wife of the basileus, underwent a symbolic ceremonial marriage to the god, possibly representing a Hieros gamos.
The basilissa was assisted by fourteen Athenian matrons (called Gerarai) who were chosen by the basileus and sworn to secrecy. The last day was dedicated to the dead. Offerings were also made to Hermes, due to his connection to the underworld. It was considered a day of merrymaking. Some poured libations on the tombs of deceased relatives. Chythroi ended with a ritual cry intended to order the souls of the dead to return to the underworld. Keres were also banished from the festival on the last day. To protect themselves from evil, people chewed leaves of whitethorn and smeared their doors with tar. The festival also allowed servants and slaves to participate in the festivities. The central religious cult of Dionysus is known as the Bacchic or Dionysian Mysteries. The exact origin of this religion is unknown, though Orpheus was said to have invented the mysteries of Dionysus. Evidence suggests that many sources and rituals typically considered to be part of the similar Orphic Mysteries actually belong to the Dionysian mysteries. Some scholars have suggested that, additionally, there is no difference between the Dionysian mysteries and the mysteries of Persephone, but that these were all facets of the same mystery religion, in which Dionysus and Persephone both had important roles. Although the Bacchic mysteries were previously considered a primarily rural and fringe part of Greek religion, the major urban center of Athens played an important role in their development and spread. The Bacchic mysteries served an important role in creating ritual traditions for transitions in people's lives; originally these were primarily for men and male sexuality, but they later also created space for ritualising women's changing roles and celebrating changes of status in a woman's life. This was often symbolised by a meeting with the gods who rule over death and change, such as Hades and Persephone, but also with Dionysus's mother Semele, who probably served a role related to initiation into the mysteries. The religion of Dionysus often included rituals involving the sacrifice of goats or bulls, and at least some participants and dancers wore wooden masks associated with the god. In some instances, records show the god participating in the ritual via a masked and clothed pillar, pole, or tree, while his worshipers eat bread and drink wine. The significance of masks and goats to the worship of Dionysus seems to date back to the earliest days of his worship, and these symbols have been found together at a Minoan tomb near Phaistos in Crete. As early as the fifth century BC, Dionysus became identified with Iacchus, a minor deity from the tradition of the Eleusinian Mysteries. This association may have arisen because of the homophony of the names Iacchus and Bacchus. Two black-figure lekythoi (c. 500 BC) possibly represent the earliest evidence for such an association. The nearly identical vases, one in Berlin, the other in Rome, depict Dionysus, along with the inscription IAKXNE, a possible miswriting of IAKXE. More early evidence can be found in the works of the fifth-century BC Athenian tragedians Sophocles and Euripides. In Sophocles's Antigone (c. 441 BC), an ode to Dionysus begins by addressing Dionysus as the "God of many names" (πολυώνυμε), who rules over the glens of Demeter's Eleusis, and ends by identifying him with "Iacchus the Giver", who leads "the chorus of the stars whose breath is fire" and whose "attendant Thyiads" dance in "night-long frenzy".
And in a fragment from a lost play, Sophocles describes Nysa, Dionysus's traditional place of nurture: "From here I caught sight of Nysa, haunt of Bacchus, famed among mortals, which Iacchus of the bull's horns counts as his beloved nurse". In Euripides's Bacchae (c. 405 BC), a messenger, describing the Bacchic revelries on Mount Cithaeron, associates Iacchus with Bromius, another of the names of Dionysus, saying, they "began to wave the thyrsos ... calling on Iacchus, the son of Zeus, Bromius, with united voice." An inscription on a stone stele (c. 340 BC) found at Delphi contains a paean to Dionysus, which describes his travels. From Thebes, where he was born, he first went to Delphi, where he displayed his "starry body", and with "Delphian girls" took his "place on the folds of Parnassus", then to Eleusis, where he is called "Iacchus". Strabo says that Greeks "give the name 'Iacchus' not only to Dionysus but also to the leader-in-chief of the mysteries". In particular, Iacchus was identified with the Orphic Dionysus, who was a son of Persephone. Sophocles mentions "Iacchus of the bull's horns", and according to the first-century BC historian Diodorus Siculus, it was this older Dionysus who was represented in paintings and sculptures with horns, because he "excelled in sagacity and was the first to attempt the yoking of oxen and by their aid to effect the sowing of the seed". Arrian, the second-century Greek historian, wrote that it was to this Dionysus, the son of Zeus and Persephone, "not the Theban Dionysus, that the mystic chant 'Iacchus' is sung". The second-century writer Lucian also referred to the "dismemberment of Iacchus". The fourth- or fifth-century poet Nonnus associated the name Iacchus with the "third" Dionysus. He described the Athenian celebrations given to the first Dionysus Zagreus, son of Persephone, the second Dionysus Bromios, son of Semele, and the third Dionysus Iacchus. By some accounts, Iacchus was the husband of Demeter. Several other sources identify Iacchus as Demeter's son. The earliest such source, a fourth-century BC vase fragment at Oxford, shows Demeter holding the child Dionysus on her lap. By the first century BC, Demeter suckling Iacchus had become such a common motif that the Latin poet Lucretius could use it as an apparently recognisable example of a lover's euphemism. A scholiast on the second-century AD Aristides explicitly names Demeter as Iacchus's mother. In the Orphic tradition, the "first Dionysus" was the son of Zeus and Persephone, and was dismembered by the Titans before being reborn. Dionysus was the patron god of the Orphics, who connected him to death and immortality, and he symbolised the one who guides the process of reincarnation. This Orphic Dionysus is sometimes referred to with the alternate name Zagreus (Ancient Greek: Ζαγρεύς). The earliest mentions of this name in literature describe him as a partner of Gaia and call him the highest god. Aeschylus linked Zagreus with Hades, as either Hades's son or Hades himself. Noting "Hades' identity as Zeus' katachthonios alter ego", Timothy Gantz thought it likely that Zagreus, originally, perhaps, the son of Hades and Persephone, later merged with the Orphic Dionysus, the son of Zeus and Persephone. However, no known Orphic sources use the name "Zagreus" to refer to the Orphic Dionysus. It is possible that the association between the two was known by the third century BC, when the poet Callimachus may have written about it in a now-lost source.
Callimachus, as well as his contemporary Euphorion, told the story of the dismemberment of the infant Dionysus, and Byzantine sources quote Callimachus as referring to the birth of a "Dionysos Zagreus", explaining that Zagreus was the poets' name for the chthonic aspect of Dionysus. The earliest definitive reference to the belief that Zagreus is another name for the Orphic Dionysus is found in the late first-century writings of Plutarch. The fifth-century Greek poet Nonnus's Dionysiaca tells the story of this Orphic Dionysus, in which Nonnus calls him the "older Dionysos ... illfated Zagreus", "Zagreus the horned baby", "Zagreus, the first Dionysos", "Zagreus the ancient Dionysos", and "Dionysos Zagreus". Worship and festivals in Rome Bacchus was most often known by that name in Rome and other locales in the Republic and Empire, although many also "often called him Dionysus". The mystery cult of Bacchus was brought to Rome from the Greek culture of southern Italy or by way of Greek-influenced Etruria. It was established around 200 BC in the Aventine grove of Stimula by a priestess from Campania, near the temple where Liber Pater ("the Free Father") had a State-sanctioned, popular cult. Liber was a native Roman god of wine, fertility, and prophecy, patron of Rome's plebeians (citizen-commoners), and one of the members of the Aventine Triad, along with his mother Ceres and sister or consort Libera. A temple to the Triad was erected on the Aventine Hill in 493 BC, along with the institution of the festival of Liberalia. The worship of the Triad gradually took on more and more Greek influence, and by 205 BC, Liber and Libera had been formally identified with Bacchus and Proserpina. Liber was often interchangeably identified with Dionysus and his mythology, though this identification was not universally accepted. Cicero insisted on the "non-identity of Liber and Dionysus" and described Liber and Libera as children of Ceres. Liber, like his Aventine companions, carried various aspects of his older cults into official Roman religion. He protected various aspects of agriculture and fertility, including the vine and the "soft seed" of its grapes, wine and wine vessels, and male fertility and virility. Pliny called Liber "the first to establish the practice of buying and selling; he also invented the diadem, the emblem of royalty, and the triumphal procession." Roman mosaics and sarcophagi attest to various representations of a Dionysus-like exotic triumphal procession. In Roman and Greek literary sources from the late Republic and Imperial era, several notable triumphs feature similar, distinctively "Bacchic" processional elements, recalling the supposedly historic "Triumph of Liber". Liber and Dionysus may have had a connection that predated Classical Greece and Rome, in the form of the Mycenaean god Eleutheros, who shared the lineage and iconography of Dionysus but whose name has the same meaning as Liber. Before the importation of the Greek cults, Liber was already strongly associated with Bacchic symbols and values, including wine and uninhibited freedom, as well as the subversion of the powerful. Several depictions from the late Republic era feature such processions, depicting the "Triumph of Liber". In Rome, the most well-known festivals of Bacchus were the Bacchanalia, based on the earlier Greek Dionysia festivals. These Bacchic rituals were said to have included omophagic practices, such as pulling live animals apart and eating them whole and raw.
This practice served not only as a reenactment of the infant death and rebirth of Bacchus, but also as a means by which Bacchic practitioners produced "enthusiasm": etymologically, to let a god enter the practitioner's body, or to have her become one with Bacchus. In Livy's account (late 1st century BC), the Bacchic mysteries were a novelty at Rome; originally restricted to women and held only three times a year, they were corrupted by an Etruscan-Greek version, and thereafter drunken, disinhibited men and women of all ages and social classes cavorted in a sexual free-for-all five times a month. Livy relates their various outrages against Rome's civil and religious laws and traditional morality (mos maiorum), describing a secretive, subversive and potentially revolutionary counter-culture. Livy's sources, and his own account of the cult, probably drew heavily on the Roman dramatic genre known as "Satyr plays", based on Greek originals. The cult was suppressed by the State with great ferocity; of the 7,000 arrested, most were executed. Modern scholarship treats much of Livy's account with skepticism; more certainly, a Senatorial edict, the Senatus consultum de Bacchanalibus (186 BC), was distributed throughout Roman and allied Italy. It banned the former Bacchic cult organisations. Each meeting had to seek prior senatorial approval through a praetor. No more than three women and two men were allowed at any one meeting, and those who defied the edict risked the death penalty. Bacchus was conscripted into the official Roman pantheon as an aspect of Liber, and his festival was inserted into the Liberalia. In Roman culture, Liber, Bacchus and Dionysus became virtually interchangeable equivalents. Thanks to his mythology involving travels and struggles on earth, Bacchus became euhemerised as a historical hero, conqueror, and founder of cities. He was a patron deity and founding hero at Leptis Magna, birthplace of the emperor Septimius Severus, who promoted his cult. In some Roman sources, the ritual procession of Bacchus in a tiger-drawn chariot, surrounded by maenads, satyrs and drunkards, commemorates the god's triumphant return from the conquest of India. Pliny believed this to be the historical prototype for the Roman Triumph. Post-classical worship In the Neoplatonist philosophy and religion of Late Antiquity, the Olympian gods were sometimes considered to number 12 based on their spheres of influence. For example, according to Sallustius, "Jupiter, Neptune, and Vulcan fabricate the world; Ceres, Juno, and Diana animate it; Mercury, Venus, and Apollo harmonise it; and, lastly, Vesta, Minerva, and Mars preside over it with a guarding power." The multitude of other gods, in this belief system, subsist within the primary gods, and Sallustius taught that Bacchus subsisted in Jupiter. In the Orphic tradition, a saying was supposedly given by an oracle of Apollo that stated "Zeus, Hades, [and] Helios-Dionysus" were "three gods in one godhead". This statement apparently conflated Dionysus not only with Hades, but also with his father Zeus, and implied a particularly close identification with the sun-god Helios. When quoting this in his Hymn to King Helios, Emperor Julian substituted Dionysus's name with that of Serapis, whose Egyptian counterpart Osiris was also identified with Dionysus.
Three centuries after the reign of Theodosius I, which saw the outlawing of pagan worship across the Roman Empire, the 692 Quinisext Council in Constantinople felt it necessary to warn Christians against participating in the persisting rural worship of Dionysus, specifically mentioning and prohibiting the feast day Brumalia, "the public dances of women", ritual cross-dressing, the wearing of Dionysiac masks, and the invoking of Bacchus's name when "squeez[ing] out the wine in the presses" or "when pouring out wine into jars". According to the Lanercost chronicle, during Easter in 1282 in Scotland, the parish priest of Inverkeithing led young women in a dance in honor of Priapus and Father Liber, commonly identified with Dionysus. The priest danced and sang at the front, carrying a representation of the phallus on a pole. He was killed by a Christian mob later that year. Historian C. S. Watkins believes that Richard of Durham, the author of the chronicle, identified an occurrence of apotropaic magic (by making use of his knowledge of ancient Greek religion), rather than recording an actual case of the survival of a pagan ritual. In the eighteenth century, Hellfire Clubs appeared in Britain and Ireland. Though activities varied between the clubs, some of them were very pagan, and included shrines and sacrifices. Dionysus was one of the most popular deities, alongside deities like Venus and Flora. Today one can still see the statue of Dionysus left behind in the Hellfire Caves. In 1820, Ephraim Lyon founded the Church of Bacchus in Eastford, Connecticut. He declared himself High Priest and added local drunks to the list of members. He maintained that those who died as members would go to a Bacchanalia for their afterlife. Modern pagan and polytheist groups often include the worship of Dionysus in their traditions and practices, most prominently groups which have sought to revive Hellenic polytheism, such as the Supreme Council of Ethnic Hellenes (YSEE). In addition to libations of wine, modern worshipers of Dionysus offer the god grape vines, ivy, and various forms of incense, particularly styrax. They may also celebrate Roman festivals such as the Liberalia (17 March, close to the spring equinox) or the Bacchanalia (various dates), and various Greek festivals such as the Anthesteria, Lenaia, and the Greater and Lesser Dionysia, the dates of which are calculated by the lunar calendar. Identification with other gods In the Greek interpretation of the Egyptian pantheon, Dionysus was often identified with Osiris. Stories of the dismembering of Osiris and his re-assembly and resurrection by Isis closely parallel those of the Orphic Dionysus and Demeter. According to Diodorus Siculus, as early as the fifth century BC the two gods had been syncretised as a single deity known as Dionysus-Osiris. The most notable record of this belief is found in Herodotus's Histories. Plutarch was of the same opinion, recording his belief that Osiris and Dionysus were identical and stating that anyone familiar with the secret rituals associated with the two gods would recognise obvious parallels between them, noting that the myths of their dismemberment and their associated public symbols constituted sufficient additional evidence to prove that they were, in fact, the same god worshiped by the two cultures under different names. Other syncretic Greco-Egyptian deities arose out of this conflation, including the gods Serapis and Hermanubis.
Serapis was believed to be both Hades and Osiris, and the Roman Emperor Julian considered him the same as Dionysus as well. Dionysus-Osiris was particularly popular in Ptolemaic Egypt, as the Ptolemies claimed descent from Dionysus, and as Pharaohs they had claim to the lineage of Osiris. This association was most notable during a deification ceremony where Mark Antony became Dionysus-Osiris, alongside Cleopatra as Isis-Aphrodite. Egyptian myths about Priapus said that the Titans conspired against Osiris, killed him, divided his body into equal parts, and "slipped them secretly out of the house". They threw all the parts into the river except Osiris's penis, since none of them "was willing to take it with him". Isis, Osiris's wife, hunted down and killed the Titans, reassembled Osiris's body parts "into the shape of a human figure", and gave them "to the priests with orders that they pay Osiris the honours of a god". But since she was unable to recover the penis, she ordered the priests "to pay to it the honours of a god and to set it up in their temples in an erect position." The fifth–fourth century BC philosopher Heraclitus, unifying opposites, declared that Hades and Dionysus, the very essence of indestructible life (zoë), are the same god. Among other evidence, Karl Kerényi notes that the Homeric Hymn "To Demeter", votive marble images, and epithets all link Hades with Dionysus. He also notes that after Persephone's abduction, the grieving goddess Demeter refused to drink wine, stating that it would be against themis (the very nature of order and justice) for her to drink wine, the gift of Dionysus; because of this association, Kerényi argues that Hades may in fact have been a "cover name" for the underworld Dionysus. He suggests that this dual identity may have been familiar to those who came into contact with the Mysteries. One of the epithets of Dionysus was "Chthonios", meaning "the subterranean". Evidence for a cult connection is quite extensive, particularly in southern Italy, especially given the heavy death symbolism included in Dionysian worship. Statues of Dionysus found in the Ploutonion at Eleusis give further evidence, as they bear a striking resemblance to the statue of Eubouleus, also called Aides Kyanochaites (Hades of the flowing dark hair), known as the youthful depiction of the Lord of the Underworld. The statue of Eubouleus is described as being radiant but disclosing a strange inner darkness. Ancient portrayals show Dionysus holding in his hand the kantharos, a wine-jar with large handles, and occupying the place where one would expect to see Hades. The archaic artist Xenocles portrayed, on one side of a vase, Zeus, Poseidon and Hades, each with his emblems of power, with Hades's head turned back to front; and, on the other side, Dionysus striding forward to meet his bride Persephone, the kantharos in his hand, against a background of grapes. Dionysus also shared several epithets with Hades, such as Chthonios, Eubouleus and Euclius. Both Hades and Dionysus were associated with Zeus in a divine tripartite deity. Zeus, like Dionysus, was occasionally believed to have an underworld form, closely identified with Hades, to the point that they were occasionally thought of as the same god. According to Marguerite Rigoglioso, Hades is Dionysus, and this dual god was believed by the Eleusinian tradition to have impregnated Persephone.
This would bring the Eleusinian tradition into harmony with the myth in which Zeus, not Hades, impregnated Persephone to bear the first Dionysus. Rigoglioso argues that, taken together, these myths suggest a belief that, with Persephone, Zeus/Hades/Dionysus created (in terms quoted from Kerényi) "a second, a little Dionysus", who is also a "subterranean Zeus". The unification of Hades, Zeus, and Dionysus as a single tripartite god was used to represent the birth, death and resurrection of a deity and to unify the "shining" realm of Zeus and the dark underworld realm of Hades. According to Rosemarie Taylor-Perry, "it is often mentioned that Zeus, Hades and Dionysus were all attributed to being the exact same god ... Being a tripartite deity Hades is also Zeus, doubling as being the Sky God or Zeus, Hades abducts his 'daughter' and paramour Persephone. The taking of Kore by Hades is the act which allows the conception and birth of a second integrating force: Iacchos (Zagreus-Dionysus), also known as Liknites, the helpless infant form of that Deity who is the unifier of the dark underworld (chthonic) realm of Hades and the Olympian ('Shining') one of Zeus." The Phrygian god Sabazios was alternately identified with Zeus or with Dionysus. The Byzantine Greek encyclopedia, the Suda (c. tenth century), stated: Sabazios ... is the same as Dionysos. He acquired this form of address from the rite pertaining to him; for the barbarians call the bacchic cry "sabazein". Hence some of the Greeks too follow suit and call the cry "sabasmos"; thereby Dionysos [becomes] Sabazios. They also used to call "saboi" those places that had been dedicated to him and his Bacchantes ... Demosthenes [in the speech] "On Behalf of Ktesiphon" [mentions them]. Some say that Saboi is the term for those who are dedicated to Sabazios, that is to Dionysos, just as those [dedicated] to Bakkhos [are] Bakkhoi. They say that Sabazios and Dionysos are the same. Thus some also say that the Greeks call the Bakkhoi Saboi. Strabo, in the first century, linked Sabazios with Zagreus among Phrygian ministers and attendants of the sacred rites of Rhea and Dionysos. Strabo's Sicilian contemporary, Diodorus Siculus, conflated Sabazios with the secret Dionysus, born of Zeus and Persephone. However, this connection is not supported by any surviving inscriptions, which are entirely to Zeus Sabazios. Several ancient sources record an apparently widespread belief in the classical world that the god worshiped by the Jewish people, Yahweh, was identifiable as Dionysus or Liber via his identification with Sabazios. Tacitus, Lydus, Cornelius Labeo, and Plutarch all either made this association or discussed it as an extant belief (though some, like Tacitus, specifically brought it up in order to reject it). According to Plutarch, one of the reasons for the identification is that Jews were reported to hail their god with the words "Euoe" and "Sabi", a cry typically associated with the worship of Sabazius. According to the scholar Sean M. McDonough, it is possible that Plutarch's sources had confused the cry of "Iao Sabaoth" (typically used by Greek speakers in reference to Yahweh) with the Sabazian cry of "Euoe Saboe", giving rise to the confusion and conflation of the two deities. The cry of "Sabi" could also have been conflated with the Jewish term "sabbath", adding to what the ancients took as evidence that Yahweh and Dionysus/Sabazius were the same deity.
Further bolstering this connection would have been coins used by the Maccabees that included imagery linked to the worship of Dionysus, such as grapes, vine leaves, and cups. However, the belief that the Jewish god was identical with Dionysus/Sabazius was widespread enough that a coin dated to 55 BC depicting a kneeling king was labelled "Bacchus Judaeus" (BACCHIVS IVDAEVS), and in 139 BC the praetor Cornelius Scipio Hispalus deported Jewish people for attempting to "infect the Roman customs with the cult of Jupiter Sabazius". Mythology Various accounts and traditions existed in the ancient world regarding the parentage, birth, and life of Dionysus on earth, complicated by his several rebirths. By the first century BC, some mythographers had attempted to harmonise the various accounts of Dionysus's birth into a single narrative involving not only multiple births, but two or three distinct manifestations of the god on earth throughout history in different lifetimes. The historian Diodorus Siculus said that according to "some writers of myths" there were two gods named Dionysus, an older one, who was the son of Zeus and Persephone, but that the "younger one also inherited the deeds of the older, and so the men of later times, being unaware of the truth and being deceived because of the identity of their names thought there had been but one Dionysus." He also said that Dionysus "was thought to have two forms ... the ancient one having a long beard, because all men in early times wore long beards, and the younger one being long-haired, youthful and effeminate." Though the varying genealogy of Dionysus was mentioned in many works of classical literature, only a few contain the actual narrative myths surrounding the events of his multiple births. These include the first century BC Bibliotheca historica by the Greek historian Diodorus, which describes the birth and deeds of the three incarnations of Dionysus; the brief birth narrative given by the first century AD Roman author Hyginus, which describes a double birth for Dionysus; and a longer account in the form of the Greek poet Nonnus's epic Dionysiaca, which discusses three incarnations of Dionysus similar to Diodorus's account, but which focuses on the life of the third Dionysus, born to Zeus and Semele. Though Diodorus mentions some traditions according to which an older, Indian or Egyptian Dionysus existed who invented wine, no narratives are given of his birth or life among mortals, and most traditions ascribe the invention of wine and travels through India to the last Dionysus. According to Diodorus, Dionysus was originally the son of Zeus and Persephone (or alternately, Zeus and Demeter). This is the same horned Dionysus described by Hyginus and Nonnus in later accounts, and the Dionysus worshiped by the Orphics, who was dismembered by the Titans and then reborn. Nonnus calls this Dionysus Zagreus, while Diodorus says he is also considered identical with Sabazius. However, unlike Hyginus and Nonnus, Diodorus does not provide a birth narrative for this incarnation of the god. It was this Dionysus who was said to have taught mortals how to use oxen to plow the fields, rather than doing so by hand. His worshipers were said to have honored him for this by depicting him with horns. The Greek poet Nonnus gives a birth narrative for Dionysus in his late fourth or early fifth century AD epic Dionysiaca. In it, he described how Zeus "intended to make a new Dionysos grow up, a bullshaped copy of the older Dionysos" who was the Egyptian god Osiris.
(Dionysiaca 4) Zeus took the shape of a serpent ("drakon"), and "ravished the maidenhood of unwedded Persephoneia." According to Nonnus, though Persephone was "the consort of the blackrobed king of the underworld", she remained a virgin, and had been hidden in a cave by her mother to avoid the many gods who were her suitors, because "all that dwelt in Olympos were bewitched by this one girl, rivals in love for the marriageable maid." (Dionysiaca 5) After her union with Zeus, Persephone's womb "swelled with living fruit", and she gave birth to a horned baby, named Zagreus. Zagreus, despite his infancy, was able to climb onto the throne of Zeus and brandish his lightning bolts, marking him as Zeus's heir. Hera saw this and alerted the Titans, who smeared their faces with chalk and ambushed the infant Zagreus "while he contemplated his changeling countenance reflected in a mirror." They attacked him. However, according to Nonnus, "where his limbs had been cut piecemeal by the Titan steel, the end of his life was the beginning of a new life as Dionysos." He began to change into many different forms in which he returned the attack, including Zeus, Cronus, a baby, and "a mad youth with the flower of the first down marking his rounded chin with black." He then transformed into several animals to attack the assembled Titans, including a lion, a wild horse, a horned serpent, a tiger, and, finally, a bull. Hera intervened, killing the bull with a shout, and the Titans finally slaughtered him and cut him into pieces. Zeus attacked the Titans and had them imprisoned in Tartaros. This caused the mother of the Titans, Gaia, to suffer, and her symptoms were seen across the whole world, resulting in fires and floods, and boiling seas. Zeus took pity on her, and in order to cool down the burning land, he caused great rains to flood the world. (Dionysiaca 6) In the Orphic tradition, Dionysus was, in part, a god associated with the underworld. As a result, the Orphics considered him the son of Persephone, and believed that he had been dismembered by the Titans and then reborn. The earliest attestation of this myth of the dismemberment and rebirth of Dionysus comes from the 1st century BC, in the works of Philodemus and Diodorus Siculus. Later, Neoplatonists such as Damascius and Olympiodorus added a number of further elements to the myth, including the punishment of the Titans by Zeus for their act, their destruction by a thunderbolt from his hand, and the subsequent birth of humankind from their ashes; however, whether any of these elements were part of the original myth is the subject of debate among scholars. The dismemberment of Dionysus (the sparagmos) has often been considered the most important myth of Orphism. Many modern sources identify this "Orphic Dionysus" with the god Zagreus, though this name does not seem to have been used by any of the ancient Orphics, who simply called him Dionysus. As pieced together from various ancient sources, the reconstructed story, usually given by modern scholars, goes as follows. Zeus had intercourse with Persephone in the form of a serpent, producing Dionysus. The infant was taken to Mount Ida, where, like the infant Zeus, he was guarded by the dancing Curetes. Zeus intended Dionysus to be his successor as ruler of the cosmos, but a jealous Hera incited the Titans to kill the child. Damascius claims that he was mocked by the Titans, who gave him a fennel stalk (thyrsus) in place of his rightful scepter. 
Diodorus relates that Dionysus is the son of Zeus and Demeter, the goddess of agriculture, and that his birth narrative is an allegory for the generative power of the gods at work in nature. When the "Sons of Gaia" (i.e. the Titans) boiled Dionysus following his birth, Demeter gathered together his remains, allowing his rebirth. Diodorus noted the symbolism this myth held for its adherents: Dionysus, god of the vine, was born from the gods of the rain and the earth. He was torn apart and boiled by the sons of Gaia, or "earth born", symbolising the harvesting and wine-making process. Just as the remains of the bare vines are returned to the earth to restore its fruitfulness, the remains of the young Dionysus were returned to Demeter allowing him to be born again. The birth narrative given by Gaius Julius Hyginus (c. 64 BC – 17 AD) in Fabulae 167, agrees with the Orphic tradition that Liber (Dionysus) was originally the son of Jove (Zeus) and Proserpine (Persephone). Hyginus writes that Liber was torn apart by the Titans, so Jove took the fragments of his heart and put them into a drink which he gave to Semele, the daughter of Harmonia and Cadmus, king and founder of Thebes. This resulted in Semele becoming pregnant. Juno appeared to Semele in the form of her nurse, Beroe, and told her: "Daughter, ask Jove to come to you as he comes to Juno, so you may know what pleasure it is to sleep with a god." When Semele requested that Jove do so, she was killed by a thunderbolt. Jove then took the infant Liber from her womb, and put him in the care of Nysus. Hyginus states that "for this reason he is called Dionysus, and also the one with two mothers" (dimētōr). Nonnus describes how, when life was rejuvenated after the flood, it was lacking in revelry in the absence of Dionysus. "The Seasons, those daughters of the lichtgang, still joyless, plaited garlands for the gods only of meadow-grass. For Wine was lacking. Without Bacchos to inspire the dance, its grace was only half complete and quite without profit; it charmed only the eyes of the company, when the circling dancer moved in twists and turns with a tumult of footsteps, having only nods for words, hand for mouth, fingers for voice." Zeus declared that he would send his son Dionysus to teach mortals how to grow grapes and make wine, to alleviate their toil, war, and suffering. After he became protector of humanity, Zeus promises, Dionysus would struggle on earth, but be received "by the bright upper air to shine beside Zeus and to share the courses of the stars." (Dionysiaca 7). The mortal princess Semele then had a dream, in which Zeus destroyed a fruit tree with a bolt of lightning, but did not harm the fruit. He sent a bird to bring him one of the fruits, and sewed it into his thigh, so that he would be both mother and father to the new Dionysus. She saw the bull-shaped figure of a man emerge from his thigh, and then came to the realisation that she herself had been the tree. Her father Cadmus, fearful of the prophetic dream, instructed Semele to make sacrifices to Zeus. Semele became a priestess of the god and, on one occasion, she was observed by Zeus as she slaughtered a bull at his altar and afterwards swam in the river Asopus to cleanse herself of the blood. Flying over the scene in the guise of an eagle, Zeus fell in love with Semele and repeatedly visited her secretly. The first time he came to Semele in her bed, he was adorned with various symbols of Dionysus. He transformed into a snake, and "Zeus made long wooing, and shouted "Euoi!" 
as if the winepress were near, as he begat his son who would love the cry." Immediately, Semele's bed and chambers were overgrown with vines and flowers, and the earth laughed. Zeus then spoke to Semele, revealing his true identity, and telling her to be happy: "you bring forth a son who shall not die, and you I will call immortal. Happy woman! you have conceived a son who will make mortals forget their troubles, you shall bring forth joy for gods and men." (Dionysiaca 7). During her pregnancy, Semele rejoiced in the knowledge that her son would be divine. She dressed herself in garlands of flowers and wreathes of ivy, and would run barefoot to the meadows and forests to frolic whenever she heard music. Hera became envious and feared that Zeus would replace her with Semele as queen of Olympus. She went to Semele in the guise of an old woman who had been Cadmus's wet nurse. She made Semele jealous of the attention Zeus gave to Hera, compared with their own brief liaison and provoked her to request Zeus to appear before her in his full godhood. Semele prayed to Zeus that he show himself. Zeus answered her prayers but warned her that no other mortals had ever seen him as he held his lightning bolts. Semele reached out to touch them and was burnt to ash. (Dionysiaca 8). But the infant Dionysus survived, and Zeus rescued him from the flames, sewing him into his thigh. "So the rounded thigh in labour became female, and the boy too soon born was brought forth, but not in a mother's way, having passed from a mother's womb to a father's." (Dionysiaca 9). At his birth, he had a pair of horns shaped like a crescent moon. The Seasons crowned him with ivy and flowers, and wrapped horned snakes around his own horns. An alternate birth narrative is given by Diodorus from the Egyptian tradition. In it, Dionysus is the son of Ammon, who Diodorus regards both as the creator god and a quasi-historical king of Libya. Ammon had married the goddess Rhea, but he had an affair with Amaltheia, who bore Dionysus. Ammon feared Rhea's wrath if she were to discover the child, so he took the infant Dionysus to Nysa (Dionysus's traditional childhood home). Ammon brought Dionysus into a cave where he was to be cared for by Nysa, a daughter of the hero Aristaeus. Dionysus grew famous due to his skill in the arts, his beauty, and his strength. It was said that he discovered the art of winemaking during his boyhood. His fame brought him to the attention of Rhea, who was furious with Ammon for his deception. She attempted to bring Dionysus under her own power but, unable to do so, she left Ammon and married Cronus. Even in antiquity, the account of Dionysus's birth to a mortal woman led some to argue that he had been a historical figure who became deified over time, a suggestion of Euhemerism (an explanation of mythic events having roots in mortal history) often applied to demi-gods. The 4th-century Roman emperor and philosopher Julian encountered examples of this belief, and wrote arguments against it. In his letter To the Cynic Heracleios, Julian wrote "I have heard many people say that Dionysus was a mortal man because he was born of Semele and that he became a god through his knowledge of theurgy and the Mysteries, and like our lord Heracles for his royal virtue was translated to Olympus by his father Zeus." However, to Julian, the myth of Dionysus's birth (and that of Heracles) stood as an allegory for a deeper spiritual truth. 
The birth of Dionysus, Julian argues, was "no birth but a divine manifestation" to Semele, who foresaw that a physical manifestation of the god Dionysus would soon appear. However, Semele was impatient for the god to come, and began revealing his mysteries too early; for her transgression, she was struck down by Zeus. When Zeus decided it was time to impose a new order on humanity, for it to "pass from the nomadic to a more civilized mode of life", he sent his son Dionysus from India as a god made visible, spreading his worship and giving the vine as a symbol of his manifestation among mortals. In Julian's interpretation, the Greeks "called Semele the mother of Dionysus because of the prediction that she had made, but also because the god honored her as having been the first prophetess of his advent while it was yet to be." The allegorical myth of the birth of Dionysus, per Julian, was developed to express both the history of these events and to encapsulate the truth of his birth: it lay outside the generative processes of the mortal world, though entering into it, since his true birth was directly from Zeus alone into the intelligible realm. According to Nonnus, Zeus gave the infant Dionysus to the care of Hermes. Hermes gave Dionysus to the Lamides, or daughters of Lamos, who were river nymphs. But Hera drove the Lamides mad and caused them to attack Dionysus, who was rescued by Hermes. Hermes next brought the infant to Ino for fostering by her attendant Mystis, who taught him the rites of the mysteries (Dionysiaca 9). In Apollodorus's account, Hermes instructed Ino to raise Dionysus as a girl, to hide him from Hera's wrath. Hera found him nonetheless and vowed to destroy the house with a flood, but Hermes again rescued Dionysus, this time bringing him to the mountains of Lydia. Hermes adopted the form of Phanes, most ancient of the gods, and so Hera bowed before him and let him pass. Hermes gave the infant to the goddess Rhea, who cared for him through his adolescence. Another version is that Dionysus was taken to the rain-nymphs of Nysa, who nourished his infancy and childhood, and for their care Zeus rewarded them by placing them as the Hyades among the stars (see Hyades star cluster). In yet another version of the myth, he is raised by his cousin Macris on the island of Euboea. Dionysus in Greek mythology is a god of foreign origin, and while Mount Nysa is a mythological location, it is invariably set far away to the east or to the south. The Homeric Hymn 1 to Dionysus places it "far from Phoenicia, near to the Egyptian stream". Others placed it in Anatolia, or in Libya ("away in the west beside a great ocean"), in Ethiopia (Herodotus), or Arabia (Diodorus Siculus). According to Herodotus: As it is, the Greek story has it that no sooner was Dionysus born than Zeus sewed him up in his thigh and carried him away to Nysa in Ethiopia beyond Egypt; and as for Pan, the Greeks do not know what became of him after his birth. It is therefore plain to me that the Greeks learned the names of these two gods later than the names of all the others, and trace the birth of both to the time when they gained the knowledge. — Herodotus, Histories 2.146.2 The Bibliotheca seems to be following Pherecydes, who relates how the infant Dionysus, god of the grapevine, was nursed by the rain-nymphs, the Hyades at Nysa. Young Dionysus was also said to have been one of the many famous pupils of the centaur Chiron.
According to Ptolemy Chennus in the Library of Photius, "Dionysus was loved by Chiron, from whom he learned chants and dances, the bacchic rites and initiations." When Dionysus grew up, he discovered the culture of the vine and the mode of extracting its precious juice, being the first to do so; but Hera struck him with madness, and drove him forth a wanderer through various parts of the earth. In Phrygia the goddess Cybele, better known to the Greeks as Rhea, cured him and taught him her religious rites, and he set out on a progress through Asia teaching the people the cultivation of the vine. The most famous part of his wanderings is his expedition to India, which is said to have lasted several years. According to legend, when Alexander the Great reached a city called Nysa near the Indus river, the locals said that it had been founded by Dionysus in the distant past and dedicated to the god. These travels took something of the form of military conquests; according to Diodorus Siculus he conquered the whole world except for Britain and Ethiopia. Another myth according to Nonnus involves Ampelus, a satyr, who was loved by Dionysus. As related by Ovid, Ampelus became the constellation Vindemitor, or the "grape-gatherer": "... not so will the Grape-gatherer escape thee. The origin of that constellation also can be briefly told. 'Tis said that the unshorn Ampelus, son of a nymph and a satyr, was loved by Bacchus on the Ismarian hills. Upon him the god bestowed a vine that trailed from an elm's leafy boughs, and still the vine takes from the boy its name. While he rashly culled the gaudy grapes upon a branch, he tumbled down; Liber bore the lost youth to the stars." Another story of Ampelus was related by Nonnus: in an accident foreseen by Dionysus, the youth was killed while riding a bull maddened by the sting of a gadfly sent by Selene, the goddess of the Moon. The Fates granted Ampelus a second life as a vine, from which Dionysus squeezed the first wine. Returning in triumph to Greece after his travels in Asia, Dionysus came to be considered the founder of the triumphal procession. He undertook efforts to introduce his religion into Greece, but was opposed by rulers who feared it, on account of the disorders and madness it brought with it. In one myth, adapted in Euripides's play The Bacchae, Dionysus returns to his birthplace, Thebes, which is ruled by his cousin Pentheus. Pentheus, as well as his mother Agave and his aunts Ino and Autonoë, disbelieve Dionysus's divine birth. Despite the warnings of the blind prophet Tiresias, they deny his worship and denounce him for inspiring the women of Thebes to madness. Dionysus uses his divine powers to drive Pentheus insane, then invites him to spy on the ecstatic rituals of the Maenads, in the woods of Mount Cithaeron. Pentheus, hoping to witness a sexual orgy, hides himself in a tree. The Maenads spot him; maddened by Dionysus, they take him to be a mountain-dwelling lion and attack him with their bare hands. Pentheus's aunts and his mother Agave are among them, and they rip him limb from limb. Agave mounts his head on a pike and takes the trophy to her father Cadmus. Euripides's description of this sparagmos was as follows: "But she was foaming at the mouth, her eyes rolled all around; her mind was mindless now. Held by the god, she paid the man no heed.
She grabbed his left arm just below the elbow: wedging her foot against the victim's ribs she ripped his shoulder off – not by mere force; the god made easy everything they touch. On his right arm worked Ino, ripping flesh; Autonoë and the mob of maenads gripped him, screaming as one. While he had breath, he cried, but they were whooping victory calls. One took an arm, a foot another, boot and all. They stripped his torso bare, staining their nails with blood, then tossed balls of flesh around. Pentheus' body lies in fragments now: on the hard rocks, and mingled with the leaves buried in the woodland, hard to find. His mother stumbled across his head: poor head! She grabbed it, and fixed it on her thyrsus, like a lion's, to wave in joyful triumph at her hunt." The madness passes. Dionysus arrives in his true, divine form, banishes Agave and her sisters, and transforms Cadmus and his wife Harmonia into serpents. Only Tiresias is spared. In the Iliad, when King Lycurgus of Thrace heard that Dionysus was in his kingdom, he imprisoned Dionysus's followers, the Maenads. Dionysus fled and took refuge with Thetis, and sent a drought which stirred the people to revolt. The god then drove King Lycurgus insane and had him slice his own son into pieces with an axe in the belief that he was a patch of ivy, a plant holy to Dionysus. An oracle then claimed that the land would stay dry and barren as long as Lycurgus lived, and his people had him drawn and quartered. Appeased by the king's death, Dionysus lifted the curse. In an alternative version, sometimes depicted in art, Lycurgus tries to kill Ambrosia, a follower of Dionysus, who was transformed into a vine that twined around the enraged king and slowly strangled him. The Homeric Hymn 7 to Dionysus recounts how, while he sat on the seashore, some sailors spotted him, believing him a prince. They attempted to kidnap him and sail away to sell him for ransom or into slavery. No rope would bind him. The god turned into a fierce lion and unleashed a bear on board, killing all in his path. Those who jumped ship were mercifully turned into dolphins. The only survivor was the helmsman, Acoetes, who recognised the god and tried to stop his sailors from the start. In a similar story, Dionysus hired a Tyrrhenian pirate ship to sail from Icaria to Naxos. When he was aboard, they sailed not to Naxos but to Asia, intending to sell him as a slave. This time the god turned the mast and oars into snakes, and filled the vessel with ivy and the sound of flutes so that the sailors went mad and, leaping into the sea, were turned into dolphins. In Ovid's Metamorphoses, Bacchus begins this story as a young child found by the pirates but transforms to a divine adult when on board. Many of the myths involve Dionysus defending his godhead against skeptics. Malcolm Bull notes that "It is a measure of Bacchus's ambiguous position in classical mythology that he, unlike the other Olympians, had to use a boat to travel to and from the islands with which he is associated". Paola Corrente notes that in many sources, the incident with the pirates happens towards the end of Dionysus's time among mortals. In that sense, it serves as final proof of his divinity and is often followed by his descent into Hades to retrieve his mother, both of whom can then ascend into heaven to live alongside the other Olympian gods. Pausanias, in book II of his Description of Greece, describes two variant traditions regarding Dionysus's katabasis, or descent into the underworld.
Both describe how Dionysus entered into the afterlife to rescue his mother Semele, and bring her to her rightful place on Olympus. To do so, he had to contend with the hell dog Cerberus, which was restrained for him by Heracles. After retrieving Semele, Dionysus emerged with her from the unfathomable waters of a lagoon on the coast of the Argolid near the prehistoric site of Lerna, according to the local tradition. This mythic event was commemorated with a yearly nighttime festival, the details of which were held secret by the local religion. According to Paola Corrente, the emergence of Dionysus from the waters of the lagoon may signify a form of rebirth for both him and Semele as they reemerged from the underworld. A variant of this myth forms the basis of Aristophanes's comedy The Frogs. According to the Christian writer Clement of Alexandria, Dionysus was guided in his journey by Prosymnus or Polymnus, who requested, as his reward, to be Dionysus's lover. Prosymnus died before Dionysus could honor his pledge, so to satisfy Prosymnus's shade, Dionysus fashioned a phallus from a fig branch and penetrated himself with it at Prosymnus's tomb. This story survives in full only in Christian sources, whose aim was to discredit pagan mythology, but it appears to have also served to explain the origin of secret objects used by the Dionysian Mysteries. This same myth of Dionysus's descent to the underworld is related by both Diodorus Siculus in his first century BC work Bibliotheca historica and Pseudo-Apollodorus in the third book of his first century AD work Bibliotheca. In the latter, Apollodorus tells how after having been hidden away from Hera's wrath, Dionysus traveled the world opposing those who denied his godhood, finally proving it when he transformed his pirate captors into dolphins. After this, the culmination of his life on earth was his descent to retrieve his mother from the underworld. He renamed his mother Thyone, and ascended with her to heaven, where she became a goddess. In this variant of the myth, it is implied that Dionysus had to prove his godhood to mortals and then legitimise his place on Olympus by proving his lineage and elevating his mother to divine status, before taking his place among the Olympian gods. Dionysus discovered that his old schoolmaster and foster father, Silenus, had gone missing. The old man had wandered away drunk, and was found by some peasants who carried him to their king Midas (alternatively, he passed out in Midas's rose garden). The king recognised him and treated him hospitably, feasting him for ten days and nights while Silenus entertained the company with stories and songs. On the eleventh day, Midas brought Silenus back to Dionysus. Dionysus offered the king his choice of reward. Midas asked that whatever he might touch would turn to gold. Dionysus consented, though was sorry that he had not made a better choice. Midas rejoiced in his new power, which he hastened to put to the test. He touched and turned to gold an oak twig and a stone, but his joy vanished when he found that his bread, meat, and wine also turned to gold. Later, when his daughter embraced him, she too turned to gold. The horrified king strove to divest himself of the Midas Touch, and he prayed to Dionysus to save him from starvation. The god consented, telling Midas to wash in the river Pactolus. As he did so, the power passed into the river, and the river sands turned to gold: this etiological myth explained the gold sands of the Pactolus.
When Theseus abandoned Ariadne sleeping on Naxos, Dionysus found and married her. They had a son named Oenopion, but she committed suicide or was killed by Perseus. In some variants, Dionysus had her crown put into the heavens as the constellation Corona; in others, he descended into Hades to restore her to the gods on Olympus. Another account claims that Athena appeared in a dream to Theseus and instructed him to abandon Ariadne on the island of Naxos (or that Hermes told him so), for Dionysus wanted to marry her. Psalacantha, a nymph, promised to help Dionysus court Ariadne in exchange for his sexual favours; but Dionysus refused, so Psalacantha advised Ariadne against going with him. For this, Dionysus turned her into the plant of the same name. Dionysus fell in love with a nymph named Nicaea, in some versions bound by Eros. Nicaea, however, was a sworn virgin and scorned his attempts to court her. So one day, while she was away, he replaced with wine the water in the spring from which she used to drink. Intoxicated, Nicaea passed out, and Dionysus raped her in her sleep. When she woke up and realised what had happened, she sought him out to harm him, but she never found him. She gave birth to his children Telete, Satyrus, and others. Dionysus named the ancient city of Nicaea after her. In Nonnus's Dionysiaca, Eros made Dionysus fall in love with Aura, a virgin companion of Artemis, as part of a ploy to punish Aura for having insulted Artemis. Dionysus used the same trick as with Nicaea to get her to fall asleep, tied her up, and then raped her. Aura tried to kill herself, without success. When she gave birth to twin sons by Dionysus, Iacchus and another boy, she ate one twin before drowning herself in the Sangarius river. Also in the Dionysiaca, Nonnus relates how Dionysus fell in love with a handsome satyr named Ampelos, who was killed by Selene for challenging her. On his death, Dionysus changed him into the first grapevine. Elsewhere in the same epic, Dionysus arrives in Thrace to punish the impious king Sithon, who slays all of his daughter Pallene's suitors; after a brief wrestling match with the princess herself, Dionysus defeats her, kills Sithon, and beds the maiden. Another account about Dionysus's parentage indicates that he is the son of Zeus and Gê (Gaia), also named Themelê (foundation), a name corrupted into Semele. When Hephaestus—in order to avenge his mother's having ejected him from Olympos—tricked Hera into sitting on a golden throne he had gifted her, she was bound by invisible cords, and none but Hephaestus could free her from the throne; it thus became necessary to fetch Hephaestus back to Olympos, which he refused. However, Dionysus was able to get Hephaestus drunk and, once he was intoxicated, hauled him back to Olympos to release Hera. During the Gigantomachy, Dionysus killed the giant Eurytus with his thyrsus. A third descent by Dionysus to Hades is invented by Aristophanes in his comedy The Frogs. Dionysus, as patron of the Athenian dramatic festival, the Dionysia, wants to bring back to life one of the great tragedians. After a poetry slam, Aeschylus is chosen in preference to Euripides. Callirhoë was a lovely Calydonian woman who scorned Coresus, a priest of Dionysus, so he begged the god to avenge him, and Dionysus sent a plague that drove people insane before killing them. The oracle of Dodona decreed that Dionysus would only be appeased if Callirhoë, or anyone willing to substitute for her, was sacrificed to him.
As Callirhoë could not persuade anyone to take her place, she was led to the altar as the victim. It fell to Coresus to sacrifice Callirhoë, but he could not bring himself to do it, and so he killed himself instead, becoming the substitute victim. In pity, Callirhoë killed herself by a spring which was later named after her. Dionysus also sent to Thebes a fox that was fated never to be caught. Creon, king of Thebes, sent Amphitryon to catch and kill the fox. Amphitryon obtained from Cephalus the dog that his wife Procris had received from Minos, which was fated to catch whatever it pursued. Hyginus relates that Dionysus once gave human speech to a donkey. The donkey then proceeded to challenge Priapus in a contest over which of them had the better penis; the donkey lost. Priapus killed the donkey, but Dionysus placed the donkey among the stars, above the Crab. Iconography and depictions The earliest cult images of Dionysus show a mature male, bearded and robed. He holds a fennel staff, tipped with a pine-cone and known as a thyrsus. Later images show him as a beardless, sensuous, naked or half-naked androgynous youth: the literature describes him as womanly or "man-womanish". In its fully developed form, his central cult imagery shows his triumphant, disorderly arrival or return, as if from some place beyond the borders of the known and civilised. His procession (thiasus) is made up of wild female followers (maenads, or bassarides) in fox robes, and bearded satyrs with erect penises; some are armed with the thyrsus, some dance or play music. The god himself is drawn in a chariot, usually by exotic beasts such as lions or tigers, and is sometimes attended by a bearded, drunken Silenus. This procession is presumed to be the cult model for the followers of his Dionysian Mysteries. Dionysus is represented by city religions as the protector of those who do not belong to conventional society, and he thus symbolises the chaotic, dangerous and unexpected, everything which escapes human reason and which can only be attributed to the unforeseeable action of the gods. Dionysus was a god of resurrection and he was strongly linked to the bull. In a cult hymn from Olympia, at a festival for Hera, Dionysus is invited to come as a bull; "with bull-foot raging". Walter Burkert relates, "Quite frequently [Dionysus] is portrayed with bull horns, and in Kyzikos he has a tauromorphic image", and refers also to an archaic myth in which Dionysus is slaughtered as a bull calf and impiously eaten by the Titans. The snake and phallus were symbols of Dionysus in ancient Greece, and of Bacchus in Greece and Rome. In the procession called the phallophoria, villagers would parade through the streets carrying phallic images or pulling phallic representations on carts. He typically wears a panther or leopard skin and carries a thyrsus. His iconography sometimes includes maenads, who wear wreaths of ivy and serpents around their hair or neck. The cult of Dionysus was closely associated with trees, specifically the fig tree, and some of his bynames exhibit this, such as Endendros "he in the tree" or Dendritēs, "he of the tree". Peters suggests the original meaning as "he who runs among the trees", or that of a "runner in the woods".
Janda (2010) accepts the etymology but proposes the more cosmological interpretation of "he who impels the (world-)tree". This interpretation explains how Nysa could have been re-interpreted from a meaning of "tree" to the name of a mountain: the axis mundi of Indo-European mythology is represented both as a world-tree and as a world-mountain. Dionysus is also closely associated with the transition between summer and autumn. In the Mediterranean summer, marked by the rising of the dog star Sirius, the weather becomes extremely hot, but it is also a time when the promise of coming harvests grows. Late summer, when Orion is at the center of the sky, was the time of the grape harvest in ancient Greece. Plato describes the gifts of this season as the fruit that is harvested as well as Dionysian joy. Pindar describes the "pure light of high summer" as closely associated with Dionysus and possibly even an embodiment of the god himself. An image of Dionysus's birth from Zeus's thigh calls him "the light of Zeus" (Dios phos) and associates him with the light of Sirius. The god, and still more often his followers, were commonly depicted in the painted pottery of Ancient Greece, much of which was made to hold wine. But, apart from some reliefs of maenads, Dionysian subjects rarely appeared in large sculpture before the Hellenistic period, when they became common. In these, the treatment of the god himself ranged from severe archaising or Neo-Attic types such as the Dionysus Sardanapalus to types showing him as an indolent and androgynous young man, often nude. Hermes and the Infant Dionysus is probably a Greek original in marble, and the Ludovisi Dionysus group is probably a Roman original of the second century AD. Well-known Hellenistic sculptures of Dionysian subjects, surviving in Roman copies, include the Barberini Faun, the Belvedere Torso, and the Resting Satyr. The Furietti Centaurs and Sleeping Hermaphroditus reflect related subjects, which had by this time become drawn into the Dionysian orbit. The marble Dancer of Pergamon is an original, as is the bronze Dancing Satyr of Mazara del Vallo, a recent recovery from the sea. The Dionysian world of the Hellenistic period is a hedonistic but safe pastoral into which other semi-divine creatures of the countryside have been co-opted, such as centaurs, nymphs, and the gods Pan and Hermaphrodite. "Nymph" by this stage "means simply an ideal female of the Dionysian outdoors, a non-wild bacchant". Hellenistic sculpture also includes for the first time large genre subjects of children and peasants, many of whom carry Dionysian attributes such as ivy wreaths, and "most should be seen as part of his realm. They have in common with satyrs and nymphs that they are creatures of the outdoors and are without true personal identity." The fourth-century BC Derveni Krater, the unique survival of a very large-scale Classical or Hellenistic metal vessel of top quality, depicts Dionysus and his followers. Dionysus appealed to the Hellenistic monarchies for a number of reasons, apart from merely being a god of pleasure: he was a human who became divine; he came from, and had conquered, the East; he exemplified a lifestyle of display and magnificence with his mortal followers; and he was often regarded as an ancestor. He continued to appeal to the rich of Imperial Rome, who populated their gardens with Dionysian sculpture, and by the second century AD were often buried in sarcophagi carved with crowded scenes of Bacchus and his entourage.
The fourth-century AD Lycurgus Cup in the British Museum is a spectacular cage cup which changes colour when light passes through the glass; it shows the bound King Lycurgus being taunted by the god and attacked by a satyr, and it may have been used for the celebration of Dionysian mysteries. Elizabeth Kessler has theorised that a mosaic appearing on the triclinium floor of the House of Aion in Nea Paphos, Cyprus, details a monotheistic worship of Dionysus. In the mosaic, other gods appear but may only be lesser representations of the centrally imposed Dionysus. The mid-Byzantine Veroli Casket shows the tradition lingering in Constantinople around 1000 AD, but probably not very well understood. Bacchic subjects in art resumed in the Italian Renaissance, and soon became almost as popular as in antiquity, but his "strong association with feminine spirituality and power almost disappeared", as did "the idea that the destructive and creative powers of the god were indissolubly linked". In Michelangelo's statue (1496–97) "madness has become merriment". The statue tries to suggest both drunken incapacity and an elevated consciousness, but this was perhaps lost on later viewers, and typically the two aspects were thereafter split, with a clearly drunk Silenus representing the former, and a youthful Bacchus often shown with wings, because he carries the mind to higher places. Titian's Bacchus and Ariadne (1522–23) and The Bacchanal of the Andrians (1523–26), both painted for the same room, offer an influential heroic pastoral, while Diego Velázquez in The Triumph of Bacchus (or Los borrachos – "the drinkers", c. 1629) and Jusepe de Ribera in his Drunken Silenus choose a genre realism. Flemish Baroque painters frequently depicted the Bacchic followers, as in Van Dyck's Drunken Silenus and many works by Rubens; Poussin was another regular painter of Bacchic scenes. A common theme in art beginning in the sixteenth century was the depiction of Bacchus and Ceres caring for a representation of love – often Venus, Cupid, or Amore. This tradition derived from a quotation by the Roman comic playwright Terence (c. 195/185 – c. 159 BC) which became a popular proverb in the Early Modern period: Sine Cerere et Baccho friget Venus ("without Ceres and Bacchus, Venus freezes"). Its simplest level of meaning is that love needs food and wine to thrive. Artwork based on this saying was popular during the period 1550–1630, especially among the Northern Mannerists of Prague and the Low Countries, and in the work of Rubens. Because of his association with the vine harvest, Bacchus became the god of autumn, and he and his followers were often shown in sets depicting the seasons. Dionysus has remained an inspiration to artists, philosophers and writers into the modern era. In The Birth of Tragedy (1872), the German philosopher Friedrich Nietzsche proposed that a tension between Apollonian and Dionysian aesthetic principles underlay the development of Greek tragedy; Dionysus represented what was unrestrained, chaotic and irrational, while Apollo represented the rational and ordered. This concept of a rivalry or opposition between Dionysus and Apollo has been characterised as a "modern myth", as it is the invention of modern thinkers like Nietzsche and Johann Joachim Winckelmann, and is not found in classical sources. However, the acceptance and popularity of this theme in Western culture has been so great that its undercurrent has influenced the conclusions of classical scholarship.
Nietzsche also claimed that the oldest forms of Greek Tragedy were entirely based upon the suffering Dionysus. In Nietzsche's 1886 work Beyond Good and Evil, and later The Twilight of the Idols, The Antichrist and Ecce Homo, Dionysus is conceived as the embodiment of the unrestrained will to power. Towards the end of his life, Nietzsche went mad; in this period he was known to sign letters as both Dionysus and "The Crucified". In The Hellenic Religion of the Suffering God (1904), and Dionysus and Early Dionysianism (1921), the poet Vyacheslav Ivanov elaborates the theory of Dionysianism, tracing the origins of literature, and tragedy in particular, to ancient Dionysian mysteries. Ivanov said that Dionysus's suffering "was the distinctive feature of the cult", just as Christ's suffering is significant for Christianity. Karl Kerényi characterises Dionysus as representative of the psychological life force (Greek Zoê). Other psychological interpretations place Dionysus's emotionality in the foreground, focusing on the joy, terror or hysteria associated with the god. Sigmund Freud specified that his ashes should be kept in an ancient Greek vase from his collection, painted with Dionysian scenes, which remains on display at Golders Green Crematorium in London. In 1969, an adaptation of The Bacchae, called Dionysus in '69, was performed. A film was made of the same performance. The production was notable for involving audience participation, nudity, and theatrical innovations. In 1974, Stephen Sondheim and Burt Shevelove adapted Aristophanes's comedy The Frogs into a modern musical, which reached Broadway in 2004 and was revived in London in 2017. The musical keeps the descent of Dionysus into Hades to bring back a playwright; however, the playwrights are updated to modern times, and Dionysus is forced to choose between George Bernard Shaw and William Shakespeare. In 2019, the South Korean boy band BTS released a rap-rock-synth-pop-hip hop track named "Dionysus" as part of their album Map of the Soul: Persona. The song takes its name from the god's association with debauchery and excess, reflected in lyrics about "getting drunk on art" – playing on the Korean words for "alcohol" (술 sul) and "art" (예술 yesul) – alongside expressions about the band's stardom, legacy, and artistic integrity. In 2024, the French actor and singer Philippe Katerine portrayed a blue, near-naked Dionysus at the Summer Olympics opening ceremony in France. Parallels with Christianity Some scholars of comparative mythology identify both Dionysus and Jesus with the dying-and-rising god mythological archetype. On the other hand, it has been noted that the details of Dionysus's death and rebirth are starkly different, both in content and symbolism, from those of Jesus. The two stories take place in very different historical and geographic contexts. Also, the manner of death is different; in the most common myth, Dionysus was torn to pieces and eaten by the Titans, but "eventually restored to a new life" from the heart that was left over. Another parallel can be seen in The Bacchae, where Dionysus appears before King Pentheus on charges of claiming divinity, which is compared to the New Testament scene of Jesus being interrogated by Pontius Pilate. However, a number of scholars dispute this parallel, since the confrontation between Dionysus and Pentheus ends with Pentheus dying, torn into pieces by the mad women, whereas the trial of Jesus ends with him being sentenced to death.
E. Kessler has argued that the Dionysian cult developed into strict monotheism by the fourth century AD; together with Mithraism and other sects, the cult formed an instance of "pagan monotheism" in direct competition with Early Christianity during Late Antiquity. Scholars from the sixteenth century onwards, especially Gerard Vossius, also discussed the parallels between the biographies of Dionysus/Bacchus and Moses. John Moles has argued that the Dionysian cult influenced early Christianity, and especially how Christians understood themselves as a new religion centered around a savior deity.
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Korybantes] | [TOKENS: 1338] |
Contents Korybantes According to Greek mythology, the Korybantes (/ˌkɒrɪˈbæntiːz/; Ancient Greek: Κορύβαντες), also spelled Corybantes or Corybants, were the armed and crested dancers who worshipped the Phrygian goddess Cybele with drumming and dancing. They are also called the Kurbantes in Phrygia. Etymology The name Korybantes is of uncertain etymology. Edzard Johan Furnée and R. S. P. Beekes have suggested a Pre-Greek origin. Others refer the name to *κορυβή (korybé), the Macedonian version of κορυφή (koryphé) "crown, top, mountain peak", explaining their association with mountains, particularly Olympus. Family The Korybantes were the offspring of Apollo by either the Muse Thalia, the nymph Rhetia, or the nymph Danais. Other accounts attribute their parentage to Zeus and the Muse Calliope, to Helios and Athena, or, lastly, to Cronus. Kouretes The Kouretes (Κουρῆτες), also spelled Kuretes, were nine dancers who venerated Rhea, the Cretan counterpart of Cybele. A fragment from Strabo's Book VII gives a sense of the roughly analogous character of these male confraternities, and the confusion rampant among those not initiated: Many assert that the gods worshipped in Samothrace as well as the Kurbantes and the Korybantes and in like manner the Kouretes and the Idaean Daktyls are the same as the Kabeiroi, but as to the Kabeiroi they are unable to tell who they are. Grant Showerman in the Encyclopædia Britannica Eleventh Edition addressed the confusion, stating that the Korybantes "are distinguished only [from the Kuretes] by their Asiatic origin and by the more pronouncedly orgiastic nature of their rites". According to Oppian, the Curetes, who had been tasked with guarding the young Zeus, were turned into lions by Cronus. Zeus then made them the kings of the animals, while his mother Rhea yoked them to her chariot. Initiatory dance These armored male dancers kept time to a drum and the rhythmic stamping of their feet. Dance, according to Greek thought, was one of the civilizing activities, like wine-making or music. The dance in armor (the "Pyrrhic dance" or pyrrhichios [Πυρρίχη]) was a male coming-of-age initiation ritual linked to a warrior victory celebration. Jane Ellen Harrison and the French classicist Henri Jeanmaire have both shown that the Kouretes (Κουρῆτες) and Cretan Zeus, who was called "the greatest kouros (κοῦρος)", were intimately connected with the transition of boys into manhood in Cretan cities. The English "Pyrrhic Dance" is a corruption of the original Pyrríkhē or the Pyrríkhios Khorós "Pyrrhichian Dance". It has no relationship to the king Pyrrhus of Epirus, who invaded Italy in the 3rd century BC and gave his name to the Pyrrhic victory, a victory achieved at such cost that it was tantamount to a defeat. Ecstatics The Phrygian Korybantes were often confused by Greeks with other ecstatic male confraternities, such as the Idaean Dactyls or the Cretan Kouretes, spirit-youths (kouroi) who acted as guardians of the infant Zeus. In Hesiod's telling of Zeus's birth, Great Gaia came to Crete and hid the child Zeus in a "steep cave" beneath the secret places of the earth, on Mount Aigaion with its thick forests; there the Cretan Kouretes' ritual clashing of spears and shields was interpreted by Hellenes as intended to drown out the infant god's cries, and to prevent his discovery by his cannibal father Cronus.
Emily Vermeule observed, "This myth is Greek interpretation of mystifying Minoan ritual in an attempt to reconcile their Father Zeus with the Divine Child of Crete; the ritual itself we may never recover with clarity, but it is not impossible that a connection exists between the Kouretes' weapons at the cave and the dedicated weapons at Arkalochori." Among the offerings recovered from the cave, the most spectacular are decorated bronze shields with patterns that draw upon north Syrian originals and a bronze gong on which a god and his attendants are shown in a distinctly Near Eastern style. The Korybantes also presided over the infancy of Dionysus, another god who was born as a babe, and of Zagreus, a Cretan child of Zeus, or child-doublet of Zeus. The wild ecstasy of their cult can be compared to that of the female Maenads who followed Dionysus. Ovid, in Metamorphoses, says the Kouretes were born from rainwater (Uranus fertilizing Gaia). This suggests a connection with the Hyades. Other functions The scholar Jane Ellen Harrison writes that besides being guardians, nurturers, and initiators of the infant Zeus, the Kouretes were primitive magicians and seers. She also writes that they were metal workers and that metallurgy was considered an almost magical art. There were several "tribes" of Korybantes, including the Cabeiri, the Korybantes Euboioi, and the Korybantes Samothrakioi. Hoplodamos and his Gigantes were counted among Korybantes, and the Titan Anytos was considered a Kourete. Homer referred to select young men as kouretes when Agamemnon instructed Odysseus to pick out kouretes, the bravest among the Achaeans, to bear gifts to Achilles. The Greeks preserved a tradition down to Strabo's day that the Kuretes of Aetolia and Acarnania in mainland Greece had been imported from Crete.
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Python_(programming_language)#cite_ref-27] | [TOKENS: 4314] |
Python (programming language)

Python is a high-level, general-purpose programming language. Its design philosophy emphasizes code readability with the use of significant indentation. Python is dynamically type-checked and garbage-collected. It supports multiple programming paradigms, including structured (particularly procedural), object-oriented, and functional programming.

Guido van Rossum began working on Python in the late 1980s as a successor to the ABC programming language. Python 3.0, released in 2008, was a major revision and not completely backward-compatible with earlier versions. Beginning with Python 3.5, capabilities and keywords for typing were added to the language, allowing optional static typing. As of 2026, the Python Software Foundation supports Python 3.10, 3.11, 3.12, 3.13, and 3.14, following the project's annual release cycle and five-year support policy. Python 3.15 is currently in the alpha development phase, and its stable release is expected in October 2026. Earlier versions in the 3.x series have reached end-of-life and no longer receive security updates.

Python has gained widespread use in the machine learning community. It is widely taught as an introductory programming language. Since 2003, Python has consistently ranked in the top ten of the most popular programming languages in the TIOBE Programming Community Index, which ranks languages based on searches across 24 platforms.

History

Python was conceived in the late 1980s by Guido van Rossum at Centrum Wiskunde & Informatica (CWI) in the Netherlands. It was designed as a successor to the ABC programming language (itself inspired by SETL), capable of exception handling and interfacing with the Amoeba operating system. Python implementation began in December 1989, and Van Rossum first released it in 1991 as Python 0.9.0. Van Rossum assumed sole responsibility for the project, as the lead developer, until 12 July 2018, when he announced his "permanent vacation" from responsibilities as Python's "benevolent dictator for life" (BDFL), a title the Python community bestowed on him to reflect his long-term commitment as the project's chief decision-maker. (He has since come out of retirement and is self-titled "BDFL-emeritus".) In January 2019, active Python core developers elected a five-member Steering Council to lead the project. The name Python derives from the British comedy series Monty Python's Flying Circus. (See § Naming.)

Python 2.0 was released on 16 October 2000, introducing list comprehensions, cycle-detecting garbage collection, reference counting, and Unicode support. Python 2.7's end-of-life was initially set for 2015, then postponed to 2020 out of concern that a large body of existing code could not easily be forward-ported to Python 3. It no longer receives security patches or updates. While Python 2.7 and older versions are officially unsupported, the unofficial PyPy implementation continues to support Python 2, i.e. "2.7.18+" (alongside Python 3.11), with the plus sign signifying (at least some) backported security updates.

Python 3.0 was released on 3 December 2008; it was a major revision, not completely backward-compatible with earlier versions, with some new semantics and changed syntax. Python 2.7.18, released in 2020, was the last release of Python 2. Several releases in the Python 3.x series have added new syntax to the language and made a few (considered very minor) backward-incompatible changes.
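To make the 2-to-3 incompatibility concrete, here is a minimal sketch of two well-known changes; the version behaviors noted in the comments are as commonly documented, and the snippet itself targets Python 3:

    # Python 3 made print a built-in function rather than a statement.
    print("Hello")            # Python 2 also allowed: print "Hello"

    # Python 3 made / between integers perform true division; // is floor division.
    assert 7 / 2 == 3.5       # in Python 2, 7 / 2 evaluated to 3
    assert 7 // 2 == 3        # floor division gives the old integer result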
As of January 2026, Python 3.14.3 is the latest stable release. Older 3.x branches continued to receive security updates, with the 3.9 series ending at Python 3.9.25, its final release. Python 3.10 has been the oldest supported branch since November 2025. Python 3.15 has had an alpha release, and an official downloadable executable of Python 3.14 is available for Android. Releases receive two years of full support followed by three years of security support.

Design philosophy and features

Python is a multi-paradigm programming language. Object-oriented programming and structured programming are fully supported, and many of their features support functional programming and aspect-oriented programming (including metaprogramming and metaobjects). Many other paradigms are supported via extensions, including design by contract and logic programming. Python is often referred to as a "glue language" because it is purposely designed to integrate components written in other languages.

Python uses dynamic typing and a combination of reference counting and a cycle-detecting garbage collector for memory management. It uses dynamic name resolution (late binding), which binds method and variable names during program execution.

Python's design offers some support for functional programming in the Lisp tradition. It has filter, map, and reduce functions, as well as list comprehensions, dictionaries, sets, and generator expressions. The standard library has two modules (itertools and functools) that implement functional tools borrowed from Haskell and Standard ML.

Python's core philosophy is summarized in the Zen of Python (PEP 20), written by Tim Peters, which includes aphorisms such as "Beautiful is better than ugly", "Explicit is better than implicit", "Simple is better than complex", and "Readability counts". However, Python has received criticism for violating these principles and adding unnecessary language bloat; responses to these criticisms note that the Zen of Python is a guideline rather than a rule. The addition of some new features has been controversial: Guido van Rossum resigned as Benevolent Dictator for Life after conflict over adding the assignment expression operator in Python 3.8.

Nevertheless, rather than building all functionality into its core, Python was designed to be highly extensible via modules. This compact modularity has made it particularly popular as a means of adding programmable interfaces to existing applications. Van Rossum's vision of a small core language with a large standard library and an easily extensible interpreter stemmed from his frustrations with ABC, which took the opposite approach.

Python strives for a simpler, less-cluttered syntax and grammar while giving developers a choice in their coding methodology. It lacks do..while loops, which Van Rossum considered harmful. In contrast to Perl's motto "there is more than one way to do it", Python advocates the view that "there should be one – and preferably only one – obvious way to do it". In practice, however, Python provides many ways to achieve a given goal; there are, for instance, at least three ways to format a string literal, with no certainty as to which one a programmer should use. Alex Martelli, a Fellow of the Python Software Foundation and Python book author, wrote that "To describe something as 'clever' is not considered a compliment in the Python culture." Python's developers typically prioritize readability over performance.
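A short sketch of the functional tools and the string-formatting variety described above; the variable names are purely illustrative:

    from functools import reduce

    nums = [1, 2, 3, 4, 5]
    evens = list(filter(lambda n: n % 2 == 0, nums))   # [2, 4]
    doubles = list(map(lambda n: n * 2, nums))         # [2, 4, 6, 8, 10]
    total = reduce(lambda a, b: a + b, nums)           # 15
    squares = [n * n for n in nums]                    # list comprehension
    lazy_squares = (n * n for n in nums)               # generator expression (lazy)

    # Three ways to format the same string literal:
    name = "world"
    s1 = "Hello, %s" % name          # printf-style interpolation
    s2 = "Hello, {}".format(name)    # str.format method
    s3 = f"Hello, {name}"            # formatted string literal (Python 3.6+)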
Python's developers have, for example, rejected patches to non-critical parts of the CPython reference implementation that would offer increases in speed too small to justify the cost in clarity and readability. Execution speed can be improved by moving speed-critical functions to extension modules written in languages such as C, or by using a just-in-time compiler like PyPy. It is also possible to transpile to other languages, but this approach either fails to achieve the expected speed-up, because Python is a very dynamic language, or compiles only a restricted subset of Python (with potentially minor semantic changes).

Python is meant to be a fun language to use. This goal is reflected in the name, a tribute to the British comedy group Monty Python, and in playful approaches to some tutorials and reference materials. For instance, some code examples use the terms "spam" and "eggs" (in reference to a Monty Python sketch) rather than the typical terms "foo" and "bar". A common neologism in the Python community is pythonic, which has a broad range of meanings related to program style: Pythonic code may use Python idioms well, be natural or show fluency in the language, or conform with Python's minimalist philosophy and emphasis on readability.

Syntax and semantics

Python is meant to be an easily readable language. Its formatting is visually uncluttered, and it often uses English keywords where other languages use punctuation. Unlike many other languages, it does not use curly brackets to delimit blocks, and semicolons after statements are allowed but rarely used. It has fewer syntactic exceptions and special cases than C or Pascal.

Python uses whitespace indentation, rather than curly brackets or keywords, to delimit blocks. An increase in indentation comes after certain statements; a decrease in indentation signifies the end of the current block. Thus, the program's visual structure accurately represents its semantic structure. This feature is sometimes termed the off-side rule. Some other languages use indentation this way, but in most, indentation has no semantic meaning. The recommended indent size is four spaces.

Python's statements include the assignment statement (=), which binds a name as a reference to a separate, dynamically allocated object. Variables may subsequently be rebound at any time to any object. In Python, a variable name is a generic reference holder without a fixed data type; however, it always refers to some object with a type. This is called dynamic typing, in contrast to statically-typed languages, where each variable may contain only a value of a certain type.

Python does not support tail call optimization or first-class continuations; according to Van Rossum, the language never will. However, better support for coroutine-like functionality is provided by extending Python's generators. Before version 2.5, generators were lazy iterators; data was passed unidirectionally out of the generator. From Python 2.5 on, it is possible to pass data back into a generator function, and from version 3.3, data can be passed through multiple stack levels. In Python, the distinction between expressions and statements is rigidly enforced, in contrast to languages such as Common Lisp, Scheme, or Ruby.
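A minimal sketch of the coroutine-like generator behavior described above, in which a caller passes data back into a running generator with send(); the function name is invented for illustration:

    def running_total():
        total = 0
        while True:
            # yield both emits the current total and receives the next value
            value = yield total
            if value is not None:
                total += value

    gen = running_total()
    next(gen)           # prime the generator; it yields 0 and pauses
    print(gen.send(5))  # 5
    print(gen.send(3))  # 8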
This distinction between expressions and statements leads to some duplicated functionality, for example list and dict comprehensions versus for-loops, and conditional expressions versus if blocks. A statement cannot be part of an expression; because of this restriction, expressions such as list and dict comprehensions (and lambda expressions) cannot contain statements. As a particular case, an assignment statement such as a = 1 cannot be part of the conditional expression of a conditional statement.

Python uses duck typing; it has typed objects but untyped variable names. Type constraints are not checked at definition time; rather, operations on an object may fail at usage time, indicating that the object is not of an appropriate type. Despite being dynamically typed, Python is strongly typed, forbidding operations that are poorly defined (e.g., adding a number and a string) rather than quietly attempting to interpret them.

Python allows programmers to define their own types using classes, most often for object-oriented programming. New instances of classes are constructed by calling the class (for example, SpamClass() or EggsClass()); the classes are instances of the metaclass type (which is an instance of itself), thereby allowing metaprogramming and reflection. Before version 3.0, Python had two kinds of classes, both using the same syntax: old-style and new-style. Current Python versions support the semantics of only the new style.

Python supports optional type annotations. These annotations are not enforced by the language, but may be used by external tools such as mypy to catch errors. Python includes a module, typing, that provides several type names for use in annotations. Also, mypy supports a Python compiler called mypyc, which leverages type annotations for optimization.

Python includes conventional symbols for the arithmetic operators (+, -, *, /), the floor-division operator //, and the modulo operator %. (With the modulo operator, a remainder can be negative, e.g., 4 % -3 == -2.) Python also offers the ** symbol for exponentiation, e.g. 5**3 == 125 and 9**0.5 == 3.0, as well as the matrix-multiplication operator @. These operators work as in traditional mathematics, with the same precedence rules; the infix operators + and - can also be unary, representing positive and negative numbers respectively.

Division between integers produces floating-point results. The behavior of division has changed significantly over time: in Python terms, the / operator now represents true division (or simply division), while the // operator represents floor division; before version 3.0, the / operator performed classic division. Rounding towards negative infinity, though a different method than in most languages, adds consistency to Python. For instance, this rounding implies that the equation (a + b)//b == a//b + 1 is always true. It also implies that the equation b*(a//b) + a%b == a is valid for both positive and negative values of a. As expected, the result of a%b lies in the half-open interval [0, b), where b is a positive integer; however, maintaining the validity of the equation requires that the result lie in the interval (b, 0] when b is negative.

Python provides a round function for rounding a float to the nearest integer. For tie-breaking, Python 3 uses the round-to-even method: round(1.5) and round(2.5) both produce 2. Versions before Python 3 used the round-away-from-zero method: round(0.5) is 1.0, and round(-0.5) is −1.0. Python allows Boolean expressions containing multiple equality relations in a manner consistent with general usage in mathematics.
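A small sampler verifying several of the semantics just described; these are plain assertions that can be pasted into a REPL, and the helper function name is invented:

    # Strong typing: ill-defined operations raise rather than silently coerce.
    try:
        "1" + 1
    except TypeError:
        pass

    # Optional annotations: ignored at run time, checked by tools such as mypy.
    def halve(n: int) -> float:
        return n / 2        # true division always yields a float for ints

    assert halve(3) == 1.5

    # Floor division and modulo round toward negative infinity.
    a, b = 7, -3
    assert a // b == -3 and a % b == -2    # remainder takes the divisor's sign
    assert b * (a // b) + a % b == a       # the division identity holds
    assert (a + b) // b == a // b + 1      # consistency property from the text

    # Exponentiation, tie-to-even rounding, and chained comparison.
    assert 5 ** 3 == 125 and 9 ** 0.5 == 3.0
    assert round(1.5) == round(2.5) == 2
    assert 1 < 2 < 3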
For example, the expression a < b < c tests whether a is less than b and b is less than c. C-derived languages interpret this expression differently: in C, the expression would first evaluate a < b, resulting in 0 or 1, and that result would then be compared with c.

Python uses arbitrary-precision arithmetic for all integer operations. The Decimal type/class in the decimal module provides decimal floating-point numbers to a pre-defined arbitrary precision and with several rounding modes. The Fraction class in the fractions module provides arbitrary precision for rational numbers. Due to Python's extensive mathematics library and the third-party library NumPy, the language is frequently used for scientific scripting in tasks such as numerical data processing and manipulation.

Functions are created in Python by using the def keyword. A function is defined similarly to how it is called: by first providing the function name and then the required parameters. A function parameter can be assigned a default value, used when no actual value is provided at run time, by writing variable-definition syntax inside the function header. An example of a function that prints its inputs appears in the reconstructed sketch below.

Code examples

A "Hello, World!" program and a program to calculate the factorial of a non-negative integer are shown in the reconstructed sketch below.

Libraries

Python's large standard library is commonly cited as one of its greatest strengths. For Internet-facing applications, many standard formats and protocols such as MIME and HTTP are supported. The language includes modules for creating graphical user interfaces, connecting to relational databases, generating pseudorandom numbers, performing arithmetic with arbitrary-precision decimals, manipulating regular expressions, and unit testing. Some parts of the standard library are covered by specifications (for example, the Web Server Gateway Interface (WSGI) implementation wsgiref follows PEP 333), but most parts are specified only by their code, internal documentation, and test suites. However, because most of the standard library is cross-platform Python code, only a few modules must be altered or rewritten for variant implementations. As of 13 March 2025, the Python Package Index (PyPI), the official repository for third-party Python software, contained over 614,339 packages.

Development environments

Most Python implementations (including CPython) include a read–eval–print loop (REPL), permitting the environment to function as a command-line interpreter with which users enter statements sequentially and receive results immediately. CPython is also bundled with an integrated development environment (IDE) called IDLE, which is oriented toward beginners. Shells such as IDLE and IPython add further capabilities, including improved auto-completion, session-state retention, and syntax highlighting. Standard desktop IDEs include PyCharm, Spyder, and Visual Studio Code; there are also web browser-based IDEs.

Implementations

CPython is the reference implementation of Python. It is written in C, meeting the C11 standard since version 3.11; older versions use the C89 standard with several select C99 features. (Third-party extensions are not limited to these older C versions, e.g., they can be implemented using C11 or C++.) CPython compiles Python programs into an intermediate bytecode, which is then executed by its virtual machine. CPython is distributed with a large standard library written in a mixture of C and native Python.
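The code examples referenced above were lost in extraction; the following is a plausible reconstruction in the spirit of the original article (the exact prompt strings and function names are assumptions):

    # "Hello, World!" program:
    print("Hello, World!")

    # A function that prints its inputs, with a default parameter value:
    def greet(greeting, name="World"):
        # name falls back to "World" when the caller omits the second argument
        print(greeting, name)

    greet("Hello")             # Hello World
    greet("Hello", "Python")   # Hello Python

    # Factorial of a non-negative integer:
    n = int(input("Type a number, and its factorial will be printed: "))
    if n < 0:
        raise ValueError("You must enter a non-negative integer")
    factorial = 1
    for i in range(2, n + 1):
        factorial *= i
    print(factorial)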
CPython is available for many platforms, including Windows and most modern Unix-like systems, including macOS (and Apple M1 Macs, since Python 3.9.1, using an experimental installer). Starting with Python 3.9, the Python installer intentionally fails to install on Windows 7 and 8; Windows XP was supported until Python 3.5, and there has been unofficial support for platforms such as VMS. Platform portability was one of Python's earliest priorities: during the development of Python 1 and 2, even OS/2 and Solaris were supported. Support has since been dropped for many platforms; all current Python versions (since 3.7) run only on operating systems that support multithreading, and Python now supports far fewer operating systems, many outdated ones having been dropped, than in the past.

All alternative implementations have at least slightly different semantics. For example, an alternative implementation may have unordered dictionaries, in contrast to current CPython, where dictionaries preserve insertion order. As another example from the larger Python ecosystem, PyPy does not fully support the CPython C API. Creating an executable with Python is often done by bundling an entire Python interpreter into the executable, which makes binaries massive for small programs, yet there exist implementations that are capable of truly compiling Python.

Alternative implementations include Stackless Python, a significant fork of CPython that implements microthreads; this implementation uses the call stack differently, thus allowing massively concurrent programs. PyPy also offers a stackless version. Just-in-time Python compilers have been developed but are now unsupported. There are several compilers and transpilers to high-level object languages, whose source language is unrestricted Python, a subset of Python, or a language similar to Python, as well as specialized compilers; some older projects also existed, along with compilers not designed for Python 3.x and its syntax.

A performance comparison among various Python implementations, using a non-numerical (combinatorial) workload, was presented at EuroSciPy '13. In addition, Python's performance relative to other programming languages is benchmarked by The Computer Language Benchmarks Game. Several strategies and tools exist for optimizing Python performance despite the inherent slowness of an interpreted language.

Language development

Python's development is conducted mostly through the Python Enhancement Proposal (PEP) process; this process is the primary mechanism for proposing major new features, collecting community input on issues, and documenting Python design decisions. Python coding style is covered in PEP 8. Outstanding PEPs are reviewed and commented on by the Python community and the steering council.

Enhancement of the language corresponds with development of the CPython reference implementation. The mailing list python-dev is the primary forum for the language's development. Specific issues were originally discussed in the Roundup bug tracker hosted by the foundation; in 2022, all issues and discussions were migrated to GitHub. Development originally took place on a self-hosted source-code repository running Mercurial, until Python moved to GitHub in January 2017.

CPython's public releases have three types, distinguished by which part of the version number is incremented: major, backward-incompatible releases (such as 3.0); minor feature releases (such as 3.14); and micro bugfix and security releases (such as 3.14.3). Many alpha, beta, and release candidates are also published as previews and for testing before final releases.
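A small sketch showing how a running program can inspect both the implementation and the release version just described; the printed values in the comments are examples, not guaranteed output:

    import platform
    import sys

    print(platform.python_implementation())   # e.g. 'CPython' or 'PyPy'
    print(sys.version_info)                   # e.g. (major=3, minor=14, micro=3, ...)

    # Guard code that needs a minimum feature release:
    if sys.version_info >= (3, 10):
        pass  # safe to rely on features introduced in 3.10, such as match statements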
Although there is a rough schedule for releases, they are often delayed if the code is not ready. Python's development team monitors the state of the code by running a large unit test suite during development; a minimal illustration of such testing appears in the sketch at the end of this article. The major academic conference on Python is PyCon. There are also special Python mentoring programs, such as PyLadies.

Naming

Python's name is inspired by the British comedy group Monty Python, whom Python creator Guido van Rossum enjoyed while developing the language. Monty Python references appear frequently in Python code and culture; for example, the metasyntactic variables often used in Python literature are spam and eggs rather than the traditional foo and bar. The official Python documentation also contains various references to Monty Python routines. Python users are sometimes referred to as "Pythonistas". |
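As a minimal sketch of the kind of automated unit testing mentioned above, using the standard library's unittest module (the function under test is invented for illustration):

    import unittest

    def fib(n):
        # Iterative Fibonacci; a simple function to exercise in tests.
        a, b = 0, 1
        for _ in range(n):
            a, b = b, a + b
        return a

    class TestFib(unittest.TestCase):
        def test_base_cases(self):
            self.assertEqual(fib(0), 0)
            self.assertEqual(fib(1), 1)

        def test_growth(self):
            self.assertEqual(fib(10), 55)

    if __name__ == "__main__":
        unittest.main()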
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Chicago] | [TOKENS: 17093] |
Chicago

Chicago is the most populous city in the U.S. state of Illinois and in the Midwestern United States. Located on the western shore of Lake Michigan, it is the third-most populous city in the United States, with a population of 2.74 million at the 2020 census. The Chicago metropolitan area has 9.41 million residents and is the third-largest metropolitan area in the country. Chicago is the seat of Cook County, the second-most populous county in the U.S.

Chicago was incorporated as a city in 1837 near a portage between the Great Lakes and the Mississippi River watershed. It grew rapidly in the mid-19th century. In 1871, the Great Chicago Fire destroyed several square miles and left more than 100,000 people homeless, but Chicago's population continued to grow. Chicago made noted contributions to urban planning and architecture, such as the Chicago School, the development of the City Beautiful movement, and the steel-framed skyscraper.

Chicago is an international hub for finance, culture, commerce, industry, education, technology, telecommunications, and transportation. It has the largest and most diverse financial derivatives market in the world, generating 20% of all volume in commodities and financial futures alone. O'Hare International Airport is routinely ranked among the world's top ten busiest airports by passenger traffic, and the region is also the nation's railroad hub. The Chicago area has one of the highest gross domestic products (GDP) of any urban region in the world, ranking sixth globally and generating over $919 billion in 2024. Chicago's economy is diverse, with no single industry employing more than 14% of the workforce.

Chicago is a major destination for tourism, drawing 55 million visitors in 2024 to its cultural institutions, Lake Michigan beaches, restaurants, and more. Chicago's culture has contributed much to the visual arts, literature, film, theater, comedy (especially improvisational comedy), food, dance, and music (particularly jazz, blues, soul, hip-hop, gospel, industrial, and electronic dance music, especially house music). Chicago is home to the Chicago Symphony Orchestra and the Lyric Opera of Chicago, while the Art Institute of Chicago provides an influential visual arts museum and art school. The Chicago area also hosts the University of Chicago, Northwestern University, and the University of Illinois Chicago, among other institutions of learning. Professional sports in Chicago include all major professional leagues, including two Major League Baseball teams. The city also hosts the Chicago Marathon, one of the World Marathon Majors.

Etymology and nicknames

The name Chicago is derived from a French rendering of the indigenous Miami–Illinois word šikaakwa, which was a name for a wild relative of onion and garlic known to modern botanists as Allium tricoccum. The first known reference to the site of the city of Chicago as "Checagou" was by Robert de LaSalle around 1679 in a memoir. Henri Joutel, in his journal of 1688, noted that the eponymous wild "garlic" grew profusely in the area. According to his diary of late September 1687: "... when we arrived at the said place called 'Chicagou' which, according to what we were able to learn of it, has taken this name because of the quantity of garlic which grows in the forests in this region." The city has had several nicknames throughout its history, such as the Windy City, Chi-Town, Second City, and City of the Big Shoulders.
History

In the mid-18th century, the area was inhabited by the Potawatomi, an indigenous tribe who had succeeded the Miami, Sauk, and Meskwaki peoples in this region. The first known permanent settler in Chicago was a trader, Jean Baptiste Point du Sable. Du Sable was of African descent, perhaps born in the French colony of Saint-Domingue (Haiti), and he established the settlement in the 1780s. He is commonly known as the "Founder of Chicago".

In 1795, following the victory of the new United States in the Northwest Indian War, an area that was to be part of Chicago was turned over to the U.S. by native tribes for a military post, in accordance with the Treaty of Greenville. In 1803, the U.S. Army constructed Fort Dearborn, which was destroyed during the War of 1812 in the Battle of Fort Dearborn by the Potawatomi and later rebuilt. After the War of 1812, the Odawa, Ojibwe, and Potawatomi tribes ceded additional land to the United States in the 1816 Treaty of St. Louis. The Potawatomi were forcibly removed from their land after the 1833 Treaty of Chicago and sent west of the Mississippi River as part of the federal policy of Indian removal.

On August 12, 1833, the Town of Chicago was organized with a population of about 200. Within seven years it grew to more than 6,000 people. On June 15, 1835, the first public land sales began, with Edmund Dick Taylor as Receiver of Public Monies. The City of Chicago was incorporated on Saturday, March 4, 1837, and for several decades was the world's fastest-growing city.

As the site of the Chicago Portage, the city became an important transportation hub between the eastern and western United States. Chicago's first railway, the Galena and Chicago Union Railroad, and the Illinois and Michigan Canal both opened in 1848. The canal allowed steamboats and sailing ships on the Great Lakes to connect to the Mississippi River. A flourishing economy brought residents from rural communities and immigrants from abroad. The manufacturing, retail, and finance sectors became dominant, influencing the American economy. The Chicago Board of Trade (established 1848) listed the first-ever standardized "exchange-traded" forward contracts, which were called futures contracts.

In the 1850s, Chicago gained national political prominence as the home of Senator Stephen Douglas, the champion of the Kansas–Nebraska Act and the "popular sovereignty" approach to the issue of the spread of slavery. These issues also helped propel another Illinoisan, Abraham Lincoln, to the national stage. Lincoln was nominated in Chicago for U.S. president at the 1860 Republican National Convention, which was held in a purpose-built auditorium called the Wigwam. He defeated Douglas in the general election, and this set the stage for the American Civil War.

To accommodate rapid population growth and demand for better sanitation, the city improved its infrastructure. In February 1856, Chicago's Common Council approved engineer Ellis S. Chesbrough's plan to build the United States' first comprehensive sewerage system. The project raised much of central Chicago to a new grade with the use of jackscrews for raising buildings. While elevating Chicago, and at first improving the city's health, the untreated sewage and industrial waste now flowed into the Chicago River, and subsequently into Lake Michigan, polluting the city's primary freshwater source. The city responded by tunneling two miles (3.2 km) out into Lake Michigan to newly built water cribs.
In 1900, the problem of sewage contamination was largely resolved when the city completed a major engineering feat: it reversed the flow of the Chicago River so that the water flowed away from Lake Michigan rather than into it. This project began with the construction and improvement of the Illinois and Michigan Canal, and was completed with the Chicago Sanitary and Ship Canal, which connects to the Illinois River, which in turn flows into the Mississippi River.

On October 8, 1871, the Great Chicago Fire destroyed an area about 4 miles (6.4 km) long and 1 mile (1.6 km) wide, a large section of the city at the time. At least 300 people were killed and over 100,000 were left homeless by the fire. However, much of the city, including railroads and stockyards, survived intact, and from the ruins of the previous wooden structures arose more modern constructions of steel and stone, which set a precedent for construction worldwide. During its rebuilding period, Chicago constructed the world's first skyscraper in 1885, using steel-skeleton construction.

The city grew significantly in size and population by incorporating many neighboring townships between 1851 and 1920, with the largest annexation happening in 1889, when five townships joined the city, including the Hyde Park Township, which now comprises most of the South Side of Chicago and the far southeast of Chicago, and the Jefferson Township, which now makes up most of Chicago's Northwest Side. The desire to join the city was driven by the municipal services the city could provide its residents.

Chicago's flourishing economy attracted huge numbers of new immigrants from Europe and migrants from the Eastern United States. Of the total population in 1900, more than 77% were either foreign-born or born in the United States of foreign parentage. Germans, Irish, Poles, Swedes, and Czechs made up nearly two-thirds of the foreign-born population (by 1900, whites were 98.1% of the city's population).

Labor conflicts followed the industrial boom and the rapid expansion of the labor pool during the Gilded Age, including the Haymarket affair on May 4, 1886, and the Pullman Strike in 1894. Anarchist and socialist groups played prominent roles in creating very large and highly organized labor actions. Concern for social problems among Chicago's immigrant poor led Jane Addams and Ellen Gates Starr to found Hull House in 1889. Programs developed there became a model for the new field of social work.

During the 1870s and 1880s, Chicago attained national stature as the leader in the movement to improve public health. City laws, and later state laws, that upgraded standards for the medical profession and fought urban epidemics of cholera, smallpox, and yellow fever were both passed and enforced. These laws became templates for public health reform in other cities and states. The city established many large, well-landscaped municipal parks, which also included public sanitation facilities. The chief advocate for improving public health in Chicago was John H. Rauch, M.D., who established a plan for Chicago's park system in 1866. He created Lincoln Park by closing a cemetery filled with shallow graves, and in 1867, in response to an outbreak of cholera, he helped establish a new Chicago Board of Health. Ten years later, he became the secretary and then the president of the first Illinois State Board of Health, which carried out most of its activities in Chicago.
In the 1800s, Chicago became the nation's railroad hub, and by 1910 over 20 railroads operated passenger service out of six different downtown terminals. In 1883, Chicago's railway managers needed a general time convention, so they developed the standardized system of North American time zones. This system for telling time spread throughout the continent.

In 1893, Chicago hosted the World's Columbian Exposition on former marshland at the present location of Jackson Park. The Exposition drew 27.5 million visitors and is considered the most influential world's fair in history. The city's municipal device, a Y within a circle, was the result of a contest run by the Chicago Tribune in 1892, in anticipation of the Columbian Exposition. The University of Chicago, formerly at another location, moved to the same South Side location in 1892. The term "midway" for a fair or carnival referred originally to the Midway Plaisance, a strip of park land that still runs through the University of Chicago campus and connects Washington and Jackson Parks.

During World War I and the 1920s there was a major expansion in industry. The availability of jobs attracted African Americans from the Southern United States. Between 1910 and 1930, the African American population of Chicago increased dramatically, from 44,103 to 233,903. This Great Migration had an immense cultural impact, called the Chicago Black Renaissance, part of the New Negro Movement, in art, literature, and music. Racial tensions and violence also continued in the city, as in the Chicago race riot of 1919.

The ratification of the 18th Amendment to the Constitution in 1919 made the production and sale (including exportation) of alcoholic beverages illegal in the United States. This ushered in what is known as the gangster era, a period spanning roughly from 1919 until 1933, when Prohibition was repealed. The 1920s saw gangsters, including Al Capone, Dion O'Banion, Bugs Moran, and Tony Accardo, battle law enforcement and each other on the streets of Chicago. Chicago was the location of the infamous St. Valentine's Day Massacre in 1929, when Al Capone sent men to gun down members of the rival North Side Gang, led by Bugs Moran, leaving seven of its members dead.

From 1920 to 1921, the city was affected by a series of tenant rent strikes, which led to the formation of the Chicago Tenants Protective Association, the passage of the Kessenger tenant laws, and a heat ordinance that legally required landlords to keep flats above 68 °F during winter months.

Chicago was the first American city to have a homosexual-rights organization. The organization, formed in 1924, was called the Society for Human Rights, and it produced the first American publication for homosexuals, Friendship and Freedom. Police and political pressure caused the organization to disband.

The Great Depression brought unprecedented suffering to Chicago, in no small part due to the city's heavy reliance on heavy industry. Notably, industrial areas on the South Side and neighborhoods lining both branches of the Chicago River were devastated; by 1933 over 50% of industrial jobs in the city had been lost, and unemployment rates among Blacks and Latinos in the city were over 40%. The Republican political machine in Chicago was utterly destroyed by the economic crisis, and every mayor since 1931 has been a Democrat. From 1928 to 1933, the city witnessed a tax revolt and was unable to meet payroll or provide relief efforts.
The fiscal crisis was resolved by 1933, and at the same time, federal relief funding began to flow into Chicago. The city was also a hotbed of labor activism, with Unemployed Councils contributing heavily in the early Depression to create solidarity for the poor and demand relief; these organizations were created by socialist and communist groups. By 1935, the Workers Alliance of America had begun organizing the poor, workers, and the unemployed. In the spring of 1937, the Republic Steel Works in the East Side neighborhood was the site of the Memorial Day massacre of 1937.

In 1933, Chicago Mayor Anton Cermak was fatally wounded in Miami, Florida, during a failed assassination attempt on President-elect Franklin D. Roosevelt by Giuseppe Zangara. In 1933 and 1934, the city celebrated its centennial by hosting the Century of Progress International Exposition World's Fair. The theme of the fair was technological innovation over the century since Chicago's founding.

During World War II, the city of Chicago alone produced more steel than the United Kingdom every year from 1939 to 1945, and more than Nazi Germany from 1943 to 1945. The Great Migration, which had paused during the Depression, resumed at an even faster pace in a second wave, as hundreds of thousands of Black southerners arrived in the city to work in the steel mills, railroads, and shipping yards.

On December 2, 1942, physicist Enrico Fermi conducted the world's first controlled, self-sustaining nuclear chain reaction at the University of Chicago as part of the top-secret Manhattan Project. This led to the creation of the atomic bomb by the United States, which it used in World War II in 1945.

Mayor Richard J. Daley, a Democrat, was elected in 1955, in the era of machine politics. In 1956, the city carried out its last major expansion when it annexed the land under O'Hare Airport, including a small portion of DuPage County. By the 1960s, white residents in several neighborhoods left the city for the suburban areas, a process known in many American cities as white flight, as Black residents continued to move beyond the Black Belt. While discriminatory home-loan redlining against Black residents continued, the real estate industry practiced what became known as blockbusting, completely changing the racial composition of whole neighborhoods.

Structural changes in industry, such as globalization and job outsourcing, caused heavy job losses for lower-skilled workers. At its peak during the 1960s, some 250,000 workers were employed in the steel industry in Chicago, but the steel crisis of the 1970s and 1980s reduced this number to just 28,000 in 2015.

In 1966, Martin Luther King Jr. and Albert Raby led the Chicago Freedom Movement, which culminated in agreements between Mayor Richard J. Daley and the movement leaders. Two years later, the city hosted the tumultuous 1968 Democratic National Convention, which featured physical confrontations both inside and outside the convention hall, with anti-war protesters, journalists, and bystanders being beaten by police. Major construction projects, including the Sears Tower (now known as the Willis Tower, which in 1974 became the world's tallest building), the University of Illinois at Chicago, McCormick Place, and O'Hare International Airport, were undertaken during Richard J. Daley's tenure. In 1979, Jane Byrne, the city's first female mayor, was elected. She was notable for temporarily moving into the crime-ridden Cabrini-Green housing project and for leading Chicago's school system out of a financial crisis.
In 1983, Harold Washington became the first Black mayor of Chicago. Washington's first term in office directed attention to poor and previously neglected minority neighborhoods. He was re-elected in 1987 but died of a heart attack soon after. Washington was succeeded by 6th Ward alderperson Eugene Sawyer, who was elected by the Chicago City Council and served until a special election.

Richard M. Daley, son of Richard J. Daley, was elected in 1989. His accomplishments included improving parks and creating incentives for sustainable development, as well as closing Meigs Field in the middle of the night and destroying the runways. After running successfully for re-election five times and becoming Chicago's longest-serving mayor, Richard M. Daley declined to run for a seventh term.

In 1992, a construction accident near the Kinzie Street Bridge produced a breach connecting the Chicago River to a tunnel below, part of an abandoned freight tunnel system extending throughout the downtown Loop district. The tunnels filled with 250 million US gallons (1,000,000 m3) of water, affecting buildings throughout the district and forcing a shutdown of electrical power. The area was shut down for three days and some buildings did not reopen for weeks; losses were estimated at $1.95 billion.

On February 23, 2011, Rahm Emanuel, a former White House Chief of Staff and member of the House of Representatives, won the mayoral election. Emanuel was sworn in as mayor on May 16, 2011, and won re-election in 2015. Lori Lightfoot, the city's first African American woman mayor and its first openly LGBTQ mayor, was elected to succeed Emanuel in 2019. For the first time in Chicago history, all three citywide elective offices were held by women, and women of color: in addition to Lightfoot, the city clerk was Anna Valencia and the city treasurer was Melissa Conyears-Ervin. On May 15, 2023, Brandon Johnson assumed office as the 57th mayor of Chicago.

Geography

Chicago is located in northeastern Illinois on the southwestern shores of freshwater Lake Michigan. It is the principal city in the Chicago Metropolitan Area, situated in both the Midwestern United States and the Great Lakes region. The city rests on a continental divide at the site of the Chicago Portage, connecting the Mississippi River and Great Lakes watersheds. In addition to lying beside Lake Michigan, Chicago has two rivers that flow entirely or partially through the city: the Chicago River in downtown and the Calumet River in the industrial far South Side.

Chicago's history and economy are closely tied to its proximity to Lake Michigan. While the Chicago River historically handled much of the region's waterborne cargo, today's huge lake freighters use the city's Lake Calumet Harbor on the South Side. The lake also moderates Chicago's climate, making waterfront neighborhoods slightly warmer in winter and cooler in summer. When Chicago was founded in 1837, most of the early building was around the mouth of the Chicago River, as can be seen on a map of the city's original 58 blocks. The overall grade of the city's central, built-up areas is relatively consistent with the natural flatness of its geography, generally exhibiting only slight differentiation. The average land elevation is 579 ft (176.5 m) above sea level.
While measurements vary somewhat, the lowest points are along the lake shore at 578 ft (176.2 m), while the highest point, at 672 ft (205 m), is the morainal ridge of Blue Island in the city's far south side. Lake Shore Drive runs adjacent to a large portion of Chicago's waterfront. Some of the parks along the waterfront include Lincoln Park, Grant Park, Burnham Park, and Jackson Park. There are 24 public beaches across 26 miles (42 km) of the waterfront. Landfill extends into portions of the lake providing space for Navy Pier, Northerly Island, the Museum Campus, and large portions of the McCormick Place Convention Center. Most of the city's high-rise commercial and residential buildings are close to the waterfront. An informal name for the entire Chicago metropolitan area is "Chicagoland", which generally means the city and all its suburbs, though different organizations have slightly different definitions. Major sections of the city include the central business district, called the Loop, and the North, South, and West Sides. The three sides of the city are represented on the Flag of Chicago by three horizontal white stripes. The North Side is the city's most densely populated residential section, and many high-rises are on this side of the city along the lakefront. The South Side is the city's largest section, encompassing roughly 60% of its land area. The South Side contains most of the facilities of the Port of Chicago. In the late 1920s, sociologists at the University of Chicago subdivided the city into 77 distinct community areas, which can further be subdivided into over 200 informally defined neighborhoods. Chicago's streets were laid out in a street grid that grew from the city's original townsite plot, which was bounded by Lake Michigan on the east, North Avenue on the north, Wood Street on the west, and 22nd Street on the south. Streets following the Public Land Survey System section lines later became arterial streets in outlying sections. As new additions to the city were platted, city ordinance required them to be laid out with eight streets to the mile in one direction and sixteen in the other direction, about one street per 200 meters in one direction and one street per 100 meters in the other direction. The grid's regularity provided an efficient means of developing new real estate property. A scattering of diagonal streets, many of them originally Native American trails, also cross the city (Elston, Milwaukee, Ogden, Lincoln, etc.). Many additional diagonal streets were recommended in the Plan of Chicago, but only the extension of Ogden Avenue was ever constructed. In 2021, Chicago was ranked the fourth-most walkable large city in the United States. Many of the city's residential streets have a wide patch of grass or trees between the street and the sidewalk itself. This helps to keep pedestrians on the sidewalk further away from the street traffic. Chicago's Western Avenue is the longest continuous urban street in the world. Other notable streets include Michigan Avenue, State Street, 95th Street, Cicero Avenue, Clark Street, and Belmont Avenue. The City Beautiful movement inspired Chicago's boulevards and parkways. The destruction caused by the Great Chicago Fire led to the largest building boom in the history of the nation. In 1885, the first steel-framed high-rise building, the Home Insurance Building, rose in the city as Chicago ushered in the skyscraper era, which would then be followed by many other cities around the world. 
Today, Chicago's skyline is among the world's tallest and densest. Some of the United States' tallest towers are located in Chicago; Willis Tower (formerly Sears Tower) is the third-tallest building in the Western Hemisphere, after One World Trade Center and Central Park Tower. The Loop's historic buildings include the Chicago Board of Trade Building, the Fine Arts Building, 35 East Wacker, the Chicago Building, and the 860–880 Lake Shore Drive Apartments by Mies van der Rohe. Many other architects have left their impression on the Chicago skyline, such as Daniel Burnham, Louis Sullivan, Charles B. Atwood, John Root, and Helmut Jahn. The Merchandise Mart, once the largest building in the world, had its own zip code until 2008, and stands near the junction of the North and South branches of the Chicago River. Presently, the four tallest buildings in the city are Willis Tower (formerly the Sears Tower, also a building with its own zip code), Trump International Hotel and Tower, the Aon Center (previously the Standard Oil Building), and the John Hancock Center. Industrial districts are clustered in areas such as parts of the South Side, the corridors along the Chicago Sanitary and Ship Canal, and Northwest Indiana.

Chicago gave its name to the Chicago School and was home to the Prairie School, two movements in architecture. Multiple kinds and scales of houses, townhouses, condominiums, and apartment buildings can be found throughout Chicago. Large swaths of the city's residential areas away from the lake are characterized by brick bungalows built from the early 20th century through the end of World War II. Chicago is also a prominent center of the Polish Cathedral style of church architecture. The Chicago suburb of Oak Park was home to the famous architect Frank Lloyd Wright, who designed the Robie House, located near the University of Chicago. A popular tourist activity is to take an architecture boat tour along the Chicago River.

Chicago is famous for its outdoor public art, with donors establishing funding for such art as far back as Benjamin Ferguson's 1905 trust. A number of Chicago's public art works are by modern figurative artists. Among these are Chagall's Four Seasons; the Chicago Picasso; Miró's Chicago; Calder's Flamingo; Oldenburg's Batcolumn; Moore's Large Interior Form, 1953–54, Man Enters the Cosmos, and Nuclear Energy; Dubuffet's Monument with Standing Beast; Abakanowicz's Agora; and Anish Kapoor's Cloud Gate, which has become an icon of the city. Some events that shaped the city's history have also been memorialized by art works, including the Great Northern Migration (Saar) and the centennial of statehood for Illinois. Finally, two fountains near the Loop also function as monumental works of art: Plensa's Crown Fountain and Burnham and Bennett's Buckingham Fountain.

The city mostly lies within the typical hot-summer humid continental climate (Köppen: Dfa) and experiences four distinct seasons. Summers are hot and humid, with frequent heat waves. The July daily average temperature is 75.4 °F (24.1 °C), with afternoon temperatures peaking at 84.5 °F (29.2 °C). In a normal summer, temperatures reach at least 90 °F (32 °C) on 17 days, with lakefront locations staying cooler when winds blow off the lake. Winters are relatively cold and snowy, and blizzards do occur, such as in winter 2011. There are many sunny but cold days. The normal winter high from December through March is about 36 °F (2 °C). January and February are the coldest months.
A polar vortex in January 2019 nearly broke the city's cold record of −27 °F (−33 °C), which was set on January 20, 1985. Measurable snowfall can continue through the first or second week of April. Spring and autumn are mild, short seasons, typically with low humidity. Dew point temperatures in the summer range from an average of 55.8 °F (13.2 °C) in June to 61.7 °F (16.5 °C) in July. They can reach nearly 80 °F (27 °C), such as during the July 2019 heat wave. The city lies within USDA plant hardiness zone 6a, transitioning to 5b in the suburbs. According to the National Weather Service, Chicago's highest official temperature reading of 105 °F (41 °C) was recorded on July 24, 1934. Midway Airport reached 109 °F (43 °C) one day prior and recorded a heat index of 125 °F (52 °C) during the 1995 heatwave. The lowest official temperature of −27 °F (−33 °C) was recorded on January 20, 1985, at O'Hare Airport. Most of the city's rainfall is brought by thunderstorms, averaging 38 a year. The region is prone to severe thunderstorms during the spring and summer which can produce large hail, damaging winds, and occasionally tornadoes. Notably, the F4 Oak Lawn tornado moved through the South Side of the city on April 21, 1967, moving onto Lake Michigan as a waterspout. Downtown Chicago was struck by an F3 tornado on May 6, 1876, again moving out over Lake Michigan. Like other major cities, Chicago experiences an urban heat island, making the city and its suburbs milder than surrounding rural areas, especially at night and in winter. The proximity to Lake Michigan tends to keep the Chicago lakefront somewhat cooler in summer and less brutally cold in winter than inland parts of the city and suburbs away from the lake, which is sufficient to give lakefront areas such as Northerly Island a humid subtropical (Cfa) climate using Köppen's 27 °F (−3 °C) winter isotherm (as opposed to the firmly continental climate of inland areas such as Midway and O'Hare International Airports), even though those areas are still continental (Dca) under Trewartha due to winters averaging below 32 °F (0 °C). Northeast winds from wintertime cyclones departing south of the region sometimes bring the city lake-effect snow. Demographics During its first hundred years, Chicago was one of the fastest-growing cities in the world. When founded in 1833, fewer than 200 people had settled on what was then the American frontier. By the time of its first census, seven years later, the population had reached over 4,000. In the forty years from 1850 to 1890, the city's population grew from slightly under 30,000 to over 1 million. By the 1890 census, Chicago was the second most populous city in the United States. By 1900, it was the fifth largest in the world behind Berlin, Paris, New York, and London, and the largest city founded in the prior century. Within sixty years of the Great Chicago Fire of 1871, the population went from about 300,000 to over 3 million, and reached its highest ever recorded population of 3.6 million for the 1950 census. From the last two decades of the 19th century, Chicago was the destination of waves of immigrants from Ireland, Southern, Central and Eastern Europe, including Italians, Jews, Russians, Poles, Greeks, Armenians, Lithuanians, Bulgarians, Albanians, Romanians, Turkish, Croatians, Serbs, Bosnians, Montenegrins and Czechs. 
To these ethnic groups, the basis of the city's industrial working class, was added an additional influx of African Americans from the American South as part of the Great Migration, with Chicago's Black population doubling between 1910 and 1920 and doubling again between 1920 and 1930. Chicago also has a significant Bosnian population, many of whom arrived in the 1990s and 2000s.

In the 1920s and 1930s, the great majority of African Americans moving to Chicago settled in a so-called "Black Belt" on the city's South Side, while a large number also settled on the West Side. By 1930, two-thirds of Chicago's Black population lived in sections of the city that were 90% Black in racial composition. A lesser-known fact about African Americans on the North Side around that time is that the block of 4600 Winthrop Avenue in Uptown was the only block on which they could live or open establishments. Chicago's South Side emerged as the United States' second-largest urban Black concentration, following New York's Harlem. In 1990, Chicago's South Side and the adjoining south suburbs constituted the largest Black-majority region in the entire United States. Since the 1980s, Chicago has seen a massive exodus of African Americans (primarily from the South and West Sides) to its suburbs or outside its metropolitan area, with above-average crime and cost of living cited as leading reasons for the fast-declining African American population.

Most of Chicago's foreign-born population were born in Mexico, Poland, or India. A 2020 study estimated the total Jewish population of the Chicago metropolitan area, both religious and irreligious, at 319,500.

Chicago's population declined in the latter half of the 20th century, from over 3.6 million in 1950 to under 2.7 million by 2010. By the time of the official census count in 1990, it had been overtaken by Los Angeles as the United States' second-largest city. The city saw a rise in population for the 2000 census and, after a decrease in 2010, rose again for the 2020 census. According to U.S. census estimates as of July 2019, Chicago's largest racial or ethnic group is non-Hispanic White at 32.8% of the population, followed by Blacks at 30.1% and the Hispanic population at 29.0%.

Chicago has the third-largest LGBT population in the United States. In 2018, the Chicago Department of Health estimated that 7.5% of the adult population, approximately 146,000 Chicagoans, were LGBTQ; in 2015, roughly 4% of the population had identified as LGBT. Since the 2013 legalization of same-sex marriage in Illinois, over 10,000 same-sex couples have wed in Cook County, a majority of them in Chicago. Chicago became a "de jure" sanctuary city in 2012 when Mayor Rahm Emanuel and the City Council passed the Welcoming City Ordinance.

According to the U.S. Census Bureau's American Community Survey data estimates for 2022, the median income for a household in the city was $70,386, and the per capita income was $45,449. Male full-time workers had a median income of $68,870, versus $60,987 for females. About 17.2% of the population lived below the poverty line. In 2018, Chicago ranked seventh globally for the highest number of ultra-high-net-worth residents, with roughly 3,300 residents worth more than $30 million. According to the 2022 American Community Survey, numerous specific ancestral groups each numbered 10,000 or more persons in Chicago; 548,790 persons did not report or classify an ancestry.
According to a 2014 study by the Pew Research Center, Christianity is the most prevalently practiced religion in Chicago (71%), and the city is the fourth-most religious metropolis in the United States after Dallas, Atlanta, and Houston. Roman Catholicism and Protestantism are the largest branches (34% and 35% respectively), followed by Eastern Orthodoxy and Jehovah's Witnesses with 1% each. Chicago also has a sizable non-Christian population; these groups include the irreligious (22%), Judaism (3%), Islam (2%), Buddhism (1%), and Hinduism (1%).

Chicago is the headquarters of several religious denominations, including the Evangelical Covenant Church and the Evangelical Lutheran Church in America, and it is the seat of several dioceses. The Fourth Presbyterian Church is one of the largest Presbyterian congregations in the United States by membership. Since the 20th century, Chicago has also been the headquarters of the Assyrian Church of the East.

In 2014, the Catholic Church was the largest individual Christian denomination (34%), with the Roman Catholic Archdiocese of Chicago being the largest Catholic jurisdiction. Evangelical Protestantism formed the largest theological Protestant branch (16%), followed by Mainline Protestants (11%) and historically Black churches (8%). Among denominational Protestant branches, Baptists formed the largest group in Chicago (10%), followed by Nondenominational Christians (5%), Lutherans (4%), and Pentecostals (3%). Non-Christian faiths accounted for 7% of the religious population in 2014. Judaism has at least 261,000 adherents, which is 3% of the population; a 2020 study estimated the total Jewish population of the Chicago metropolitan area, both religious and irreligious, at 319,500.

The first two Parliaments of the World's Religions, in 1893 and 1993, were held in Chicago. Many international religious leaders have visited Chicago, including Mother Teresa, the Dalai Lama, and Pope John Paul II in 1979. Pope Leo XIV was born in Chicago in 1955 and graduated from the Catholic Theological Union in Hyde Park.

Economy

Chicago has the third-largest gross metropolitan product in the United States, about $670.5 billion according to September 2017 estimates. The city has also been rated as having the most balanced economy in the United States, due to its high level of diversification. The Chicago metropolitan area has the third-largest science and engineering workforce of any metropolitan area in the nation. Chicago was the base of commercial operations for industrialists John Crerar, John Whitfield Bunn, Richard Teller Crane, Marshall Field, John Farwell, Julius Rosenwald, and many other commercial visionaries who laid the foundation for Midwestern and global industry.

Chicago is a major world financial center, with the second-largest central business district in the United States, following Midtown Manhattan. The city is the seat of the Federal Reserve Bank of Chicago, the Bank's Seventh District. The city has major financial and futures exchanges, including the Chicago Stock Exchange, the Chicago Board Options Exchange (CBOE), and the Chicago Mercantile Exchange (the "Merc"), which is owned, along with the Chicago Board of Trade (CBOT), by Chicago's CME Group. In 2017, Chicago exchanges traded 4.7 billion derivatives contracts. Chase Bank has its commercial and retail banking headquarters in Chicago's Chase Tower. Academically, Chicago has been influential through the Chicago school of economics, which has produced 12 Nobel Prize winners.
The city and its surrounding metropolitan area contain the third-largest labor pool in the United States, with about 4.63 million workers. Illinois is home to 66 Fortune 1000 companies, many of them based in Chicago. The city also hosts 12 Fortune Global 500 companies and 17 Financial Times 500 companies. The city claims three Dow 30 companies: aerospace giant Boeing, which moved its headquarters from Seattle to the Chicago Loop in 2001; McDonald's; and Walgreens Boots Alliance. For six consecutive years, from 2013 through 2018, Chicago was ranked the nation's top metropolitan area for corporate relocations; however, three Fortune 500 companies left Chicago in 2022, leaving the city with 35, still second to New York City. Manufacturing, printing, publishing, and food processing also play major roles in the city's economy. Several medical products and services companies are based in the Chicago area, including Baxter International, Abbott Laboratories, and the healthcare division of General Electric. Prominent food companies based in Chicago include the world headquarters of Conagra, Ferrara Candy Company, Kraft Heinz, McDonald's, Mondelez International, and Quaker Oats. Chicago has been a hub of the retail sector since its early development, with Montgomery Ward, Sears, and Marshall Field's. Today the Chicago metropolitan area is the headquarters of several retailers, including Walgreens, Sears, Ace Hardware, Claire's, ULTA Beauty, and Crate & Barrel. Late in the 19th century, Chicago was part of the bicycle craze, home to the Western Wheel Company, which introduced stamping to the production process and significantly reduced costs; early in the 20th century, the city was part of the automobile revolution, hosting the Brass Era car builder Bugmobile, founded there in 1907. Chicago was also the site of the Schwinn Bicycle Company. Chicago is a major world convention destination. The city's main convention center is McCormick Place; with its four interconnected buildings, it is the largest convention center in the nation and the third-largest in the world. Chicago also ranks third in the U.S. (behind Las Vegas and Orlando) in number of conventions hosted annually. Chicago's minimum wage for non-tipped employees is one of the highest in the nation and reached $15 in 2021. Culture and contemporary life The city's waterfront location and nightlife attract residents and tourists alike. Over a third of the city's population is concentrated in the lakefront neighborhoods, from Rogers Park in the north to South Shore in the south. The city has many upscale dining establishments as well as many ethnic restaurant districts. These districts include the Mexican American neighborhoods of Pilsen, along 18th Street, and La Villita, along 26th Street; the Puerto Rican enclave of Paseo Boricua in the Humboldt Park neighborhood; Greektown, along South Halsted Street, immediately west of downtown; Little Italy, along Taylor Street; Chinatown in Armour Square; Polish Patches in West Town; Little Seoul in Albany Park around Lawrence Avenue; Little Vietnam near Broadway in Uptown; and the Desi area along Devon Avenue in West Ridge. Downtown is the center of Chicago's financial, cultural, governmental, and commercial institutions and the site of Grant Park and many of the city's skyscrapers.
Many of the city's financial institutions, such as the CBOT and the Federal Reserve Bank of Chicago, are located within a section of downtown called "The Loop", an eight-block by five-block area of city streets encircled by elevated rail tracks; locals also use the term to refer to the entire downtown area. The central area includes the Near North Side, the Near South Side, and the Near West Side, as well as the Loop. These areas are home to famous skyscrapers, abundant restaurants, shopping, museums, Soldier Field, convention facilities, parkland, and beaches. Lincoln Park contains the Lincoln Park Zoo and the Lincoln Park Conservatory. The River North Gallery District features the nation's largest concentration of contemporary art galleries outside of New York City. Lake View is home to Boystown, the city's large LGBT nightlife and culture center; North Halsted Street is its main thoroughfare. The Chicago Pride Parade, held the last Sunday in June, is one of the world's largest, with over a million people in attendance. The South Side neighborhood of Hyde Park is the home of former U.S. President Barack Obama; it also contains the University of Chicago, ranked one of the world's top ten universities, and the Museum of Science and Industry. The 6-mile (9.7 km) long Burnham Park stretches along the waterfront of the South Side. Two of the city's largest parks are also located on this side of the city: Jackson Park, bordering the waterfront, hosted the World's Columbian Exposition in 1893 and is the site of the aforementioned museum, while slightly west sits Washington Park; the two parks are connected by a wide strip of parkland called the Midway Plaisance, running adjacent to the University of Chicago. The South Side hosts one of the city's largest parades, the annual African American Bud Billiken Parade and Picnic, which travels through Bronzeville to Washington Park. Ford Motor Company has an automobile assembly plant on the South Side in Hegewisch, and most of the facilities of the Port of Chicago are also on the South Side. The West Side holds the Garfield Park Conservatory, one of the largest collections of tropical plants in any U.S. city. Prominent Latino cultural attractions found here include Humboldt Park's Institute of Puerto Rican Arts and Culture and the annual Puerto Rican People's Parade, as well as the National Museum of Mexican Art and St. Adalbert's Church in Pilsen. The Near West Side holds the University of Illinois at Chicago and was once home to Oprah Winfrey's Harpo Studios, the site of which has been rebuilt as the global headquarters of McDonald's. The city's distinctive accent, made famous by its use in classic films like The Blues Brothers and television programs like the Saturday Night Live skit "Bill Swerski's Superfans", is an advanced form of Inland Northern American English. This dialect can be found in other cities bordering the Great Lakes, such as Cleveland, Milwaukee, Detroit, and Buffalo, New York, and most prominently features a rearrangement of certain vowel sounds, such as the short "a" in "cat", which can sound more like "kyet" to outsiders. The accent remains strongly associated with the city. Renowned Chicago theater companies include the Goodman Theatre in the Loop; the Steppenwolf Theatre Company and Victory Gardens Theater in Lincoln Park; and the Chicago Shakespeare Theater at Navy Pier.
Broadway In Chicago offers Broadway-style entertainment at five theaters: the Nederlander Theatre, the CIBC Theatre, the Cadillac Palace Theatre, the Auditorium Building of Roosevelt University, and the Broadway Playhouse at Water Tower Place. Polish-language productions for Chicago's large Polish-speaking population can be seen at the historic Gateway Theatre in Jefferson Park. Since 1968, the Joseph Jefferson Awards have been given annually to acknowledge excellence in theater in the Chicago area. Chicago's theater community spawned modern improvisational theater and includes the prominent groups The Second City and I.O. (formerly ImprovOlympic). The Chicago Symphony Orchestra (CSO) performs at Symphony Center and is recognized as one of the best orchestras in the world. Also performing regularly at Symphony Center is the Chicago Sinfonietta, a more diverse and multicultural counterpart to the CSO. In the summer, many outdoor concerts are given in Grant Park and Millennium Park. Ravinia Festival, located 25 miles (40 km) north of Chicago, is the summer home of the CSO and a favorite destination for many Chicagoans. The Civic Opera House is home to the Lyric Opera of Chicago. The Lithuanian Opera Company of Chicago was founded by Lithuanian Chicagoans in 1956 and presents operas in Lithuanian. The Joffrey Ballet and Chicago Festival Ballet perform in various venues, including the Harris Theater in Millennium Park, and Chicago has several other contemporary and jazz dance troupes, such as Hubbard Street Dance Chicago and Chicago Dance Crash. Other live-music genres that are part of the city's cultural heritage include Chicago blues, Chicago soul, jazz, and gospel. The city is the birthplace of house music (a popular form of electronic dance music) and industrial music, and is the site of an influential hip hop scene. In the 1980s and 1990s, the city was the global center for house and industrial music, two forms of music created in Chicago, and was also popular for alternative rock, punk, and new wave; the city has been a center for rave culture since the 1980s. A flourishing independent rock music culture brought forth Chicago indie. Annual festivals feature various acts, such as Lollapalooza and the Pitchfork Music Festival. Lollapalooza originated in 1991 as a touring festival, but since 2005 its home has been Chicago. A 2007 report on the Chicago music industry by the University of Chicago Cultural Policy Center ranked Chicago third among metropolitan U.S. areas in "size of music industry" and fourth among all U.S. cities in "number of concerts and performances". Chicago has a distinctive fine art tradition. For much of the twentieth century, it nurtured a strong style of figurative surrealism, as in the works of Ivan Albright and Ed Paschke. In 1968 and 1969, members of the Chicago Imagists, such as Roger Brown, Leon Golub, Robert Lostutter, Jim Nutt, and Barbara Rossi, produced bizarre representational paintings. Henry Darger is one of the most celebrated figures of outsider art. In 2014, Chicago attracted 50.17 million domestic leisure travelers, 11.09 million domestic business travelers, and 1.308 million overseas visitors; these visitors contributed more than US$13.7 billion to Chicago's economy. Upscale shopping along the Magnificent Mile and State Street, thousands of restaurants, and Chicago's eminent architecture continue to draw tourists. The city is the United States' third-largest convention destination.
A 2017 study by Walk Score ranked Chicago the sixth-most walkable of the fifty largest cities in the United States. Most conventions are held at McCormick Place, just south of Soldier Field. Navy Pier, located just east of Streeterville, is 3,000 ft (910 m) long and houses retail stores, restaurants, museums, exhibition halls, and auditoriums. Chicago was the first city in the world to erect a Ferris wheel. The Willis Tower (formerly named Sears Tower) is a popular destination for tourists. Among the city's museums are the Adler Planetarium & Astronomy Museum, the Field Museum of Natural History, and the Shedd Aquarium. The Museum Campus joins the southern section of Grant Park, which includes the renowned Art Institute of Chicago, and Buckingham Fountain anchors the downtown park along the lakefront. The University of Chicago's Institute for the Study of Ancient Cultures, West Asia & North Africa has an extensive collection of ancient Egyptian and Near Eastern archaeological artifacts. Other museums and galleries in Chicago include the Chicago History Museum, the Driehaus Museum, the DuSable Museum of African American History, the Museum of Contemporary Art, the Peggy Notebaert Nature Museum, the Polish Museum of America, the Museum of Broadcast Communications, the Chicago Architecture Foundation, and the Museum of Science and Industry. Chicago lays claim to a large number of regional specialties that reflect the city's ethnic and working-class roots. Included among these is its nationally renowned deep-dish pizza, a style said to have originated at Pizzeria Uno; the Chicago-style thin crust is also popular in the city, and well-known local pizza makers include Lou Malnati's and Giordano's. The Chicago-style hot dog, typically an all-beef hot dog, is loaded with an array of toppings that often includes pickle relish, yellow mustard, pickled sport peppers, tomato wedges, and a dill pickle spear, topped off with celery salt on a poppy seed bun. Enthusiasts of the Chicago-style hot dog frown upon the use of ketchup as a garnish, but may prefer to add giardiniera. A distinctly Chicago sandwich, the Italian beef sandwich is thinly sliced beef simmered in jus and served on an Italian roll with sweet peppers or spicy giardiniera; a popular modification is the Combo, an Italian beef sandwich with the addition of an Italian sausage. The Maxwell Street Polish is a grilled or deep-fried kielbasa on a hot dog roll, topped with grilled onions, yellow mustard, and hot sport peppers. Chicken Vesuvio is roasted bone-in chicken cooked in oil and garlic next to garlicky oven-roasted potato wedges and a sprinkling of green peas. The Puerto Rican-influenced jibarito is a sandwich made with flattened, fried green plantains instead of bread. The mother-in-law is a tamale topped with chili and served on a hot dog bun. The tradition of serving the Greek dish saganaki while aflame has its origins in Chicago's Greek community; the appetizer, which consists of a square of fried cheese, is doused with Metaxa and flambéed table-side. Chicago-style barbecue features hardwood-smoked rib tips and hot links, traditionally cooked in an aquarium smoker, a Chicago invention. Annual festivals, such as Taste of Chicago and the Chicago Food Truck Festival, feature various Chicago signature dishes. One of the world's most decorated restaurants and a recipient of three Michelin stars, Alinea is located in Chicago.
Well-known chefs who have had restaurants in Chicago include Charlie Trotter, Rick Tramonto, Grant Achatz, and Rick Bayless. In 2003, Robb Report named Chicago the country's "most exceptional dining destination". Chicago literature finds its roots in the city's tradition of lucid, direct journalism, which fed a strong tradition of social realism. In the Encyclopedia of Chicago, Northwestern University professor Bill Savage describes Chicago fiction as prose that tries to "capture the essence of the city, its spaces and its people." The challenge for early writers was that Chicago was a frontier outpost that transformed into a global metropolis in the span of two generations. Narrative fiction of that time, much of it in the style of "high-flown romance" and "genteel realism", needed a new approach to describe the urban social, political, and economic conditions of Chicago. Nonetheless, Chicagoans worked hard to create a literary tradition that would stand the test of time and create a "city of feeling" out of concrete, steel, vast lake, and open prairie. Much notable Chicago fiction focuses on the city itself, with social criticism keeping exultation in check. At least three short periods in the history of Chicago have had a lasting influence on American literature: the time from the Great Chicago Fire to about 1900, what became known as the Chicago Literary Renaissance in the 1910s and early 1920s, and the period from the Great Depression to the 1940s. What would become the influential Poetry magazine was founded in 1912 by Harriet Monroe, who was working as an art critic for the Chicago Tribune. The magazine discovered such poets as Gwendolyn Brooks, James Merrill, and John Ashbery, and T. S. Eliot's first professionally published poem, "The Love Song of J. Alfred Prufrock", appeared in its pages. Contributors have included Ezra Pound, William Butler Yeats, William Carlos Williams, Langston Hughes, and Carl Sandburg, among others. The magazine was instrumental in launching the Imagist and Objectivist poetic movements. From the 1950s to the 1970s, American poetry continued to evolve in Chicago, and in the 1980s a modern form of poetry performance, the poetry slam, began in the city. Sports The city has two Major League Baseball (MLB) teams: the Chicago Cubs of the National League play in Wrigley Field on the North Side, and the Chicago White Sox of the American League play in Rate Field on the South Side. The two teams have faced each other in a World Series only once, in 1906. The Cubs are the oldest Major League Baseball team never to have changed their city; they have played in Chicago since 1871. They long held the dubious honor of the longest championship drought in American professional sports, failing to win a World Series between 1908 and 2016. The White Sox have played on the South Side continuously since 1901 and have won three World Series titles (1906, 1917, 2005) and six American League pennants, including the first in 1901. The Chicago Bears, one of the last two remaining charter members of the National Football League (NFL), have won nine NFL championships, including Super Bowl XX at the end of the 1985 season. The Bears play their home games at Soldier Field. The Chicago Bulls of the National Basketball Association (NBA) are one of the most recognized basketball teams in the world; during the 1990s, with Michael Jordan leading them, the Bulls won six NBA championships in eight seasons.
The Chicago Blackhawks of the National Hockey League (NHL) began play in 1926 and are one of the "Original Six" teams of the NHL. The Blackhawks have won six Stanley Cups, including in 2010, 2013, and 2015. Both the Bulls and the Blackhawks play at the United Center. Chicago Fire FC is a member of Major League Soccer (MLS) and plays at Soldier Field; the Fire have won one league title and four U.S. Open Cups since their founding in 1997. In 1994, the United States hosted a successful FIFA World Cup, with games played at Soldier Field. The Chicago Stars FC are a team in the National Women's Soccer League (NWSL); they previously played in Women's Professional Soccer (WPS), of which they were a founding member, before joining the NWSL in 2013, and they play at SeatGeek Stadium in Bridgeview, Illinois. The Chicago Sky is a professional basketball team in the Women's National Basketball Association (WNBA); founded before the 2006 WNBA season, the team plays home games at Wintrust Arena. The Chicago Marathon has been held each year since 1977 except for 1987, when a half marathon was run in its place; it is one of the six World Marathon Majors. Five area colleges play in Division I conferences: two from major conferences, the DePaul Blue Demons (Big East Conference) and the Northwestern Wildcats (Big Ten Conference), and three from other D1 conferences, the Chicago State Cougars (Northeast Conference), the Loyola Ramblers (Atlantic 10 Conference), and the UIC Flames (Missouri Valley Conference). Chicago has also entered esports with the creation of OpTic Chicago, a professional Call of Duty team that competes in the Call of Duty League (CDL). Parks and greenspace When Chicago was incorporated in 1837, it chose the motto Urbs in Horto, a Latin phrase meaning "City in a Garden". Today, the Chicago Park District consists of more than 570 parks with over 8,000 acres (3,200 ha) of municipal parkland, along with 31 sand beaches, numerous museums, two world-class conservatories, and 50 nature areas. Lincoln Park, the largest of the city's parks, covers 1,200 acres (490 ha) and has over 20 million visitors each year, placing it third in number of visitors after Central Park in New York City and the National Mall and Memorial Parks in Washington, D.C. There is a historic boulevard system, a network of wide, tree-lined boulevards that connect a number of Chicago parks. The boulevards and the parks were authorized by the Illinois legislature in 1869, and a number of Chicago neighborhoods emerged along these roadways in the 19th century. The building of the boulevard system continued intermittently until 1942; it includes nineteen boulevards, eight parks, and six squares along twenty-six miles of interconnected streets. The Chicago Park Boulevard System Historic District was listed on the National Register of Historic Places in 2018. With berths for more than 6,000 boats, the Chicago Park District operates the nation's largest municipal harbor system.
In addition to ongoing beautification and renewal projects for the existing parks, a number of new parks have been added in recent years, such as the Ping Tom Memorial Park in Chinatown, DuSable Park on the Near North Side, and, most notably, Millennium Park, which occupies the northwestern corner of one of Chicago's oldest parks, Grant Park, in the Chicago Loop. The wealth of greenspace afforded by Chicago's parks is further augmented by the Cook County Forest Preserves, a network of open spaces containing forest, prairie, wetland, streams, and lakes that are set aside as natural areas along the city's outskirts, including both the Chicago Botanic Garden in Glencoe and the Brookfield Zoo in Brookfield. Washington Park is also one of the city's biggest parks, covering nearly 400 acres (160 ha), and is listed on the National Register of Historic Places. Law and government The government of the City of Chicago is divided into executive and legislative branches. The mayor of Chicago is the chief executive, elected by general election for a term of four years, with no term limits; the incumbent mayor is Brandon Johnson. The mayor appoints commissioners and other officials who oversee the various departments. Like the mayor, Chicago's clerk and treasurer are elected citywide. The City Council is the legislative branch and is made up of 50 alderpersons, one elected from each ward in the city; the council takes official action through the passage of ordinances and resolutions and approves the city budget. The Chicago Police Department provides law enforcement, and the Chicago Fire Department provides fire suppression and emergency medical services for the city and its residents. Civil and criminal law cases are heard in the Cook County Circuit Court of the State of Illinois court system, or in the Northern District of Illinois in the federal system. In the state court the public prosecutor is the Illinois state's attorney; in the federal court it is the United States attorney. During much of the last half of the 19th century, Chicago's politics were dominated by a growing Democratic Party organization, and during the 1880s and 1890s the city had a powerful radical tradition, with large and highly organized socialist, anarchist, and labor organizations. For much of the 20th century, Chicago has been among the largest and most reliable Democratic strongholds in the United States; with Chicago's Democratic vote, the state of Illinois has been "solid blue" in presidential elections since 1992. Even before then, it was not unheard of for Republican presidential candidates to win handily in downstate Illinois, only to lose statewide due to large Democratic margins in Chicago. The citizens of Chicago have not elected a Republican mayor since 1927, when William Thompson was voted into office. The strength of the party in Chicago is partly a consequence of Illinois state politics, where the Republicans have come to represent rural and farm concerns while the Democrats support urban issues such as Chicago's public school funding. Chicago contains less than 25% of the state's population, but it is split between eight of Illinois' 17 districts in the United States House of Representatives. All eight of the city's representatives are Democrats; only two Republicans have represented a significant portion of the city since 1973, for one term each: Robert P.
Hanrahan from 1973 to 1975, and Michael Patrick Flanagan from 1995 to 1997. Machine politics persisted in Chicago after the decline of similar machines in other large U.S. cities. During much of that time, the city administration found opposition mainly from a liberal "independent" faction of the Democratic Party. The independents finally gained control of city government in 1983 with the election of Harold Washington (in office 1983–1987). From 1989 until May 16, 2011, Chicago was under the leadership of its longest-serving mayor, Richard M. Daley, the son of Richard J. Daley. Because of the dominance of the Democratic Party in Chicago, the Democratic primary vote held in the spring is generally more significant than the general elections in November for U.S. House and Illinois State seats. The aldermanic, mayoral, and other city offices are filled through nonpartisan elections with runoffs as needed. The city is home to former United States President Barack Obama and First Lady Michelle Obama; Barack Obama was formerly a state legislator representing Chicago and later a U.S. senator. The Obamas' residence is located near the University of Chicago in Kenwood on the city's South Side. Chicago's crime rate in 2020 was 3,926 per 100,000 people. Chicago experienced major rises in violent crime in the 1920s, in the late 1960s, and in the 2020s. Chicago's biggest criminal justice challenges have changed little over the last 50 years and center statistically on homicide, armed robbery, gang violence, and aggravated battery. Chicago has a higher murder rate than the larger cities of New York and Los Angeles; however, while it has a large absolute number of crimes due to its size, Chicago is not among the top 25 most violent cities in the United States. Murder rates in Chicago vary greatly depending on the neighborhood in question: the neighborhoods of Englewood on the South Side and Austin on the West Side, for example, have homicide rates ten times higher than other parts of the city. Chicago has an estimated population of over 100,000 active gang members from nearly 60 factions. According to reports in 2013, "most of Chicago's violent crime comes from gangs trying to maintain control of drug-selling territories," and is specifically related to the activities of the Sinaloa Cartel, which is active in several American cities. Violent crime rates vary significantly by area of the city, with more economically developed areas having low rates while other sections have much higher rates. In 2013, the violent crime rate was 910 per 100,000 people and the murder rate was 10.4 per 100,000; high-crime districts saw 38.9 murders per 100,000, while low-crime districts saw 2.5. Chicago's long history of public corruption regularly draws the attention of federal law enforcement and federal prosecutors. From 2012 to 2019, 33 Chicago alderpersons were convicted on corruption charges, roughly one third of those elected in that period, and a report from the Office of the Legislative Inspector General noted that over half of Chicago's elected alderpersons took illegal campaign contributions in 2013. Most corruption cases in Chicago are prosecuted by the U.S. Attorney's office, as most such offenses are punishable as federal crimes. Education Chicago Public Schools (CPS) is the governing body of the school district that contains over 600 public elementary and high schools citywide, including several selective-admission magnet schools.
There are eleven selective enrollment high schools in the Chicago Public Schools, designed to meet the needs of Chicago's most academically advanced students. These schools offer a rigorous curriculum with mainly honors and Advanced Placement (AP) courses. Walter Payton College Prep High School is ranked number one in the city of Chicago and the state of Illinois; Chicago high school rankings are determined by average scores on state achievement tests. The district, with an enrollment exceeding 400,000 students (400,545 at the 2013–2014 20th-day count), is the third-largest in the U.S. In September 2012, teachers of the Chicago Teachers Union went on strike for the first time since 1987, over pay, resources, and other issues. According to 2014 data, Chicago's "choice system", in which students test into or apply to one of approximately 130 public high schools, sorts students of different achievement levels into different schools (high-performing, middle-performing, and low-performing). Chicago has a network of Lutheran schools, and several private schools are run by other denominations and faiths, such as the Ida Crown Jewish Academy in West Ridge. The Roman Catholic Archdiocese of Chicago operates Catholic schools, including Jesuit preparatory schools and others; a number of private schools are completely secular. There is also the private Chicago Academy for the Arts, a high school focused on six different artistic disciplines, and the public Chicago High School for the Arts, a high school focused on five (visual arts, theatre, musical theatre, dance, and music). The Chicago Public Library system operates three regional libraries and 77 neighborhood branches, including the central library. Since the 1850s, Chicago has been a world center of higher education and research, home to several universities. These institutions consistently rank among the top "National Universities" in the United States, as determined by U.S. News & World Report. Highly regarded universities in Chicago and the surrounding area include the University of Chicago, Northwestern University, the Illinois Institute of Technology, Loyola University Chicago, DePaul University, Columbia College Chicago, and the University of Illinois Chicago. Other notable schools include Chicago State University, the School of the Art Institute of Chicago, East–West University, National Louis University, North Park University, Northeastern Illinois University, Robert Morris University Illinois, Roosevelt University, Saint Xavier University, Rush University, and Shimer College. William Rainey Harper, the first president of the University of Chicago, was instrumental in the creation of the junior college concept, establishing nearby Joliet Junior College as the first in the nation in 1901. His legacy continues with the multiple community colleges in Chicago proper, including the seven City Colleges of Chicago: Richard J.
Daley College, Kennedy–King College, Malcolm X College, Olive–Harvey College, Truman College, Harold Washington College, and Wilbur Wright College, in addition to the privately held MacCormac College. Chicago also has a high concentration of post-baccalaureate institutions, graduate schools, seminaries, and theological schools, such as the Adler School of Professional Psychology, The Chicago School, the Erikson Institute, the Institute for Clinical Social Work, the Lutheran School of Theology at Chicago, the Catholic Theological Union, the Moody Bible Institute, and the University of Chicago Divinity School. Media The Chicago metropolitan area is a major media hub and the third-largest media market in the United States, after New York City and Los Angeles. Each of the big five U.S. television networks, NBC, ABC, CBS, Fox, and The CW, directly owns and operates a high-definition television station in Chicago (WMAQ 5, WLS 7, WBBM 2, WFLD 32, and WGN-TV 9, respectively). WGN-TV is owned by Nexstar Media Group, which acquired it from founding owner Tribune Broadcasting in 2019 and holds a majority stake in The CW. WGN was once carried, with some programming differences, as "WGN America" on cable and satellite TV nationwide and in parts of the Caribbean; WGN America became NewsNation in 2021. Chicago has been the home of several prominent talk shows, including The Oprah Winfrey Show, Steve Harvey Show, The Rosie Show, The Jerry Springer Show, The Phil Donahue Show, The Jenny Jones Show, and more. The city has one PBS member station, WTTW 11, producer of shows such as Sneak Previews, The Frugal Gourmet, Lamb Chop's Play-Along, and The McLaughlin Group; a second station, WYCC 20, dropped its PBS affiliation in 2017. As of 2018, Windy City Live is Chicago's only daytime talk show, hosted by Val Warner and Ryan Chiaverini at ABC7 Studios with a live weekday audience. Since 1999, Judge Mathis has also filmed his syndicated arbitration-based reality court show at the NBC Tower. Beginning in January 2019, Newsy began producing 12 of its 14 hours of live news programming per day from its new facility in Chicago. Most of Chicago's television stations are owned and operated by the big television network companies. Two major daily newspapers are published in Chicago: the Chicago Tribune and the Chicago Sun-Times, with the Tribune having the larger circulation. There are also several regional and special-interest newspapers and magazines, such as Chicago, the Dziennik Związkowy (Polish Daily News), Draugas (the Lithuanian daily newspaper), the Chicago Reader, the SouthtownStar, the Chicago Defender, the Daily Herald, Newcity, StreetWise, and the Windy City Times. The entertainment and cultural magazine Time Out Chicago and GRAB magazine are published in the city, as is the local music magazine Chicago Innerview. Chicago is the home of the satirical national news outlet The Onion and its sister pop-culture publication, The A.V. Club. Chicago has five 50,000-watt AM radio stations: the Audacy-owned WBBM and WSCR, the Tribune Broadcasting-owned WGN, the Cumulus Media-owned WLS, and the ESPN Radio-owned WMVP.
Chicago is also home to a number of national radio shows, including Beyond the Beltway with Bruce DuMont on Sunday evenings, and WBEZ produces nationally aired programs such as PRI's This American Life and NPR's Wait Wait...Don't Tell Me!. Infrastructure Chicago is a major transportation hub in the United States and an important component in global distribution, as it is the third-largest intermodal port in the world after Hong Kong and Singapore. The city of Chicago has a higher-than-average percentage of households without a car: in 2015, 26.5 percent of Chicago households were without a car, a figure that increased slightly to 27.5 percent in 2016, against a national average of 8.7 percent that year. Chicago averaged 1.12 cars per household in 2016, compared to a national average of 1.8. Due to Chicago's wheel tax, residents of Chicago who own a vehicle are required to purchase a Chicago City Vehicle Sticker, and in established Residential Parking Zones, only local residents can purchase zone-specific parking stickers for themselves and guests. Since 2009, Chicago has relinquished the rights to its public street parking: in 2008, as the city struggled to close a growing budget deficit, it agreed to a 75-year, $1.16 billion deal to lease its parking meter system to an operating company created by Morgan Stanley, called Chicago Parking Meters LLC. Daley said the "agreement is very good news for the taxpayers of Chicago because it will provide more than $1 billion in net proceeds that can be used during this very difficult economy." The parking meter lease runs until 2081, and by 2022 it had already recouped over $1.5 billion in revenue for Chicago Parking Meters LLC investors. Seven mainline and four auxiliary interstate highways (55, 57, 65 (only in Indiana), 80 (also in Indiana), 88, 90 (also in Indiana), 94 (also in Indiana), 190, 290, 294, and 355) run through Chicago and its suburbs. Segments that link to the city center are named after influential politicians, with three of them named after former U.S. presidents (Eisenhower, Kennedy, and Reagan) and one named after two-time Democratic candidate Adlai Stevenson. The Kennedy and Dan Ryan Expressways are the busiest state-maintained routes in the entire state of Illinois. The Regional Transportation Authority (RTA) coordinates the operation of the three service boards: CTA, Metra, and Pace. Greyhound Lines provides inter-city bus service to and from the city at the Chicago Bus Station, and Chicago is also the hub for the Midwest network of Megabus. Amtrak long-distance and commuter rail services originate from Chicago Union Station, and Chicago is one of the largest hubs of passenger rail service in the nation. The services terminate in Port Huron, St. Paul, the San Francisco area, New York City, New Orleans, Portland, Oregon, Seattle, Miami, Milwaukee, Carbondale, Quincy, Boston, St. Louis, Kansas City, Grand Rapids, Los Angeles, San Antonio, and Pontiac, Michigan; future service will terminate at Moline. An attempt was made in the early 20th century to link Chicago with New York City via the Chicago–New York Electric Air Line Railroad; parts of it were built, but it was never completed. In July 2013, the bicycle-sharing system Divvy was launched with 750 bikes and 75 docking stations; it is operated by Lyft for the Chicago Department of Transportation.
As of July 2019, Divvy operated 5,800 bicycles at 608 stations, covering almost all of the city, excluding Pullman, Roseland, Beverly, Belmont Cragin, and Edison Park. In May 2019, the City of Chicago announced its Electric Shared Scooter Pilot Program, scheduled to run from June 15 to October 15. The program started on June 15 with 10 different scooter companies, including scooter-sharing market leaders Bird, Jump, Lime, and Lyft. Each company was allowed to bring 250 electric scooters, although both Bird and Lime claimed that they experienced higher demand for their scooters. The program ended on October 15, with nearly 800,000 rides taken. Chicago is the largest hub in the railroad industry, and all of the major Class I railroads meet in Chicago. As of 2002, severe freight train congestion caused trains to take as long to get through the Chicago region as it took to get there from the West Coast of the country (about 2 days). According to the U.S. Department of Transportation, the volume of imported and exported goods transported via rail to, from, or through Chicago is forecast to increase nearly 150 percent between 2010 and 2040. CREATE, the Chicago Region Environmental and Transportation Efficiency Program, comprises about 70 projects, including crossovers, overpasses, and underpasses, intended to significantly improve the speed of freight movements in the Chicago area. Chicago is served by O'Hare International Airport, the world's busiest airport measured by airline operations, on the far Northwest Side, and Midway International Airport on the Southwest Side. In 2005, O'Hare was the world's busiest airport by aircraft movements and the second-busiest by total passenger traffic. Both O'Hare and Midway are owned and operated by the City of Chicago. Gary/Chicago International Airport and Chicago Rockford International Airport, located in Gary, Indiana, and Rockford, Illinois, respectively, can serve as alternative Chicago-area airports; however, they do not offer as many commercial flights as O'Hare and Midway. In recent years the state of Illinois has been leaning towards building an entirely new airport in the Chicago suburbs. Chicago is the world headquarters of United Airlines, the world's third-largest airline. The Port of Chicago consists of several major port facilities within the city operated by the Illinois International Port District (formerly known as the Chicago Regional Port District); the central element of the Port District, Calumet Harbor, is maintained by the U.S. Army Corps of Engineers. Electricity for most of northern Illinois is provided by Commonwealth Edison, also known as ComEd, whose service territory borders Iroquois County to the south, the Wisconsin border to the north, the Iowa border to the west, and the Indiana border to the east. In northern Illinois, ComEd (a division of Exelon) operates the greatest number of nuclear generating plants of any U.S. state, and ComEd reports indicate that Chicago receives about 75% of its electricity from nuclear power. Recently, the city began installing wind turbines on government buildings to promote renewable energy. Natural gas is provided by Peoples Gas, a subsidiary of Integrys Energy Group, which is headquartered in Chicago. Domestic and industrial waste was once incinerated but is now landfilled, mainly in the Calumet area. From 1995 to 2008, the city had a blue bag program to divert recyclable refuse from landfills.
Because of low participation in the blue bag program, the city began a pilot program for blue bin recycling like that of other cities; this proved successful, and blue bins were rolled out across the city. The Illinois Medical District is on the Near West Side. It includes Rush University Medical Center, ranked as the second-best hospital in the Chicago metropolitan area by U.S. News & World Report for 2014–16, the University of Illinois Medical Center at Chicago, Jesse Brown VA Hospital, and John H. Stroger Jr. Hospital of Cook County, one of the busiest trauma centers in the nation. Two of the country's premier academic medical centers, Northwestern Memorial Hospital and the University of Chicago Medical Center, are in Chicago. The Chicago campus of Northwestern University includes the Feinberg School of Medicine; Northwestern Memorial Hospital, ranked as the best hospital in the Chicago metropolitan area by U.S. News & World Report for 2017–18; the Shirley Ryan AbilityLab (formerly the Rehabilitation Institute of Chicago), ranked the best U.S. rehabilitation hospital by U.S. News & World Report; the new Prentice Women's Hospital; and Ann & Robert H. Lurie Children's Hospital of Chicago. The University of Illinois College of Medicine at UIC is the second-largest medical school in the United States (2,600 students, including those at campuses in Peoria, Rockford, and Urbana–Champaign). In addition, the Chicago Medical School and Loyola University Chicago's Stritch School of Medicine are located in the suburbs of North Chicago and Maywood, respectively, and the Midwestern University Chicago College of Osteopathic Medicine is in Downers Grove. The American Medical Association, Accreditation Council for Graduate Medical Education, Accreditation Council for Continuing Medical Education, American Osteopathic Association, American Dental Association, Academy of General Dentistry, Academy of Nutrition and Dietetics, American Association of Nurse Anesthetists, American College of Surgeons, American Society for Clinical Pathology, American College of Healthcare Executives, the American Hospital Association, and the Blue Cross and Blue Shield Association are all based in Chicago.
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Social_network#cite_ref-Introduction_for_the_French_Reader_26-1] | [TOKENS: 5247] |
Contents Social network A social network is a social structure consisting of a set of social actors (such as individuals or organizations), networks of dyadic ties, and other social interactions between actors. The social network perspective provides a set of methods for analyzing the structure of whole social entities along with a variety of theories explaining the patterns observed in these structures. The study of these structures uses social network analysis to identify local and global patterns, locate influential entities, and examine the dynamics of networks (a brief computational illustration appears at the end of the Overview section below). For instance, social network analysis has been used to study the spread of misinformation on social media platforms and to analyze the influence of key figures in social networks. The study of social networks is an inherently interdisciplinary academic field which emerged from social psychology, sociology, statistics, and graph theory. Georg Simmel authored early structural theories in sociology emphasizing the dynamics of triads and the "web of group affiliations". Jacob Moreno is credited with developing the first sociograms in the 1930s to study interpersonal relationships. These approaches were mathematically formalized in the 1950s, and theories and methods of social networks became pervasive in the social and behavioral sciences by the 1980s. Social network analysis is now one of the major paradigms in contemporary sociology and is also employed in a number of other social and formal sciences; together with other complex networks, it forms part of the nascent field of network science. Overview The social network is a theoretical construct useful in the social sciences to study relationships between individuals, groups, organizations, or even entire societies (social units; see differentiation). The term is used to describe a social structure determined by such interactions. The ties through which any given social unit connects represent the convergence of the various social contacts of that unit. This theoretical approach is, necessarily, relational. An axiom of the social network approach to understanding social interaction is that social phenomena should be primarily conceived and investigated through the properties of relations between and within units, rather than through the properties of those units themselves. Thus, one common criticism of social network theory is that individual agency is often ignored, although this may not be the case in practice (see agent-based modeling). Precisely because many different types of relations, singular or in combination, form these network configurations, network analytics are useful to a broad range of research enterprises. In social science, these fields of study include, but are not limited to, anthropology, biology, communication studies, economics, geography, information science, organizational studies, social psychology, sociology, and sociolinguistics.
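To make "locating influential entities" concrete, here is a minimal sketch using Python and the networkx library; the library choice and the toy friendship graph are assumptions for illustration and are not part of the article itself.

import networkx as nx

# Build a small, hypothetical friendship network of two clusters.
G = nx.Graph()
G.add_edges_from([
    ("Ana", "Ben"), ("Ana", "Cleo"), ("Ben", "Cleo"),  # one tight cluster
    ("Cleo", "Dev"),                                   # a bridging tie
    ("Dev", "Ema"), ("Dev", "Fay"), ("Ema", "Fay"),    # a second cluster
])

# Degree centrality counts direct ties; betweenness centrality counts how
# often an actor lies on shortest paths between others, flagging brokers.
print(nx.degree_centrality(G))
print(nx.betweenness_centrality(G))  # Cleo and Dev score highest here

Cleo and Dev bridge the two clusters and therefore score far higher on betweenness than the other actors, which is one common operationalization of "influence" in network analysis.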
History In the late 1890s, both Émile Durkheim and Ferdinand Tönnies foreshadowed the idea of social networks in their theories and research on social groups. Tönnies argued that social groups can exist as personal and direct social ties that either link individuals who share values and beliefs (Gemeinschaft, German, commonly translated as "community") or as impersonal, formal, and instrumental social links (Gesellschaft, German, commonly translated as "society"). Durkheim gave a non-individualistic explanation of social facts, arguing that social phenomena arise when interacting individuals constitute a reality that can no longer be accounted for in terms of the properties of individual actors. Georg Simmel, writing at the turn of the twentieth century, pointed to the nature of networks and the effect of network size on interaction, and examined the likelihood of interaction in loosely knit networks rather than groups. Major developments in the field came in the 1930s from several groups in psychology, anthropology, and mathematics working independently. In psychology, Jacob L. Moreno began systematic recording and analysis of social interaction in small groups, especially classrooms and work groups (see sociometry). In anthropology, the foundation for social network theory is the theoretical and ethnographic work of Bronislaw Malinowski, Alfred Radcliffe-Brown, and Claude Lévi-Strauss. A group of social anthropologists associated with Max Gluckman and the Manchester School, including John A. Barnes, J. Clyde Mitchell, and Elizabeth Bott Spillius, are often credited with performing some of the first fieldwork from which network analyses were performed, investigating community networks in southern Africa, India, and the United Kingdom. Concomitantly, British anthropologist S. F. Nadel codified a theory of social structure that was influential in later network analysis. In sociology, the early (1930s) work of Talcott Parsons set the stage for taking a relational approach to understanding social structure. Later, drawing upon Parsons' theory, the work of sociologist Peter Blau provided a strong impetus for analyzing the relational ties of social units with his work on social exchange theory. By the 1970s, a growing number of scholars worked to combine the different tracks and traditions. One group consisted of sociologist Harrison White and his students at the Harvard University Department of Social Relations. Also independently active in the Harvard Social Relations department at the time were Charles Tilly, who focused on networks in political and community sociology and social movements, and Stanley Milgram, who developed the "six degrees of separation" thesis. Mark Granovetter and Barry Wellman are among the former students of White who elaborated and championed the analysis of social networks. Beginning in the late 1990s, social network analysis was invigorated by sociologists, political scientists, and physicists such as Duncan J. Watts, Albert-László Barabási, Peter Bearman, Nicholas A. Christakis, James H. Fowler, and others, who developed and applied new models and methods to emerging data about online social networks, as well as "digital traces" regarding face-to-face networks. Levels of analysis In general, social networks are self-organizing, emergent, and complex, such that a globally coherent pattern appears from the local interaction of the elements that make up the system. These patterns become more apparent as network size increases. However, a global network analysis of, for example, all interpersonal relationships in the world is not feasible and would likely contain so much information as to be uninformative. Practical limitations of computing power, ethics, and participant recruitment and payment also limit the scope of a social network analysis.
The nuances of a local system may be lost in a large network analysis; hence the quality of information may be more important than its scale for understanding network properties. Thus, social networks are analyzed at the scale relevant to the researcher's theoretical question. Although levels of analysis are not necessarily mutually exclusive, there are three general levels into which networks may fall: micro-level, meso-level, and macro-level. At the micro-level, social network research typically begins with an individual, snowballing as social relationships are traced, or may begin with a small group of individuals in a particular social context. Dyadic level: A dyad is a social relationship between two individuals. Network research on dyads may concentrate on the structure of the relationship (e.g. multiplexity, strength), social equality, and tendencies toward reciprocity/mutuality. Triadic level: Add one individual to a dyad, and you have a triad. Research at this level may concentrate on factors such as balance and transitivity, as well as social equality and tendencies toward reciprocity/mutuality. In the balance theory of Fritz Heider, the triad is the key to social dynamics. The discord in a rivalrous love triangle is an example of an unbalanced triad, likely to change to a balanced triad through a change in one of the relations. The dynamics of social friendships in society have been modeled by balancing triads, and the study is carried forward with the theory of signed graphs (see the computational sketch after these levels of analysis). Actor level: The smallest unit of analysis in a social network is an individual in their social setting, i.e., an "actor" or "ego". Ego network analysis focuses on network characteristics such as size, relationship strength, density, centrality, prestige, and roles such as isolates, liaisons, and bridges. Such analyses are most commonly used in the fields of psychology or social psychology, ethnographic kinship analysis, or other genealogical studies of relationships between individuals. Subset level: Subset-level research problems begin at the micro-level but may cross over into the meso-level of analysis. Subset-level research may focus on distance and reachability, cliques, cohesive subgroups, or other group actions or behavior. In general, meso-level theories begin with a population size that falls between the micro- and macro-levels. However, meso-level may also refer to analyses that are specifically designed to reveal connections between micro- and macro-levels. Meso-level networks are low density and may exhibit causal processes distinct from interpersonal micro-level networks. Organizations: Formal organizations are social groups that distribute tasks for a collective goal. Network research on organizations may focus on either intra-organizational or inter-organizational ties in terms of formal or informal relationships. Intra-organizational networks themselves often contain multiple levels of analysis, especially in larger organizations with multiple branches, franchises, or semi-autonomous departments. In these cases, research is often conducted at the work group level and the organization level, focusing on the interplay between the two structures. Experiments with networked groups online have documented ways to optimize group-level coordination through diverse interventions, including the addition of autonomous agents to the groups.
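As a concrete illustration of Heider's balance condition mentioned under the triadic level above, here is a minimal sketch, assuming Python and the standard signed-graph convention of +1 for a positive tie and -1 for a negative tie; the "love triangle" scenario is the hypothetical one described in the text.

def is_balanced(ab, bc, ac):
    # Heider's rule for a signed triad: the triad is balanced exactly when
    # the product of the three edge signs is positive ("the friend of my
    # friend is my friend; the enemy of my enemy is my friend").
    return ab * bc * ac > 0

# Rivalrous love triangle: A and B each like C (+1) but dislike each other (-1).
print(is_balanced(-1, +1, +1))  # False: unbalanced, under pressure to change
# If A and B reconcile, the triad becomes balanced.
print(is_balanced(+1, +1, +1))  # True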
Randomly distributed networks: Exponential random graph models of social networks became state-of-the-art methods of social network analysis in the 1980s. This framework can represent social-structural effects commonly observed in many human social networks, including degree-based structural effects, reciprocity, and transitivity, as well as, at the node level, homophily and attribute-based activity and popularity effects, all derived from explicit hypotheses about dependencies among network ties. Parameters are given in terms of the prevalence of small subgraph configurations in the network and can be interpreted as describing the combinations of local social processes from which a given network emerges. These probability models for networks on a given set of actors allow generalization beyond the restrictive dyadic independence assumption of micro-networks, allowing models to be built from theoretical structural foundations of social behavior. Scale-free networks: A scale-free network is a network whose degree distribution follows a power law, at least asymptotically. In network theory a scale-free ideal network is a random network with a degree distribution that unravels the size distribution of social groups. Specific characteristics of scale-free networks vary with the theories and analytical tools used to create them; in general, however, scale-free networks have some common characteristics. One notable characteristic is the relative commonness of vertices with a degree that greatly exceeds the average. The highest-degree nodes are often called "hubs", and may serve specific purposes in their networks, although this depends greatly on the social context. Another general characteristic of scale-free networks is the clustering coefficient distribution, which decreases as the node degree increases; this distribution also follows a power law. The Barabási–Albert model of network evolution is an example of a process that produces a scale-free network (see the generative sketch at the end of this passage). Rather than tracing interpersonal interactions, macro-level analyses generally trace the outcomes of interactions, such as economic or other resource transfer interactions over a large population. Large-scale networks: Large-scale network is a term somewhat synonymous with "macro-level". It is primarily used in the social and behavioral sciences and in economics; originally, the term was used extensively in the computer sciences (see large-scale network mapping). Complex networks: Most larger social networks display features of social complexity, which involves substantial non-trivial features of network topology, with patterns of complex connections between elements that are neither purely regular nor purely random (see complexity science, dynamical systems, and chaos theory), as do biological and technological networks. Such complex network features include a heavy tail in the degree distribution, a high clustering coefficient, assortativity or disassortativity among vertices, community structure (see stochastic block model), and hierarchical structure. In the case of agency-directed networks these features also include reciprocity, triad significance profile (TSP, see network motif), and other features. In contrast, many of the mathematical models of networks that have been studied in the past, such as lattices and random graphs, do not show these features.
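As a generative sketch of the scale-free idea described above, assuming Python with the networkx library: the Barabási–Albert model grows a network by preferential attachment, and the resulting degree distribution has a heavy tail in which hub degrees far exceed the average. The parameter values here are arbitrary choices for illustration.

import networkx as nx

# Grow a 10,000-node network: each new node attaches to m=3 existing
# nodes with probability proportional to their current degree.
G = nx.barabasi_albert_graph(n=10_000, m=3, seed=42)

degrees = [d for _, d in G.degree()]
mean_degree = sum(degrees) / len(degrees)
# The maximum degree dwarfs the mean: the "hubs" of a heavy-tailed network.
print(f"mean degree = {mean_degree:.1f}, max degree = {max(degrees)}")
print(f"average clustering = {nx.average_clustering(G):.4f}")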
Few complete theories have been produced from social network analysis. Two that have been are structural role theory and heterophily theory.

The basis of heterophily theory was the finding in one study that more numerous weak ties can be important in seeking information and innovation, as cliques have a tendency to have more homogeneous opinions as well as share many common traits. This homophilic tendency was the reason for the members of the cliques to be attracted together in the first place. However, being similar, each member of the clique would also know more or less what the other members knew. To find new information or insights, members of the clique have to look beyond the clique to their other friends and acquaintances. This is what Granovetter called "the strength of weak ties".

Structural holes

In the context of networks, social capital exists where people have an advantage because of their location in a network. Contacts in a network provide information, opportunities and perspectives that can be beneficial to the central player in the network. Most social structures tend to be characterized by dense clusters of strong connections. Information within these clusters tends to be rather homogeneous and redundant. Non-redundant information is most often obtained through contacts in different clusters. When two separate clusters possess non-redundant information, there is said to be a structural hole between them. Thus, a network that bridges structural holes will provide network benefits that are in some degree additive, rather than overlapping. An ideal network structure has a vine-and-cluster structure, providing access to many different clusters and structural holes. Networks rich in structural holes are a form of social capital in that they offer information benefits. The main player in a network that bridges structural holes is able to access information from diverse sources and clusters (a toy computation of the standard structural-hole measures appears at the end of this passage). For example, in business networks, this is beneficial to an individual's career, because they are more likely to hear of job openings and opportunities if their network spans a wide range of contacts in different industries and sectors. This concept is similar to Mark Granovetter's theory of weak ties, which rests on the premise that having a broad range of contacts is most effective for job attainment. Structural holes have been widely applied in social network analysis, resulting in applications in a wide range of practical scenarios as well as machine learning-based social prediction.

Research clusters

Research has used network analysis to examine networks created when artists are exhibited together in museum exhibitions. Such networks have been shown to affect an artist's recognition in history and historical narratives, even when controlling for individual accomplishments of the artist. Other work examines how network grouping of artists can affect an individual artist's auction performance. An artist's status has been shown to increase when associated with higher-status networks, though this association has diminishing returns over an artist's career.

In J.A. Barnes' day, a "community" referred to a specific geographic location, and studies of community ties had to do with who talked, associated, traded, and attended church with whom. Today, however, there are extended "online" communities developed through telecommunications devices and social network services. Such devices and services require extensive and ongoing maintenance and analysis, often using network science methods.
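As referenced above, a toy computation of Burt's structural-hole measures. It assumes the third-party networkx library, whose structural-holes module implements constraint and effective size; the graph itself is a hypothetical illustration, two tight clusters joined only through a single broker.

    import networkx as nx

    G = nx.Graph()
    G.add_edges_from([("a1", "a2"), ("a2", "a3"), ("a1", "a3"),   # cluster A
                      ("b1", "b2"), ("b2", "b3"), ("b1", "b3"),   # cluster B
                      ("broker", "a1"), ("broker", "b1")])        # the bridge

    # Low constraint signals access to non-redundant contacts,
    # i.e. a position spanning structural holes.
    print("constraint:",
          {n: round(c, 2) for n, c in nx.constraint(G).items()})
    print("effective size:",
          {n: round(s, 2) for n, s in nx.effective_size(G).items()})

In this toy graph the broker's two contacts are not connected to each other, so the broker shows the lowest constraint of any node, reflecting its position spanning the hole between the two clusters.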
Community development studies today also make extensive use of such methods. Complex networks require methods specific to modelling and interpreting social complexity and complex adaptive systems, including techniques of dynamic network analysis. Mechanisms such as dual-phase evolution explain how temporal changes in connectivity contribute to the formation of structure in social networks.

The study of social networks is being used to examine the nature of interdependencies between actors and the ways in which these are related to outcomes of conflict and cooperation. Areas of study include cooperative behavior among participants in collective actions such as protests; promotion of peaceful behavior, social norms, and public goods within communities through networks of informal governance; the role of social networks in both intrastate conflict and interstate conflict; and social networking among politicians, constituents, and bureaucrats.

In criminology and urban sociology, much attention has been paid to the social networks among criminal actors. For example, murders can be seen as a series of exchanges between gangs. Murders can be seen to diffuse outwards from a single source, because weaker gangs cannot afford to kill members of stronger gangs in retaliation, but must commit other violent acts to maintain their reputation for strength.

Studies of the diffusion of ideas and innovations focus on the spread and use of ideas from one actor to another or from one culture to another. This line of research seeks to explain why some people become "early adopters" of ideas and innovations, and links social network structure with facilitating or impeding the spread of an innovation (a short simulation sketch follows this passage). A case in point is the social diffusion of linguistic innovations such as neologisms. Experiments and large-scale field trials (e.g., by Nicholas Christakis and collaborators) have shown that cascades of desirable behaviors can be induced in social groups, in settings as diverse as Honduran villages, Indian slums, and the laboratory. Still other experiments have documented the experimental induction of social contagion of voting behavior, emotions, risk perception, and commercial products.

In demography, the study of social networks has led to new sampling methods for estimating and reaching populations that are hard to enumerate (for example, homeless people or intravenous drug users). For example, respondent-driven sampling is a network-based sampling technique that relies on respondents to a survey recommending further respondents.

The field of sociology focuses almost entirely on networks of outcomes of social interactions. More narrowly, economic sociology considers behavioral interactions of individuals and groups through social capital and social "markets". Sociologists, such as Mark Granovetter, have developed core principles about the interactions of social structure, information, ability to punish or reward, and trust that frequently recur in their analyses of political, economic and other institutions. Granovetter examines how social structures and social networks can affect economic outcomes like hiring, price, productivity and innovation, and describes sociologists' contributions to analyzing the impact of social structure and networks on the economy.
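As referenced above, a minimal sketch of one common diffusion model, the independent cascade, in which each new adopter gets a single chance to convert each neighbour. The network type, seed choice, and adoption probability below are illustrative assumptions, not parameters from the studies mentioned here; networkx is a third-party library.

    import random
    import networkx as nx

    def independent_cascade(G, seeds, p, rng):
        """Each newly activated node tries once to activate each neighbour."""
        adopted, frontier = set(seeds), list(seeds)
        while frontier:
            nxt = []
            for node in frontier:
                for nb in G.neighbors(node):
                    if nb not in adopted and rng.random() < p:
                        adopted.add(nb)
                        nxt.append(nb)
            frontier = nxt
        return adopted

    # A small-world network: mostly local ties plus a few long-range links.
    G = nx.watts_strogatz_graph(n=1_000, k=6, p=0.05, seed=7)
    adopters = independent_cascade(G, seeds=[0], p=0.15, rng=random.Random(7))
    print("final adopters:", len(adopters), "of", G.number_of_nodes())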
Analysis of social networks is increasingly incorporated into health care analytics, not only in epidemiological studies but also in models of patient communication and education, disease prevention, mental health diagnosis and treatment, and in the study of health care organizations and systems.

Human ecology is an interdisciplinary and transdisciplinary study of the relationship between humans and their natural, social, and built environments. The scientific philosophy of human ecology has a diffuse history with connections to geography, sociology, psychology, anthropology, zoology, and natural ecology.

In the study of literary systems, network analysis has been applied by Anheier, Gerhards and Romo, De Nooy, Senekal, and Lotker to study various aspects of how literature functions. The basic premise is that polysystem theory, which has been around since the writings of Even-Zohar, can be integrated with network theory, and the relationships between different actors in the literary network, e.g. writers, critics, publishers, literary histories, etc., can be mapped using visualization from SNA.

Organizational studies cover formal and informal organizational relationships, organizational communication, economics, economic sociology, and other resource transfers. Social networks have also been used to examine how organizations interact with each other, characterizing the many informal connections that link executives together, as well as associations and connections between individual employees at different organizations. Many organizational social network studies focus on teams. Within team network studies, research assesses, for example, the predictors and outcomes of centrality and power, density and centralization of team instrumental and expressive ties, and the role of between-team networks. Intra-organizational networks have been found to affect organizational commitment, organizational identification, and interpersonal citizenship behaviour.

Social capital is a form of economic and cultural capital in which social networks are central, transactions are marked by reciprocity, trust, and cooperation, and market agents produce goods and services not mainly for themselves, but for a common good. Social capital is split into three dimensions: the structural, the relational and the cognitive dimension. The structural dimension describes how partners interact with each other and which specific partners meet in a social network. Also, the structural dimension of social capital indicates the level of ties among organizations. This dimension is highly connected to the relational dimension, which refers to trustworthiness, norms, expectations and identifications of the bonds between partners. The relational dimension explains the nature of these ties, which is mainly illustrated by the level of trust accorded to the network of organizations. The cognitive dimension analyses the extent to which organizations share common goals and objectives as a result of their ties and interactions.

Social capital is a sociological concept about the value of social relations and the role of cooperation and confidence in achieving positive outcomes. The term refers to the value one can get from their social ties. For example, newly arrived immigrants can make use of their social ties to established migrants to acquire jobs they may otherwise have trouble getting (e.g., because of unfamiliarity with the local language). A positive relationship exists between social capital and the intensity of social network use.
In a dynamic framework, higher activity in a network feeds into higher social capital, which itself encourages more activity.

Another research cluster focuses on brand image and promotional strategy effectiveness, taking into account the impact of customer participation on sales and brand image. This is gauged through techniques such as sentiment analysis, which rely on mathematical areas of study such as data mining and analytics. This area of research produces vast numbers of commercial applications, as the main goal of any study is to understand consumer behaviour and drive sales.

In many organizations, members tend to focus their activities inside their own groups, which stifles creativity and restricts opportunities. A player whose network bridges structural holes has an advantage in detecting and developing rewarding opportunities. Such a player can mobilize social capital by acting as a "broker" of information between two clusters that otherwise would not have been in contact, thus providing access to new ideas, opinions and opportunities. The British philosopher and political economist John Stuart Mill writes, "it is hardly possible to overrate the value of placing human beings in contact with persons dissimilar to themselves.... Such communication [is] one of the primary sources of progress." Thus, a player with a network rich in structural holes can add value to an organization through new ideas and opportunities. This, in turn, helps an individual's career development and advancement. A social capital broker also reaps control benefits from being the facilitator of information flow between contacts. Full communication with exploratory mindsets and information exchange generated by dynamically alternating positions in a social network promotes creative and deep thinking. In the case of the consulting firm Eden McCallum, the founders were able to advance their careers by bridging their connections with former consultants from the big three consulting firms and with mid-size industry firms. By bridging structural holes and mobilizing social capital, players can advance their careers by executing new opportunities between contacts.

There has been research that both substantiates and refutes the benefits of information brokerage. A study of high-tech Chinese firms by Zhixing Xiao found that the control benefits of structural holes are "dissonant to the dominant firm-wide spirit of cooperation and the information benefits cannot materialize due to the communal sharing values" of such organizations. However, this study analyzed only Chinese firms, which tend to have strong communal sharing values. Information and control benefits of structural holes are still valuable in firms that are not quite as inclusive and cooperative on the firm-wide level. In 2004, Ronald Burt studied 673 managers who ran the supply chain for one of America's largest electronics companies. He found that managers who often discussed issues with other groups were better paid, received more positive job evaluations and were more likely to be promoted. Thus, bridging structural holes can be beneficial to an organization, and in turn, to an individual's career.

Computer networks combined with social networking software produce a new medium for social interaction. A relationship over a computerized social networking service can be characterized by content, direction, and strength. The content of a relation refers to the resource that is exchanged.
In a computer-mediated communication context, social pairs exchange different kinds of information, including sending a data file or a computer program as well as providing emotional support or arranging a meeting. With the rise of electronic commerce, information exchanged may also correspond to exchanges of money, goods or services in the "real" world. Social network analysis methods have become essential to examining these types of computer-mediated communication. In addition, the sheer size and the volatile nature of social media have given rise to new network metrics. A key concern with networks extracted from social media is the lack of robustness of network metrics given missing data.

Based on the pattern of homophily, ties between people are most likely to occur between nodes that are most similar to each other; likewise, under neighbourhood segregation, individuals are most likely to inhabit the same regional areas as other individuals who are like them. Therefore, social networks can be used as a tool to measure the degree of segregation or homophily within a population. Social networks can be used both to simulate the process of homophily and to measure the level of exposure of different groups to each other within the current social network of individuals in a certain area.
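A small sketch of how homophily can be quantified as attribute assortativity, assuming the third-party networkx library; the two-group network below is synthetic and purely illustrative.

    import networkx as nx

    # Two groups of 50, with dense ties within groups and sparse ties between.
    G = nx.stochastic_block_model(sizes=[50, 50],
                                  p=[[0.10, 0.01],
                                     [0.01, 0.10]], seed=3)
    for node in G:
        G.nodes[node]["group"] = 0 if node < 50 else 1

    # +1 would mean ties occur only within groups (complete segregation);
    # 0 would mean mixing is random with respect to group membership.
    r = nx.attribute_assortativity_coefficient(G, "group")
    print("assortativity by group:", round(r, 3))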
========================================
[SOURCE: https://en.wikipedia.org/wiki/Internet#cite_note-146] | [TOKENS: 9291]
Internet

The Internet (or internet) is the global system of interconnected computer networks that uses the Internet protocol suite (TCP/IP) to communicate between networks and devices. It is a network of networks that comprises private, public, academic, business, and government networks of local to global scope, linked by electronic, wireless, and optical networking technologies. The Internet carries a vast range of information services and resources, such as the interlinked hypertext documents and applications of the World Wide Web (WWW), electronic mail, discussion groups, internet telephony, streaming media and file sharing. Most traditional communication media, including telephone, radio, television, paper mail, newspapers, and print publishing, have been transformed by the Internet, giving rise to new media such as email, online music, digital newspapers, news aggregators, and audio and video streaming websites. The Internet has enabled and accelerated new forms of personal interaction through instant messaging, Internet forums, and social networking services. Online shopping has also grown to occupy a significant market across industries, enabling firms to extend brick-and-mortar presences to serve larger markets. Business-to-business and financial services on the Internet affect supply chains across entire industries.

The origins of the Internet date back to research that enabled the time-sharing of computer resources, the development of packet switching, and the design of computer networks for data communication. The set of communication protocols to enable internetworking on the Internet arose from research and development commissioned in the 1970s by the Defense Advanced Research Projects Agency (DARPA) of the United States Department of Defense in collaboration with universities and researchers across the United States and in the United Kingdom and France.

The Internet has no single centralized governance in either technological implementation or policies for access and usage. Each constituent network sets its own policies. The overarching definitions of the two principal name spaces on the Internet, the Internet Protocol address (IP address) space and the Domain Name System (DNS), are directed by a maintainer organization, the Internet Corporation for Assigned Names and Numbers (ICANN). The technical underpinning and standardization of the core protocols is an activity of the non-profit Internet Engineering Task Force (IETF).

Terminology

The word internetted was used as early as 1849, meaning interconnected or interwoven. The word Internet was used in 1945 by the United States War Department in a radio operator's manual, and in 1974 as the shorthand form of Internetwork. Today, the term Internet most commonly refers to the global system of interconnected computer networks, though it may also refer to any group of smaller networks. The word Internet may be capitalized as a proper noun, although this is becoming less common. This reflects the tendency in English to capitalize new terms and move them to lowercase as they become familiar. The word is sometimes still capitalized to distinguish the global internet from smaller networks, though many publications, including the AP Stylebook since 2016, recommend the lowercase form in every case. In 2016, the Oxford English Dictionary found that, based on a study of around 2.5 billion printed and online sources, "Internet" was capitalized in 54% of cases.
The terms Internet and World Wide Web are often used interchangeably; it is common to speak of "going on the Internet" when using a web browser to view web pages. However, the World Wide Web, or the Web, is only one of a large number of Internet services. It is the global collection of web pages, documents and other web resources linked by hyperlinks and URLs.

History

In the 1960s, computer scientists began developing systems for time-sharing of computer resources. J. C. R. Licklider proposed the idea of a universal network while working at Bolt Beranek & Newman and, later, leading the Information Processing Techniques Office at the Advanced Research Projects Agency (ARPA) of the United States Department of Defense. Research into packet switching, one of the fundamental Internet technologies, started in the work of Paul Baran at RAND in the early 1960s and, independently, Donald Davies at the United Kingdom's National Physical Laboratory in 1965. After the Symposium on Operating Systems Principles in 1967, packet switching from the proposed NPL network was incorporated into the design of the ARPANET, an experimental resource-sharing network proposed by ARPA.

ARPANET development began with two network nodes which were interconnected between the University of California, Los Angeles and the Stanford Research Institute on 29 October 1969. The third site was at the University of California, Santa Barbara, followed by the University of Utah. By the end of 1971, 15 sites were connected to the young ARPANET. Thereafter, the ARPANET gradually developed into a decentralized communications network, connecting remote centers and military bases in the United States. Other user networks and research networks, such as the Merit Network and CYCLADES, were developed in the late 1960s and early 1970s.

Early international collaborations for the ARPANET were rare. Connections were made in 1973 to Norway (NORSAR and, later, NDRE) and to Peter Kirstein's research group at University College London, which provided a gateway to British academic networks, the first internetwork for resource sharing. ARPA projects, the International Network Working Group and commercial initiatives led to the development of various protocols and standards by which multiple separate networks could become a single network, or a network of networks. In 1974, Vint Cerf at Stanford University and Bob Kahn at DARPA published a proposal for "A Protocol for Packet Network Intercommunication". Cerf and his graduate students used the term internet as a shorthand for internetwork in RFC 675. The Internet Experiment Notes and later RFCs repeated this use. The work of Louis Pouzin and Robert Metcalfe had important influences on the resulting TCP/IP design. National PTTs and commercial providers developed the X.25 standard and deployed it on public data networks.

The ARPANET initially served as a backbone for the interconnection of regional academic and military networks in the United States to enable resource sharing. Access to the ARPANET was expanded in 1981 when the National Science Foundation (NSF) funded the Computer Science Network (CSNET). In 1982, the Internet Protocol Suite (TCP/IP) was standardized, which facilitated worldwide proliferation of interconnected networks. TCP/IP network access expanded again in 1986 when the National Science Foundation Network (NSFNet) provided access to supercomputer sites in the United States for researchers, first at speeds of 56 kbit/s and later at 1.5 Mbit/s and 45 Mbit/s.
The NSFNet expanded into academic and research organizations in Europe, Australia, New Zealand and Japan in 1988–89. Although other network protocols such as UUCP and PTT public data networks had global reach well before this time, this marked the beginning of the Internet as an intercontinental network. Commercial Internet service providers emerged in 1989 in the United States and Australia. The ARPANET was decommissioned in 1990.

The linking of commercial networks and enterprises by the early 1990s, as well as the advent of the World Wide Web, marked the beginning of the transition to the modern Internet. Steady advances in semiconductor technology and optical networking created new economic opportunities for commercial involvement in the expansion of the network in its core and for delivering services to the public. In mid-1989, MCI Mail and Compuserve established connections to the Internet, delivering email and public access products to the half million users of the Internet. Just months later, on 1 January 1990, PSInet launched an alternate Internet backbone for commercial use, one of the networks that added to the core of the commercial Internet of later years. In March 1990, the first high-speed T1 (1.5 Mbit/s) link between the NSFNET and Europe was installed between Cornell University and CERN, allowing much more robust communications than was possible with satellites.

Later in 1990, Tim Berners-Lee began writing WorldWideWeb, the first web browser, after two years of lobbying CERN management. By Christmas 1990, Berners-Lee had built all the tools necessary for a working Web: the HyperText Transfer Protocol (HTTP) 0.9, the HyperText Markup Language (HTML), the first Web browser (which was also an HTML editor and could access Usenet newsgroups and FTP files), the first HTTP server software (later known as CERN httpd), the first web server, and the first Web pages that described the project itself. In 1991 the Commercial Internet eXchange was founded, allowing PSInet to communicate with the other commercial networks CERFnet and Alternet. Stanford Federal Credit Union was the first financial institution to offer online Internet banking services to all of its members in October 1994. In 1996, OP Financial Group, also a cooperative bank, became the second online bank in the world and the first in Europe. By 1995, the Internet was fully commercialized in the U.S. when the NSFNet was decommissioned, removing the last restrictions on use of the Internet to carry commercial traffic.

As technology advanced and commercial opportunities fueled reciprocal growth, the volume of Internet traffic started experiencing similar characteristics as that of the scaling of MOS transistors, exemplified by Moore's law, doubling every 18 months. This growth, formalized as Edholm's law, was catalyzed by advances in MOS technology, laser light wave systems, and noise performance. Since 1995, the Internet has tremendously impacted culture and commerce, including the rise of near-instant communication by email, instant messaging, telephony (Voice over Internet Protocol or VoIP), two-way interactive video calls, and the World Wide Web. Increasing amounts of data are transmitted at higher and higher speeds over fiber optic networks operating at 1 Gbit/s, 10 Gbit/s, or more. The Internet continues to grow, driven by ever-greater amounts of online information and knowledge, commerce, entertainment and social networking services.
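A quick arithmetic check of the doubling rate just mentioned: traffic that doubles every 18 months compounds to roughly a hundredfold increase per decade. A minimal sketch in Python (the ten-year horizon is an arbitrary illustration):

    # Doubling every 18 months, compounded over one decade.
    months = 10 * 12
    doublings = months / 18
    print("doublings in a decade:", round(doublings, 2))    # about 6.7
    print("overall growth factor:", round(2 ** doublings))  # about 100x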
During the late 1990s, it was estimated that traffic on the public Internet grew by 100 percent per year, while the mean annual growth in the number of Internet users was thought to be between 20% and 50%. This growth is often attributed to the lack of central administration, which allows organic growth of the network, as well as the non-proprietary nature of the Internet protocols, which encourages vendor interoperability and prevents any one company from exerting too much control over the network. In November 2006, the Internet was included on USA Today's list of the New Seven Wonders. As of 31 March 2011, the estimated total number of Internet users was 2.095 billion (30% of world population). It is estimated that in 1993 the Internet carried only 1% of the information flowing through two-way telecommunication. By 2000 this figure had grown to 51%, and by 2007 more than 97% of all telecommunicated information was carried over the Internet. Modern smartphones can access the Internet through cellular carrier networks, and internet usage by mobile and tablet devices exceeded desktop worldwide for the first time in October 2016. As of 2018, 80% of the world's population were covered by a 4G network. The International Telecommunication Union (ITU) estimated that, by the end of 2017, 48% of individuals regularly connected to the Internet, up from 34% in 2012. Mobile Internet connectivity has played an important role in expanding access in recent years, especially in Asia and the Pacific and in Africa. The number of unique mobile cellular subscriptions increased from 3.9 billion in 2012 to 4.8 billion in 2016, two-thirds of the world's population, with more than half of subscriptions located in Asia and the Pacific. The limits that users face on accessing information via mobile applications coincide with a broader process of fragmentation of the Internet. Fragmentation restricts access to media content and tends to affect the poorest users the most. One solution, zero-rating, is the practice of Internet service providers allowing users free connectivity to access specific content or applications without cost.

Social impact

The Internet has enabled new forms of social interaction, activities, and social associations, giving rise to the scholarly study of the sociology of the Internet. Between 2000 and 2009, the number of Internet users globally rose from 390 million to 1.9 billion. By 2010, 22% of the world's population had access to computers, with 1 billion Google searches every day, 300 million Internet users reading blogs, and 2 billion videos viewed daily on YouTube. In 2014 the world's Internet users surpassed 3 billion, or 44 percent of world population, but two-thirds came from the richest countries, with 78 percent of Europeans using the Internet, followed by 57 percent of the Americas. However, by 2018, Asia alone accounted for 51% of all Internet users, with 2.2 billion out of the 4.3 billion Internet users in the world. China's Internet users surpassed a major milestone in 2018, when the country's Internet regulatory authority, China Internet Network Information Centre, announced that China had 802 million users. China was followed by India, with some 700 million users, and the United States third with 275 million users. However, in terms of penetration, in 2022 China had a 70% penetration rate, compared to India's 60% and the United States's 90%.
In 2022, 54% of the world's Internet users were based in Asia, 14% in Europe, 7% in North America, 10% in Latin America and the Caribbean, 11% in Africa, 4% in the Middle East and 1% in Oceania. In 2019, Kuwait, Qatar, the Falkland Islands, Bermuda and Iceland had the highest Internet penetration by the number of users, with 93% or more of the population with access. As of 2022, it was estimated that 5.4 billion people use the Internet, more than two-thirds of the world's population.

Early computer systems were limited to the characters in the American Standard Code for Information Interchange (ASCII), a subset of the Latin alphabet. After English (27%), the most requested languages on the World Wide Web are Chinese (25%), Spanish (8%), Japanese (5%), Portuguese and German (4% each), Arabic, French and Russian (3% each), and Korean (2%). Modern character encoding standards, such as Unicode, allow for development and communication in the world's widely used languages. However, some glitches such as mojibake (incorrect display of some languages' characters) still remain.

Several neologisms exist that refer to Internet users: Netizen (as in "citizen of the net") refers to those actively involved in improving online communities, the Internet in general or surrounding political affairs and rights such as free speech; Internaut refers to operators or technically highly capable users of the Internet; digital citizen refers to a person using the Internet in order to engage in society, politics, and government participation.

The Internet allows greater flexibility in working hours and location, especially with the spread of unmetered high-speed connections. The Internet can be accessed almost anywhere by numerous means, including through mobile Internet devices. Mobile phones, datacards, handheld game consoles and cellular routers allow users to connect to the Internet wirelessly. Educational material at all levels from pre-school (e.g. CBeebies) to post-doctoral (e.g. scholarly literature through Google Scholar) is available on websites. The internet has facilitated the development of virtual universities and distance education, enabling both formal and informal education. The Internet allows researchers to conduct research remotely via virtual laboratories, with profound changes in reach and generalizability of findings as well as in communication between scientists and in the publication of results. By the late 2010s the Internet had been described as "the main source of scientific information" for the majority of the global North population.

Wikis have also been used in the academic community for sharing and dissemination of information across institutional and international boundaries. In those settings, they have been found useful for collaboration on grant writing, strategic planning, departmental documentation, and committee work. The United States Patent and Trademark Office uses a wiki to allow the public to collaborate on finding prior art relevant to examination of pending patent applications. Queens, New York has used a wiki to allow citizens to collaborate on the design and planning of a local park. The English Wikipedia has the largest user base among wikis on the World Wide Web and ranks in the top 10 among all sites in terms of traffic.
The Internet has been a major outlet for leisure activity since its inception, with entertaining social experiments such as MUDs and MOOs being conducted on university servers, and humor-related Usenet groups receiving much traffic. Many Internet forums have sections devoted to games and funny videos. Another area of leisure activity on the Internet is multiplayer gaming. This form of recreation creates communities, where people of all ages and origins enjoy the fast-paced world of multiplayer games. These range from MMORPGs to first-person shooters, from role-playing video games to online gambling. While online gaming has been around since the 1970s, modern modes of online gaming began with subscription services such as GameSpy and MPlayer.

Streaming media is the real-time delivery of digital media for immediate consumption or enjoyment by end users. Streaming companies (such as Netflix, Disney+, Amazon's Prime Video, Mubi, Hulu, and Apple TV+) now dominate the entertainment industry, eclipsing traditional broadcasters. Audio streamers such as Spotify and Apple Music also have significant market share in the audio entertainment market. Video sharing websites are also a major factor in the entertainment ecosystem. YouTube was founded on 15 February 2005 and is now the leading website for free streaming video, with more than two billion users. It uses a web player to stream and show video files. YouTube users watch hundreds of millions, and upload hundreds of thousands, of videos daily. Other video sharing websites include Vimeo, Instagram and TikTok.

Although many governments have attempted to restrict both Internet pornography and online gambling, this has generally failed to stop their widespread popularity. A number of advertising-funded ostensible video sharing websites known as "tube sites" have been created to host shared pornographic video content. Due to laws requiring the documentation of the origin of pornography, these websites now largely operate in conjunction with pornographic movie studios and their own independent creator networks, acting as de facto video streaming services. Major players in this field include the market leader Aylo, the operator of PornHub and numerous other branded sites, as well as other independent operators such as xHamster and Xvideos. As of 2023, Internet traffic to pornographic video sites rivalled that of mainstream video streaming and sharing services.

Remote work is facilitated by tools such as groupware, virtual private networks, conference calling, videotelephony, and VoIP, so that work may be performed from any location, such as the worker's home. The spread of low-cost Internet access in developing countries has opened up new possibilities for peer-to-peer charities, which allow individuals to contribute small amounts to charitable projects for other individuals. Websites such as DonorsChoose and GlobalGiving allow small-scale donors to direct funds to individual projects of their choice. A popular twist on Internet-based philanthropy is the use of peer-to-peer lending for charitable purposes. Kiva pioneered this concept in 2005, offering the first web-based service to publish individual loan profiles for funding. The low cost and nearly instantaneous sharing of ideas, knowledge, and skills have made collaborative work dramatically easier, with the help of collaborative software, which allows groups to easily form, cheaply communicate, and share ideas.
An example of such collaboration is the free software movement, which has produced, among other things, Linux, Mozilla Firefox, and OpenOffice.org (later forked into LibreOffice). Content management systems allow collaborating teams to work on shared sets of documents simultaneously without accidentally destroying each other's work. The internet also allows for cloud computing, virtual private networks, remote desktops, and remote work.

The online disinhibition effect describes the tendency of many individuals to behave more stridently or offensively online than they would in person. A significant number of feminist women have been the target of various forms of harassment, ranging from insults and hate speech to, in extreme cases, rape and death threats, in response to posts they have made on social media. Social media companies have been criticized in the past for not doing enough to aid victims of online abuse. Children also face dangers online, such as cyberbullying and approaches by sexual predators, who sometimes pose as children themselves. Due to naivety, they may also post personal information about themselves online, which could put them or their families at risk unless warned not to do so. Many parents choose to enable Internet filtering or supervise their children's online activities in an attempt to protect their children from pornography or violent content on the Internet. The most popular social networking services commonly forbid users under the age of 13. However, these policies can be circumvented by registering an account with a false birth date, and a significant number of children aged under 13 join such sites. Social networking services for younger children, which claim to provide better levels of protection for children, also exist.

Internet usage has been correlated with users' loneliness. Lonely people tend to use the Internet as an outlet for their feelings and to share their stories with others, such as in the "I am lonely will anyone speak to me" thread. Cyberslacking can become a drain on corporate resources; employees spend a significant amount of time surfing the Web while at work. Internet addiction disorder is excessive computer use that interferes with daily life. Nicholas G. Carr believes that Internet use has other effects on individuals, for instance improving skills of scan-reading and interfering with the deep thinking that leads to true creativity.

Electronic business encompasses business processes spanning the entire value chain: purchasing, supply chain management, marketing, sales, customer service, and business relationships. E-commerce seeks to add revenue streams using the Internet to build and enhance relationships with clients and partners. According to International Data Corporation, the size of worldwide e-commerce, when global business-to-business and -consumer transactions are combined, equated to $16 trillion in 2013. A report by Oxford Economics added those two together to estimate the total size of the digital economy at $20.4 trillion, equivalent to roughly 13.8% of global sales. While much has been written of the economic advantages of Internet-enabled commerce, there is also evidence that some aspects of the Internet such as maps and location-aware services may serve to reinforce economic inequality and the digital divide.
Electronic commerce may be responsible for consolidation and the decline of mom-and-pop, brick-and-mortar businesses, resulting in increases in income inequality. A 2013 Institute for Local Self-Reliance report states that brick-and-mortar retailers employ 47 people for every $10 million in sales, while Amazon employs only 14. Similarly, the 700-employee room rental start-up Airbnb was valued at $10 billion in 2014, about half as much as Hilton Worldwide, which employs 152,000 people. At that time, Uber employed 1,000 full-time employees and was valued at $18.2 billion, about the same valuation as Avis Rent a Car and The Hertz Corporation combined, which together employed almost 60,000 people.

Advertising on popular web pages can be lucrative. Online advertising is a form of marketing and advertising which uses the Internet to deliver promotional marketing messages to consumers. It includes email marketing, search engine marketing (SEM), social media marketing, many types of display advertising (including web banner advertising), and mobile advertising. In 2011, Internet advertising revenues in the United States surpassed those of cable television and nearly exceeded those of broadcast television. Many common online advertising practices are controversial and increasingly subject to regulation.

The Internet has achieved new relevance as a political tool. The presidential campaign of Howard Dean in 2004 in the United States was notable for its success in soliciting donations via the Internet. Many political groups use the Internet to achieve a new method of organizing for carrying out their mission, giving rise to Internet activism. Social media websites, such as Facebook and Twitter, helped people organize during the Arab Spring by helping activists organize protests, communicate grievances, and disseminate information. Many have understood the Internet as an extension of the Habermasian notion of the public sphere, observing how network communication technologies provide something like a global civic forum. However, incidents of politically motivated Internet censorship have now been recorded in many countries, including western democracies.

E-government is the use of technological communications devices, such as the Internet, to provide public services to citizens and other persons in a country or region. E-government offers opportunities for more direct and convenient citizen access to government and for government provision of services directly to citizens.

Cybersectarianism is a new organizational form that involves highly dispersed small groups of practitioners who may remain largely anonymous within the larger social context and operate in relative secrecy, while still linked remotely to a larger network of believers who share a set of practices and texts, and often a common devotion to a particular leader. Overseas supporters provide funding and support; domestic practitioners distribute tracts, participate in acts of resistance, and share information on the internal situation with outsiders. Collectively, members and practitioners of such sects construct viable virtual communities of faith, exchanging personal testimonies and engaging in collective study via email, online chat rooms, and web-based message boards.
In particular, the British government has raised concerns about the prospect of young British Muslims being indoctrinated into Islamic extremism by material on the Internet, being persuaded to join terrorist groups such as the so-called "Islamic State", and then potentially committing acts of terrorism on returning to Britain after fighting in Syria or Iraq.

Applications and services

The Internet carries many applications and services, most prominently the World Wide Web, including social media, electronic mail, mobile applications, multiplayer online games, Internet telephony, file sharing, and streaming media services. The World Wide Web is a global collection of documents, images, multimedia, applications, and other resources, logically interrelated by hyperlinks and referenced with Uniform Resource Identifiers (URIs), which provide a global system of named references. URIs symbolically identify services, web servers, databases, and the documents and resources that they can provide. HyperText Transfer Protocol (HTTP) is the main access protocol of the World Wide Web. Web services also use HTTP for communication between software systems, for information transfer, and for sharing and exchanging business data and logistics; it is one of many protocols that can be used for communication on the Internet. World Wide Web browser software, such as Microsoft Edge, Mozilla Firefox, Opera, Apple's Safari, and Google Chrome, enables users to navigate from one web page to another via the hyperlinks embedded in the documents. These documents may also contain computer data, including graphics, sounds, text, video, multimedia and interactive content. Client-side scripts can include animations, games, office applications and scientific demonstrations.

Email is an important communications service available via the Internet. The concept of sending electronic text messages between parties, analogous to mailing letters or memos, predates the creation of the Internet. Internet telephony is a common communications service realized with the Internet. The name of the principal internetworking protocol, the Internet Protocol, lends its name to voice over Internet Protocol (VoIP). VoIP systems now dominate many markets, being as easy and convenient as a traditional telephone while offering substantial cost savings, especially over long distances.

File sharing is the practice of transferring large amounts of data in the form of computer files across the Internet, for example via file servers. The load of bulk downloads to many users can be eased by the use of "mirror" servers or peer-to-peer networks. Access to the file may be controlled by user authentication, the transit of the file over the Internet may be obscured by encryption, and money may change hands for access to the file. The price can be paid by the remote charging of funds from, for example, a credit card whose details are also passed, usually fully encrypted, across the Internet. The origin and authenticity of the file received may be checked by a digital signature.

Governance

The Internet is a global network that comprises many voluntarily interconnected autonomous networks. It operates without a central governing body. The technical underpinning and standardization of the core protocols (IPv4 and IPv6) is an activity of the Internet Engineering Task Force (IETF), a non-profit organization of loosely affiliated international participants that anyone may associate with by contributing technical expertise.
While the hardware components in the Internet infrastructure can often be used to support other software systems, it is the design and the standardization process of the software that characterizes the Internet and provides the foundation for its scalability and success. The responsibility for the architectural design of the Internet software systems has been assumed by the IETF. The IETF conducts standard-setting work groups, open to any individual, about the various aspects of Internet architecture. The resulting contributions and standards are published as Request for Comments (RFC) documents on the IETF web site. The principal methods of networking that enable the Internet are contained in specially designated RFCs that constitute the Internet Standards. Other less rigorous documents are simply informative, experimental, or historical, or document the best current practices when implementing Internet technologies.

To maintain interoperability, the principal name spaces of the Internet are administered by the Internet Corporation for Assigned Names and Numbers (ICANN). ICANN is governed by an international board of directors drawn from across the Internet technical, business, academic, and other non-commercial communities. The organization coordinates the assignment of unique identifiers for use on the Internet, including domain names, IP addresses, application port numbers in the transport protocols, and many other parameters. Globally unified name spaces are essential for maintaining the global reach of the Internet. This role of ICANN distinguishes it as perhaps the only central coordinating body for the global Internet. The National Telecommunications and Information Administration, an agency of the United States Department of Commerce, had final approval over changes to the DNS root zone until the IANA stewardship transition on 1 October 2016. Regional Internet registries (RIRs) were established for five regions of the world to assign IP address blocks and other Internet parameters to local registries, such as Internet service providers, from a designated pool of addresses set aside for each region.

The Internet Society (ISOC) was founded in 1992 with a mission to "assure the open development, evolution and use of the Internet for the benefit of all people throughout the world". Its members include individuals as well as corporations, organizations, governments, and universities. Among other activities, ISOC provides an administrative home for a number of less formally organized groups that are involved in developing and managing the Internet, including the Internet Engineering Task Force (IETF), Internet Architecture Board (IAB), Internet Engineering Steering Group (IESG), Internet Research Task Force (IRTF), and Internet Research Steering Group (IRSG). On 16 November 2005, the United Nations-sponsored World Summit on the Information Society in Tunis established the Internet Governance Forum (IGF) to discuss Internet-related issues.

Infrastructure

The communications infrastructure of the Internet consists of its hardware components and a system of software layers that control various aspects of the architecture. As with any computer network, the Internet physically consists of routers, media (such as cabling and radio links), repeaters, and modems. However, as an example of internetworking, many of the network nodes are not necessarily Internet equipment per se.
Internet packets are carried by other full-fledged networking protocols, with the Internet acting as a homogeneous networking standard, running across heterogeneous hardware, with the packets guided to their destinations by IP routers.

Internet service providers (ISPs) establish worldwide connectivity between individual networks at various levels of scope. At the top of the routing hierarchy are the tier 1 networks, large telecommunication companies that exchange traffic directly with each other via very high-speed fiber-optic cables, governed by peering agreements. Tier 2 and lower-level networks buy Internet transit from other providers to reach at least some parties on the global Internet, though they may also engage in peering. End users who only access the Internet when needed to perform a function or obtain information represent the bottom of the routing hierarchy. An ISP may use a single upstream provider for connectivity, or implement multihoming to achieve redundancy and load balancing. Internet exchange points are major traffic exchanges with physical connections to multiple ISPs. Large organizations, such as academic institutions, large enterprises, and governments, may perform the same function as ISPs, engaging in peering and purchasing transit on behalf of their internal networks. Research networks tend to interconnect with large subnetworks such as GEANT, GLORIAD, Internet2, and the UK's national research and education network, JANET.

Common methods of Internet access by users include broadband over coaxial cable, fiber optics or copper wires, Wi-Fi, satellite, and cellular telephone technology. Grassroots efforts have led to wireless community networks. Commercial Wi-Fi services that cover large areas are available in many cities, such as New York, London, Vienna, Toronto, San Francisco, Philadelphia, Chicago and Pittsburgh. Most servers that provide internet services are today hosted in data centers, and content is often accessed through high-performance content delivery networks. Colocation centers often host private peering connections between their customers, internet transit providers, cloud providers, meet-me rooms for connecting customers together, Internet exchange points, and landing points and terminal equipment for fiber-optic submarine communication cables, connecting the internet.

Internet Protocol Suite

The Internet standards describe a framework known as the Internet protocol suite (also called TCP/IP, based on the first two components). This is a suite of protocols that are ordered into a set of four conceptual layers by the scope of their operation, originally documented in RFC 1122 and RFC 1123 (a minimal sketch of this layering follows below). The most prominent component of the Internet model is the Internet Protocol. IP enables internetworking, essentially establishing the Internet itself. Two versions of the Internet Protocol exist, IPv4 and IPv6. Aside from the complex array of physical connections that make up its infrastructure, the Internet is facilitated by bi- or multi-lateral commercial contracts (e.g., peering agreements) and by technical specifications or protocols that describe the exchange of data over the network. For locating individual computers on the network, the Internet provides IP addresses. IP addresses are used by the Internet infrastructure to direct internet packets to their destinations.
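As referenced above, a minimal sketch of that layering, using only the Python standard library: an application-layer message (here, the simplest possible HTTP request) is handed to a transport-layer TCP socket, while the operating system's internet (IP) layer routes the individual packets underneath. example.org is a placeholder host.

    import socket

    # Transport layer: open a TCP connection; IP routing happens below this.
    with socket.create_connection(("example.org", 80), timeout=10) as sock:
        # Application layer: hand an HTTP request to TCP as plain bytes.
        sock.sendall(b"GET / HTTP/1.1\r\n"
                     b"Host: example.org\r\n"
                     b"Connection: close\r\n\r\n")
        # Read the first bytes of the reply (status line and headers).
        print(sock.recv(120).decode("ascii", errors="replace"))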
IP addresses consist of fixed-length numbers, which are found within the packet. IP addresses are generally assigned to equipment either automatically via the Dynamic Host Configuration Protocol, or are configured manually. The Domain Name System (DNS) converts user-entered domain names (e.g. "en.wikipedia.org") into IP addresses.

Internet Protocol version 4 (IPv4) defines an IP address as a 32-bit number. IPv4 is the initial version used on the first generation of the Internet and is still in dominant use. It was designed in 1981 to address up to approximately 4.3 billion (about 4.3 × 10⁹) hosts. However, the explosive growth of the Internet has led to IPv4 address exhaustion, which entered its final stage in 2011, when the global IPv4 address allocation pool was exhausted. Because of the growth of the Internet and the depletion of available IPv4 addresses, a new version of IP, IPv6, was developed in the mid-1990s, which provides vastly larger addressing capabilities and more efficient routing of Internet traffic. IPv6 uses 128 bits for the IP address and was standardized in 1998. IPv6 deployment has been ongoing since the mid-2000s and is currently in growing deployment around the world, since Internet address registries began to urge all resource managers to plan rapid adoption and conversion. By design, IPv6 is not directly interoperable with IPv4. Instead, it establishes a parallel version of the Internet not directly accessible with IPv4 software. Thus, translation facilities exist for internetworking, and some nodes have duplicate networking software for both networks. Essentially all modern computer operating systems support both versions of the Internet Protocol. Network infrastructure, however, has been lagging in this development.

A subnet or subnetwork is a logical subdivision of an IP network. Computers that belong to a subnet are addressed with an identical most-significant bit-group in their IP addresses. This results in the logical division of an IP address into two fields, the network number or routing prefix and the rest field or host identifier. The rest field is an identifier for a specific host or network interface.

The routing prefix may be expressed in Classless Inter-Domain Routing (CIDR) notation, written as the first address of a network, followed by a slash character (/), and ending with the bit-length of the prefix. For example, 198.51.100.0/24 is the prefix of the Internet Protocol version 4 network starting at the given address, having 24 bits allocated for the network prefix and the remaining 8 bits reserved for host addressing. Addresses in the range 198.51.100.0 to 198.51.100.255 belong to this network. The IPv6 address specification 2001:db8::/32 is a large address block with 2⁹⁶ addresses, having a 32-bit routing prefix.

For IPv4, a network may also be characterized by its subnet mask or netmask, which is the bitmask that, when applied by a bitwise AND operation to any IP address in the network, yields the routing prefix. Subnet masks are also expressed in dot-decimal notation like an address. For example, 255.255.255.0 is the subnet mask for the prefix 198.51.100.0/24.

Computers and routers use routing tables in their operating system to forward IP packets to reach a node on a different subnetwork. Routing tables are maintained by manual configuration or automatically by routing protocols (a worked CIDR example follows below).
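As referenced above, the CIDR example worked through with Python's standard ipaddress module: 198.51.100.0/24 has 24 prefix bits and 8 host bits, so it contains 256 addresses and carries the netmask 255.255.255.0. The tiny routing table at the end is a hypothetical illustration of longest-prefix matching.

    import ipaddress

    net = ipaddress.ip_network("198.51.100.0/24")
    print("netmask:", net.netmask)                     # 255.255.255.0
    print("addresses in network:", net.num_addresses)  # 256

    addr = ipaddress.ip_address("198.51.100.77")
    print("address in network:", addr in net)          # True

    # Routers choose among overlapping prefixes by longest-prefix match.
    routes = [ipaddress.ip_network("0.0.0.0/0"),        # default route
              ipaddress.ip_network("198.51.100.0/24")]
    best = max((r for r in routes if addr in r), key=lambda r: r.prefixlen)
    print("selected route:", best)                      # 198.51.100.0/24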
End-nodes typically use a default route that points toward an ISP providing transit, while ISP routers use the Border Gateway Protocol to establish the most efficient routing across the complex connections of the global Internet. The default gateway is the node that serves as the forwarding host (router) to other networks when no other route specification matches the destination IP address of a packet.

Security

Internet resources, hardware, and software components are the target of criminal or malicious attempts to gain unauthorized control to cause interruptions, commit fraud, engage in blackmail or access private information. Malware is malicious software used and distributed via the Internet. It includes computer viruses, which are copied with the help of humans; computer worms, which copy themselves automatically; software for denial-of-service attacks; ransomware; botnets; and spyware that reports on the activity and typing of users. Usually, these activities constitute cybercrime. Defense theorists have also speculated about the possibilities of hackers waging cyber warfare with similar methods on a large scale.

Malware poses serious problems to individuals and businesses on the Internet. According to Symantec's 2018 Internet Security Threat Report (ISTR), the number of malware variants rose to 669,947,865 in 2017, twice as many as in 2016. Cybercrime, which includes malware attacks as well as other crimes committed by computer, was predicted to cost the world economy US$6 trillion in 2021, and is increasing at a rate of 15% per year. Since 2021, malware has been designed to target computer systems that run critical infrastructure such as the electricity distribution network. Malware can be designed to evade antivirus software detection algorithms.

The vast majority of computer surveillance involves the monitoring of data and traffic on the Internet. In the United States, for example, under the Communications Assistance For Law Enforcement Act, all phone calls and broadband Internet traffic (emails, web traffic, instant messaging, etc.) are required to be available for unimpeded real-time monitoring by federal law enforcement agencies. Under the Act, all U.S. telecommunications providers are required to install packet sniffing technology to allow federal law enforcement and intelligence agencies to intercept all of their customers' broadband Internet and VoIP traffic. The large amount of data gathered from packet capture requires surveillance software that filters and reports relevant information, such as the use of certain words or phrases, the access to certain types of web sites, or communicating via email or chat with certain parties. Agencies such as the Information Awareness Office, NSA, GCHQ and the FBI spend billions of dollars per year to develop, purchase, implement, and operate systems for interception and analysis of data. Similar systems are operated by the Iranian secret police to identify and suppress dissidents. The required hardware and software were allegedly installed by Germany's Siemens AG and Finland's Nokia.

Some governments, such as those of Myanmar, Iran, North Korea, Mainland China, Saudi Arabia and the United Arab Emirates, restrict access to content on the Internet within their territories, especially to political and religious content, with domain name and keyword filters.
In Norway, Denmark, Finland, and Sweden, major Internet service providers have voluntarily agreed to restrict access to sites listed by authorities. While this list of forbidden resources is supposed to contain only known child pornography sites, the content of the list is secret. Many countries, including the United States, have enacted laws against the possession or distribution of certain material, such as child pornography, via the Internet but do not mandate filter software. Many free or commercially available software programs, called content-control software, are available to users to block specific offensive content on individual computers or networks, in order to limit access by children to pornographic material or depictions of violence.[citation needed] Performance As the Internet is a heterogeneous network, its physical characteristics, including, for example, the data transfer rates of connections, vary widely. It exhibits emergent phenomena that depend on its large-scale organization. [Figure: Global Internet Traffic Volume, in petabytes per month, 1990–2015.] The volume of Internet traffic is difficult to measure because no single point of measurement exists in the multi-tiered, non-hierarchical topology. Traffic data may be estimated from the aggregate volume through the peering points of the Tier 1 network providers, but traffic that stays local in large provider networks may not be accounted for.[citation needed] An Internet blackout or outage can be caused by local signaling interruptions. Disruptions of submarine communications cables may cause blackouts or slowdowns to large areas, such as in the 2008 submarine cable disruption. Less-developed countries are more vulnerable due to the small number of high-capacity links. Land cables are also vulnerable, as in 2011 when a woman digging for scrap metal severed most connectivity for the nation of Armenia. Internet blackouts affecting almost entire countries can be achieved by governments as a form of Internet censorship, as in the blockage of the Internet in Egypt, whereby approximately 93% of networks were without access in 2011 in an attempt to stop mobilization for anti-government protests. Estimates of the Internet's electricity usage have been the subject of controversy, according to a 2014 peer-reviewed research paper that found claims published in the literature during the preceding decade differing by a factor of 20,000, ranging from 0.0064 kilowatt-hours per gigabyte transferred (kWh/GB) to 136 kWh/GB. The researchers attributed these discrepancies mainly to the year of reference (i.e. whether efficiency gains over time had been taken into account) and to whether "end devices such as personal computers and servers are included" in the analysis. In 2011, academic researchers estimated the overall energy used by the Internet to be between 170 and 307 GW, less than two percent of the energy used by humanity. This estimate included the energy needed to build, operate, and periodically replace the estimated 750 million laptops, a billion smartphones and 100 million servers worldwide as well as the energy that routers, cell towers, optical switches, Wi-Fi transmitters and cloud storage devices use when transmitting Internet traffic. According to a non-peer-reviewed study published in 2018 by The Shift Project (a French think tank funded by corporate sponsors), nearly 4% of global CO2 emissions could be attributed to global data transfer and the necessary infrastructure.
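The factor-of-20,000 spread follows directly from the two endpoints of the cited range; a quick illustrative check, not part of the study itself:

# Spread between the two cited energy-intensity estimates (kWh/GB).
low, high = 0.0064, 136.0
print(high / low)  # 21250.0 -- roughly the "factor of 20,000" cited above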
The study also said that online video streaming alone accounted for 60% of this data transfer and therefore contributed to over 300 million tons of CO2 emission per year, and argued for new "digital sobriety" regulations restricting the use and size of video files. |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Jews#cite_note-auto-42] | [TOKENS: 15852] |
Contents Jews Jews (Hebrew: יְהוּדִים, ISO 259-2: Yehudim, Israeli pronunciation: [jehuˈdim]), or the Jewish people, are an ethnoreligious group and nation, originating from the Israelites of ancient Israel and Judah. They traditionally adhere to Judaism. Jewish ethnicity, religion, and community are highly interrelated, as Judaism is an ethnic religion, though many ethnic Jews do not practice it. Religious Jews regard converts to Judaism as members of the Jewish nation, pursuant to the long-standing conversion process. The Israelites emerged from the pre-existing Canaanite peoples to establish Israel and Judah in the Southern Levant during the Iron Age. Originally, Jews referred to the inhabitants of the kingdom of Judah and were distinguished from the gentiles and the Samaritans. According to the Hebrew Bible, these inhabitants predominantly originate from the tribe of Judah, who were descendants of Judah, the fourth son of Jacob. The tribe of Benjamin was another significant group in Judah, and its members were considered Jews as well. By the late 6th century BCE, Judaism had evolved from the Israelite religion, dubbed Yahwism (for Yahweh) by modern scholars, having a theology that religious Jews believe to be the expression of the Mosaic covenant between God and the Jewish people. After the Babylonian exile, Jews referred to followers of Judaism, descendants of the Israelites, citizens of Judea, or allies of the Judean state. Jewish migration within the Mediterranean region during the Hellenistic period, followed by population transfers, caused by events like the Jewish–Roman wars, gave rise to the Jewish diaspora, consisting of diverse Jewish communities that maintained their sense of Jewish history, identity, and culture. In the following millennia, Jewish diaspora communities coalesced into three major ethnic subdivisions according to where their ancestors settled: the Ashkenazim (Central and Eastern Europe), the Sephardim (Iberian Peninsula), and the Mizrahim (Middle East and North Africa). While these three major divisions account for most of the world's Jews, there are other smaller Jewish groups outside of the three. Prior to World War II, the global Jewish population reached a peak of 16.7 million, representing around 0.7% of the world's population at that time. During World War II, approximately six million Jews throughout Europe were systematically murdered by Nazi Germany in a genocide known as the Holocaust. Since then, the population has slowly risen again and, as of 2021, was estimated by the demographer Sergio Della Pergola at 15.2 million, or less than 0.2% of the total world population.[b] Today, over 85% of Jews live in Israel or the United States. Israel, whose population is 73.9% Jewish, is the only country where Jews comprise more than 2.5% of the population. Jews have significantly influenced and contributed to the development and growth of human progress in many fields, both historically and in modern times, including in science and technology, philosophy, ethics, literature, governance, business, art, music, comedy, theatre, cinema, architecture, food, medicine, and religion. Jews founded Christianity and had an indirect but profound influence on Islam. In these ways and others, Jews have played a significant role in the development of Western culture. Name and etymology The term "Jew" is derived from the Hebrew word יְהוּדִי Yehudi, with the plural יְהוּדִים Yehudim.
Endonyms in other Jewish languages include the Ladino ג׳ודיו Djudio (plural ג׳ודיוס, Djudios) and the Yiddish ייִד Yid (plural ייִדן Yidn). Though Genesis 29:35 and 49:8 connect "Judah" with the verb yada, meaning "praise", scholars generally agree that "Judah" most likely derives from the name of a Levantine geographic region dominated by gorges and ravines. The gradual ethnonymic shift from "Israelites" to "Jews", regardless of their descent from Judah, although not contained in the Torah, is made explicit in the Book of Esther (4th century BCE) of the Tanakh. Some modern scholars disagree with the conflation, based on the works of Josephus, Philo and Apostle Paul. The English word "Jew" is a derivation of Middle English Gyw, Iewe. The latter was loaned from the Old French giu, which itself evolved from the earlier juieu, which in turn derived from judieu/iudieu which through elision had dropped the letter "d" from the Medieval Latin Iudaeus, which, like the New Testament Greek term Ioudaios, meant both "Jew" and "Judean" / "of Judea". The Greek term was a loan from Aramaic *yahūdāy, corresponding to Hebrew יְהוּדִי Yehudi. Some scholars prefer translating Ioudaios as "Judean" in the Bible since it is more precise, denotes the community's origins and prevents readers from engaging in antisemitic eisegesis. Others disagree, believing that it erases the Jewish identity of Biblical characters such as Jesus. Daniel R. Schwartz distinguishes "Judean" and "Jew". Here, "Judean" refers to the inhabitants of Judea, which encompassed southern Palestine. Meanwhile, "Jew" refers to the descendants of Israelites that adhere to Judaism. Converts are included in the definition. But Shaye J.D. Cohen argues that "Judean" is inclusive of believers of the Judean God and allies of the Judean state. Another scholar, Jodi Magness, wrote the term Ioudaioi refers to a "people of Judahite/Judean ancestry who worshipped the God of Israel as their national deity and (at least nominally) lived according to his laws." The etymological equivalent is in use in other languages, e.g., يَهُودِيّ yahūdī (sg.), al-yahūd (pl.), in Arabic, "Jude" in German, "judeu" in Portuguese, "Juif" (m.)/"Juive" (f.) in French, "jøde" in Danish and Norwegian, "judío/a" in Spanish, "jood" in Dutch, "żyd" in Polish etc., but derivations of the word "Hebrew" are also in use to describe a Jew, e.g., in Italian (Ebreo), in Persian ("Ebri/Ebrani" (Persian: عبری/عبرانی)) and Russian (Еврей, Yevrey). The German word "Jude" is pronounced [ˈjuːdə], the corresponding adjective "jüdisch" [ˈjyːdɪʃ] (Jewish) is the origin of the word "Yiddish". According to The American Heritage Dictionary of the English Language, fourth edition (2000), It is widely recognized that the attributive use of the noun Jew, in phrases such as Jew lawyer or Jew ethics, is both vulgar and highly offensive. In such contexts Jewish is the only acceptable possibility. Some people, however, have become so wary of this construction that they have extended the stigma to any use of Jew as a noun, a practice that carries risks of its own. In a sentence such as There are now several Jews on the council, which is unobjectionable, the substitution of a circumlocution like Jewish people or persons of Jewish background may in itself cause offense for seeming to imply that Jew has a negative connotation when used as a noun. 
Identity Judaism shares some of the characteristics of a nation, an ethnicity, a religion, and a culture, making the definition of who is a Jew vary slightly depending on whether a religious or national approach to identity is used.[better source needed] Generally, in modern secular usage, Jews include three groups: people who were born to a Jewish family regardless of whether or not they follow the religion, those who have some Jewish ancestral background or lineage (sometimes including those who do not have strictly matrilineal descent), and people without any Jewish ancestral background or lineage who have formally converted to Judaism and therefore are followers of the religion. In the context of biblical and classical literature, Jews could refer to inhabitants of the Kingdom of Judah, or the broader Judean region, allies of the Judean state, or anyone that followed Judaism. Historical definitions of Jewish identity have traditionally been based on halakhic definitions of matrilineal descent, and halakhic conversions. These definitions of who is a Jew date back to the codification of the Oral Torah into the Babylonian Talmud, around 200 CE. Interpretations by Jewish sages of sections of the Tanakh – such as Deuteronomy 7:1–5, which forbade intermarriage between their Israelite ancestors and seven non-Israelite nations: "for that [i.e. giving your daughters to their sons or taking their daughters for your sons,] would turn away your children from following me, to serve other gods"[failed verification] – are used as a warning against intermarriage between Jews and gentiles. Leviticus 24:10 says that the son in a marriage between a Hebrew woman and an Egyptian man is "of the community of Israel." This is complemented by Ezra 10:2–3, where Israelites returning from Babylon vow to put aside their gentile wives and their children. A popular theory is that the rape of Jewish women in captivity brought about the law of Jewish identity being inherited through the maternal line, although scholars challenge this theory citing the Talmudic establishment of the law from the pre-exile period. Another argument is that the rabbis changed the law of patrilineal descent to matrilineal descent due to the widespread rape of Jewish women by Roman soldiers. Since the anti-religious Haskalah movement of the late 18th and 19th centuries, halakhic interpretations of Jewish identity have been challenged. According to historian Shaye J. D. Cohen, the status of the offspring of mixed marriages was determined patrilineally in the Bible. He brings two likely explanations for the change in Mishnaic times: first, the Mishnah may have been applying the same logic to mixed marriages as it had applied to other mixtures (Kil'ayim). Thus, a mixed marriage is forbidden as is the union of a horse and a donkey, and in both unions the offspring are judged matrilineally. Second, the Tannaim may have been influenced by Roman law, which dictated that when a parent could not contract a legal marriage, offspring would follow the mother. Rabbi Rivon Krygier follows a similar reasoning, arguing that Jewish descent had formerly passed through the patrilineal descent and the law of matrilineal descent had its roots in the Roman legal system. Origins The prehistory and ethnogenesis of the Jews are closely intertwined with archaeology, biology, historical textual records, mythology, and religious literature. 
The ethnic origin of the Jews lies in the Israelites, a confederation of Iron Age Semitic-speaking tribes that inhabited a part of Canaan during the tribal and monarchic periods. Modern Jews are named after and also descended from the southern Israelite Kingdom of Judah. Gary A. Rendsburg links the early confederation of nomadic Canaanite pastoralists to the Shasu known to the Egyptians around the 15th century BCE. According to the Hebrew Bible narrative, Jewish history begins with the Biblical patriarchs such as Abraham, his son Isaac, Isaac's son Jacob, and the Biblical matriarchs Sarah, Rebecca, Leah, and Rachel, who lived in Canaan. The twelve sons of Jacob became the ancestors of the Twelve Tribes. Jacob and his family migrated to Ancient Egypt after being invited to live with Jacob's son Joseph by the Pharaoh himself. Jacob's descendants were later enslaved until the Exodus, led by Moses. Afterwards, the Israelites conquered Canaan under Moses' successor Joshua, and went through the period of the Biblical judges after the death of Joshua. Through the mediation of Samuel, the Israelites were subject to a king, Saul, who was succeeded by David and then Solomon, after whom the United Monarchy ended and was split into a separate Kingdom of Israel and a Kingdom of Judah. The Kingdom of Judah is described as comprising the tribes of Judah, Benjamin and, partially, Levi. They later assimilated remnants of other tribes who migrated there from the northern Kingdom of Israel. In the extra-biblical record, the Israelites become visible as a people between 1200 and 1000 BCE. There is well-accepted archaeological evidence referring to "Israel" in the Merneptah Stele, which dates to about 1200 BCE, and in the Mesha stele from 840 BCE. It is debated whether a period like that of the Biblical judges occurred and if there ever was a United Monarchy. There is further disagreement about the earliest existence of the Kingdoms of Israel and Judah and their extent and power. Historians agree that a Kingdom of Israel existed by c. 900 BCE, and there is a consensus that a Kingdom of Judah existed by c. 700 BCE at least; recent excavations in Khirbet Qeiyafa have provided strong evidence for dating the Kingdom of Judah to the 10th century BCE. In 587 BCE, Nebuchadnezzar II, King of the Neo-Babylonian Empire, besieged Jerusalem, destroyed the First Temple and deported parts of the Judahite population. Scholars disagree regarding the extent to which the Bible should be accepted as a historical source for early Israelite history. Rendsburg states that there are two approximately equal groups of scholars who debate the historicity of the biblical narrative, the minimalists who largely reject it, and the maximalists who largely accept it, with the minimalists being the more vocal of the two. Some of the leading minimalists reframe the biblical account as constituting the Israelites' inspiring national myth narrative, suggesting that according to the modern archaeological and historical account, the Israelites and their culture did not overtake the region by force, but instead branched out of the Canaanite peoples and culture through the development of a distinct monolatristic—and later monotheistic—religion of Yahwism centered on Yahweh, one of the gods of the Canaanite pantheon. The growth of Yahweh-centric belief, along with a number of cultic practices, gradually gave rise to a distinct Israelite ethnic group, setting them apart from other Canaanites.
According to Dever, modern archaeologists have largely discarded the search for evidence of the biblical narrative surrounding the patriarchs and the exodus. According to the maximalist position, the modern archaeological record independently points to a narrative which largely agrees with the biblical account. This narrative provides a testimony of the Israelites as a nomadic people known to the Egyptians as belonging to the Shasu. Over time these nomads left the desert and settled on the central mountain range of the land of Canaan, in simple semi-nomadic settlements in which pig bones are notably absent. This population gradually shifted from a tribal lifestyle to a monarchy. The archaeological record of the ninth century BCE provides evidence for two monarchies: one in the south under a dynasty founded by a figure named David with its capital in Jerusalem, and one in the north under a dynasty founded by a figure named Omri with its capital in Samaria. It also points to an early monarchic period in which these regions shared material culture and religion, suggesting a common origin. Archaeological finds also provide evidence for the later cooperation of these two kingdoms in their coalition against Aram, and for their destructions by the Assyrians and later by the Babylonians. Genetic studies on Jews show that most Jews worldwide bear a common genetic heritage which originates in the Middle East, and that they share certain genetic traits with gentile peoples of the Fertile Crescent. The genetic composition of different Jewish groups shows that Jews share a common gene pool dating back four millennia, as a marker of their common ancestral origin. Despite their long-term separation, Jewish communities maintained their unique commonalities, propensities, and sensibilities in culture, tradition, and language. History The earliest recorded evidence of a people by the name of Israel appears in the Merneptah Stele, which dates to around 1200 BCE. The majority of scholars agree that this text refers to the Israelites, a group that inhabited the central highlands of Canaan, where archaeological evidence shows that hundreds of small settlements were constructed between the 12th and 10th centuries BCE. The Israelites differentiated themselves from neighboring peoples through various distinct characteristics including religious practices, prohibition on intermarriage, and an emphasis on genealogy and family history. In the 10th century BCE, two neighboring Israelite kingdoms—the northern Kingdom of Israel and the southern Kingdom of Judah—emerged. Since their inception, they shared ethnic, cultural, linguistic and religious characteristics despite a complicated relationship. Israel, with its capital mostly in Samaria, was larger and wealthier, and soon developed into a regional power. In contrast, Judah, with its capital in Jerusalem, was less prosperous and covered a smaller, mostly mountainous territory. However, while in Israel the royal succession was often decided by a military coup d'état, resulting in several dynasty changes, political stability in Judah was much greater, as it was ruled by the House of David for the whole four centuries of its existence. Scholars also describe Biblical Jews as a 'proto-nation', in the modern nationalist sense, comparable to classical Greeks, the Gauls and the British Celts. Around 720 BCE, the Kingdom of Israel was destroyed when it was conquered by the Neo-Assyrian Empire, which came to dominate the ancient Near East.
Under the Assyrian resettlement policy, a significant portion of the northern Israelite population was exiled to Mesopotamia and replaced by immigrants from the same region. During the same period, and throughout the 7th century BCE, the Kingdom of Judah, now under Assyrian vassalage, experienced a period of prosperity and witnessed a significant population growth. This prosperity continued until the Neo-Assyrian king Sennacherib devastated the region of Judah in response to a rebellion in the area, ultimately halting at Jerusalem. Later in the same century, the Assyrians were defeated by the rising Neo-Babylonian Empire, and Judah became its vassal. In 587 BCE, following a revolt in Judah, the Babylonian king Nebuchadnezzar II besieged and destroyed Jerusalem and the First Temple, putting an end to the kingdom. The majority of Jerusalem's residents, including the kingdom's elite, were exiled to Babylon. According to the Book of Ezra, the Persian Cyrus the Great ended the Babylonian exile in 538 BCE, the year after he captured Babylon. The exile ended with the return under Zerubbabel the Prince (so called because he was a descendant of the royal line of David) and Joshua the Priest (a descendant of the line of the former High Priests of the Temple) and their construction of the Second Temple circa 521–516 BCE. As part of the Persian Empire, the former Kingdom of Judah became the province of Judah (Yehud Medinata), with a smaller territory and a reduced population. Judea was under control of the Achaemenids until the fall of their empire in c. 333 BCE to Alexander the Great. After several centuries under foreign imperial rule, the Maccabean Revolt against the Seleucid Empire resulted in an independent Hasmonean kingdom, under which the Jews once again enjoyed political independence for a period spanning from 110 to 63 BCE. Under Hasmonean rule the boundaries of their kingdom were expanded to include not only the land of the historical kingdom of Judah, but also the Galilee and Transjordan. In the beginning of this process the Idumeans, who had infiltrated southern Judea after the destruction of the First Temple, were converted en masse. In 63 BCE, Judea was conquered by the Romans. From 37 BCE to 6 CE, the Romans allowed the Jews to maintain some degree of independence by installing the Herodian dynasty as vassal kings. However, Judea eventually came directly under Roman control and was incorporated into the Roman Empire as the province of Judaea. The Jewish–Roman wars, a series of failed uprisings against Roman rule during the first and second centuries CE, had profound and devastating consequences for the Jewish population of Judaea. The First Jewish–Roman War (66–73/74 CE) culminated in the destruction of Jerusalem and the Second Temple, after which the significantly diminished Jewish population was stripped of political autonomy. A few generations later, the Bar Kokhba revolt (132–136 CE) erupted in response to Roman plans to rebuild Jerusalem as a Roman colony, and, possibly, to restrictions on circumcision. Its violent suppression by the Romans led to the near-total depopulation of Judea, and the demographic and cultural center of Jewish life shifted to Galilee. Jews were subsequently banned from residing in Jerusalem and the surrounding area, and the province of Judaea was renamed Syria Palaestina. These developments effectively ended Jewish efforts to restore political sovereignty in the region for nearly two millennia. 
Similar upheavals impacted the Jewish communities in the empire's eastern provinces during the Diaspora Revolt (115–117 CE), leading to the near-total destruction of Jewish diaspora communities in Libya, Cyprus and Egypt, including the highly influential community in Alexandria. The destruction of the Second Temple in 70 CE brought profound changes to Judaism. With the Temple's central place in Jewish worship gone, religious practices shifted towards prayer, Torah study (including Oral Torah), and communal gatherings in synagogues. Judaism also lost much of its sectarian nature. Two of the three main sects that flourished during the late Second Temple period, namely the Sadducees and Essenes, eventually disappeared, while Pharisaic beliefs became the foundational, liturgical, and ritualistic basis of Rabbinic Judaism, which emerged as the prevailing form of Judaism since late antiquity. The Jewish diaspora existed well before the destruction of the Second Temple in 70 CE and had been ongoing for centuries, with the dispersal driven by both forced expulsions and voluntary migrations. In Mesopotamia, a testimony to the beginnings of the Jewish community can be found in Jehoiachin's ration tablets, listing provisions allotted to the exiled Judean king and his family by Nebuchadnezzar II, and further evidence is provided by the Al-Yahudu tablets, dated to the 6th–5th centuries BCE and related to the exiles from Judea arriving after the destruction of the First Temple, though there is ample evidence for the presence of Jews in Babylonia even from 626 BCE. In Egypt, the documents from Elephantine reveal the trials of a community founded by a Persian Jewish garrison at two fortresses on the frontier during the 5th–4th centuries BCE, and according to Josephus the Jewish community in Alexandria existed since the founding of the city in the 4th century BCE by Alexander the Great. By 200 BCE, there were well established Jewish communities both in Egypt and Mesopotamia ("Babylonia" in Jewish sources) and in the two centuries that followed, Jewish populations were also present in Asia Minor, Greece, Macedonia, Cyrene, and, beginning in the middle of the first century BCE, in the city of Rome. Later, in the first centuries CE, as a result of the Jewish-Roman Wars, a large number of Jews were taken as captives, sold into slavery, or compelled to flee from the regions affected by the wars, contributing to the formation and expansion of Jewish communities across the Roman Empire as well as in Arabia and Mesopotamia. After the Bar Kokhba revolt, the Jewish population in Judaea—now significantly reduced—made efforts to recover from the revolt's devastating effects, but never fully regained its former strength. Between the second and fourth centuries CE, the region of Galilee emerged as the primary center of Jewish life in Syria Palaestina, experiencing both demographic growth and cultural development. It was during this period that two central rabbinic texts, the Mishnah and the Jerusalem Talmud, were composed. The Romans recognized the patriarchs—rabbinic sages such as Judah ha-Nasi—as representatives of the Jewish people, granting them a certain degree of autonomy. However, as the Roman Empire gave way to the Christianized Byzantine Empire under Constantine, Jews began to face persecution by both the Church and the imperial authorities, and many migrated to communities in the diaspora.
By the fourth century CE, Jews are believed to have lost their demographic majority in Syria Palaestina. The long-established Jewish community of Mesopotamia, which had been living under Parthian and later Sasanian rule, beyond the confines of the Roman Empire, became an important center of Jewish study as Judea's Jewish population declined. Estimates often place the Babylonian Jewish community of the 3rd to 7th centuries at around one million, making it the largest Jewish diaspora community of that period. Under the political leadership of the exilarch, who was regarded as a royal heir of the House of David, this community had an autonomous status and served as a place of refuge for the Jews of Syria Palaestina. A number of significant Talmudic academies, such as the Nehardea, Pumbedita, and Sura academies, were established in Mesopotamia, and many important Amoraim were active there. The Babylonian Talmud, a centerpiece of Jewish religious law, was compiled in Babylonia in the 3rd to 6th centuries. Jewish diaspora communities are generally described to have coalesced into three major ethnic subdivisions according to where their ancestors settled: the Ashkenazim (initially in the Rhineland and France), the Sephardim (initially in the Iberian Peninsula), and the Mizrahim (Middle East and North Africa). Romaniote Jews, Tunisian Jews, Yemenite Jews, Egyptian Jews, Ethiopian Jews, Bukharan Jews, Mountain Jews, and other groups also predated the arrival of the Sephardic diaspora. During the same period, Jewish communities in the Middle East thrived under Islamic rule, especially in cities like Baghdad, Cairo, and Damascus. In Babylonia, from the 7th to 11th centuries the Pumbedita and Sura academies led Jewry in the Arab world and, to an extent, the entire Jewish world. The deans and students of these academies defined the Geonic period in Jewish history. Following this period were the Rishonim, who lived from the 11th to the 15th centuries. Like their European counterparts, Jews in the Middle East and North Africa also faced periods of persecution and discriminatory policies, with the Almohad Caliphate in North Africa and Iberia issuing forced conversion decrees, causing Jews such as Maimonides to seek safety in other regions. Despite experiencing repeated waves of persecution, Ashkenazi Jews in Western Europe worked in a variety of fields, making an impact on their communities' economy and societies. In Francia, for example, figures like Isaac Judaeus and Armentarius occupied prominent social and economic positions. Francia also witnessed the development of a sophisticated tradition of biblical commentary, as exemplified by Rashi and the tosafists. In 1144, the first documented blood libel occurred in Norwich, England, marking an escalation in the pattern of discrimination and violence that Jews had already been subjected to throughout medieval Europe. During the 12th and 13th centuries, Jews faced frequent antisemitic legislation, including laws prescribing distinctive dress, alongside segregation, repeated blood libels, pogroms, and massacres such as the Rhineland Massacres (1096). The Jews of the Holy Roman Empire were designated Servi camerae regis ("servants of the imperial chamber") by Frederick II, a status that afforded limited protection while simultaneously entangling them in the political struggles between the emperor and the German principalities and cities.
Persecution intensified during the Black Death in the mid-14th century, when Jews were accused of poisoning wells and many communities were destroyed. These pressures, combined with major expulsions such as that from England in 1290, gradually pushed Ashkenazi Jewish populations eastward into Poland, Lithuania, and Russia. One of the largest Jewish communities of the Middle Ages was in the Iberian Peninsula, which for a time contained the largest Jewish population in Europe. Iberian Jewry endured discrimination under the Visigoths but saw its fortunes improve under Umayyad rule and later the Taifa kingdoms. During this period, the Jews of Muslim Spain entered a "Golden Age" marked by achievements in Hebrew poetry and literature, religious scholarship, grammar, medicine and science, with leading figures including Hasdai ibn Shaprut, Judah Halevi, Moses ibn Ezra and Solomon ibn Gabirol. Jews also rose to high office, most notably Samuel ibn Naghrillah, a scholar and poet who served as grand vizier and military commander of Granada. The Golden Age ended with the rise of the radical Almoravid and Almohad dynasties, whose persecutions drove many Jews from Iberia (including Maimonides), together with the advancing Reconquista. In 1391, widespread pogroms swept across Spain, leaving thousands dead and forcing mass conversions. The Spanish Inquisition was later established to pursue, torture and execute conversos who continued to practice Judaism in secret, while public disputations were staged to discredit Judaism. In 1492, after the Reconquista, Isabella I of Castile and Ferdinand II of Aragon decreed the expulsion of all Jews who refused conversion, sending an estimated 200,000 into exile in Portugal, Italy, North Africa, and the Ottoman Empire. In 1497, Portugal's Jews, about 30,000, were formally ordered expelled but instead were forcibly converted to retain their economic role. In 1498, some 3,500 Jews were expelled from Navarre. Many converts outwardly adopted Christianity while secretly preserving Jewish practices, becoming crypto-Jews (also known as marranos or anusim), who remained targets of the various Inquisitions for centuries. Following the expulsions from Spain and Portugal in the 1490s, Jewish exiles dispersed across the Mediterranean, Europe, and North Africa. Many settled in the Ottoman Empire—which, replacing the Iberian Peninsula, became home to the world's largest Jewish population—where new communities developed in Anatolia, the Balkans, and the Land of Israel. Cities such as Istanbul and Thessaloniki grew into major Jewish centers, while in 16th-century Safed a flourishing spiritual life took shape. There, Solomon Alkabetz, Moses Cordovero, and Isaac Luria developed influential new schools of Kabbalah, giving powerful impetus to Jewish mysticism, and Joseph Karo composed the Shulchan Aruch, which became a cornerstone of Jewish law. In the 17th century, Portuguese conversos who returned to Judaism and engaged in trade and banking helped establish Amsterdam as a prosperous Jewish center, while also forming communities in cities such as Antwerp and London. This period also witnessed waves of messianic fervor, most notably the rise of the Sabbatean movement in the 1660s, led by Sabbatai Zvi of İzmir, which reverberated throughout the Jewish world. In Eastern Europe, Poland–Lithuania became the principal center of Ashkenazi Jewry, eventually becoming home to the largest Jewish population in the world. 
Jewish life flourished there in the early modern era, supported by relative stability, economic opportunity, and strong communal institutions. The mid-17th century brought devastation with the Cossack uprisings in Ukraine, which reversed migration flows and sent refugees westward, yet Poland–Lithuania remained the demographic and cultural heartland of Ashkenazic Jewry. Following the partitions of Poland, most of its Jews came under Russian rule and were confined to the "Pale of Settlement." The 18th century also witnessed new religious and intellectual currents. Hasidism, founded by the Baal Shem Tov, emphasized mysticism and piety, while its opponents, the Misnagdim ("opponents") led by the Vilna Gaon, defended rabbinic scholarship and tradition. In Western Europe, during the 1760s and 1770s, the Haskalah (Jewish Enlightenment) emerged in German-speaking lands, where figures such as Moses Mendelssohn promoted secular learning, vernacular literacy, and integration into European society. Elsewhere, Jews began to be re-admitted to Western Europe, including England, where Menasseh ben Israel petitioned Oliver Cromwell for their return. In the Americas, Jews of Sephardic descent first arrived as conversos in Spanish and Portuguese colonies, where many faced trial by Inquisition tribunals for "judaizing." A more durable presence began in Dutch Brazil, where Jews openly practiced their religion and established the first synagogues in the New World, before the Portuguese reconquest forced their dispersal to Amsterdam, the Caribbean, and North America. Sephardic communities took root in Curaçao, Suriname, Jamaica, and Barbados, later joined by Ashkenazi migrants. In North America, Jews were present from the mid-17th century, with New Amsterdam hosting the first organized congregation in 1654. By the time of the American Revolution, small communities in New York, Newport, Philadelphia, Savannah, and Charleston played an active role in the struggle for independence. In the late 19th century, Jews in Western Europe gradually achieved legal emancipation, though social acceptance remained limited by persistent antisemitism and rising nationalism. In Eastern Europe, particularly within the Russian Empire's Pale of Settlement, Jews faced mounting legal restrictions and recurring pogroms. From this environment emerged Zionism, a national revival movement originating in Central and Eastern Europe that sought to re-establish a Jewish polity in the Land of Israel as a means of returning the Jewish people to their ancestral homeland and ending centuries of exile and persecution. This led to waves of Jewish migration to Ottoman-controlled Palestine. Theodor Herzl, who is considered the father of political Zionism, offered his vision of a future Jewish state in his 1896 book Der Judenstaat (The Jewish State); a year later, he presided over the First Zionist Congress. The antisemitism that afflicted Jewish communities in Europe also triggered a mass exodus of 2.8 million Jews to the United States between 1881 and 1924. Despite this, some Jews of Europe and the United States were able to make great achievements in various fields of science and culture. Among the most influential from this period are Albert Einstein in physics, Sigmund Freud in psychology, Franz Kafka in literature, and Irving Berlin in music. Many Nobel Prize winners at this time were Jewish, as is still the case.
When Adolf Hitler and the Nazi Party came to power in Germany in 1933, the situation for Jews deteriorated rapidly as a direct result of Nazi policies. Many Jews fled from Europe to Mandatory Palestine, the United States, and the Soviet Union as a result of racial anti-Semitic laws, economic difficulties, and the fear of an impending war. World War II started in 1939, and by 1941, Hitler occupied almost all of Europe. Following the German invasion of the Soviet Union in 1941, the Final Solution—an extensive, organized effort with an unprecedented scope intended to annihilate the Jewish people—began, and resulted in the persecution and murder of Jews in Europe and North Africa. In Poland, three million were murdered in gas chambers in all concentration camps combined, with one million at the Auschwitz camp complex alone. The Holocaust is the name given to this genocide, in which six million Jews in total were systematically murdered. Before and during the Holocaust, enormous numbers of Jews immigrated to Mandatory Palestine. In 1944, the Jewish insurgency in Mandatory Palestine began with the aim of gaining full independence from the United Kingdom. On 14 May 1948, upon the termination of the mandate, David Ben-Gurion declared the creation of the State of Israel, a Jewish and democratic state. Immediately afterwards, all neighboring Arab states invaded, and were resisted by the newly formed Israel Defense Forces. In 1949, the war ended and Israel started building its state and absorbing waves of Aliyah, granting citizenship to Jews all over the world via the Law of Return passed in 1950. However, both the Israeli–Palestinian conflict and wider Arab–Israeli conflict continue to this day. Culture The Jewish people and the religion of Judaism are strongly interrelated. Converts to Judaism have a status within the Jewish people equal to those born into it. However, converts who go on to practice no Judaism are likely to be viewed with skepticism. Mainstream Judaism does not proselytize, and conversion is considered a difficult task. A significant portion of conversions are undertaken by children of mixed marriages, or would-be or current spouses of Jews. The Hebrew Bible, a religious interpretation of the traditions and early history of the Jews, established the first of the Abrahamic religions, which are now practiced by 54 percent of the world. Judaism guides its adherents in both practice and belief, and has been called not only a religion, but also a "way of life," which has made drawing a clear distinction between Judaism, Jewish culture, and Jewish identity rather difficult. Throughout history, in eras and places as diverse as the ancient Hellenic world, in Europe before and after The Age of Enlightenment (see Haskalah), in Islamic Spain and Portugal, in North Africa and the Middle East, India, China, or the contemporary United States and Israel, cultural phenomena have developed that are in some sense characteristically Jewish without being at all specifically religious. Some factors in this come from within Judaism, others from the interaction of Jews or specific communities of Jews with their surroundings, and still others from the inner social and cultural dynamics of the community, as opposed to from the religion itself. This phenomenon has led to considerably different Jewish cultures unique to their own communities. 
Hebrew is the liturgical language of Judaism (termed lashon ha-kodesh, "the holy tongue"), the language in which most of the Hebrew scriptures (Tanakh) were composed, and the daily speech of the Jewish people for centuries. By the 5th century BCE, Aramaic, a closely related tongue, joined Hebrew as the spoken language in Judea. By the 3rd century BCE, some Jews of the diaspora were speaking Greek. Others, such as in the Jewish communities of Asoristan, known to Jews as Babylonia, were speaking Hebrew and Aramaic, the languages of the Babylonian Talmud. Dialects of these same languages were also used by the Jews of Syria Palaestina at that time.[citation needed] For centuries, Jews worldwide have spoken the local or dominant languages of the regions they migrated to, often developing distinctive dialectal forms or branches that became independent languages. Yiddish is the Judaeo-German language developed by Ashkenazi Jews who migrated to Central Europe. Ladino is the Judaeo-Spanish language developed by Sephardic Jews who migrated to the Iberian Peninsula. Due to many factors, including the impact of the Holocaust on European Jewry, the Jewish exodus from Arab and Muslim countries, and widespread emigration from other Jewish communities around the world, ancient and distinct Jewish languages of several communities, including Judaeo-Georgian, Judaeo-Arabic, Judaeo-Berber, Krymchak, Judaeo-Malayalam and many others, have largely fallen out of use. For over sixteen centuries Hebrew was used almost exclusively as a liturgical language, and as the language in which most books had been written on Judaism, with a few speaking only Hebrew on the Sabbath. Hebrew was revived as a spoken language by Eliezer ben Yehuda, who arrived in Palestine in 1881. It had not been used as a mother tongue since Tannaic times. Modern Hebrew is designated as the "State language" of Israel. Despite efforts to revive Hebrew as the national language of the Jewish people, knowledge of the language is not commonly possessed by Jews worldwide and English has emerged as the lingua franca of the Jewish diaspora. Although many Jews once had sufficient knowledge of Hebrew to study the classic literature, and Jewish languages like Yiddish and Ladino were commonly used as recently as the early 20th century, most Jews lack such knowledge today and English has by and large superseded most Jewish vernaculars. The three most commonly spoken languages among Jews today are Hebrew, English, and Russian. Some Romance languages, particularly French and Spanish, are also widely used. Yiddish has been spoken by more Jews in history than any other language, but it is far less used today following the Holocaust and the adoption of Modern Hebrew by the Zionist movement and the State of Israel. In some places, the mother language of the Jewish community differs from that of the general population or the dominant group. For example, in Quebec, the Ashkenazic majority has adopted English, while the Sephardic minority uses French as its primary language. Similarly, South African Jews adopted English rather than Afrikaans. Due to both Czarist and Soviet policies, Russian has superseded Yiddish as the language of Russian Jews, but these policies have also affected neighboring communities. Today, Russian is the first language for many Jewish communities in a number of Post-Soviet states, such as Ukraine and Uzbekistan,[better source needed] as well as for Ashkenazic Jews in Azerbaijan, Georgia, and Tajikistan. 
Although communities in North Africa today are small and dwindling, Jews there had shifted from a multilingual group to a monolingual one (or nearly so), speaking French in Algeria, Morocco, and the city of Tunis, while most North Africans continue to use Arabic or Berber as their mother tongue.[citation needed] There is no single governing body for the Jewish community, nor a single authority with responsibility for religious doctrine. Instead, a variety of secular and religious institutions at the local, national, and international levels lead various parts of the Jewish community on a variety of issues. Today, many countries have a Chief Rabbi who serves as a representative of that country's Jewry. Although many Hasidic Jews follow a certain hereditary Hasidic dynasty, there is no one commonly accepted leader of all Hasidic Jews. Many Jews believe that the Messiah will act as a unifying leader for Jews and the entire world. A number of modern scholars of nationalism support the existence of Jewish national identity in antiquity. One of them is David Goodblatt, who generally believes in the existence of nationalism before the modern period. In his view, the Bible, the parabiblical literature and the Jewish national history provide the base for a Jewish collective identity. Although many of the ancient Jews were illiterate (as were their neighbors), their national narrative was reinforced through public readings. The Hebrew language also constructed and preserved national identity. Although it was not widely spoken after the 5th century BCE, Goodblatt states: "the mere presence of the language in spoken or written form could invoke the concept of a Jewish national identity. Even if one knew no Hebrew or was illiterate, one could recognize that a group of signs was in Hebrew script. ... It was the language of the Israelite ancestors, the national literature, and the national religion. As such it was inseparable from the national identity. Indeed its mere presence in visual or aural medium could invoke that identity." Anthony D. Smith, an historical sociologist considered one of the founders of the field of nationalism studies, wrote that the Jews of the late Second Temple period provide "a closer approximation to the ideal type of the nation [...] than perhaps anywhere else in the ancient world." He adds that this observation "must make us wary of pronouncing too readily against the possibility of the nation, and even a form of religious nationalism, before the onset of modernity." Agreeing with Smith, Goodblatt suggests omitting the qualifier "religious" from Smith's definition of ancient Jewish nationalism, noting that, according to Smith, a religious component in national memories and culture is common even in the modern era. This view is echoed by political scientist Tom Garvin, who writes that "something strangely like modern nationalism is documented for many peoples in medieval times and in classical times as well," citing the ancient Jews as one of several "obvious examples", alongside the classical Greeks and the Gaulish and British Celts. Fergus Millar suggests that the sources of Jewish national identity and their early nationalist movements in the first and second centuries CE included several key elements: the Bible as both a national history and legal source, the Hebrew language as a national language, a system of law, and social institutions such as schools, synagogues, and Sabbath worship.
Adrian Hastings argued that the Jews are the "true proto-nation" which, through the model of ancient Israel found in the Hebrew Bible, provided the world with the original concept of nationhood that later influenced Christian nations. However, following Jerusalem's destruction in the first century CE, Jews ceased to be a political entity and did not resemble a traditional nation-state for almost two millennia. Despite this, they maintained their national identity through collective memory, religion and sacred texts, even without land or political power, and remained a nation rather than just an ethnic group, eventually leading to the rise of Zionism and the establishment of Israel. Steven Weitzman suggests that Jewish nationalist sentiment in antiquity was encouraged because under foreign rule (Persians, Greeks, Romans) Jews were able to claim that they were an ancient nation. This claim was based on the preservation and reverence of their scriptures, the Hebrew language, the Temple and priesthood, and other traditions of their ancestors. Doron Mendels further observes that the Hasmonean kingdom, one of the few examples of indigenous statehood at its time, significantly reinforced Jewish national consciousness. The memory of this period of independence contributed to the persistent efforts to revive Jewish sovereignty in Judea, leading to the major revolts against Roman rule in the 1st and 2nd centuries CE. Demographics Within the world's Jewish population there are distinct ethnic divisions, most of which are primarily the result of geographic branching from an originating Israelite population, and subsequent independent evolutions. An array of Jewish communities was established by Jewish settlers in various places around the Old World, often at great distances from one another, resulting in effective and often long-term isolation. During the millennia of the Jewish diaspora the communities would develop under the influence of their local environments: political, cultural, natural, and populational. Today, manifestations of these differences among the Jews can be observed in Jewish cultural expressions of each community, including Jewish linguistic diversity, culinary preferences, liturgical practices, religious interpretations, as well as degrees and sources of genetic admixture. Jews are often identified as belonging to one of two major groups: the Ashkenazim and the Sephardim. Ashkenazim are so named in reference to their geographical origins (their ancestors' culture coalesced in the Rhineland, an area historically referred to by Jews as Ashkenaz). Similarly, Sephardim (Sefarad meaning "Spain" in Hebrew) are named in reference to their origins in Iberia. The diverse groups of Jews of the Middle East and North Africa are often collectively referred to as Sephardim together with Sephardim proper for liturgical reasons having to do with their prayer rites. A common term for many of these non-Spanish Jews who are sometimes still broadly grouped as Sephardim is Mizrahim (lit. 'easterners' in Hebrew). Nevertheless, Mizrahim and Sephardim are usually ethnically distinct. Smaller groups include, but are not restricted to, Indian Jews such as the Bene Israel, Bnei Menashe, Cochin Jews, and Bene Ephraim; the Romaniotes of Greece; the Italian Jews ("Italkim" or "Bené Roma"); the Teimanim from Yemen; various African Jews, including most numerously the Beta Israel of Ethiopia; and Chinese Jews, most notably the Kaifeng Jews, as well as various other distinct but now almost extinct communities.
The divisions between all these groups are approximate and their boundaries are not always clear. The Mizrahim, for example, are a heterogeneous collection of North African, Central Asian, Caucasian, and Middle Eastern Jewish communities that are no more closely related to each other than they are to any of the earlier-mentioned Jewish groups. In modern usage, however, the Mizrahim are sometimes termed Sephardi due to similar styles of liturgy, despite independent development from Sephardim proper. Thus, among Mizrahim there are Egyptian Jews, Iraqi Jews, Lebanese Jews, Kurdish Jews, Moroccan Jews, Libyan Jews, Syrian Jews, Bukharian Jews, Mountain Jews, Georgian Jews, Iranian Jews, Afghan Jews, and various others. The Teimanim from Yemen are sometimes included, although their style of liturgy is unique and the admixture found among them differs from that found in Mizrahim. In addition, there is a differentiation made between Sephardi migrants who established themselves in the Middle East and North Africa after the expulsion of the Jews from Spain and Portugal in the 1490s and the pre-existing Jewish communities in those regions. Ashkenazi Jews represent the bulk of modern Jewry, with at least 70 percent of Jews worldwide (and up to 90 percent prior to World War II and the Holocaust). As a result of their emigration from Europe, Ashkenazim also represent the overwhelming majority of Jews in the New World continents, in countries such as the United States, Canada, Argentina, Australia, and Brazil. In France, the immigration of Jews from Algeria (Sephardim) has led them to outnumber the Ashkenazim. Only in Israel is the Jewish population representative of all groups, a melting pot independent of each group's proportion within the overall world Jewish population. Y DNA studies tend to imply a small number of founders in an old population whose members parted and followed different migration paths. In most Jewish populations, these male line ancestors appear to have been mainly Middle Eastern. For example, Ashkenazi Jews share more common paternal lineages with other Jewish and Middle Eastern groups than with non-Jewish populations in areas where Jews lived in Eastern Europe, Germany, and the French Rhine Valley. This is consistent with Jewish traditions in placing most Jewish paternal origins in the region of the Middle East. Conversely, the maternal lineages of Jewish populations, studied by looking at mitochondrial DNA, are generally more heterogeneous. Scholars such as Harry Ostrer and Raphael Falk believe this indicates that many Jewish males found new mates from European and other communities in the places where they migrated in the diaspora after fleeing ancient Israel. In contrast, Behar has found evidence that about 40 percent of Ashkenazi Jews originate maternally from just four female founders, who were of Middle Eastern origin. The populations of Sephardi and Mizrahi Jewish communities "showed no evidence for a narrow founder effect." Subsequent studies carried out by Feder et al. confirmed the large portion of non-local maternal origin among Ashkenazi Jews. Reflecting on their findings related to the maternal origin of Ashkenazi Jews, the authors conclude: "Clearly, the differences between Jews and non-Jews are far larger than those observed among the Jewish communities. Hence, differences between the Jewish communities can be overlooked when non-Jews are included in the comparisons."
However, a 2025 genetic study on the Ashkenazi Jewish founder population supports the presence of a substantial Near Eastern component in the maternal lineages. Analyses of mitochondrial DNA (mtDNA) indicate that the core founder lineages, estimated at around 54, likely originated from the Near East, with these founder signatures appearing in multiple copies across the population. While later admixture introduced additional mtDNA lineages, these absorbed lineages are distinguishable from the original founders. The findings are consistent with genome-wide Identity-by-Descent and Lineage Extinction analyses, reinforcing the Near Eastern origin of the Ashkenazi maternal founders. A study showed that 7% of Ashkenazi Jews have the haplogroup G2c, which is mainly found among Pashtuns and, at lower frequencies, among all major Jewish groups, Palestinians, Syrians, and Lebanese. Studies of autosomal DNA, which look at the entire DNA mixture, have become increasingly important as the technology develops. They show that Jewish populations have tended to form relatively closely related groups in independent communities, with most in a community sharing significant ancestry in common. For Jewish populations of the diaspora, the genetic composition of Ashkenazi, Sephardic, and Mizrahi Jewish populations show a predominant amount of shared Middle Eastern ancestry. According to Behar, the most parsimonious explanation for this shared Middle Eastern ancestry is that it is "consistent with the historical formulation of the Jewish people as descending from ancient Hebrew and Israelite residents of the Levant" and "the dispersion of the people of ancient Israel throughout the Old World". Jews of North African, Italian, and Iberian origin show variable frequencies of admixture with non-Jewish historical host populations among the maternal lines. In the case of Ashkenazi and Sephardi Jews (in particular Moroccan Jews), who are closely related, the source of non-Jewish admixture is mainly Southern European, while Mizrahi Jews show evidence of admixture with other Middle Eastern populations. Behar et al. have remarked on a close relationship between Ashkenazi Jews and modern Italians. A 2001 study found that Jews were more closely related to groups of the Fertile Crescent (Kurds, Turks, and Armenians) than to their Arab neighbors, whose genetic signature was found in geographic patterns reflective of Islamic conquests. The studies also show that Sephardic Bnei Anusim (descendants of the "anusim" who were forced to convert to Catholicism), which comprise up to 19.8 percent of the population of today's Iberia (Spain and Portugal) and at least 10 percent of the population of Ibero-America (Hispanic America and Brazil), have Sephardic Jewish ancestry within the last few centuries. The Bene Israel and Cochin Jews of India, Beta Israel of Ethiopia, and a portion of the Lemba people of Southern Africa, despite more closely resembling the local populations of their native countries, have also been thought to have some more remote ancient Jewish ancestry. Views on the Lemba have changed and genetic Y-DNA analyses in the 2000s have established a partially Middle-Eastern origin for a portion of the male Lemba population but have been unable to narrow this down further. Although historically, Jews have been found all over the world, in the decades since World War II and the establishment of Israel, they have increasingly concentrated in a small number of countries.
In 2021, Israel and the United States together accounted for over 85 percent of the global Jewish population, with approximately 45.3% and 39.6% of the world's Jews, respectively. More than half (51.2%) of world Jewry resides in just ten metropolitan areas. As of 2021, these ten areas were Tel Aviv, New York, Jerusalem, Haifa, Los Angeles, Miami, Philadelphia, Paris, Washington, and Chicago. The Tel Aviv metro area has the highest percentage of Jews among the total population (94.8%), followed by Haifa (73.1%), Jerusalem (72.3%), and Beersheba (60.4%), the balance mostly being Israeli Arabs. Outside Israel, the highest percentage of Jews in a metropolitan area was in New York (10.8%), followed by Miami (8.7%), Philadelphia (6.8%), San Francisco (5.1%), Washington (4.7%), Los Angeles (4.7%), Toronto (4.5%), and Baltimore (4.1%). As of 2010, there were nearly 14 million Jews around the world, roughly 0.2% of the world's population at the time. According to the 2007 estimates of The Jewish People Policy Planning Institute, the world's Jewish population was 13.2 million. This statistic incorporates both practicing Jews affiliated with synagogues and the Jewish community, and approximately 4.5 million unaffiliated and secular Jews. According to Sergio Della Pergola, a demographer of the Jewish population, in 2021 there were about 6.8 million Jews in Israel, 6 million in the United States, and 2.3 million in the rest of the world. Israel, the Jewish nation-state, is the only country in which Jews make up a majority of the citizens. Israel was established as an independent democratic and Jewish state on 14 May 1948. As of 2016, 14 of the 120 members of its parliament, the Knesset, were Arab citizens of Israel (not including the Druze), most representing Arab political parties. One of Israel's Supreme Court judges is also an Arab citizen of Israel. Between 1948 and 1958, the Jewish population rose from 800,000 to two million. Currently, Jews account for 75.4 percent of the Israeli population, or 6 million people. The early years of the State of Israel were marked by the mass immigration of Holocaust survivors in the aftermath of the Holocaust and of Jews fleeing Arab lands. Israel also has a large population of Ethiopian Jews, many of whom were airlifted to Israel in the late 1980s and early 1990s. Between 1974 and 1979, some 227,258 immigrants arrived in Israel, about half of them from the Soviet Union. This period also saw an increase in immigration to Israel from Western Europe, Latin America, and North America. A trickle of immigrants from other communities has also arrived, including Indian Jews and others, as well as some descendants of Ashkenazi Holocaust survivors who had settled in countries such as the United States, Argentina, Australia, Chile, and South Africa. Some Jews have emigrated from Israel elsewhere because of economic problems or disillusionment with political conditions and the continuing Arab–Israeli conflict. Jewish Israeli emigrants are known as yordim.
The waves of immigration to the United States and elsewhere at the turn of the 20th century, the founding of Zionism, and later events, including pogroms in Imperial Russia (mostly within the Pale of Settlement in present-day Ukraine, Moldova, Belarus, and eastern Poland), the massacre of European Jewry during the Holocaust, and the founding of the state of Israel, with the subsequent Jewish exodus from Arab lands, all resulted in substantial shifts in the population centers of world Jewry by the end of the 20th century. More than half of the Jews live in the Diaspora (see Population table). Currently, the largest Jewish community outside Israel, and either the largest or second-largest Jewish community in the world, is located in the United States, with 6 million to 7.5 million Jews by various estimates. Elsewhere in the Americas, there are also large Jewish populations in Canada (315,000), Argentina (180,000–300,000), and Brazil (196,000–600,000), and smaller populations in Mexico, Uruguay, Venezuela, Chile, Colombia, and several other countries (see History of the Jews in Latin America). According to a 2010 Pew Research Center study, about 470,000 people of Jewish heritage live in Latin America and the Caribbean. Demographers disagree on whether the United States has a larger Jewish population than Israel, with many maintaining that Israel surpassed the United States in Jewish population during the 2000s, while others maintain that the United States still has the largest Jewish population in the world. Currently, a major national Jewish population survey is planned to ascertain whether or not Israel has overtaken the United States in Jewish population. Western Europe's largest Jewish community, and the third-largest Jewish community in the world, can be found in France, home to between 483,000 and 500,000 Jews, the majority of whom are immigrants or refugees from North African countries such as Algeria, Morocco, and Tunisia (or their descendants). The United Kingdom has a Jewish community of 292,000. In Eastern Europe, exact figures are difficult to establish. The number of Jews in Russia varies widely according to whether a source uses census data (which requires a person to choose a single nationality among choices that include "Russian" and "Jewish") or eligibility for immigration to Israel (which requires that a person have one or more Jewish grandparents). According to the latter criterion, the heads of the Russian Jewish community assert that up to 1.5 million Russians are eligible for aliyah. In Germany, the 102,000 Jews registered with the Jewish community are a slowly declining population, despite the immigration of tens of thousands of Jews from the former Soviet Union since the fall of the Berlin Wall. Thousands of Israelis also live in Germany, either permanently or temporarily, for economic reasons. Prior to 1948, approximately 800,000 Jews were living in lands which now make up the Arab world (excluding Israel). Of these, just under two-thirds lived in the French-controlled Maghreb region, 15 to 20 percent in the Kingdom of Iraq, approximately 10 percent in the Kingdom of Egypt, and approximately 7 percent in the Kingdom of Yemen. A further 200,000 lived in Pahlavi Iran and the Republic of Turkey. Today, around 26,000 Jews live in Muslim-majority countries, mainly in Turkey (14,200) and Iran (9,100), while Morocco (2,000), Tunisia (1,000), and the United Arab Emirates (500) host the largest communities in the Arab world.
A small-scale exodus had begun in many countries in the early decades of the 20th century, although the only substantial aliyah came from Yemen and Syria. The exodus from Arab and Muslim countries took place primarily from 1948 onward. The first large-scale exoduses took place in the late 1940s and early 1950s, primarily from Iraq, Yemen, and Libya, with up to 90 percent of these communities leaving within a few years. The peak of the exodus from Egypt occurred in 1956. The exodus from the Maghreb countries peaked in the 1960s. Lebanon was the only Arab country to see a temporary increase in its Jewish population during this period, due to an influx of refugees from other Arab countries, although by the mid-1970s the Jewish community of Lebanon had also dwindled. In the aftermath of the exodus wave from Arab states, an additional migration of Iranian Jews peaked in the 1980s, when around 80 percent of Iranian Jews left the country. Outside Europe, the Americas, the Middle East, and the rest of Asia, there are significant Jewish populations in Australia (112,500) and South Africa (70,000). There is also a 6,800-strong community in New Zealand. Since at least the time of the Ancient Greeks, a proportion of Jews have assimilated into the wider non-Jewish society around them, by either choice or force, ceasing to practice Judaism and losing their Jewish identity. Assimilation took place in all areas and during all time periods, with some Jewish communities, for example the Kaifeng Jews of China, disappearing entirely. The advent of the Jewish Enlightenment of the 18th century (see Haskalah) and the subsequent emancipation of the Jewish populations of Europe and America in the 19th century accelerated the trend, encouraging Jews to increasingly participate in, and become part of, secular society. The result has been a growing trend of assimilation, as Jews marry non-Jewish spouses and stop participating in the Jewish community. Rates of interreligious marriage vary widely: in the United States, the rate is just under 50 percent; in the United Kingdom, around 53 percent; in France, around 30 percent; and in Australia and Mexico, as low as 10 percent. In the United States, only about a third of children from intermarriages affiliate with Jewish religious practice. The result is that most countries in the Diaspora have steady or slightly declining religiously Jewish populations as Jews continue to assimilate into the countries in which they live. The Jewish people and Judaism have experienced various persecutions throughout their history. During Late Antiquity and the Early Middle Ages, the Roman Empire (in its later phases known as the Byzantine Empire) repeatedly repressed the Jewish population, first by ejecting Jews from their homelands during the pagan Roman era and later by officially establishing them as second-class citizens during the Christian Roman era. According to James Carroll, "Jews accounted for 10% of the total population of the Roman Empire. By that ratio, if other factors had not intervened, there would be 200 million Jews in the world today, instead of something like 13 million." Later, in medieval Western Europe, further persecutions of Jews by Christians occurred, notably during the Crusades (when Jews all over Germany were massacred) and in a series of expulsions from the Kingdom of England, Germany, and France.
The largest expulsion of all occurred when Spain and Portugal, after the Reconquista (the Catholic reconquest of the Iberian Peninsula), expelled both unbaptized Sephardic Jews and the ruling Muslim Moors. In the Papal States, which existed until 1870, Jews were required to live only in specified neighborhoods called ghettos. Islam and Judaism have a complex relationship. Traditionally, Jews and Christians living in Muslim lands, known as dhimmis, were allowed to practice their religions and administer their internal affairs, but they were subject to certain conditions. They had to pay the jizya (a per capita tax imposed on free adult non-Muslim males) to the Islamic state. Dhimmis had an inferior status under Islamic rule. They had several social and legal disabilities, such as prohibitions against bearing arms or giving testimony in courts in cases involving Muslims. Many of the disabilities were highly symbolic. The one described by Bernard Lewis as "most degrading" was the requirement of distinctive clothing, not found in the Quran or hadith but invented in early medieval Baghdad; its enforcement was highly erratic. On the other hand, Jews rarely faced martyrdom, exile, or forced conversion, and they were mostly free in their choice of residence and profession. Notable exceptions include the massacre of Jews and forcible conversion of some Jews by the rulers of the Almohad dynasty in Al-Andalus in the 12th century, as well as in Islamic Persia, and the forced confinement of Moroccan Jews to walled quarters known as mellahs beginning in the 15th century and especially in the early 19th century. In modern times, it has become commonplace for standard antisemitic themes to be conflated with anti-Zionism in the publications and pronouncements of Islamic movements such as Hezbollah and Hamas, in the pronouncements of various agencies of the Islamic Republic of Iran, and even in the newspapers and other publications of the Turkish Refah Partisi. Throughout history, many rulers, empires, and nations have oppressed their Jewish populations or sought to eliminate them entirely. Methods employed ranged from expulsion to outright genocide; within nations, often the threat of these extreme methods was sufficient to silence dissent. The history of antisemitism includes the First Crusade, which resulted in the massacre of Jews; the Spanish Inquisition (led by Tomás de Torquemada) and the Portuguese Inquisition, with their persecution and autos-da-fé against the New Christians and Marrano Jews; the Bohdan Chmielnicki Cossack massacres in Ukraine; the pogroms backed by the Russian tsars; as well as expulsions from Spain, Portugal, England, France, Germany, and other countries in which the Jews had settled. According to a 2008 study published in the American Journal of Human Genetics, 19.8 percent of the modern Iberian population has Sephardic Jewish ancestry, indicating that the number of conversos may have been much higher than originally thought. The persecution reached a peak in Nazi Germany's Final Solution, which led to the Holocaust and the slaughter of approximately 6 million Jews. Of the world's 16 million Jews in 1939, almost 40% were murdered in the Holocaust.
The Holocaust, the state-led systematic persecution and genocide of European Jews (and certain communities of North African Jews in European-controlled North Africa) and other minority groups of Europe during World War II by Germany and its collaborators, remains the most notable modern-day persecution of Jews. The persecution and genocide were accomplished in stages. Legislation to remove the Jews from civil society was enacted years before the outbreak of World War II. Concentration camps were established in which inmates were used as slave labour until they died of exhaustion or disease. Where the Third Reich conquered new territory in Eastern Europe, specialized units called Einsatzgruppen murdered Jews and political opponents in mass shootings. Jews and Roma were crammed into ghettos before being transported hundreds of kilometres by freight train to extermination camps where, if they survived the journey, the majority of them were murdered in gas chambers. Virtually every arm of Germany's bureaucracy was involved in the logistics of the mass murder, turning the country into what one Holocaust scholar has called "a genocidal nation." Throughout Jewish history, Jews have repeatedly been directly or indirectly expelled from both their original homeland, the Land of Israel, and many of the areas in which they have settled. This experience as refugees has shaped Jewish identity and religious practice in many ways, and is thus a major element of Jewish history. In summary, the pogroms in Eastern Europe, the rise of modern antisemitism, the Holocaust, and the rise of Arab nationalism all served to fuel the movements and migrations of huge segments of Jewry from land to land and continent to continent, until they arrived back in large numbers at their original historical homeland in Israel. In the Bible, the patriarch Abraham is described as a migrant to the land of Canaan from Ur of the Chaldees. His descendants, the Children of Israel, undertook the Exodus (meaning "departure" or "exit" in Greek) from ancient Egypt, as described in the Book of Exodus. The first movement documented in the historical record occurred with the resettlement policy of the Neo-Assyrian Empire, which mandated the deportation of conquered peoples; it is estimated that some 4,500,000 among its captive populations suffered this dislocation over three centuries of Assyrian rule. With regard to Israel, Tiglath-Pileser III claims he deported 80% of the population of Lower Galilee, some 13,520 people. Some 27,000 Israelites, 20 to 25% of the population of the Kingdom of Israel, were described as being deported by Sargon II and replaced by other deported populations; they were sent into permanent exile by Assyria, initially to the Upper Mesopotamian provinces of the Assyrian Empire. Between 10,000 and 80,000 people from the Kingdom of Judah were similarly exiled by Babylonia, but these people were later returned to Judea by Cyrus the Great of the Persian Achaemenid Empire. Many Jews were exiled again by the Roman Empire. The 2,000-year dispersion of the Jewish diaspora began under the Roman Empire, as Jews spread throughout the Roman world and, driven from land to land, settled wherever they could live freely enough to practice their religion. Over the course of the diaspora, the center of Jewish life moved from Babylonia to the Iberian Peninsula to Poland to the United States and, as a result of Zionism, back to Israel.
There were also many expulsions of Jews during the Middle Ages and the Enlightenment in Europe: in 1290, 16,000 Jews were expelled from England (see the Statute of Jewry); in 1396, 100,000 from France; and in 1421, thousands from Austria. Many of these Jews settled in East-Central Europe, especially Poland. In 1492, following the establishment of the Spanish Inquisition, the Spanish population of around 200,000 Sephardic Jews was expelled by the Spanish crown and Catholic Church, followed by expulsions in 1493 in Sicily (37,000 Jews) and in Portugal in 1496. The expelled Jews fled mainly to the Ottoman Empire, the Netherlands, and North Africa, with others migrating to Southern Europe and the Middle East. During the 19th century, France's policies of equal citizenship regardless of religion led to the immigration of Jews (especially from Eastern and Central Europe). Emigration from those same regions also contributed to the arrival of millions of Jews in the New World; over two million Eastern European Jews arrived in the United States from 1880 to 1925. In the latest phase of migrations, the Islamic Revolution of Iran caused many Iranian Jews to flee Iran. Most found refuge in the US (particularly Los Angeles, California, and Long Island, New York) and Israel. Smaller communities of Persian Jews exist in Canada and Western Europe. Similarly, when the Soviet Union collapsed, many of the Jews in the affected territory (who had been refuseniks) were suddenly allowed to leave. This produced a wave of migration to Israel in the early 1990s. Israel is the only country with a Jewish population that is consistently growing through natural population growth, although the Jewish populations of other countries, in Europe and North America, have recently increased through immigration. In the Diaspora, the Jewish population in almost every country is either declining or steady, but Orthodox and Haredi Jewish communities, whose members often shun birth control for religious reasons, have experienced rapid population growth. Orthodox and Conservative Judaism discourage proselytism of non-Jews, but many Jewish groups have tried to reach out to the assimilated Jewish communities of the Diaspora so that they may reconnect with their Jewish roots. Additionally, while in principle Reform Judaism favours seeking new members for the faith, this position has not translated into active proselytism, instead taking the form of an effort to reach out to non-Jewish spouses of intermarried couples. There is also a trend of Orthodox movements reaching out to secular Jews in order to give them a stronger Jewish identity, so that there is less chance of intermarriage. As a result of the efforts by these and other Jewish groups over the past 25 years, there has been a trend (known as the Baal teshuva movement) for secular Jews to become more religiously observant, though the demographic implications of the trend are unknown. Additionally, there is a growing rate of conversion by "Jews by choice", gentiles who decide to become Jews. Contributions Jewish individuals have played a significant role in the development and growth of Western culture, advancing many fields of thought, science, and technology, both historically and in modern times, including through discrete trends in Jewish philosophy, Jewish ethics, and Jewish literature, as well as specific trends in Jewish culture, including Jewish art, Jewish music, Jewish humor, Jewish theatre, Jewish cuisine, and Jewish medicine.
Jews have established various Jewish political movements and religious movements, and, through the authorship of the Hebrew Bible and parts of the New Testament, provided the foundation for Christianity and Islam. More than 20 percent of Nobel Prizes awarded have gone to individuals of Jewish descent. Philanthropic giving is a core function of many Jewish organizations.
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Ministry_of_Education_(Israel)#cite_ref-5] | [TOKENS: 543] |
Ministry of Education (Israel) The Ministry of Education (Hebrew: מִשְׂרָד הַחִנּוּךְ, translit. Misrad HaHinukh; Arabic: وزارة التربية والتعليم) is the branch of the Israeli government charged with overseeing public education institutions in Israel. The department is headed by the Minister of Education, who is a member of the cabinet. The ministry previously included culture and sport, although these areas are now covered by the Ministry of Culture and Sport. History In the first decade of statehood, the education system was faced with the task of establishing a network of kindergartens and schools for a rapidly growing student population. In 1949, there were 80,000 elementary school students; by 1950, there were 120,000, an increase of 50 percent within the span of one year. Israel also took over responsibility for the education of Arab schoolchildren. The first minister of education was Zalman Shazar, later president of the State of Israel. Since 2002, the Ministry of Education has awarded a National Education Award to the five top localities, recognizing excellence in investing substantial resources in the educational system. In 2012, first place was awarded to the Shomron Regional Council, followed by Or Yehuda, Tiberias, Eilat, and Beersheba. The prize has been awarded to a variety of educational institutions, including kindergartens and elementary schools. In 2013–2014, the Ministry of Education promoted the regulation of the activities of external parties within state schools, in a dialogue between the Ministry, local government, parents' representatives, the business sector, and philanthropic parties, as part of what was called "the intersectoral round table in the Ministry of Education". As part of this regulation, the Ministry compiled a database of external programs that have some kind of partnership with a representative from the Ministry of Education's headquarters. In 2019, a petition was filed by pluralist Jewish organizations against the Ministry of Education over a procedure that reduced, by tens of thousands of shekels, the support for the activities of these organizations in schools. In April 2021, the High Court invalidated the procedure in question, and emphasized the importance of implementing the principles of the Shenhar Committee report on the teaching of Judaism in state education. In November 2021, it was announced that the Ministry of Education was not implementing the High Court ruling and that the damage to those organizations continued.
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Pentium_(original)] | [TOKENS: 3551] |
Pentium (original) The Pentium (also referred to as the i586 or P5 Pentium) is a microprocessor introduced by Intel on March 22, 1993. It is the first CPU using the Pentium brand. Considered the fifth generation in the x86 (8086) compatible line of processors, succeeding the i486, it used an implementation and microarchitecture internally called P5. Like the Intel i486, the Pentium is instruction set compatible with the 32-bit i386. It uses a very similar microarchitecture to the i486, but was extended enough to implement a dual integer pipeline design, as well as a more advanced floating-point unit (FPU) that was noted to be ten times faster than its predecessor. The Pentium was succeeded by the Pentium Pro in November 1995. In October 1996, the Pentium MMX was introduced, complementing the same basic microarchitecture of the original Pentium with the MMX instruction set, larger caches, and some other enhancements. Intel discontinued the original Pentium (P5) processors, which had been sold as a lower-cost option after the Pentium II's release in 1997. The Pentium line was gradually replaced by the Celeron processor, which also took over the role of the 80486 brand. New models continued to be introduced until July 1999. Intel officially declared end-of-life and discontinued original Pentium processors on December 31, 2001, the same day support for Windows 95 and earlier versions of Windows ended. Overview The P5 Pentium is the first superscalar x86 processor, meaning it was often able to execute two instructions at the same time. Some techniques used to implement this were based on the earlier superscalar Intel i960 CA (1989), while other details were invented exclusively for the P5 design. Large parts were also copied from the i386 or i486, especially the strategies used to cope with the complicated x86 encodings in a pipelined fashion. Just like the i486, the Pentium used both an optimized microcode system and RISC-like techniques, depending on the particular instruction or part of an instruction. Certain academics and RISC competitors had argued that a dual integer pipeline design was impossible to implement for a CISC instruction set. Other central features include a redesigned and significantly faster floating-point unit, a wide 64-bit burst-mode data bus (external as well as internal), separate code and data caches, and many other techniques and features to enhance performance. It contains 256-bit internal data buses and write-back caches. It also includes System Management Mode, which Intel had introduced with its SL architecture. The 66 MHz Pentium processor operates at 112 Dhrystone (version 1.1) MIPS and has a SPECint92 rating of 64.5, a SPECfp92 rating of 56.9, and an iCOMP index rating of 567. The performance difference between the 60 and 66 MHz versions is about 10%. The P5 also has better support for multiprocessing compared to the i486, and is the first x86 CPU with hardware support for it, similar to IBM mainframe computers. Intel worked with IBM to define this ability and designed it into the P5 microarchitecture; it was absent in prior x86 generations and in x86 processors from competitors. In order to employ the dual pipelines to their full potential, certain compilers were optimized to better exploit instruction-level parallelism, although not all applications gained substantially from being recompiled. The faster FPU, however, always enhanced floating-point performance significantly compared to the i486 or i387.
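To make "exploiting instruction-level parallelism" concrete, here is a minimal C sketch of the kind of source-level rearrangement such compilers performed internally. The function names are invented for this example, and the U/V-pipe annotations are a simplification of the Pentium's actual pairing rules.

```c
#include <stdio.h>

/* One long dependency chain: each addition needs the previous
 * result, so the second (V) pipe has no independent work. */
static int sum_serial(const int *a, int n) {
    int s = 0;
    for (int i = 0; i < n; i++)
        s += a[i];              /* s depends on the previous s */
    return s;
}

/* Two independent accumulators: adjacent additions carry no data
 * dependency, so a dual-pipeline CPU can retire both per cycle. */
static int sum_unrolled(const int *a, int n) {
    int s0 = 0, s1 = 0, i;
    for (i = 0; i + 1 < n; i += 2) {
        s0 += a[i];             /* independent: could issue in the U pipe */
        s1 += a[i + 1];         /* independent: could issue in the V pipe */
    }
    if (i < n)
        s0 += a[i];             /* leftover element when n is odd */
    return s0 + s1;
}

int main(void) {
    int data[7] = { 1, 2, 3, 4, 5, 6, 7 };
    printf("serial:   %d\n", sum_serial(data, 7));   /* 28 */
    printf("unrolled: %d\n", sum_unrolled(data, 7)); /* 28 */
    return 0;
}
```

Both functions compute the same sum; the second merely rearranges the work so that neighbouring operations are independent, which is what a dual-issue in-order design like the P5 needs to keep both pipes busy.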
Intel spent resources working with development tool vendors, ISVs, and operating system (OS) companies to optimize their products. Competitors included the superscalar PowerPC 601 (1993), SuperSPARC (1992), DEC Alpha 21064 (1992), AMD 29050 (1990), Motorola MC88110 (1991), and Motorola 68060 (1994), most of which also used a superscalar in-order dual instruction pipeline configuration, and the non-superscalar Motorola 68040 (1990) and MIPS R4000 (1991). The name "Pentium" is originally derived from the Greek word pente (πέντε), meaning "five", a reference to the prior numeric naming convention of Intel's 80x86 processors (8086–80486), with the Latin ending -ium, since the processor would otherwise have been named 80586 under that convention. Development The P5 microarchitecture was designed by the same Santa Clara team which designed the 386 and 486. Design work started in June 1989; the team decided to use a superscalar RISC architecture which would be a convergence of RISC and CISC technology, with on-chip cache, floating point, and branch prediction. Vinod Dham, then vice president of the Microprocessor Product Group and general manager of Microprocessor Division 5/7, conceived of bringing this RISC technology into the existing x86 architecture so that it could compete in other markets. The performance target was to boost FPU performance three to five times over the existing Intel486 CPU. The preliminary design was first successfully simulated in 1990, followed by the laying-out of the design. By this time, the team had several dozen engineers. Pre-silicon verification consumed some 100 million simulated clock cycles, during which major operating systems and many applications were booted and run. The team used tools from Quickturn Systems Inc. to run pre-silicon simulations roughly 30,000 times faster than previously available techniques allowed. By late 1990, the team found that the planned features could not fit on the die, and the circuitry had to be redesigned and slimmed down to fit the intended design without sacrificing performance. In the spring of 1991, the die underwent another slimming procedure until Dham was satisfied with its size and feature set, again without affecting performance. A group of engineers ran hundreds of tests to validate the designed features, exercising 5,000 different variables. Using a collection of 14 circuit boards and cables, they found only a few bugs while running every operating system on hand, including some still in development. By February 1992, the design had entered tape-out, a process completed by April 1992, at which point beta-testing began. Over the next few months, the design was sent to Intel's Mask Operation, which translated it into mask layouts for processing at Fab 5 in Oregon. By mid-1992, the P5 team had 200 engineers. Intel at first planned to demonstrate the P5 in June 1992 at the trade show PC Expo, and to formally announce the processor in September 1992, but design problems forced the demo to be cancelled, and the official introduction of the chip was delayed until the spring of 1993. The first computer systems featuring the Pentium appeared in the summer of 1993, the first being Advanced Logic Research and their Evolution V workstation, released in the first week of July 1993. John H. Crawford, chief architect of the original 386, co-managed the design of the P5, along with Donald Alpert, who managed the architectural team. Dror Avnon managed the design of the FPU.
Vinod K. Dham was general manager of the P5 group. Intel's Larrabee multicore architecture project uses a processor core derived from a P5 core (P54C), augmented by multithreading, 64-bit instructions, and a 16-byte wide vector processing unit. Intel's low-powered Bonnell microarchitecture, employed in early Atom processor cores, also uses an in-order dual pipeline similar to the P5's. Intel used the Pentium name instead of 586 because in 1991 it had lost a trademark dispute over the "386" trademark, when a judge ruled that the number was generic. The company hired Lexicon Branding to come up with a new, non-numeric name. The P5 microarchitecture brings several important advances over the prior i486 architecture. The Pentium was designed to execute over 100 million instructions per second (MIPS), and the 75 MHz model was able to reach 126.5 MIPS in certain benchmarks. The Pentium architecture typically offered just under twice the performance of a 486 processor per clock cycle in common benchmarks. The fastest 80486 parts (with slightly improved microarchitecture and 100 MHz operation) were almost as powerful as the first-generation Pentiums, and the AMD Am5x86, which despite its name is actually a 486-class CPU, was roughly equal to the Pentium 75 in pure ALU performance. The early 60–66 MHz P5 Pentiums had a problem in the floating-point unit that resulted in incorrect (but predictable) results from some division operations. This flaw, discovered in 1994 by Professor Thomas Nicely at Lynchburg College, Virginia, became widely known as the Pentium FDIV bug and caused embarrassment for Intel, which created an exchange program to replace the faulty processors. In 1997, another erratum, the "F00F bug", was discovered, which could allow a malicious program to crash a system without any special privileges. All P5-series processors were affected and no fixed steppings were ever released; however, contemporary operating systems were patched with workarounds to prevent crashes. Cores and steppings The Pentium was Intel's primary microprocessor for personal computers during the mid-1990s. The original design was reimplemented in newer processes, and new features were added to maintain its competitiveness and to address specific markets such as portable computers. As a result, there were several variants of the P5 microarchitecture. The first Pentium microprocessor core was code-named "P5". Its product code was 80501 (80500 for the earliest stepping, Q0399). There were two versions, specified to operate at 60 MHz and 66 MHz respectively, using Socket 4. This first implementation of the Pentium was released in a 273-pin PGA form factor and ran on a 5 V power supply (a legacy of the usual transistor-transistor logic (TTL) compatibility requirements). It contained 3.1 million transistors and measured 16.7 mm by 17.6 mm, for an area of 293.92 mm². It was fabricated in an 800 nm three-layer-metal bipolar complementary metal–oxide–semiconductor (BiCMOS) process. The 5-volt design resulted in relatively high energy consumption for its operating frequency when compared to the directly following models. The P5 was followed by the P54C (80502) in 1994, with versions specified to operate at 75, 90, or 100 MHz using a 3.3 volt power supply. Marking the switch to Socket 5, this was the first Pentium processor to operate at 3.3 volts, reducing energy consumption but necessitating voltage regulation on mainboards.
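The FDIV flaw mentioned above can be illustrated with the division that made it famous. The sketch below uses the widely circulated test values: on a correct FPU the expression is zero, while an affected P5 computed 4195835/3145727 as roughly 1.33373 instead of about 1.33382, leaving a residue near 256.

```c
#include <stdio.h>

int main(void) {
    /* 'volatile' keeps the compiler from folding the division at
     * compile time, forcing a real FDIV at run time. */
    volatile double x = 4195835.0;
    volatile double y = 3145727.0;

    double r = x - (x / y) * y;   /* 0 on a correct FPU */

    printf("x - (x/y)*y = %g (expected 0; about 256 on a flawed P5)\n", r);
    return 0;
}
```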
As with higher-clocked 486 processors, an internal clock multiplier was employed from here on to let the internal circuitry work at a higher frequency than the external address and data buses, as it is more complicated and cumbersome to increase the external frequency due to physical constraints. The P54C also allowed two-way multiprocessing, and had an integrated local APIC and new power management features. It was fabricated in a BiCMOS process which has been described as both 500 nm and 600 nm, due to differing definitions. The P54C was followed by the P54CQS in early 1995, which operated at 120 MHz. It was fabricated in a 350 nm BiCMOS process and was the first commercial microprocessor to be fabricated in a 350 nm process. Its transistor count was identical to the P54C's and, despite the newer process, its die area was identical as well. The chip was connected to the package using wire bonding, which only allows connections along the edges of the chip. A smaller chip would have required a redesign of the package, as there is a limit on the length of the wires and the edges of the chip would be further away from the pads on the package. The solution was to keep the chip the same size, retain the existing pad-ring, and only reduce the size of the Pentium's logic circuitry to enable it to achieve higher clock frequencies. The P54CQS was quickly followed by the P54CS, which operated at 133, 150, 166, and 200 MHz, and introduced Socket 7. It contained 3.3 million transistors, measured 90 mm², and was fabricated in a 350 nm BiCMOS process with four levels of interconnect. The P24T Pentium OverDrive for 486 systems was released in 1995, based on 3.3 V, 600 nm versions using a 63 or 83 MHz clock. Since these used Socket 2/3, some modifications had to be made to compensate for the 32-bit data bus and slower on-board L2 cache of 486 motherboards. They were therefore equipped with a 32 KB L1 cache (double that of pre-P55C Pentium CPUs). The P55C (or 80503) was developed by Intel's Research & Development Center in Haifa, Israel. It was sold as Pentium with MMX Technology (usually just called Pentium MMX); although it was based on the P5 core, it featured a new set of 57 "MMX" instructions intended to improve performance on multimedia tasks, such as encoding and decoding digital media data. The Pentium MMX line was announced on October 22, 1996, and released in January 1997. The new instructions worked on new data types: 64-bit packed vectors of either eight 8-bit integers, four 16-bit integers, two 32-bit integers, or one 64-bit integer. So, for example, the PADDUSB (Packed ADD Unsigned Saturated Byte) instruction adds two vectors, each containing eight 8-bit unsigned integers, together elementwise; each addition that would overflow saturates, yielding 255, the maximal unsigned value that can be represented in a byte. These rather specialized instructions generally required special coding by the programmer to be used. Other changes to the core include a 6-stage pipeline (vs. 5 on the P5) with a return stack (first done on the Cyrix 6x86) and better parallelism, an improved instruction decoder, a 16 KB L1 data cache plus a 16 KB L1 instruction cache, both 4-way set-associative (vs. 8 KB L1 data and instruction caches with 2-way associativity on the P5), four write buffers that could be used by either pipeline (vs. one per pipeline on the P5), and an improved branch predictor taken from the Pentium Pro, with a 512-entry buffer (vs. 256 on the P5).
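The saturating behaviour of PADDUSB is easy to model in scalar code. The following is a minimal sketch, not Intel's definition of the instruction: it reproduces the lane-by-lane unsigned saturating add described above, with the function name chosen for this example.

```c
#include <stdio.h>
#include <stdint.h>

/* Scalar model of the MMX PADDUSB operation: add two 8-lane vectors
 * of unsigned bytes, saturating each lane at 255 instead of wrapping. */
static void paddusb(const uint8_t a[8], const uint8_t b[8], uint8_t out[8]) {
    for (int i = 0; i < 8; i++) {
        unsigned sum = (unsigned)a[i] + b[i];
        out[i] = (sum > 255) ? 255 : (uint8_t)sum;  /* saturate, don't wrap */
    }
}

int main(void) {
    uint8_t a[8] = { 10, 100, 200, 250, 0, 128, 255, 64 };
    uint8_t b[8] = { 10, 100, 100, 100, 0, 128,   1, 64 };
    uint8_t r[8];

    paddusb(a, b, r);
    for (int i = 0; i < 8; i++)
        printf("%3u + %3u -> %3u\n",
               (unsigned)a[i], (unsigned)b[i], (unsigned)r[i]);
    /* The 200+100, 250+100 and 255+1 lanes all clamp to 255. */
    return 0;
}
```

On the real hardware all eight lane additions happened in a single instruction; that data-level parallelism is what made MMX attractive for pixel and audio processing.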
The P55C contained 4.5 million transistors and had an area of 140 mm². It was fabricated in a 280 nm CMOS process with the same metal pitches as the previous 350 nm BiCMOS process, so Intel described it as "350 nm" because of its similar transistor density. The process has four levels of interconnect. While the P55C remained compatible with Socket 7, the voltage requirements for powering the chip differ from the standard Socket 7 specifications. Most motherboards manufactured for Socket 7 before the establishment of the P55C standard are not compliant with the dual voltage rails required for proper operation of this CPU (2.8 volt core voltage, 3.3 volt input/output (I/O) voltage). Intel addressed the issue with OverDrive upgrade kits that featured an interposer with its own voltage regulation. Pentium MMX notebook CPUs used a mobile module that held the CPU. This module was a printed circuit board (PCB) with the CPU directly attached to it in a smaller form factor. The module snapped to the notebook motherboard, and typically a heat spreader was installed and made contact with the module. However, with the 250 nm Tillamook Mobile Pentium MMX (named after a city in Oregon), the module also held the 430TX chipset along with the system's 512 KB static random-access memory (SRAM) cache. Models and variants Competitors After the introduction of the Pentium, competitors such as NexGen, AMD, Cyrix, and Texas Instruments announced Pentium-compatible processors in 1994. CIO magazine identified NexGen's Nx586 as the first Pentium-compatible CPU, while PC Magazine described the Cyrix 6x86 as the first. These were followed by the AMD K5, which was delayed due to design difficulties. AMD later bought NexGen to help design the AMD K6, and Cyrix was bought by National Semiconductor. Later processors from AMD and Intel retain compatibility with the original Pentium.
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Mars#cite_note-216] | [TOKENS: 11899] |
Mars Mars is the fourth planet from the Sun. It is also known as the "Red Planet" for its orange-red appearance. Mars is a desert-like rocky planet with a tenuous atmosphere that is primarily carbon dioxide (CO2). At the average surface level the atmospheric pressure is a few thousandths of Earth's, atmospheric temperature ranges from −153 to 20 °C (−243 to 68 °F), and cosmic radiation is high. Mars retains some water, in the ground as well as thinly in the atmosphere, forming cirrus clouds, fog, frost, large polar regions of permafrost, and ice caps (with seasonal CO2 snow), but no bodies of liquid surface water. Its surface gravity is roughly a third of Earth's, or double that of the Moon. Its diameter, 6,779 km (4,212 mi), is about half the Earth's, or twice the Moon's, and its surface area is about the size of all the dry land of Earth. Fine dust is prevalent across the surface and the atmosphere, picked up and spread, under the low Martian gravity, even by the weak wind of the tenuous atmosphere. The terrain of Mars roughly follows a north-south divide, the Martian dichotomy, with the northern hemisphere mainly consisting of relatively flat, low-lying plains, and the southern hemisphere of cratered highlands. Geologically, the planet is fairly active, with marsquakes trembling underneath the ground, but it also hosts many enormous extinct volcanoes (the tallest is Olympus Mons, 21.9 km or 13.6 mi tall), as well as one of the largest canyons in the Solar System (Valles Marineris, 4,000 km or 2,500 mi long). Mars has two natural satellites that are small and irregular in shape: Phobos and Deimos. With a significant axial tilt of 25 degrees, Mars experiences seasons like Earth (which has an axial tilt of 23.5 degrees). A Martian solar year is equal to 1.88 Earth years (687 Earth days), and a Martian solar day (sol) is equal to 24.6 hours. Mars formed along with the other planets approximately 4.5 billion years ago. During the Martian Noachian period (4.5 to 3.5 billion years ago), its surface was marked by meteor impacts, valley formation, erosion, the possible presence of water oceans, and the loss of its magnetosphere. The Hesperian period (beginning 3.5 billion years ago and ending 3.3–2.9 billion years ago) was dominated by widespread volcanic activity and flooding that carved immense outflow channels. The Amazonian period continues to the present and dominates the geological processes seen today. Because of Mars's geological history, the possibility of past or present life on Mars remains an area of active scientific investigation, with some possible traces needing further examination. Visible with the naked eye in Earth's sky as a red wandering star, Mars has been observed throughout history, acquiring diverse associations in different cultures. In 1963 the first flight to Mars took place with Mars 1, but communication was lost en route. The first successful flyby exploration of Mars was conducted in 1965 with Mariner 4. In 1971 Mariner 9 entered orbit around Mars, becoming the first spacecraft to orbit any body other than the Moon, Sun or Earth; following in the same year were the first uncontrolled impact (Mars 2) and the first successful landing (Mars 3) on Mars. Probes have been active on Mars continuously since 1997. At times, more than ten probes have simultaneously operated in orbit or on the surface, more than at any other planet beyond Earth.
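The claim above that Mars's surface area roughly matches Earth's dry land is easy to check with the sphere-area formula. The radii and the ~29% land fraction below are standard reference values assumed for this sketch, not figures taken from the article:

```latex
% Total surface area of Mars (mean radius about 3390 km):
A_{\mathrm{Mars}} = 4\pi r_{\mathrm{Mars}}^{2}
                  = 4\pi\,(3390\ \mathrm{km})^{2}
                  \approx 1.44\times10^{8}\ \mathrm{km}^{2}
% Earth's dry land, roughly 29% of its total surface (radius about 6371 km):
A_{\mathrm{land}} \approx 0.29\times 4\pi\,(6371\ \mathrm{km})^{2}
                  \approx 1.48\times10^{8}\ \mathrm{km}^{2}
```

The two areas agree to within a few percent.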
Mars is an often proposed target for future crewed exploration missions, though no such mission is currently planned. Natural history Scientists have theorized that during the Solar System's formation, Mars was created as the result of a random process of runaway accretion of material from the protoplanetary disk that orbited the Sun. Mars has many distinctive chemical features caused by its position in the Solar System. Elements with comparatively low boiling points, such as chlorine, phosphorus, and sulfur, are much more common on Mars than on Earth; these elements were probably pushed outward by the young Sun's energetic solar wind. After the formation of the planets, the inner Solar System may have been subjected to the so-called Late Heavy Bombardment. About 60% of the surface of Mars shows a record of impacts from that era, whereas much of the remaining surface is probably underlain by immense impact basins caused by those events. However, more recent modeling has disputed the existence of the Late Heavy Bombardment. There is evidence of an enormous impact basin in the Northern Hemisphere of Mars, spanning 10,600 by 8,500 kilometres (6,600 by 5,300 mi), or roughly four times the size of the Moon's South Pole–Aitken basin, which would be the largest impact basin yet discovered if confirmed. It has been hypothesized that the basin was formed when Mars was struck by a Pluto-sized body about four billion years ago. The event, thought to be the cause of the Martian hemispheric dichotomy, created the smooth Borealis basin that covers 40% of the planet. A 2023 study shows evidence, based on the orbital inclination of Deimos (a small moon of Mars), that Mars may once have had a ring system, 3.5 to 4 billion years ago. This ring system may have been formed from a moon 20 times more massive than Phobos orbiting Mars billions of years ago, and Phobos would be a remnant of that ring. Epochs: The geological history of Mars can be split into many periods, but three primary periods are generally recognized: the Noachian, the Hesperian, and the Amazonian, described above. Geological activity is still taking place on Mars. The Athabasca Valles is home to sheet-like lava flows created about 200 million years ago. Water flows in the grabens called the Cerberus Fossae occurred less than 20 million years ago, indicating equally recent volcanic intrusions. The Mars Reconnaissance Orbiter has captured images of avalanches. Physical characteristics Mars is approximately half the diameter of Earth, or twice that of the Moon, with a surface area only slightly less than the total area of Earth's dry land. Mars is less dense than Earth, having about 15% of Earth's volume and 11% of Earth's mass, resulting in about 38% of Earth's surface gravity. Mars is the only presently known example of a desert planet, a rocky planet with a surface akin to that of Earth's deserts. The red-orange appearance of the Martian surface is caused by iron(III) oxide (nanophase Fe2O3) and the iron(III) oxide-hydroxide mineral goethite. It can look like butterscotch; other common surface colors include golden, brown, tan, and greenish, depending on the minerals present. Like Earth, Mars is differentiated into a dense metallic core overlaid by less dense rocky layers. The outermost layer is the crust, which is on average about 42–56 kilometres (26–35 mi) thick, with a minimum thickness of 6 kilometres (3.7 mi) in Isidis Planitia, and a maximum thickness of 117 kilometres (73 mi) in the southern Tharsis plateau. For comparison, Earth's crust averages 27.3 ± 4.8 km in thickness.
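The 38% gravity figure follows directly from the mass and radius ratios quoted above. As a quick check, using the standard reference ratios M_Mars/M⊕ ≈ 0.107 and r_Mars/r⊕ ≈ 0.532 (assumed here, not given in the text):

```latex
% Surface gravity scales as mass over radius squared (g = GM/r^2):
\frac{g_{\mathrm{Mars}}}{g_{\oplus}}
  = \frac{M_{\mathrm{Mars}}/M_{\oplus}}{\left(r_{\mathrm{Mars}}/r_{\oplus}\right)^{2}}
  \approx \frac{0.107}{(0.532)^{2}}
  \approx 0.38
```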
The most abundant elements in the Martian crust are silicon, oxygen, iron, magnesium, aluminum, calcium, and potassium. Mars is confirmed to be seismically active; in 2019, it was reported that InSight had detected and recorded over 450 marsquakes and related events. Beneath the crust is a silicate mantle responsible for many of the tectonic and volcanic features on the planet's surface. The upper Martian mantle is a low-velocity zone, where the velocity of seismic waves is lower than in surrounding depth intervals. The mantle appears to be rigid down to a depth of about 250 km, giving Mars a very thick lithosphere compared to Earth. Below this, the mantle gradually becomes more ductile, and the seismic wave velocity starts to grow again. The Martian mantle does not appear to have a thermally insulating layer analogous to Earth's lower mantle; instead, below 1050 km in depth, it becomes mineralogically similar to Earth's transition zone. At the bottom of the mantle lies a basal liquid silicate layer approximately 150–180 km thick. The Martian mantle appears to be highly heterogeneous, with dense fragments up to 4 km across, likely injected deep into the planet by colossal impacts ~4.5 billion years ago; high-frequency waves from eight marsquakes slowed as they passed these localized regions, and modeling indicates the heterogeneities are compositionally distinct debris, preserved because Mars lacks plate tectonics and has a sluggishly convecting interior that prevents complete homogenization. Mars's iron and nickel core is at least partially molten, and may have a solid inner core. It is around half of Mars's radius, approximately 1650–1675 km, and is enriched in light elements such as sulfur, oxygen, carbon, and hydrogen. The temperature of the core is estimated to be 2000–2400 K, compared to 5400–6230 K for Earth's solid inner core. In 2025, based on data from the InSight lander, a group of researchers reported the detection of a solid inner core with a radius of 613 ± 67 kilometres (381 ± 42 mi). Mars is a terrestrial planet with a surface that consists of minerals containing silicon and oxygen, metals, and other elements that typically make up rock. The Martian surface is primarily composed of tholeiitic basalt, although parts are more silica-rich than typical basalt and may be similar to andesitic rocks on Earth, or to silica glass. Regions of low albedo suggest concentrations of plagioclase feldspar, with northern low-albedo regions displaying higher than normal concentrations of sheet silicates and high-silicon glass. Parts of the southern highlands include detectable amounts of high-calcium pyroxenes. Localized concentrations of hematite and olivine have been found. Much of the surface is deeply covered by finely grained iron(III) oxide dust. The Phoenix lander returned data showing Martian soil to be slightly alkaline and containing elements such as magnesium, sodium, potassium, and chlorine. These nutrients are found in soils on Earth, and are necessary for plant growth. Experiments performed by the lander showed that the Martian soil has a basic pH of 7.7 and contains 0.6% perchlorate by weight, a concentration that is toxic to humans. Streaks are common across Mars, and new ones appear frequently on steep slopes of craters, troughs, and valleys. The streaks are dark at first and get lighter with age. The streaks can start in a tiny area, then spread out for hundreds of metres. They have been seen to follow the edges of boulders and other obstacles in their path.
The most commonly accepted hypothesis is that the streaks are dark underlying layers of soil revealed after avalanches of bright dust or the passage of dust devils. Several other explanations have been put forward, including those that involve water or even the growth of organisms. Environmental radiation levels on the surface average 0.64 millisieverts per day, significantly less than the 1.84 millisieverts (22 millirads) per day experienced during the flight to and from Mars. For comparison, the radiation levels in low Earth orbit, where Earth's space stations orbit, are around 0.5 millisieverts per day. Hellas Planitia has the lowest surface radiation, at about 0.342 millisieverts per day, and features lava tubes southwest of Hadriacus Mons with levels potentially as low as 0.064 millisieverts per day, comparable to radiation levels during flights on Earth. Although Mars has no evidence of a structured global magnetic field, observations show that parts of the planet's crust have been magnetized, suggesting that alternating polarity reversals of its dipole field occurred in the past. This paleomagnetism of magnetically susceptible minerals is similar to the alternating bands found on Earth's ocean floors. One hypothesis, published in 1999 and re-examined in October 2005 (with the help of the Mars Global Surveyor), is that these bands suggest plate tectonic activity on Mars four billion years ago, before the planetary dynamo ceased to function and the planet's magnetic field faded. Geography and features Although better remembered for mapping the Moon, Johann Heinrich von Mädler and Wilhelm Beer were the first areographers. They began by establishing that most of Mars's surface features were permanent and by more precisely determining the planet's rotation period. In 1840, Mädler combined ten years of observations and drew the first map of Mars. Features on Mars are named from a variety of sources. Albedo features are named for classical mythology. Craters larger than roughly 50 km are named for deceased scientists, writers, and others who have contributed to the study of Mars. Smaller craters are named for towns and villages of the world with populations of less than 100,000. Large valleys are named for the word "Mars" or "star" in various languages; smaller valleys are named for rivers. Large albedo features retain many of the older names but are often updated to reflect new knowledge of the nature of the features. For example, Nix Olympica (the snows of Olympus) has become Olympus Mons (Mount Olympus). The surface of Mars as seen from Earth is divided into two kinds of areas, with differing albedo. The paler plains covered with dust and sand rich in reddish iron oxides were once thought of as Martian "continents" and given names like Arabia Terra (land of Arabia) or Amazonis Planitia (Amazonian plain). The dark features were thought to be seas, hence their names Mare Erythraeum, Mare Sirenum, and Aurorae Sinus. The largest dark feature seen from Earth is Syrtis Major Planum. The permanent northern polar ice cap is named Planum Boreum, while the southern cap is called Planum Australe. Mars's equator is defined by its rotation, but the location of its prime meridian was specified, as was Earth's (at Greenwich), by the choice of an arbitrary point; Mädler and Beer selected a line for their first maps of Mars in 1830.
After the spacecraft Mariner 9 provided extensive imagery of Mars in 1972, a small crater (later called Airy-0), located in the Sinus Meridiani ("Middle Bay" or "Meridian Bay"), was chosen by Merton E. Davies, Harold Masursky, and Gérard de Vaucouleurs for the definition of 0.0° longitude, to coincide with the original selection. Because Mars has no oceans, and hence no "sea level", a zero-elevation surface had to be selected as a reference level; this is called the areoid of Mars, analogous to the terrestrial geoid. Zero altitude was defined by the height at which there is 610.5 Pa (6.105 mbar) of atmospheric pressure. This pressure corresponds to the triple point of water, and it is about 0.6% of the sea-level surface pressure on Earth (0.006 atm). For mapping purposes, the United States Geological Survey divides the surface of Mars into thirty cartographic quadrangles, each named for a classical albedo feature it contains. In April 2023, The New York Times reported an updated global map of Mars based on images from the Hope spacecraft. A related, but much more detailed, global Mars map was released by NASA on 16 April 2023. The vast upland region Tharsis contains several massive volcanoes, which include the shield volcano Olympus Mons. The edifice is over 600 km (370 mi) wide. Because the mountain is so large, with complex structure at its edges, assigning a definite height to it is difficult. Its local relief, from the foot of the cliffs which form its northwest margin to its peak, is over 21 km (13 mi), a little over twice the height of Mauna Kea as measured from its base on the ocean floor. The total elevation change from the plains of Amazonis Planitia, over 1,000 km (620 mi) to the northwest, to the summit approaches 26 km (16 mi), roughly three times the height of Mount Everest, which in comparison stands at just over 8.8 kilometres (5.5 mi). Consequently, Olympus Mons is either the tallest or second-tallest mountain in the Solar System; the only known mountain which might be taller is the Rheasilvia peak on the asteroid Vesta, at 20–25 km (12–16 mi). The dichotomy of Martian topography is striking: northern plains flattened by lava flows contrast with the southern highlands, pitted and cratered by ancient impacts. It is possible that, four billion years ago, the Northern Hemisphere of Mars was struck by an object one-tenth to two-thirds the size of Earth's Moon. If this is the case, the Northern Hemisphere of Mars would be the site of an impact crater 10,600 by 8,500 kilometres (6,600 by 5,300 mi) in size, or roughly the area of Europe, Asia, and Australia combined, surpassing Utopia Planitia and the Moon's South Pole–Aitken basin as the largest impact crater in the Solar System. Mars is scarred by some 43,000 impact craters with a diameter of 5 kilometres (3.1 mi) or greater. The largest exposed crater is Hellas, which is 2,300 kilometres (1,400 mi) wide and 7,000 metres (23,000 ft) deep, and is a light albedo feature clearly visible from Earth. There are other notable impact features, such as Argyre, which is around 1,800 kilometres (1,100 mi) in diameter, and Isidis, which is around 1,500 kilometres (930 mi) in diameter. Due to the smaller mass and size of Mars, the probability of an object colliding with the planet is about half that of Earth. However, Mars is located closer to the asteroid belt, so it has an increased chance of being struck by materials from that source. Mars is also more likely to be struck by short-period comets, i.e., those that lie within the orbit of Jupiter.
Martian craters can have a morphology that suggests the ground became wet after the meteor impact. The large canyon, Valles Marineris (Latin for 'Mariner Valleys', also known as Agathodaemon in the old canal maps), has a length of 4,000 kilometres (2,500 mi) and a depth of up to 7 kilometres (4.3 mi). The length of Valles Marineris is equivalent to the length of Europe and extends across one-fifth the circumference of Mars. By comparison, the Grand Canyon on Earth is only 446 kilometres (277 mi) long and nearly 2 kilometres (1.2 mi) deep. Valles Marineris was formed due to the swelling of the Tharsis area, which caused the crust in the area of Valles Marineris to collapse. In 2012, it was proposed that Valles Marineris is not just a graben, but a plate boundary where 150 kilometres (93 mi) of transverse motion has occurred, which would make Mars a planet with a possible two-plate tectonic arrangement. Images from the Thermal Emission Imaging System (THEMIS) aboard NASA's Mars Odyssey orbiter have revealed seven possible cave entrances on the flanks of the volcano Arsia Mons. The caves, named after loved ones of their discoverers, are collectively known as the "seven sisters". Cave entrances measure from 100 to 252 metres (328 to 827 ft) wide, and they are estimated to be at least 73 to 96 metres (240 to 315 ft) deep. Because light does not reach the floor of most of the caves, they may extend much deeper than these lower estimates and widen below the surface. "Dena" is the only exception; its floor is visible and was measured to be 130 metres (430 ft) deep. The interiors of these caverns may be protected from micrometeoroids, UV radiation, solar flares and high-energy particles that bombard the planet's surface. Martian geysers (or CO2 jets) are putative sites of small gas and dust eruptions that occur in the south polar region of Mars during the spring thaw. "Dark dune spots" and "spiders" – or araneiforms – are the two most visible types of features ascribed to these eruptions. Dust of a given size settles out of the thinner Martian atmosphere sooner than it would on Earth. For example, the dust suspended by the 2001 global dust storms on Mars remained in the Martian atmosphere for only 0.6 years, while the dust from Mount Pinatubo took about two years to settle. However, under current Martian conditions, the mass movements involved are generally much smaller than on Earth. Even the 2001 global dust storms moved only the equivalent of a very thin dust layer – about 3 μm thick if deposited with uniform thickness between 58° north and south of the equator. Dust deposition at the two rover sites has proceeded at a rate of about the thickness of a grain every 100 sols. Atmosphere Mars lost its magnetosphere 4 billion years ago, possibly because of numerous asteroid strikes, so the solar wind interacts directly with the Martian ionosphere, lowering the atmospheric density by stripping away atoms from the outer layer. Both Mars Global Surveyor and Mars Express have detected ionized atmospheric particles trailing off into space behind Mars, and this atmospheric loss is being studied by the MAVEN orbiter. Compared to Earth, the atmosphere of Mars is quite rarefied. Atmospheric pressure on the surface today ranges from a low of 30 Pa (0.0044 psi) on Olympus Mons to over 1,155 Pa (0.1675 psi) in Hellas Planitia, with a mean pressure at the surface level of 600 Pa (0.087 psi). The highest atmospheric density on Mars is equal to that found 35 kilometres (22 mi) above Earth's surface.
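As a rough illustration of these pressure figures, an isothermal barometric model using the scale height quoted in the text (about 10.8 km) can be sketched in a few lines; real Martian temperature profiles vary strongly with altitude, so this is only a first-order estimate, and the CO2 gas properties and 240 K temperature used in the sound-speed check are assumed standard values rather than figures from the article:

```python
import math

P0 = 600.0     # Pa, mean surface-level pressure quoted above
H = 10.8e3     # m, approximate atmospheric scale height from the text

def pressure(altitude_m):
    """Isothermal barometric estimate of pressure at a given altitude."""
    return P0 * math.exp(-altitude_m / H)

# At an Olympus Mons-like altitude of ~21 km the simple model gives ~86 Pa;
# the measured value quoted above (~30 Pa) is lower, showing the limits of
# the isothermal assumption.
print(pressure(21_000))

# Ideal-gas speed of sound in a CO2 atmosphere at an assumed 240 K:
gamma, R, M = 1.29, 8.314, 0.04401   # heat-capacity ratio, J/(mol K), kg/mol
print(math.sqrt(gamma * R * 240.0 / M))  # ~242 m/s, consistent with the
                                         # rover-measured 240-250 m/s in the text
```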
The resulting mean surface pressure is only 0.6% of Earth's 101.3 kPa (14.69 psi). The scale height of the atmosphere is about 10.8 kilometres (6.7 mi), which is higher than Earth's 6 kilometres (3.7 mi), because the surface gravity of Mars is only about 38% of Earth's. The atmosphere of Mars consists of about 96% carbon dioxide, 1.93% argon and 1.89% nitrogen, along with traces of oxygen and water. The atmosphere is quite dusty, containing particulates about 1.5 μm in diameter which give the Martian sky a tawny color when seen from the surface. It may take on a pink hue due to iron oxide particles suspended in it. Despite repeated detections of methane on Mars, there is no scientific consensus as to its origin. One suggestion is that the methane concentration fluctuates seasonally. The methane could be produced by a non-biological process such as serpentinization, involving water, carbon dioxide, and the mineral olivine, which is known to be common on Mars, or by Martian life. Mars's higher concentration of atmospheric CO2 and lower surface pressure, compared with Earth's, may explain why sound is attenuated more there; natural sound sources are rare apart from the wind. Using acoustic recordings collected by the Perseverance rover, researchers concluded that the speed of sound there is approximately 240 m/s for frequencies below 240 Hz, and 250 m/s for those above. Auroras have been detected on Mars. Because Mars lacks a global magnetic field, the types and distribution of auroras there differ from those on Earth; rather than being mostly restricted to polar regions as is the case on Earth, a Martian aurora can encompass the planet. In September 2017, NASA reported that radiation levels on the surface of Mars were temporarily doubled, and were associated with an aurora 25 times brighter than any observed earlier, due to a massive and unexpected solar storm in the middle of the month. Mars has seasons, alternating between its northern and southern hemispheres, similar to Earth's. Additionally, the orbit of Mars has a larger eccentricity than Earth's, and the planet reaches perihelion when it is summer in its southern hemisphere and winter in its northern, and aphelion when it is winter in its southern hemisphere and summer in its northern. As a result, the seasons in its southern hemisphere are more extreme and the seasons in its northern are milder than would otherwise be the case. The summer temperatures in the south can be warmer than the equivalent summer temperatures in the north by up to 30 °C (54 °F). Martian surface temperatures vary from lows of about −110 °C (−166 °F) to highs of up to 35 °C (95 °F) in equatorial summer. The wide range in temperatures is due to the thin atmosphere, which cannot store much solar heat, the low atmospheric pressure (about 1% that of the atmosphere of Earth), and the low thermal inertia of Martian soil. The planet is 1.52 times as far from the Sun as Earth, resulting in just 43% of the amount of sunlight. Mars has the largest dust storms in the Solar System, with winds reaching speeds of over 160 km/h (100 mph). These can vary from a storm over a small area to gigantic storms that cover the entire planet. They tend to occur when Mars is closest to the Sun, and have been shown to increase the global temperature. The seasons also drive the deposition of carbon dioxide frost (dry ice) on the polar ice caps. Hydrology Mars contains substantial amounts of water, but most of it is dust-covered water ice at the Martian polar ice caps.
The volume of water ice in the south polar ice cap, if melted, would be enough to cover most of the surface of the planet to a depth of 11 metres (36 ft). Water in its liquid form cannot persist on the surface due to Mars's low atmospheric pressure, which is less than 1% that of Earth. Only at the lowest of elevations are the pressure and temperature high enough for liquid water to exist for short periods. Although little water is present in the atmosphere, there is enough to produce clouds of water ice and occasional snow and frost, often mixed with carbon dioxide (dry-ice) snow. Landforms visible on Mars strongly suggest that liquid water has existed on the planet's surface. Huge linear swathes of scoured ground, known as outflow channels, cut across the surface in about 25 places. These are thought to be a record of erosion caused by the catastrophic release of water from subsurface aquifers, though some of these structures have been hypothesized to result from the action of glaciers or lava. One of the larger examples, Ma'adim Vallis, is 700 kilometres (430 mi) long, far longer than the Grand Canyon, with a width of 20 kilometres (12 mi) and a depth of 2 kilometres (1.2 mi) in places. It is thought to have been carved by flowing water early in Mars's history. The youngest of these channels is thought to have formed only a few million years ago. Elsewhere, particularly on the oldest areas of the Martian surface, finer-scale, dendritic networks of valleys are spread across significant proportions of the landscape. Features of these valleys and their distribution strongly imply that they were carved by runoff resulting from precipitation in early Mars history. Subsurface water flow and groundwater sapping may play important subsidiary roles in some networks, but precipitation was probably the root cause of the incision in almost all cases. Along crater and canyon walls, there are thousands of features that appear similar to terrestrial gullies. The gullies tend to be in the highlands of the Southern Hemisphere and face the Equator; all are poleward of 30° latitude. A number of authors have suggested that their formation process involves liquid water, probably from melting ice, although others have argued for formation mechanisms involving carbon dioxide frost or the movement of dry dust. No partially degraded gullies have been observed, as would form by weathering, and no superimposed impact craters have been seen, indicating that these are young features, possibly still active. Other geological features, such as deltas and alluvial fans preserved in craters, are further evidence for warmer, wetter conditions at an interval or intervals in earlier Mars history. Such conditions necessarily require the widespread presence of crater lakes across a large proportion of the surface, for which there is independent mineralogical, sedimentological and geomorphological evidence. Further evidence that liquid water once existed on the surface of Mars comes from the detection of specific minerals such as hematite and goethite, both of which sometimes form in the presence of water. The chemical signature of water vapor on Mars was first unequivocally demonstrated in 1963 by spectroscopy using an Earth-based telescope. In 2004, Opportunity detected the mineral jarosite. This forms only in the presence of acidic water, showing that water once existed on Mars.
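The 11-metre figure above implies a water volume that is easy to estimate from the planet's surface area. In the rough sketch below, Mars's mean radius is an assumed standard value, and "most of the surface" is idealized as the whole sphere:

```python
import math

R = 3389.5          # km, mean radius of Mars (assumed standard value)
depth = 0.011       # km, the 11 m layer quoted above

area = 4 * math.pi * R**2     # ~1.44e8 km^2 of surface
print(area * depth)           # ~1.6e6 km^3 of water ice in the south polar cap
```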
The Spirit rover found concentrated deposits of silica in 2007 that indicated wet conditions in the past, and in December 2011 the mineral gypsum, which also forms in the presence of water, was found on the surface by NASA's Mars rover Opportunity. It is estimated that the amount of water in the upper mantle of Mars, represented by hydroxyl ions contained within Martian minerals, is equal to or greater than that of Earth at 50–300 parts per million of water, which is enough to cover the entire planet to a depth of 200–1,000 metres (660–3,280 ft). On 18 March 2013, NASA reported evidence from instruments on the Curiosity rover of mineral hydration, likely hydrated calcium sulfate, in several rock samples, including the broken fragments of "Tintina" rock and "Sutton Inlier" rock, as well as in veins and nodules in other rocks like "Knorr" rock and "Wernicke" rock. Analysis using the rover's DAN instrument provided evidence of subsurface water, amounting to as much as 4% water content, down to a depth of 60 centimetres (24 in), during the rover's traverse from the Bradbury Landing site to the Yellowknife Bay area in the Glenelg terrain. In September 2015, NASA announced that it had found strong evidence of hydrated brine flows in recurring slope lineae, based on spectrometer readings of the darkened areas of slopes. These streaks flow downhill in Martian summer, when the temperature is above −23 °C, and freeze at lower temperatures. These observations supported earlier hypotheses, based on timing of formation and their rate of growth, that these dark streaks resulted from water flowing just below the surface. However, later work suggested that the lineae may be dry, granular flows instead, with at most a limited role for water in initiating the process. A definitive conclusion about the presence, extent, and role of liquid water on the Martian surface remains elusive. Researchers suspect much of the low northern plains of the planet were covered with an ocean hundreds of metres deep, though this theory remains controversial. In March 2015, scientists stated that such an ocean might have been the size of Earth's Arctic Ocean. This finding was derived from the ratio of deuterium to protium (ordinary hydrogen) in the modern Martian atmosphere compared to that ratio on Earth. The amount of Martian deuterium (D/H = (9.3 ± 1.7) × 10⁻⁴) is five to seven times the amount on Earth (D/H = 1.56 × 10⁻⁴), suggesting that ancient Mars had significantly higher levels of water. Results from the Curiosity rover had previously found a high ratio of deuterium in Gale Crater, though not significantly high enough to suggest the former presence of an ocean. Other scientists caution that these results have not been confirmed, and point out that Martian climate models have not yet shown that the planet was warm enough in the past to support bodies of liquid water. Near the northern polar cap is the 81.4 kilometres (50.6 mi) wide Korolev Crater, which the Mars Express orbiter found to be filled with approximately 2,200 cubic kilometres (530 cu mi) of water ice. In November 2016, NASA reported finding a large amount of underground ice in the Utopia Planitia region. The volume of water detected has been estimated to be equivalent to the volume of water in Lake Superior (about 12,100 cubic kilometres). During observations from 2018 through 2021, the ExoMars Trace Gas Orbiter spotted indications of water, probably subsurface ice, in the Valles Marineris canyon system.
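The deuterium figures above can be checked directly, and the two ice-volume estimates in this passage can be put side by side; both calculations below use only numbers quoted in the text:

```python
# Deuterium enrichment: Mars vs. Earth, from the ratios quoted above.
D_H_MARS = 9.3e-4
D_H_EARTH = 1.56e-4
print(D_H_MARS / D_H_EARTH)   # ~6.0, within the "five to seven times" range

# Korolev Crater ice vs. the Utopia Planitia deposit (~Lake Superior), in km^3.
print(2_200 / 12_100)         # Korolev holds ~18% as much ice
```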
Orbital motion Mars's average distance from the Sun is roughly 230 million km (143 million mi), and its orbital period is 687 (Earth) days. The solar day (or sol) on Mars is only slightly longer than an Earth day: 24 hours, 39 minutes, and 35.244 seconds. A Martian year is equal to 1.8809 Earth years, or 1 year, 320 days, and 18.2 hours. The gravitational potential difference, and thus the delta-v needed to transfer between Earth and Mars, is the second lowest for any planetary transfer from Earth, after that to Venus. The axial tilt of Mars is 25.19° relative to its orbital plane, which is similar to the axial tilt of Earth. As a result, Mars has seasons like Earth, though on Mars they are nearly twice as long because its orbital period is that much longer. In the present day, the orientation of the north pole of Mars is close to the star Deneb. Mars has a relatively pronounced orbital eccentricity of about 0.09; of the seven other planets in the Solar System, only Mercury has a larger orbital eccentricity. It is known that in the past, Mars has had a much more circular orbit. At one point, 1.35 million Earth years ago, Mars had an eccentricity of roughly 0.002, much less than that of Earth today. Mars's cycle of eccentricity is 96,000 Earth years, compared to Earth's cycle of 100,000 years. Mars makes its closest approach to Earth around opposition, once every synodic period of 779.94 days; opposition should not be confused with conjunction, when Earth and Mars are on opposite sides of the Sun and form a straight line passing through it. The number of days between successive oppositions can range from 764 to 812. The distance at close approach varies between about 54 and 103 million km (34 and 64 million mi) due to the planets' elliptical orbits, which causes comparable variation in angular size. At their furthest, Mars and Earth can be as far as 401 million km (249 million mi) apart. The planets come into opposition near Mars's perihelion in 2003, 2018 and 2035, with the 2020 and 2033 events being particularly close to perihelic opposition. The mean apparent magnitude of Mars is +0.71 with a standard deviation of 1.05. Because the orbit of Mars is eccentric, the magnitude at opposition can range from about −3.0 to −1.4. The minimum brightness is magnitude +1.86 when the planet is near aphelion and in conjunction with the Sun. At its brightest, Mars (along with Jupiter) is second only to Venus in apparent brightness. Mars usually appears distinctly yellow, orange, or red. When farthest away from Earth, it is more than seven times farther away than when it is closest. Mars is usually close enough for particularly good viewing once or twice at 15-year or 17-year intervals. Optical ground-based telescopes are typically limited to resolving features about 300 kilometres (190 mi) across when Earth and Mars are closest, because of Earth's atmosphere. As Mars approaches opposition, it begins a period of retrograde motion, meaning it appears to move backwards in a looping curve with respect to the background stars. This retrograde motion lasts for about 72 days, and Mars reaches its peak apparent brightness in the middle of this interval. Moons Mars has two relatively small natural moons (compared to Earth's Moon), Phobos (about 22 km (14 mi) in diameter) and Deimos (about 12 km (7.5 mi) in diameter), which orbit at 9,376 km (5,826 mi) and 23,460 km (14,580 mi) from the planet's centre.
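Several of the figures above follow from two standard relations: the synodic period of two coplanar orbits, and Kepler's third law applied to the moons' orbital radii. In the sketch below, the sidereal year lengths, the average month length, and Mars's gravitational parameter GM are assumed standard values, not figures from the article:

```python
import math

# Synodic period from the two sidereal orbital periods (days, assumed).
P_EARTH, P_MARS = 365.256, 686.98
synodic = 1 / (1 / P_EARTH - 1 / P_MARS)
print(f"synodic period: {synodic:.1f} days")        # ~779.9, matching 779.94
print(f"              ~{synodic / 30.44:.1f} months")  # the ~26-month launch window

# Kepler's third law, T = 2*pi*sqrt(a^3/GM), using the orbital radii above.
GM_MARS = 4.2828e13                                  # m^3/s^2 (assumed)
for name, a in [("Phobos", 9.376e6), ("Deimos", 2.3460e7)]:
    T_h = 2 * math.pi * math.sqrt(a**3 / GM_MARS) / 3600
    print(f"{name}: {T_h:.1f} h")   # ~7.7 h and ~30.3 h vs. a ~24.6 h Martian day
```

The 7.7-hour orbit of Phobos, far shorter than the planet's rotation period, is why it appears to rise in the west, as described below.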
The origin of both moons is unclear, although a popular theory states that they were asteroids captured into Martian orbit. Both satellites were discovered in 1877 by Asaph Hall and were named after the characters Phobos (the deity of panic and fear) and Deimos (the deity of terror and dread), twins from Greek mythology who accompanied their father Ares, god of war, into battle. Mars was the Roman counterpart of Ares. In modern Greek, the planet retains its ancient name Ares (Aris: Άρης). From the surface of Mars, the motions of Phobos and Deimos appear different from those of Earth's Moon. Phobos rises in the west, sets in the east, and rises again in just 11 hours. Deimos, being only just outside synchronous orbit – where the orbital period would match the planet's period of rotation – rises as expected in the east, but slowly. Because the orbit of Phobos is below the synchronous altitude, tidal forces from Mars are gradually lowering its orbit. In about 50 million years, it could either crash into Mars's surface or break up into a ring structure around the planet. The origin of the two satellites is not well understood. Their low albedo and carbonaceous chondrite composition have been regarded as similar to asteroids, supporting a capture theory. The unstable orbit of Phobos would seem to point toward a relatively recent capture. But both have circular orbits near the equator, which is unusual for captured objects, and the required capture dynamics are complex. Accretion early in the history of Mars is plausible, but would not account for a composition resembling asteroids rather than Mars itself, if that is confirmed. Mars may have yet-undiscovered moons, smaller than 50 to 100 metres (160 to 330 ft) in diameter, and a dust ring is predicted to exist between Phobos and Deimos. A third possibility for their origin as satellites of Mars is the involvement of a third body or a type of impact disruption. More recent lines of evidence, indicating that Phobos has a highly porous interior and suggesting a composition containing mainly phyllosilicates and other minerals known from Mars, point toward an origin of Phobos from material ejected by an impact on Mars that reaccreted in Martian orbit, similar to the prevailing theory for the origin of Earth's Moon. Although the visible and near-infrared (VNIR) spectra of the moons of Mars resemble those of outer-belt asteroids, the thermal infrared spectra of Phobos are reported to be inconsistent with chondrites of any class. It is also possible that Phobos and Deimos were fragments of an older moon, formed from debris from a large impact on Mars and then destroyed by a more recent impact upon the satellite. More recently, a study by an international team of researchers suggested that a lost moon, at least fifteen times the size of Phobos, may have existed in the past: rocks recording tidal processes on the planet indicate that these tides may have been regulated by such a past moon. Human observations and exploration The history of observations of Mars is marked by oppositions of Mars, when the planet is closest to Earth and hence is most easily visible, which occur every couple of years. Even more notable are the perihelic oppositions of Mars, which are distinguished because Mars is close to perihelion, making it even closer to Earth. The ancient Sumerians named Mars Nergal, the god of war and plague.
During Sumerian times, Nergal was a minor deity of little significance, but, during later times, his main cult center was the city of Nineveh. In Mesopotamian texts, Mars is referred to as the "star of judgement of the fate of the dead". The existence of Mars as a wandering object in the night sky was also recorded by the ancient Egyptian astronomers and, by 1534 BCE, they were familiar with the retrograde motion of the planet. By the period of the Neo-Babylonian Empire, the Babylonian astronomers were making regular records of the positions of the planets and systematic observations of their behavior. For Mars, they knew that the planet made 37 synodic periods, or 42 circuits of the zodiac, every 79 years. They invented arithmetic methods for making minor corrections to the predicted positions of the planets. In Ancient Greece, the planet was known as Πυρόεις (Pyroeis, "fiery"); more commonly, the Greeks called the planet Ares, after their god of war. It was the Romans who named the planet Mars, for their god of war, often represented by the sword and shield of the planet's namesake. In the fourth century BCE, Aristotle noted that Mars disappeared behind the Moon during an occultation, indicating that the planet was farther away than the Moon. Ptolemy, a Greek living in Alexandria, attempted to address the problem of the orbital motion of Mars. Ptolemy's model and his collective work on astronomy were presented in the multi-volume collection later called the Almagest (from the Arabic for "greatest"), which became the authoritative treatise on Western astronomy for the next fourteen centuries. Literature from ancient China confirms that Mars was known by Chinese astronomers by no later than the fourth century BCE. In East Asian cultures, Mars is traditionally referred to as the "fire star" (火星), based on the Wuxing system. In 1609, Johannes Kepler published a ten-year study of the orbit of Mars, using the diurnal parallax of Mars, measured by Tycho Brahe, to make a preliminary calculation of the relative distance to the planet. From Brahe's observations of Mars, Kepler deduced that the planet orbited the Sun not in a circle, but in an ellipse. Moreover, Kepler showed that Mars sped up as it approached the Sun and slowed down as it moved farther away, in a manner that later physicists would explain as a consequence of the conservation of angular momentum. In 1610, the Italian astronomer Galileo Galilei made the first telescopic astronomical observations, including of Mars. With the telescope, the diurnal parallax of Mars was again measured in an effort to determine the Sun–Earth distance; this was first done by Giovanni Domenico Cassini in 1672. The early parallax measurements were hampered by the quality of the instruments. The only observed occultation of Mars by Venus was that of 13 October 1590, seen by Michael Maestlin at Heidelberg. By the 19th century, the resolution of telescopes reached a level sufficient for surface features to be identified. On 5 September 1877, a perihelic opposition of Mars occurred. The Italian astronomer Giovanni Schiaparelli used a 22-centimetre (8.7 in) telescope in Milan to help produce the first detailed map of Mars. These maps notably contained features he called canali, which, with the possible exception of the natural canyon Valles Marineris, were later shown to be an optical illusion. These canali were supposedly long, straight lines on the surface of Mars, to which he gave the names of famous rivers on Earth.
His term, which means "channels" or "grooves", was popularly mistranslated in English as "canals". Influenced by these observations, the orientalist Percival Lowell founded an observatory equipped with 30- and 45-centimetre (12- and 18-in) telescopes. The observatory was used for the exploration of Mars during the last good opportunity in 1894 and the following, less favorable, oppositions. Lowell published several books on Mars and life on the planet, which had a great influence on the public. The canali were independently observed by other astronomers, like Henri Joseph Perrotin and Louis Thollon in Nice, using one of the largest telescopes of that time. The seasonal changes (consisting of the diminishing of the polar caps and the dark areas formed during Martian summers), in combination with the canals, led to speculation about life on Mars, and it was a long-held belief that Mars contained vast seas and vegetation. As bigger telescopes were used, fewer long, straight canali were observed. During observations in 1909 by Eugène Antoniadi with an 84-centimetre (33 in) telescope, irregular patterns were observed, but no canali were seen. The first spacecraft sent from Earth to visit Mars was the Soviet Union's Mars 1, intended to fly by in 1963, but contact was lost en route. NASA's Mariner 4 followed and became the first spacecraft to successfully transmit from Mars; launched on 28 November 1964, it made its closest approach to the planet on 15 July 1965. Mariner 4 detected the weak Martian radiation belt, measured at about 0.1% that of Earth, and captured the first images of another planet from deep space. Once spacecraft visited the planet during the 1960s and 1970s, many previous conceptions of Mars were radically revised. After the results of the Viking life-detection experiments, the hypothesis of a dead planet was generally accepted. The data from Mariner 9 and Viking allowed better maps of Mars to be made. Between the shutdown of Viking 1 in 1982 and 1997, Mars was visited only by three unsuccessful probes: two that flew past without making contact (Phobos 1, 1988; Mars Observer, 1993) and one (Phobos 2, 1989) that malfunctioned in orbit before reaching its destination, the moon Phobos. In 1997, Mars Pathfinder became the first successful rover mission beyond the Moon and, together with Mars Global Surveyor (operated until late 2006), began an uninterrupted active robotic presence at Mars that has lasted to the present day. Mars Global Surveyor produced complete, extremely detailed maps of the Martian topography, magnetic field and surface minerals. Starting with these missions, a range of new, improved crewless spacecraft, including orbiters, landers, and rovers, have been sent to Mars, with successful missions by NASA (United States), JAXA (Japan), ESA (Europe), the United Kingdom, ISRO (India), Roscosmos (Russia), the United Arab Emirates, and CNSA (China) to study the planet's surface, climate, and geology, uncovering the history and dynamics of the hydrosphere of Mars and possible traces of ancient life. As of 2023, Mars is host to a fleet of functioning spacecraft: in orbit are 2001 Mars Odyssey, Mars Express, Mars Reconnaissance Orbiter, MAVEN, the ExoMars Trace Gas Orbiter, the Hope orbiter, and the Tianwen-1 orbiter; on the surface are the Mars Science Laboratory Curiosity rover and the Perseverance rover. Collected maps are available online at websites including Google Mars.
NASA provides two online tools: Mars Trek, which provides visualizations of the planet using data from 50 years of exploration, and Experience Curiosity, which simulates traveling on Mars in 3-D with Curiosity. As of February 2024, debris from Mars missions amounts to over seven tons; most of it consists of crashed and inactive spacecraft as well as discarded components. In April 2024, NASA selected several companies to begin studies on providing commercial services to further enable robotic science on Mars. Key areas include establishing telecommunications, payload delivery and surface imaging. Habitability and habitation During the late 19th century, it was widely accepted in the astronomical community that Mars had life-supporting qualities, including the presence of oxygen and water. However, in 1894 W. W. Campbell at Lick Observatory observed the planet and found that "if water vapor or oxygen occur in the atmosphere of Mars it is in quantities too small to be detected by spectroscopes then available". That observation contradicted many of the measurements of the time and was not widely accepted. Campbell and V. M. Slipher repeated the study in 1909 using better instruments, but with the same results. It was not until the findings were confirmed by W. S. Adams in 1925 that the myth of the Earth-like habitability of Mars was finally broken. However, even in the 1960s, articles were still published on Martian biology, setting aside explanations other than life for the seasonal changes on Mars. The current understanding of planetary habitability – the ability of a world to develop environmental conditions favorable to the emergence of life – favors planets that have liquid water on their surface. Most often this requires the orbit of a planet to lie within the habitable zone, which for the Sun is estimated to extend from within the orbit of Earth to about that of Mars. During perihelion, Mars dips inside this region, but Mars's thin (low-pressure) atmosphere prevents liquid water from existing over large regions for extended periods. The past flow of liquid water demonstrates the planet's potential for habitability. Recent evidence has suggested that any water on the Martian surface may have been too salty and acidic to support regular terrestrial life. The environmental conditions on Mars are a challenge to sustaining organic life: the planet has little heat transfer across its surface, poor shielding against bombardment by the solar wind due to the absence of a magnetosphere, and insufficient atmospheric pressure to retain water in liquid form (water instead sublimes to a gaseous state). Mars is nearly, or perhaps totally, geologically dead; the end of volcanic activity has apparently stopped the recycling of chemicals and minerals between the surface and interior of the planet. Evidence suggests that the planet was once significantly more habitable than it is today, but whether living organisms ever existed there remains unknown. The Viking probes of the mid-1970s carried experiments designed to detect microorganisms in Martian soil at their respective landing sites and had positive results, including a temporary increase in CO2 production on exposure to water and nutrients. This sign of life was later disputed by scientists, resulting in a continuing debate, with NASA scientist Gilbert Levin asserting that Viking may have found life.
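A standard equilibrium-temperature estimate illustrates why the habitable-zone argument above is marginal for Mars. In the sketch below, the solar constant, Mars's Bond albedo, and the Stefan–Boltzmann constant are assumed textbook values; only the 1.52 AU Sun distance quoted earlier comes from the article:

```python
S_EARTH = 1361.0     # W/m^2, solar constant at 1 AU (assumed)
d = 1.52             # Mars's distance from the Sun in AU (from the text)
albedo = 0.25        # assumed Bond albedo of Mars
sigma = 5.670e-8     # W/(m^2 K^4), Stefan-Boltzmann constant

S_mars = S_EARTH / d**2
print(S_mars / S_EARTH)    # ~0.43 -> the "43% of the sunlight" figure

# Radiative balance: absorbed sunlight = emitted thermal radiation.
T_eq = (S_mars * (1 - albedo) / (4 * sigma)) ** 0.25
print(T_eq)                # ~210 K, about 63 degrees below water's freezing point
```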
A 2014 analysis of Martian meteorite EETA79001 found chlorate, perchlorate, and nitrate ions in sufficiently high concentrations to suggest that they are widespread on Mars. UV and X-ray radiation would turn chlorate and perchlorate ions into other, highly reactive oxychlorines, indicating that any organic molecules would have to be buried under the surface to survive. Small quantities of methane and formaldehyde detected by Mars orbiters are both claimed to be possible evidence for life, as these chemical compounds would quickly break down in the Martian atmosphere. Alternatively, these compounds may instead be replenished by volcanic or other geological means, such as serpentinization. Impact glass, formed by the impact of meteors, which on Earth can preserve signs of life, has also been found on the surface of impact craters on Mars; such glass could likewise have preserved signs of life, if life existed at those sites. The Cheyava Falls rock, discovered on Mars in June 2024, has been designated by NASA as a "potential biosignature" and was core-sampled by the Perseverance rover for possible return to Earth and further examination. Although highly intriguing, the find permits no definitive determination of a biological or abiotic origin with the data currently available. Several plans for a human mission to Mars have been proposed, but none have come to fruition. The NASA Authorization Act of 2017 directed NASA to study the feasibility of a crewed Mars mission in the early 2030s; the resulting report concluded that this would be unfeasible. In 2021, China announced plans for a crewed Mars mission in 2033. Privately held companies such as SpaceX have also proposed plans to send humans to Mars, with the eventual goal of settling the planet. As of 2024, SpaceX has proceeded with the development of the Starship launch vehicle with the goal of Mars colonization. In plans shared at the company in April 2024, Elon Musk envisioned the beginning of a Mars colony within the following twenty years, enabled by the planned mass manufacturing of Starship and initially sustained by resupply from Earth and by in-situ resource utilization on Mars, until the colony reaches full self-sustainability. Any future human mission to Mars will likely take place within the optimal Mars launch window, which occurs every 26 months. The moon Phobos has been proposed as an anchor point for a space elevator. Besides national space agencies and space companies, groups such as the Mars Society and The Planetary Society advocate for human missions to Mars. In culture Mars is named after the Roman god of war (Greek Ares), but was also associated with the demi-god Heracles (Roman Hercules) by ancient Greek astronomers, as detailed by Aristotle. This association between Mars and war dates back at least to Babylonian astronomy, in which the planet was named for the god Nergal, deity of war and destruction. It persisted into modern times, as exemplified by Gustav Holst's orchestral suite The Planets, whose famous first movement labels Mars "The Bringer of War". The planet's symbol, a circle with a spear pointing out to the upper right, is also used as a symbol for the male gender. The symbol dates from at least the 11th century, though a possible predecessor has been found in the Greek Oxyrhynchus Papyri. The idea that Mars was populated by intelligent Martians became widespread in the late 19th century.
Schiaparelli's "canali" observations, combined with Percival Lowell's books on the subject, put forward the standard notion of a planet that was a drying, cooling, dying world with ancient civilizations constructing irrigation works. Many other observations and proclamations by notable personalities added to what has been termed "Mars Fever". In the present day, high-resolution mapping of the surface of Mars has revealed no artifacts of habitation, but pseudoscientific speculation about intelligent life on Mars still continues. Reminiscent of the canali observations, these speculations are based on small-scale features perceived in the spacecraft images, such as "pyramids" and the "Face on Mars". In his book Cosmos, planetary astronomer Carl Sagan wrote: "Mars has become a kind of mythic arena onto which we have projected our Earthly hopes and fears." The depiction of Mars in fiction has been stimulated by its dramatic red color and by nineteenth-century scientific speculations that its surface conditions might support not just life but intelligent life. This gave rise to many science fiction stories involving these concepts, such as H. G. Wells's The War of the Worlds, in which Martians seek to escape their dying planet by invading Earth; Ray Bradbury's The Martian Chronicles, in which human explorers accidentally destroy a Martian civilization; as well as Edgar Rice Burroughs's Barsoom series, C. S. Lewis's novel Out of the Silent Planet (1938), and a number of Robert A. Heinlein stories before the mid-sixties. Since then, depictions of Martians have also extended to animation. A comic figure of an intelligent Martian, Marvin the Martian, appeared in Haredevil Hare (1948) as a character in the Looney Tunes animated cartoons of Warner Brothers, and has continued as part of popular culture to the present. After the Mariner and Viking spacecraft had returned pictures of Mars as a lifeless and canal-less world, these ideas about Mars were abandoned; for many science-fiction authors, the new discoveries initially seemed like a constraint, but eventually the post-Viking knowledge of Mars became itself a source of inspiration for works like Kim Stanley Robinson's Mars trilogy.
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/NGC_2119] | [TOKENS: 78] |
NGC 2119 (also identified as UGC 3380 or PGC 18136) is an elliptical galaxy in the constellation Orion. It was discovered by Édouard Stephan on January 9, 1880.
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Social_network#cite_ref-28] | [TOKENS: 5247] |
A social network is a social structure consisting of a set of social actors (such as individuals or organizations), networks of dyadic ties, and other social interactions between actors. The social network perspective provides a set of methods for analyzing the structure of whole social entities along with a variety of theories explaining the patterns observed in these structures. The study of these structures uses social network analysis to identify local and global patterns, locate influential entities, and examine network dynamics. For instance, social network analysis has been used in studying the spread of misinformation on social media platforms or analyzing the influence of key figures in social networks. Social networks and their analysis form an inherently interdisciplinary academic field which emerged from social psychology, sociology, statistics, and graph theory. Georg Simmel authored early structural theories in sociology emphasizing the dynamics of triads and the "web of group affiliations". Jacob Moreno is credited with developing the first sociograms in the 1930s to study interpersonal relationships. These approaches were mathematically formalized in the 1950s, and theories and methods of social networks became pervasive in the social and behavioral sciences by the 1980s. Social network analysis is now one of the major paradigms in contemporary sociology, and is also employed in a number of other social and formal sciences. Together with other complex networks, it forms part of the nascent field of network science. Overview The social network is a theoretical construct useful in the social sciences to study relationships between individuals, groups, organizations, or even entire societies (social units; see differentiation). The term is used to describe a social structure determined by such interactions. The ties through which any given social unit connects represent the convergence of the various social contacts of that unit. This theoretical approach is, necessarily, relational. An axiom of the social network approach to understanding social interaction is that social phenomena should be primarily conceived and investigated through the properties of relations between and within units, instead of the properties of these units themselves. Thus, one common criticism of social network theory is that individual agency is often ignored, although this may not be the case in practice (see agent-based modeling). Precisely because many different types of relations, singular or in combination, form these network configurations, network analytics are useful to a broad range of research enterprises. In social science, these fields of study include, but are not limited to, anthropology, biology, communication studies, economics, geography, information science, organizational studies, social psychology, sociology, and sociolinguistics. History In the late 1890s, both Émile Durkheim and Ferdinand Tönnies foreshadowed the idea of social networks in their theories and research of social groups. Tönnies argued that social groups can exist as personal and direct social ties that either link individuals who share values and beliefs (Gemeinschaft, German, commonly translated as "community") or impersonal, formal, and instrumental social links (Gesellschaft, German, commonly translated as "society").
Durkheim gave a non-individualistic explanation of social facts, arguing that social phenomena arise when interacting individuals constitute a reality that can no longer be accounted for in terms of the properties of individual actors. Georg Simmel, writing at the turn of the twentieth century, pointed to the nature of networks and the effect of network size on interaction, and examined the likelihood of interaction in loosely knit networks rather than groups. Major developments in the field came in the 1930s from several groups in psychology, anthropology, and mathematics working independently. In psychology, in the 1930s, Jacob L. Moreno began systematic recording and analysis of social interaction in small groups, especially classrooms and work groups (see sociometry). In anthropology, the foundation for social network theory is the theoretical and ethnographic work of Bronislaw Malinowski, Alfred Radcliffe-Brown, and Claude Lévi-Strauss. A group of social anthropologists associated with Max Gluckman and the Manchester School, including John A. Barnes, J. Clyde Mitchell and Elizabeth Bott Spillius, often are credited with performing some of the first fieldwork from which network analyses were performed, investigating community networks in southern Africa, India and the United Kingdom. Concomitantly, British anthropologist S. F. Nadel codified a theory of social structure that was influential in later network analysis. In sociology, the early (1930s) work of Talcott Parsons set the stage for taking a relational approach to understanding social structure. Later, drawing upon Parsons' theory, the work of sociologist Peter Blau provided a strong impetus for analyzing the relational ties of social units with his work on social exchange theory. By the 1970s, a growing number of scholars worked to combine the different tracks and traditions. One group consisted of sociologist Harrison White and his students at the Harvard University Department of Social Relations. Also independently active in the Harvard Social Relations department at the time were Charles Tilly, who focused on networks in political and community sociology and social movements, and Stanley Milgram, who developed the "six degrees of separation" thesis. Mark Granovetter and Barry Wellman are among the former students of White who elaborated and championed the analysis of social networks. Beginning in the late 1990s, social network analysis experienced a surge of work by sociologists, political scientists, and physicists such as Duncan J. Watts, Albert-László Barabási, Peter Bearman, Nicholas A. Christakis, James H. Fowler, and others, developing and applying new models and methods to emerging data available about online social networks, as well as "digital traces" regarding face-to-face networks. Levels of analysis In general, social networks are self-organizing, emergent, and complex, such that a globally coherent pattern appears from the local interaction of the elements that make up the system. These patterns become more apparent as network size increases. However, a global network analysis of, for example, all interpersonal relationships in the world is not feasible and is likely to contain so much information as to be uninformative. Practical limitations of computing power, ethics, and participant recruitment and payment also limit the scope of a social network analysis.
The nuances of a local system may be lost in a large network analysis, hence the quality of information may be more important than its scale for understanding network properties. Thus, social networks are analyzed at the scale relevant to the researcher's theoretical question. Although levels of analysis are not necessarily mutually exclusive, there are three general levels into which networks may fall: micro-level, meso-level, and macro-level. At the micro-level, social network research typically begins with an individual, snowballing as social relationships are traced, or may begin with a small group of individuals in a particular social context. Dyadic level: A dyad is a social relationship between two individuals. Network research on dyads may concentrate on the structure of the relationship (e.g. multiplexity, strength), social equality, and tendencies toward reciprocity/mutuality. Triadic level: Add one individual to a dyad, and you have a triad. Research at this level may concentrate on factors such as balance and transitivity, as well as social equality and tendencies toward reciprocity/mutuality. In Fritz Heider's balance theory, the triad is the key to social dynamics. The discord in a rivalrous love triangle is an example of an unbalanced triad, likely to change to a balanced triad by a change in one of the relations. The dynamics of social friendships in society have been modeled by balancing triads; the study is carried forward with the theory of signed graphs. Actor level: The smallest unit of analysis in a social network is an individual in their social setting, i.e., an "actor" or "ego". Ego network analysis focuses on network characteristics such as size, relationship strength, density, centrality, prestige, and roles such as isolates, liaisons, and bridges. Such analyses are most commonly used in the fields of psychology or social psychology, ethnographic kinship analysis, or other genealogical studies of relationships between individuals. Subset level: Subset levels of network research problems begin at the micro-level, but may cross over into the meso-level of analysis. Subset level research may focus on distance and reachability, cliques, cohesive subgroups, or other group actions or behavior. In general, meso-level theories begin with a population size that falls between the micro- and macro-levels. However, meso-level may also refer to analyses that are specifically designed to reveal connections between micro- and macro-levels. Meso-level networks are low density and may exhibit causal processes distinct from interpersonal micro-level networks. Organizations: Formal organizations are social groups that distribute tasks for a collective goal. Network research on organizations may focus on either intra-organizational or inter-organizational ties in terms of formal or informal relationships. Intra-organizational networks themselves often contain multiple levels of analysis, especially in larger organizations with multiple branches, franchises or semi-autonomous departments. In these cases, research is often conducted at a work group level and organization level, focusing on the interplay between the two structures. Experiments with networked groups online have documented ways to optimize group-level coordination through diverse interventions, including the addition of autonomous agents to the groups. Randomly distributed networks: Exponential random graph models of social networks became state-of-the-art methods of social network analysis in the 1980s.
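As a concrete illustration of the actor-level measures described above (before turning to statistical models of whole networks), the widely used networkx library can compute the size, density, and clustering of an ego network; the toy friendship graph below is invented for illustration:

```python
import networkx as nx

# A small undirected friendship graph around a focal actor ("ego").
G = nx.Graph([
    ("ego", "a"), ("ego", "b"), ("ego", "c"), ("ego", "d"),
    ("a", "b"),                      # one tie among the alters
    ("d", "e"),                      # 'e' lies outside the ego network
])

ego_net = nx.ego_graph(G, "ego")     # ego plus its direct alters
print(ego_net.number_of_nodes() - 1) # ego network size: 4 alters
print(nx.density(ego_net))           # how knit-together the neighborhood is (0.5)
print(nx.clustering(G, "ego"))       # fraction of alter pairs that are tied (~0.17)
```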
This framework has the capacity to represent social-structural effects commonly observed in many human social networks, including general degree-based structural effects as well as reciprocity and transitivity, and, at the node level, homophily and attribute-based activity and popularity effects, as derived from explicit hypotheses about dependencies among network ties. Parameters are given in terms of the prevalence of small subgraph configurations in the network and can be interpreted as describing the combinations of local social processes from which a given network emerges. These probability models for networks on a given set of actors allow generalization beyond the restrictive dyadic independence assumption of micro-networks, allowing models to be built from theoretical structural foundations of social behavior. Scale-free networks: A scale-free network is a network whose degree distribution follows a power law, at least asymptotically. In network theory, a scale-free ideal network is a random network with a degree distribution that unravels the size distribution of social groups. Specific characteristics of scale-free networks vary with the theories and analytical tools used to create them; in general, however, scale-free networks have some common characteristics. One notable characteristic is the relative commonness of vertices with a degree that greatly exceeds the average. The highest-degree nodes are often called "hubs", and may serve specific purposes in their networks, although this depends greatly on the social context. Another general characteristic of scale-free networks is the clustering coefficient distribution, which decreases as the node degree increases. This distribution also follows a power law. The Barabási–Albert model of network evolution is an example of a scale-free network. Rather than tracing interpersonal interactions, macro-level analyses generally trace the outcomes of interactions, such as economic or other resource transfer interactions over a large population. Large-scale networks: Large-scale network is a term somewhat synonymous with "macro-level". It is primarily used in social and behavioral sciences, and in economics. Originally, the term was used extensively in the computer sciences (see large-scale network mapping). Complex networks: Most larger social networks display features of social complexity, which involves substantial non-trivial features of network topology, with patterns of complex connections between elements that are neither purely regular nor purely random (see complexity science, dynamical systems, and chaos theory), as do biological and technological networks. Such complex network features include a heavy tail in the degree distribution, a high clustering coefficient, assortativity or disassortativity among vertices, community structure (see stochastic block model), and hierarchical structure. In the case of agency-directed networks, these features also include reciprocity, triad significance profile (TSP; see network motif), and other features. In contrast, many of the mathematical models of networks that have been studied in the past, such as lattices and random graphs, do not show these features. Theoretical links Various theoretical frameworks have been imported for the use of social network analysis. The most prominent of these are graph theory, balance theory, social comparison theory, and, more recently, the social identity approach.
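The hub-dominated degree distribution described above is easy to reproduce with the Barabási–Albert generator in networkx; the network size and attachment parameter here are arbitrary choices for illustration:

```python
import networkx as nx

# Grow a 10,000-node network by preferential attachment (m = 2 new edges
# per arriving node), the mechanism behind scale-free degree distributions.
G = nx.barabasi_albert_graph(n=10_000, m=2, seed=42)

degrees = sorted((d for _, d in G.degree()), reverse=True)
print(degrees[:5])                 # a handful of very high-degree "hubs"
print(sum(degrees) / len(degrees)) # versus a mean degree of only ~4
```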
Few complete theories have been produced from social network analysis. Two that have been are structural role theory and heterophily theory. The basis of heterophily theory was the finding in one study that more numerous weak ties can be important in seeking information and innovation, because cliques tend to have more homogeneous opinions and to share many common traits. This homophilic tendency was the reason for the members of the cliques to be attracted together in the first place. However, being similar, each member of the clique would also know more or less what the other members knew. To find new information or insights, members of the clique will have to look beyond the clique to its other friends and acquaintances. This is what Granovetter called "the strength of weak ties". Structural holes In the context of networks, social capital exists where people have an advantage because of their location in a network. Contacts in a network provide information, opportunities and perspectives that can be beneficial to the central player in the network. Most social structures tend to be characterized by dense clusters of strong connections. Information within these clusters tends to be rather homogeneous and redundant. Non-redundant information is most often obtained through contacts in different clusters. When two separate clusters possess non-redundant information, there is said to be a structural hole between them. Thus, a network that bridges structural holes will provide network benefits that are in some degree additive, rather than overlapping. An ideal network has a vine-and-cluster structure, providing access to many different clusters and structural holes. Networks rich in structural holes are a form of social capital in that they offer information benefits. The main player in a network that bridges structural holes is able to access information from diverse sources and clusters. For example, in business networks, this is beneficial to an individual's career, because they are more likely to hear of job openings and opportunities if their network spans a wide range of contacts in different industries and sectors. This concept is similar to Mark Granovetter's theory of weak ties, which rests on the basis that having a broad range of contacts is most effective for job attainment. Structural holes have been widely applied in social network analysis, resulting in applications in a wide range of practical scenarios as well as machine learning-based social prediction. Research clusters Research has used network analysis to examine networks created when artists are exhibited together in museum exhibitions. Such networks have been shown to affect an artist's recognition in history and historical narratives, even when controlling for the individual accomplishments of the artist. Other work examines how network grouping of artists can affect an individual artist's auction performance. An artist's status has been shown to increase when associated with higher-status networks, though this association has diminishing returns over an artist's career. In J. A. Barnes' day, a "community" referred to a specific geographic location, and studies of community ties had to do with who talked, associated, traded, and attended church with whom. Today, however, there are extended "online" communities developed through telecommunications devices and social network services. Such devices and services require extensive and ongoing maintenance and analysis, often using network science methods.
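networkx also implements Burt's structural-hole measures, which make the brokerage argument above concrete: an actor bridging two otherwise-separate cliques shows a low constraint score and a high effective size. The toy graph below is invented for illustration:

```python
import networkx as nx

# Two closed three-person cliques plus one broker who bridges them.
left = nx.complete_graph(["a1", "a2", "a3"])
right = nx.complete_graph(["b1", "b2", "b3"])
G = nx.union(left, right)
G.add_edges_from([("broker", "a1"), ("broker", "b1")])  # spans the hole

print(nx.effective_size(G)["broker"])   # 2.0: two fully non-redundant contacts
print(nx.constraint(G)["broker"])       # low (~0.5) -> brokerage advantage
print(nx.constraint(G)["a2"])           # high (~1.1) -> redundant, closed cluster
```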
Community development studies today also make extensive use of such methods. Complex networks require methods specific to modelling and interpreting social complexity and complex adaptive systems, including techniques of dynamic network analysis. Mechanisms such as dual-phase evolution explain how temporal changes in connectivity contribute to the formation of structure in social networks. The study of social networks is being used to examine the nature of interdependencies between actors and the ways in which these are related to outcomes of conflict and cooperation. Areas of study include cooperative behavior among participants in collective actions such as protests; promotion of peaceful behavior, social norms, and public goods within communities through networks of informal governance; the role of social networks in both intrastate conflict and interstate conflict; and social networking among politicians, constituents, and bureaucrats. In criminology and urban sociology, much attention has been paid to the social networks among criminal actors. For example, murders can be seen as a series of exchanges between gangs: murders diffuse outwards from a single source, because weaker gangs cannot afford to kill members of stronger gangs in retaliation, but must commit other violent acts to maintain their reputation for strength. Diffusion of ideas and innovations studies focus on the spread and use of ideas from one actor or culture to another. This line of research seeks to explain why some become "early adopters" of ideas and innovations, and links social network structure with facilitating or impeding the spread of an innovation. A case in point is the social diffusion of linguistic innovation such as neologisms. Experiments and large-scale field trials (e.g., by Nicholas Christakis and collaborators) have shown that cascades of desirable behaviors can be induced in social groups, in settings as diverse as Honduran villages, Indian slums, and the lab. Still other experiments have documented the experimental induction of social contagion of voting behavior, emotions, risk perception, and commercial products. In demography, the study of social networks has led to new sampling methods for estimating and reaching populations that are hard to enumerate (for example, homeless people or intravenous drug users). For example, respondent-driven sampling is a network-based sampling technique that relies on respondents to a survey recommending further respondents. The field of sociology focuses almost entirely on networks of outcomes of social interactions. More narrowly, economic sociology considers behavioral interactions of individuals and groups through social capital and social "markets". Sociologists, such as Mark Granovetter, have developed core principles about the interactions of social structure, information, ability to punish or reward, and trust that frequently recur in their analyses of political, economic and other institutions. Granovetter examines how social structures and social networks can affect economic outcomes like hiring, price, productivity and innovation, and describes sociologists' contributions to analyzing the impact of social structure and networks on the economy.
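A minimal independent-cascade simulation, one common model in the diffusion studies described above, shows how a single "adopter" can (or can fail to) trigger a chain of adoptions; the graph, seed choice, and adoption probability below are invented for illustration:

```python
import random

def cascade(neighbors, seeds, p=0.2, rng=random.Random(7)):
    """Independent cascade: each new adopter gets one chance to convert
    each of its neighbors with probability p."""
    adopted, frontier = set(seeds), list(seeds)
    while frontier:
        node = frontier.pop()
        for nb in neighbors[node]:
            if nb not in adopted and rng.random() < p:
                adopted.add(nb)
                frontier.append(nb)
    return adopted

# A toy ring of 8 actors with one long-range tie, seeded at actor 0.
ring = {i: [(i - 1) % 8, (i + 1) % 8] for i in range(8)}
ring[0].append(4); ring[4].append(0)
print(sorted(cascade(ring, seeds=[0])))
```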
Analysis of social networks is increasingly incorporated into health care analytics, not only in epidemiological studies but also in models of patient communication and education, disease prevention, mental health diagnosis and treatment, and in the study of health care organizations and systems. Human ecology is an interdisciplinary and transdisciplinary study of the relationship between humans and their natural, social, and built environments. The scientific philosophy of human ecology has a diffuse history with connections to geography, sociology, psychology, anthropology, zoology, and natural ecology. In the study of literary systems, network analysis has been applied by Anheier, Gerhards and Romo, De Nooy, Senekal, and Lotker to study various aspects of how literature functions. The basic premise is that polysystem theory, which has been around since the writings of Even-Zohar, can be integrated with network theory, and the relationships between different actors in the literary network, e.g. writers, critics, publishers, literary histories, etc., can be mapped using visualization from SNA. Research in this area studies formal or informal organizational relationships, organizational communication, economics, economic sociology, and other resource transfers. Social networks have also been used to examine how organizations interact with each other, characterizing the many informal connections that link executives together, as well as associations and connections between individual employees at different organizations. Many organizational social network studies focus on teams. Within team network studies, research assesses, for example, the predictors and outcomes of centrality and power, density and centralization of team instrumental and expressive ties, and the role of between-team networks. Intra-organizational networks have been found to affect organizational commitment, organizational identification, and interpersonal citizenship behaviour. Social capital is a form of economic and cultural capital in which social networks are central, transactions are marked by reciprocity, trust, and cooperation, and market agents produce goods and services not mainly for themselves, but for a common good. Social capital is split into three dimensions: the structural, the relational and the cognitive dimension. The structural dimension describes how partners interact with each other and which specific partners meet in a social network. Also, the structural dimension of social capital indicates the level of ties among organizations. This dimension is highly connected to the relational dimension, which refers to trustworthiness, norms, expectations and identifications of the bonds between partners. The relational dimension explains the nature of these ties, which is mainly illustrated by the level of trust accorded to the network of organizations. The cognitive dimension analyses the extent to which organizations share common goals and objectives as a result of their ties and interactions. Social capital is a sociological concept about the value of social relations and the role of cooperation and confidence in achieving positive outcomes. The term refers to the value individuals can get from their social ties. For example, newly arrived immigrants can make use of their social ties to established migrants to acquire jobs they may otherwise have trouble getting (e.g., because of unfamiliarity with the local language). A positive relationship exists between social capital and the intensity of social network use.
In a dynamic framework, higher activity in a network feeds into higher social capital, which itself encourages more activity. Another research cluster focuses on brand image and promotional strategy effectiveness, taking into account the impact of customer participation on sales and brand image. This is gauged through techniques such as sentiment analysis, which rely on mathematical areas of study such as data mining and analytics. This area of research produces vast numbers of commercial applications, as the main goal of any study is to understand consumer behaviour and drive sales. In many organizations, members tend to focus their activities inside their own groups, which stifles creativity and restricts opportunities. A player whose network bridges structural holes has an advantage in detecting and developing rewarding opportunities. Such a player can mobilize social capital by acting as a "broker" of information between two clusters that otherwise would not have been in contact, thus providing access to new ideas, opinions and opportunities. British philosopher and political economist John Stuart Mill writes, "it is hardly possible to overrate the value of placing human beings in contact with persons dissimilar to themselves.... Such communication [is] one of the primary sources of progress." Thus, a player with a network rich in structural holes can add value to an organization through new ideas and opportunities. This, in turn, helps an individual's career development and advancement. A social capital broker also reaps control benefits from being the facilitator of information flow between contacts. Full communication with exploratory mindsets and information exchange generated by dynamically alternating positions in a social network promotes creative and deep thinking. In the case of the consulting firm Eden McCallum, the founders were able to advance their careers by bridging their connections with former big-three consulting firm consultants and mid-size industry firms. By bridging structural holes and mobilizing social capital, players can advance their careers by executing new opportunities between contacts. There has been research that both substantiates and refutes the benefits of information brokerage. A study of high-tech Chinese firms by Zhixing Xiao found that the control benefits of structural holes are "dissonant to the dominant firm-wide spirit of cooperation and the information benefits cannot materialize due to the communal sharing values" of such organizations. However, this study analyzed only Chinese firms, which tend to have strong communal sharing values. Information and control benefits of structural holes may still be valuable in firms that are not quite as inclusive and cooperative on the firm-wide level. In 2004, Ronald Burt studied 673 managers who ran the supply chain for one of America's largest electronics companies. He found that managers who often discussed issues with other groups were better paid, received more positive job evaluations and were more likely to be promoted. Thus, bridging structural holes can be beneficial to an organization, and in turn, to an individual's career. Computer networks combined with social networking software produce a new medium for social interaction. A relationship over a computerized social networking service can be characterized by content, direction, and strength. The content of a relation refers to the resource that is exchanged.
In a computer-mediated communication context, social pairs exchange different kinds of information, including sending a data file or a computer program as well as providing emotional support or arranging a meeting. With the rise of electronic commerce, information exchanged may also correspond to exchanges of money, goods or services in the "real" world. Social network analysis methods have become essential to examining these types of computer-mediated communication. In addition, the sheer size and the volatile nature of social media have given rise to new network metrics. A key concern with networks extracted from social media is the lack of robustness of network metrics given missing data. Following the pattern of homophily, ties between people are most likely to form between nodes that are most similar to each other; likewise, under neighbourhood segregation, individuals are most likely to inhabit the same regional areas as other individuals who are like them. Social networks can therefore be used both to simulate the process of homophily and to measure the degree of segregation or homophily, that is, the level of exposure of different groups to each other within a current social network of individuals in a certain area.
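A minimal sketch of one such measurement, assuming the networkx library: the attribute assortativity coefficient can serve as a simple homophily index (the group labels and graph here are invented for illustration):

import networkx as nx

G = nx.Graph()
# Mostly within-group ties, plus a single cross-group tie.
G.add_edges_from([("u1", "u2"), ("u2", "u3"), ("u1", "u3"),   # group X
                  ("v1", "v2"), ("v2", "v3"), ("v1", "v3"),   # group Y
                  ("u1", "v1")])                              # bridge
groups = {"u1": "X", "u2": "X", "u3": "X", "v1": "Y", "v2": "Y", "v3": "Y"}
nx.set_node_attributes(G, groups, "group")

# +1 means ties occur only within groups (strong homophily/segregation),
# 0 means random mixing, and negative values indicate heterophily.
r = nx.attribute_assortativity_coefficient(G, "group")
print(f"group assortativity (homophily index): {r:.2f}")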
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Data_center] | [TOKENS: 7132] |
Contents Data center A data center is a facility used to house computer systems and associated components, such as telecommunications and storage systems. Since IT operations are crucial for business continuity, a data center generally includes redundant or backup components and infrastructure for power supply, data communication connections, environmental controls (e.g., cooling, fire suppression), and various security devices. Data centers are the foundation of the digital infrastructure that powers the modern economy, aggregating collective computing demands for cloud services, video streaming, blockchain and crypto mining, machine learning, and virtual reality. Large data centers operate at an industrial scale, requiring significant energy. Estimated global data center electricity consumption in 2024 was around 415 terawatt-hours (TWh), or about 1.5% of global electricity demand. The IEA projects that data center electricity consumption could double by 2030. High demand, driven by artificial intelligence (AI) and machine learning workloads, is accelerating the deployment of high-performance servers, leading to greater power density and increased strain on electric grids. Data centers can vary widely in terms of size, power requirements, redundancy, and overall structure. Four common categories used to segment types of data centers are onsite data centers, colocation facilities, hyperscale data centers, and edge data centers. In particular, colocation centers often host private peering connections between their customers, internet transit providers, cloud providers, meet-me rooms for connecting customers together, Internet exchange points, and landing points and terminal equipment for fiber optic submarine communication cables, all of which are critical to connecting the internet. History Data centers have their roots in the huge computer rooms of the 1940s, typified by ENIAC, one of the earliest examples of a data center.[note 1] Early computer systems, complex to operate and maintain, required a special environment in which to operate. Many cables were necessary to connect all the components, and methods to accommodate and organize these were devised, such as standard racks to mount equipment, raised floors, and cable trays (installed overhead or under the elevated floor). A single mainframe required a great deal of power and had to be cooled to avoid overheating. Security became important – computers were expensive, and were often used for military purposes.[note 2] Basic design guidelines for controlling access to the computer room were therefore devised. During the microcomputer industry boom of the 1980s, users started to deploy computers everywhere, in many cases with little or no care about operating requirements. However, as information technology (IT) operations started to grow in complexity, organizations grew aware of the need to control IT resources. The availability of inexpensive networking equipment, coupled with new standards for network structured cabling, made it possible to use a hierarchical design that put the servers in a specific room inside the company. The use of the term data center, as applied to specially designed computer rooms, started to gain popular recognition about this time.[note 3] A boom of data centers came during the dot-com bubble of 1997–2000.[note 4] Companies needed fast Internet connectivity and non-stop operation to deploy systems and to establish a presence on the Internet. Installing such equipment was not viable for many smaller companies.
Many companies started building very large facilities, called internet data centers (IDCs), which provide enhanced capabilities, such as crossover backup: "If a Bell Atlantic line is cut, we can transfer them to ... to minimize the time of outage." The term cloud data centers (CDCs) has also been used. Increasingly, the division between these terms has almost disappeared, and they are being integrated into the term data center. The global data center market saw steady growth in the 2010s, with a notable acceleration in the latter half of the decade. According to Gartner, worldwide data center infrastructure spending reached $200 billion in 2021, representing a 6% increase from 2020 despite the economic challenges posed by the COVID-19 pandemic. The latter part of the 2010s and early 2020s saw a significant shift towards AI and machine learning applications, generating a global boom for more powerful and efficient data center infrastructure. As of March 2021, global data creation was projected to grow to more than 180 zettabytes by 2025, up from 64.2 zettabytes in 2020. The United States is currently the foremost leader in data center infrastructure, hosting 5,381 data centers as of March 2024, the highest number of any country worldwide. According to global consultancy McKinsey & Co., U.S. market demand is expected to double to 35 gigawatts (GW) by 2030, up from 17 GW in 2022. As of 2023, the U.S. accounts for roughly 40 percent of the global market. In 2025, it was estimated that U.S. GDP growth would have been only 0.1% without the investments in data centers for artificial intelligence. A study published by the Electric Power Research Institute (EPRI) in May 2024 estimates that U.S. data center power consumption could range from 4.6% to 9.1% of the country's generation by 2030. As of 2023, about 80% of U.S. data center load was concentrated in 15 states, led by Virginia and Texas. Data center design Data centers house critical computing resources in a controlled environment and must generally operate with very high availability. Key design elements include providing power for the equipment, temperature and humidity control, cabling, fire safety, and security. Information security is also a concern, and for this reason, a data center has to offer a secure environment that minimizes the chances of a security breach. Industry research company International Data Corporation (IDC) puts the average age of a data center at nine years old. Gartner, another research company, says data centers older than seven years are obsolete. The growth in data (163 zettabytes by 2025) is one factor driving the need for data centers to modernize. Focus on modernization is not new: rapid obsolescence of data center equipment was a concern by at least 2007, and in 2011 the Uptime Institute was concerned about aging equipment.[note 5] The Telecommunications Industry Association's Telecommunications Infrastructure Standard for Data Centers specifies the minimum requirements for telecommunications infrastructure of data centers and computer rooms, including single-tenant enterprise data centers and multi-tenant Internet hosting data centers. The topology proposed in this document is intended to be applicable to any size data center. Telcordia GR-3160, NEBS Requirements for Telecommunications Data Center Equipment and Spaces, provides guidelines for data center spaces within telecommunications networks, and environmental requirements for the equipment intended for installation in those spaces.
These criteria were developed jointly by Telcordia and industry representatives, and may be applied to data center spaces housing data processing or information technology (IT) equipment. Backup or continuous onsite power consists of one or more uninterruptible power supplies, battery banks, and diesel, gas turbine, or gas engine generating sets. Greater primary fuel energy efficiency can be achieved with the use of cogeneration technology, generating electricity, heating and cooling onsite. To prevent single points of failure, all elements of the electrical systems, including backup systems, are typically given redundant copies, and critical servers are connected to both the A-side and B-side power feeds. This arrangement is often made to achieve N+1 redundancy in the systems. Static transfer switches are sometimes used to ensure instantaneous switchover from one supply to the other in the event of a power failure.[citation needed] Options for low-voltage cable routing include data cabling routed through overhead cable trays, and raised-floor cabling, both for security reasons and to avoid the extra cost of cooling systems over the racks; smaller or less expensive data centers may instead use anti-static tiles as a flooring surface. Maintaining suitable temperature and humidity levels is critical to preventing equipment damage caused by overheating. Overheating can cause components, usually the silicon or copper of the wires or circuits, to melt, causing loose connections and fire hazards. A typical temperature control method is airflow management: the practice of achieving data center cooling efficiency by preventing the recirculation of hot exhaust air and by reducing bypass airflow. Common approaches include hot-aisle/cold-aisle containment and the deployment of in-row cooling units, which position cooling directly between server racks to intercept exhaust heat before it mixes with room air. Humidity control not only prevents moisture-related issues: importantly, excess humidity can cause dust to adhere more readily to fan blades and heat sinks, impeding air cooling and leading to higher temperatures. Cold aisle containment is done by exposing the rear of equipment racks, while the fronts of the servers are enclosed with doors and covers. This is similar to how large-scale food companies refrigerate and store their products. Computer cabinets and server farms are often organized for containment of hot/cold aisles. Proper air duct placement prevents the cold and hot air from mixing. Rows of cabinets are paired to face each other so that the cool and hot air intakes and exhausts do not mix, which would severely reduce cooling efficiency. Alternatively, a range of underfloor panels can create efficient cold air pathways directed to the raised-floor vented tiles. Either the cold aisle or the hot aisle can be contained. Another option is fitting cabinets with vertical exhaust duct chimneys. Hot exhaust pipes/vents/ducts can direct the air into a plenum space above a dropped ceiling and back to the cooling units or to outside vents. With this configuration, the traditional hot/cold aisle configuration is not a requirement. Data centers feature fire protection systems, including passive and active design elements, as well as implementation of fire prevention programs in operations. Smoke detectors are usually installed to provide early warning of a fire at its incipient stage.
Although the main room usually does not allow wet pipe-based systems due to the fragile nature of circuit boards, there still exist systems that can be used in the rest of the facility or in closed cold/hot aisle air circulation systems. There also exist other means to put out fires, especially in sensitive areas, usually using gaseous fire suppression, of which Halon gas was the most popular until the negative effects of producing and using it were discovered. Physical access is usually restricted. Layered security often starts with fencing, bollards and mantraps. Video camera surveillance and permanent security guards are almost always present if the data center is large or contains sensitive information. Fingerprint recognition mantraps are becoming commonplace. Logging access is required by some data protection regulations; some organizations tightly link this to access control systems. Multiple log entries can occur at the main entrance, entrances to internal rooms, and at equipment cabinets. Access control at cabinets can be integrated with intelligent power distribution units, so that locks are networked through the same appliance. Data center transformation takes a step-by-step approach through integrated projects carried out over time. This differs from a traditional method of data center upgrades that takes a serial and siloed approach. The typical projects within a data center transformation initiative include standardization/consolidation, virtualization, automation and security. Data center consolidation consists of reducing the number of data centers and avoiding server sprawl (both physical and virtual), and often includes replacing aging data center equipment. This process is aided by standardization, which makes systems follow a uniform set of configurations in order to simplify management and improve efficiency. Automating tasks such as provisioning, configuration, patching, release management, and compliance is another way in which data centers can be upgraded; such automation also helps compensate for a shortage of skilled IT workers. Lastly, security initiatives integrate the protection of virtual systems with the existing security of physical infrastructures. The first raised-floor computer room was made by IBM in 1956 to allow access for wiring. During the 1970s, raised floors became more common because they allow cool air to circulate more efficiently. A raised floor standards guide (GR-2930) was developed by Telcordia Technologies, a subsidiary of Ericsson. The lights-out data center, also known as a darkened or dark data center, is a data center that, ideally, has all but eliminated the need for direct access by personnel, except under extraordinary circumstances. Because staff do not need to enter the data center, it can be operated without lighting. All of the devices are accessed and managed by remote systems, with automation programs used to perform unattended operations. In addition to the energy savings, reduction in staffing costs and the ability to locate the site further from population centers, implementing a lights-out data center reduces the threat of malicious attacks upon the infrastructure. Generally speaking, local authorities prefer noise levels at data centers to be "10 dB below the existing night-time background noise level at the nearest residence." OSHA regulations require monitoring of noise levels inside data centers if noise exceeds 85 decibels.
The average noise level in server areas of a data center may reach as high as 92–96 dB(A). Residents living near data centers have described the sound as "a high-pitched whirring noise 24/7", saying "It's like being on a tarmac with an airplane engine running constantly ... Except that the airplane keeps idling and never leaves." External sources of noise include HVAC equipment and energy generators. Various metrics exist for measuring the data availability that results from data center availability beyond 95% uptime, with the top of the scale counting how many nines can be placed after 99%. Modularity and flexibility are key elements in allowing a data center to grow and change over time. Data center modules are pre-engineered, standardized building blocks that can be easily configured and moved as needed. A modular data center may consist of data center equipment contained within shipping containers or similar portable containers. Components of the data center can be prefabricated and standardized, which facilitates moving them if needed. Dynamic infrastructure provides the ability to intelligently, automatically and securely move workloads within a data center anytime, anywhere, for migrations, provisioning, performance enhancement, or building co-location facilities. It also facilitates performing routine maintenance on either physical or virtual systems, all while minimizing interruption. A related concept is composable infrastructure, which allows for the dynamic reconfiguration of the available resources to suit needs, only when needed. Non-mutually exclusive options for data backup include onsite and offsite copies. Onsite backup is traditional, and one of its major advantages is immediate availability; offsite techniques include keeping an encrypted copy of the data at another location. Energy use Energy consumption is a central issue for data centers. Power draw ranges from a few kilowatts (kW) for small server racks to several tens of megawatts (MW) for large facilities. Modern hyperscale data centers can exhibit power densities exceeding 100 times those of conventional office buildings, primarily due to the high concentration of servers and cooling systems required to manage continuous digital workloads. For higher power density facilities, electricity costs are a dominant operating expense and account for over 10% of the total cost of ownership (TCO) of a data center. As of 2024, data centers in the United States are primarily powered by natural gas, which supplies 40% of their electricity (with renewable energy at 24%, nuclear at about 20% and coal at about 15%). The Associated Press reported that electricity for AI data centers in the United States would likely come from natural gas or oil, as companies prefer using currently available power plants, which primarily use fossil fuels. Fossil energy is also often cheaper in locations where data centers are developed, and experts believe that energy demands from generative AI and data centers would be difficult to fulfill with renewable energy alone. Some companies such as Google, Amazon and Meta have expressed interest in nuclear power for their data centers. Other data centers, including xAI's Colossus, OpenAI's Stargate, and Meta's Prometheus, use their own off-grid natural gas plants. Electric vehicle and lithium-ion batteries have also been used for powering data centers, including Colossus.
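As a worked example of the availability "nines" arithmetic mentioned above (a quick sketch; the availability levels shown are illustrative, not from any particular SLA or standard):

def downtime_minutes_per_year(availability_pct):
    """Convert an availability percentage into allowed downtime per year."""
    return (1 - availability_pct / 100) * 365 * 24 * 60

for a in (99.0, 99.9, 99.99, 99.999):
    print(f"{a}% uptime -> {downtime_minutes_per_year(a):,.1f} min/yr")

Each additional nine cuts the permitted downtime by a factor of ten, from about 5,256 minutes per year at 99% to roughly 5 minutes per year at 99.999%.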
Power utility companies make upgrades to their infrastructure to handle the demands of new data centers, and the price for these changes typically falls on consumers: smaller businesses or individual households. In December 2025, the Federal Energy Regulatory Commission published a unanimous order allowing data centers in the United States to have a direct connection with power plants. United States Secretary of Energy Chris Wright expressed support for un-retiring coal plants to power AI data centers. Trump paused leasing for offshore wind projects, a decision that Gizmodo criticized due to their potential to provide power to AI data centers. Electricity demands from AI data centers have slowed or reversed the retirement of peaking power plants in the United States. In 2024, data centers were estimated to account for about 1.5% of global electricity consumption (approximately 415 TWh) and around 1% of greenhouse gas emissions, according to the U.S. Environmental Protection Agency (EPA). However, the rapid expansion is causing projections to rise sharply. Due to the accelerated demand from AI, data centers' global electricity consumption is projected to more than double to around 945 TWh by 2030 in the IEA's base-case scenario, representing just under 3% of total global electricity consumption in 2030. This growing electricity demand, much of which is still met by fossil fuels, increases the potential environmental impact. The IEA also said that lifecycle emissions should be considered, that is, including embodied emissions, such as those in buildings. Global data center carbon dioxide emissions are projected to rise from an estimated 220 million tonnes in 2024 to 300–320 million tonnes by 2035. Google and Microsoft each now consume more power than some fairly large countries, individually surpassing the consumption of more than 100 countries. As a result, there is increasing industry pressure for decarbonization. Companies are pursuing direct clean energy agreements; examples include Tencent, which has pledged to be carbon neutral by 2030, and Microsoft's 2024 agreement to re-open the Three Mile Island nuclear power plant to provide 100% of the electric power for its AI data centers for 20 years. The most commonly used energy efficiency metric for data centers is power usage effectiveness (PUE), calculated as the ratio of total power entering the data center divided by the power used by IT equipment. PUE thus captures the share of power used by overhead devices (cooling, lighting, etc.). The average U.S. data center has a PUE of 2.0, meaning two watts of total power (overhead plus IT equipment) for every watt delivered to IT equipment. State-of-the-art data centers are estimated to have a PUE of roughly 1.2. Google publishes quarterly efficiency metrics from its data centers in operation. PUEs as low as 1.01 have been achieved with two-phase immersion cooling. The U.S. Environmental Protection Agency has an Energy Star rating for standalone or large data centers. To qualify for the ecolabel, a data center must be within the top quartile in energy efficiency of all reported facilities. The Energy Efficiency Improvement Act of 2015 (United States) requires federal facilities, including data centers, to operate more efficiently. California's Title 24 (2014) of the California Code of Regulations mandates that every newly constructed data center must have some form of airflow containment in place to optimize energy efficiency. The European Union also has a similar initiative: the EU Code of Conduct for Data Centres.
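A minimal worked example of the PUE arithmetic described above; the wattage figures are illustrative, chosen only to reproduce the averages quoted in the text:

def pue(total_facility_kw, it_equipment_kw):
    """Power usage effectiveness: total facility power / IT equipment power."""
    return total_facility_kw / it_equipment_kw

# The "average" U.S. facility cited above (2 W total per 1 W of IT load):
print(pue(total_facility_kw=2000, it_equipment_kw=1000))  # -> 2.0
# A state-of-the-art facility per the estimate above:
print(pue(total_facility_kw=1200, it_equipment_kw=1000))  # -> 1.2

A PUE of exactly 1.0 would mean every watt entering the facility reaches IT equipment, with zero cooling or lighting overhead, which is why values near 1.01 require exotic approaches such as immersion cooling.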
Efficiency improvements and renewable energy integration are helping offset some emissions, but fossil fuels remain a major electricity source for data center operations worldwide. In 2011, server racks in data centers were designed for more than 25 kW, and the typical server was estimated to waste about 30% of the electricity it consumed. The energy demand for information storage systems is also rising. A high-availability data center is estimated to have a 1 MW demand and consume $20,000,000 in electricity over its lifetime, with cooling representing 35% to 45% of the data center's total cost of ownership. Calculations show that in two years, the cost of powering and cooling a server could equal the cost of purchasing the server hardware. Research in 2018 showed that a substantial amount of energy could still be conserved by optimizing IT refresh rates and increasing server utilization. Research into optimizing task scheduling is also underway, with researchers looking to implement energy-efficient scheduling algorithms that could reduce energy consumption by anywhere from 6% to 44%. In 2011, Facebook, Rackspace and others founded the Open Compute Project (OCP) to develop and publish open standards for greener data center computing technologies. As part of the project, Facebook published the designs of its server, which it had built for its first dedicated data center in Prineville. Making servers taller left space for more effective heat sinks and enabled the use of fans that moved more air with less energy. By not buying commercial off-the-shelf servers, energy consumption due to unnecessary expansion slots on the motherboard and unneeded components, such as a graphics card, was also saved. In 2016, Google joined the project and published the designs of its 48V DC shallow data center rack. This design had long been part of Google data centers. By eliminating the multiple transformers usually deployed in data centers, Google achieved a 30% increase in energy efficiency. In 2017, sales of data center hardware built to OCP designs topped $1.2 billion and were expected to reach $6 billion by 2021. Power is the largest recurring cost to the user of a data center. Cooling at or below 70 °F (21 °C) wastes money and energy. Furthermore, overcooling equipment in environments with a high relative humidity can expose equipment to a high amount of moisture that facilitates the growth of salt deposits on conductive filaments in the circuitry. A power and cooling analysis, also referred to as a thermal assessment, measures the relative temperatures in specific areas as well as the capacity of the cooling systems to handle specific ambient temperatures. A power and cooling analysis can help to identify hot spots, over-cooled areas that can handle greater power use density, the breakpoint of equipment loading, the effectiveness of a raised-floor strategy, and optimal equipment positioning (such as AC units) to balance temperatures across the data center. Power cooling density is a measure of how much square footage the center can cool at maximum capacity. The cooling of data centers is the second largest power consumer after servers. Cooling energy ranges from 10% of total energy consumption in the most efficient data centers up to 45% in standard air-cooled data centers. An energy efficiency analysis measures the energy use of data center IT and facilities equipment.
A typical energy efficiency analysis measures factors such as a data center's power usage effectiveness (PUE) against industry standards, identifies mechanical and electrical sources of inefficiency, and identifies air-management metrics. However, the limitation of most current metrics and approaches is that they do not include IT in the analysis. Case studies have shown that by addressing energy efficiency holistically in a data center, major efficiencies can be achieved that are not possible otherwise. Computational fluid dynamics (CFD) analysis uses sophisticated tools and techniques to understand the unique thermal conditions present in each data center, predicting the temperature, airflow, and pressure behavior of a data center to assess performance and energy consumption using numerical modeling. By predicting the effects of these environmental conditions, CFD analysis of a data center can be used to predict the impact of high-density racks mixed with low-density racks and the onward impact on cooling resources, poor infrastructure management practices, and AC failure or AC shutdown for scheduled maintenance. Thermal zone mapping uses sensors and computer modeling to create a three-dimensional image of the hot and cool zones in a data center. This information can help to identify optimal positioning of data center equipment. For example, critical servers might be placed in a cool zone that is serviced by redundant AC units. Data centers use a lot of power, consumed by two main usages: the power required to run the actual equipment and the power required to cool the equipment. Power efficiency reduces the first category. Cooling cost reduction through natural means includes location decisions: when proximity to good fiber connectivity, power grid connections, and concentrations of people to manage the equipment is not a priority, a data center can be miles away from the users. Mass data centers like those of Google or Facebook do not need to be near population centers. Arctic locations, where outside air can provide cooling, are becoming more popular. Renewable electricity sources are another plus. Thus, countries with favorable conditions, such as Canada, Finland, Sweden, Norway, and Switzerland, are trying to attract cloud computing data centers. Singapore, a major data center hub for the Asia-Pacific region, lifted its three-year moratorium on new data center projects in April 2022, granting four new projects but rejecting more than 16 of the over 20 applications received. Singapore's new data centers must meet very strict green technology criteria, including a Water Usage Effectiveness (WUE) of 2.0/MWh, a Power Usage Effectiveness (PUE) of less than 1.3, and Platinum certification under Singapore's BCA-IMDA Green Mark for New Data Centres, criteria that clearly address decarbonization and the use of hydrogen cells or solar panels. It is very difficult to reuse the heat which comes from air-cooled data centers; for this reason, data center infrastructures are more often equipped with heat pumps. Social and environmental impacts The rapid expansion of AI data centers has raised significant concerns over their water consumption, particularly in drought-prone regions. According to the International Energy Agency (IEA), a single 100-megawatt data center can use up to 2,000,000 litres (530,000 US gal) of water per day, equivalent to the daily consumption of 6,500 households.
Its water usage can be divided into three categories: on-site (direct usage by the data center), off-site (indirect usage from electricity generation), and supply-chain (water usage in manufacturing processes). On-site water use refers to the direct water consumed by the data center for the cooling of its equipment. Water is used specifically for space humidification (adding moisture to the air), evaporative cooling systems (cooling air before it enters server rooms), and cooling towers (removing heat from the facility). Off-site water use is the indirect water usage from the electricity consumed by data centers. It is estimated that 56% of U.S. data centers' electricity comes from fossil fuels burned in thermal power stations, which use water to generate power via steam. Lastly, data centers consume water through the manufacturing of AI chips and servers. Chip fabrication in particular consumes vast amounts of ultrapure water, and semiconductor plants use further water for cooling. While this scope of water usage is not as significant as on-site and off-site usage, it is still a contributing factor. Since 2022, more than two-thirds of new data centers have been built in water-stressed areas, including Texas, Arizona, Saudi Arabia, and India, where freshwater scarcity is already a critical issue. The global water footprint of data centers is estimated at 560 billion litres (150×10^9 US gal) annually, a figure projected to double by 2030 due to increasing AI demand. In regions like Aragon, Spain, Amazon's planned data centers are licensed to withdraw 755,720 cubic metres (612.67 acre⋅ft) of water per year, sparking conflicts with farmers who rely on the same dwindling supplies. Similar tensions have arisen in Chile, the Netherlands, and Uruguay, where communities protest the diversion of water for tech infrastructure. Tech companies, including Microsoft, Google, and Amazon, have pledged to become "water positive" by 2030, aiming to replenish more water than they consume. However, critics argue that such commitments often rely on water offsetting, which does not address acute local shortages. At least 59 additional data centers are planned for water-stressed U.S. regions by 2028, and AI's global water demand is projected to reach 6.6 billion cubic metres (1,700×10^9 US gal) by 2027. Arizona State University water policy expert Kathryn Sorensen questioned the data center build-out, asking: "Is the increase in tax revenue and the relatively paltry number of jobs worth the water?" Data centers generate significant electronic waste (e-waste) due to the frequent replacement of hardware such as servers, GPUs, CPUs, memory, and storage devices, often every 2–5 years, to meet demands for digital transformation and artificial intelligence. Globally, e-waste totaled 62 million metric tons in 2022, with generative AI projected to contribute 1.2 to 5 million metric tons annually by 2030, including valuable metals like copper and gold alongside hazardous substances such as lead and mercury. In the United States, data centers contribute to an annual loss of $10 billion in discarded e-waste value, including $4 billion in precious metals. Only 22% of global e-waste is formally collected and recycled, exacerbating environmental pollution and health risks at informal processing sites, often in developing countries where exported waste is handled.
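A quick sanity check of the water figures quoted above, as simple arithmetic on the numbers attributed to the IEA and other sources; the per-household figure is implied by the comparison, not separately sourced:

# Figures quoted above: a 100 MW data center using up to 2,000,000 L of
# water per day, said to equal the consumption of 6,500 households.
liters_per_day = 2_000_000
households = 6_500
print(f"implied household use: {liters_per_day / households:.0f} L/day")  # ~308

# Global footprint: 560 billion litres per year, projected to double by 2030.
annual_liters = 560e9
print(f"current: {annual_liters / 1e9:.0f} billion L/yr; "
      f"doubled: {2 * annual_liters / 1e9:.0f} billion L/yr")

The implied figure of roughly 300 litres per household per day is plausible for combined domestic use, which suggests the comparison is internally consistent.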
From a political ecology perspective, data center e-waste highlights power imbalances in resource management and environmental justice, as tech corporations benefit from tax incentives while externalizing costs to marginalized communities through toxic exports and landfill burdens. This ties into debates on the commons, where shared resources like metals are depleted without equitable governance. Mitigation strategies include modular hardware designs, secure data erasure for reuse, and extended producer responsibility policies, which could reduce waste by up to 86% in optimized scenarios. The International Energy Agency expects that the AI boom could double global demand for electricity from data centers between 2022 and 2026. According to one 2025 energy model, the United States could see an 8% increase in energy prices nationally by 2030. This has led to increased electricity prices in some regions, particularly regions with many data centers, such as Santa Clara, California, and parts of upstate New York. Data centers have also generated concerns in Northern Virginia about whether residents will have to foot the bill for future power lines, and data center power demand has made it harder to develop housing in London. A Bank of America Institute report in July 2024 found that the increase in demand for electricity, due in part to AI, has been pushing electricity prices higher and is a significant contributor to electricity inflation. A Harvard Law School report in March 2025 found that because utilities are increasingly in competition to attract data center contracts from big tech companies, they are likely hiding subsidies to those trillion-dollar companies in power prices by raising costs for American consumers. Bitcoin mining used up 2% of U.S. electricity in 2023. Data centers have increasingly been analyzed through the lens of political ecology, which explores the intersections of power, politics, and environmental change in technological infrastructure. Scholars argue that these facilities reshape urban and rural landscapes by locating in marginalized or post-industrial areas, often repurposing abandoned sites while promising economic revitalization through job creation and tax revenues. For instance, hyperscale data centers in rural towns like Prineville, Oregon, or Luleå, Sweden, challenge traditional core-periphery dynamics but can exacerbate global inequalities in digital access and resource distribution. In the context of digital transformation, data centers enable the expansion of cloud computing, artificial intelligence, and machine learning, but at significant ecological and social costs. Critics from science and technology studies (STS) highlight how corporate sustainability pledges, such as carbon neutrality targets by companies like Microsoft and Google, often rely on offsets and renewable subsidies that privatize benefits while socializing environmental harms. This ties into debates on the commons, where data centers' intensive use of public resources (like electricity grids and water supplies) represents an enclosure, with tech firms receiving tax breaks and priority access at the expense of local communities. Bipartisan community resistance to data center development has grown, with residents voicing concerns about water scarcity, rising utility bills, noise pollution, and land sprawl. Data Center Watch reported that multiple projects collectively worth $64 billion had been stopped or delayed between May 2024 and March 2025.
An additional $98 billion worth of projects were blocked or delayed between March and June 2025. Protests in the Netherlands led to a temporary national ban on new mega-centers in 2022. Critics have pointed out that jobs created by data centers tend to be temporary or few in number. Residents have been concerned about air, water and noise pollution, as well as property devaluation, traffic, and the risk of fires. Other environmental concerns involving AI data centers include e-waste and construction materials that emit greenhouse gases, such as concrete and cement. Network infrastructure Communications in data centers today are most often based on networks running the Internet protocol suite. Data centers contain a set of routers and switches that transport traffic between the servers and to the outside world, connected according to the data center network architecture. Redundancy of the internet connection is often provided by using two or more upstream service providers (see multihoming). Some of the servers at the data center are used for running the basic internet and intranet services needed by internal users in the organization, e.g., e-mail servers, proxy servers, and DNS servers. Network security elements are also usually deployed: firewalls, VPN gateways, intrusion detection systems, and so on. Also common are monitoring systems for the network and some of the applications. Additional off-site monitoring systems are also typical, in case of a failure of communications inside the data center. Modular data center For quick deployment or IT disaster recovery, several large hardware vendors have developed mobile/modular solutions that can be installed and made operational in a very short amount of time. Micro data center Micro data centers (MDCs) are access-level data centers which are smaller in size than traditional data centers but provide the same features. They are typically located near the data source to reduce communication delays, as their small size allows several MDCs to be spread out over a wide area. MDCs are well suited to user-facing, front-end applications, and are commonly used in edge computing and other areas where low-latency data processing is needed. Data centers in space Placing data centers in outer space, in low Earth orbit, has also been proposed. The theoretical advantages include access to space-based solar power, support for weather forecasting computation using data from weather satellites, and the ability to scale up freely. Challenges include temperature fluctuations, cosmic rays, and micrometeorites.
======================================== |