[SOURCE: https://en.wikipedia.org/wiki/Birds_of_a_feather_(computing)]
Internet Engineering Task Force

The Internet Engineering Task Force (IETF) is a standards organization for the Internet and is responsible for the technical standards that make up the Internet protocol suite (TCP/IP). It has no formal membership roster or requirements, and all its participants are volunteers. Their work is usually funded by employers or other sponsors. The IETF was initially supported by the federal government of the United States but since 1993 has operated under the auspices of the Internet Society, a non-profit organization with local chapters around the world.

Organization

There is no membership in the IETF. The process for developing IETF standards is open and all-inclusive: anyone can participate by signing up to a working group mailing list or registering for an IETF meeting. Each working group normally has two appointed co-chairs (occasionally three) and a charter that describes its focus, what it is expected to produce, and when. It is open to all who want to participate and holds discussions on an open mailing list. Working groups hold open sessions at IETF meetings, where the onsite registration fee in 2024 was between US$875 (early registration) and $1,200 per person for the week. Significant discounts are available for students and remote participants. As working groups do not make decisions at IETF meetings, with all decisions taken later on the working group mailing list, meeting attendance is not required for contributors. Rough consensus is the primary basis for decision making; there are no formal voting procedures. Each working group is intended to complete work on its topic and then disband. In some cases, the working group will instead have its charter updated to take on new tasks as appropriate.

The working groups are grouped into areas by subject matter. Each area is overseen by an area director (AD), with most areas having two ADs. The ADs are responsible for appointing working group chairs. The area directors, together with the IETF Chair, form the Internet Engineering Steering Group (IESG), which is responsible for the overall operation of the IETF. The Internet Architecture Board (IAB) oversees the IETF's external relationships and provides long-range technical direction for Internet development. The IAB also manages the Internet Research Task Force (IRTF), with which the IETF has a number of cross-group relations. A nominating committee (NomCom) of ten randomly chosen volunteers who participate regularly at meetings, a non-voting chair, and four or five liaisons is vested with the power to appoint, reappoint, and remove members of the IESG, IAB, IETF Trust, and the IETF LLC. To date, no one has been removed by a NomCom, although several people have resigned their positions, requiring replacements.

In 1993, the IETF changed from an activity supported by the US federal government to an independent, international activity associated with the Internet Society, a US-based 501(c)(3) organization. In 2018, the Internet Society created a subsidiary, the IETF Administration LLC, to be the corporate, legal and financial home for the IETF. IETF activities are funded by meeting fees, meeting sponsors, and by the Internet Society via its organizational membership and the proceeds of the Public Interest Registry. In December 2005, the IETF Trust was established to manage the copyrighted materials produced by the IETF.
The Internet Engineering Steering Group (IESG) is a body composed of the Internet Engineering Task Force (IETF) chair and the area directors. It provides the final technical review of Internet standards and is responsible for day-to-day management of the IETF. It receives appeals of the decisions of the working groups, and the IESG makes the decision to progress documents in the standards track. The chair of the IESG is the area director of the general area, who also serves as the overall IETF chair. The IESG also includes the two (occasionally three) directors of each of the other areas, together with liaison and ex officio members.

Early leadership and administrative history

The Gateway Algorithms and Data Structures (GADS) Task Force was the precursor to the IETF. Its chairman was David L. Mills of the University of Delaware. In January 1986, the Internet Activities Board (IAB; now called the Internet Architecture Board) decided to divide GADS into two entities: an Internet Architecture (INARC) Task Force chaired by Mills to pursue research goals, and the IETF to handle nearer-term engineering and technology transfer issues. The first IETF chair was Mike Corrigan, who was then the technical program manager for the Defense Data Network (DDN). Also in 1986, after leaving DARPA, Robert E. Kahn founded the Corporation for National Research Initiatives (CNRI), which began providing administrative support to the IETF. In 1987, Corrigan was succeeded as IETF chair by Phill Gross.

Effective March 1, 1989, but providing support dating back to late 1988, CNRI and NSF entered into a cooperative agreement, No. NCR-8820945, wherein CNRI agreed to create and provide a "secretariat" for the "overall coordination, management and support of the work of the IAB, its various task forces and, particularly, the IETF". In 1992, CNRI supported the formation and early funding of the Internet Society, which took on the IETF as a fiscally sponsored project, along with the IAB, the IRTF, and the organization of annual INET meetings. Gross continued to serve as IETF chair throughout this transition. Cerf, Kahn, and Lyman Chapin announced the formation of ISOC as "a professional society to facilitate, support, and promote the evolution and growth of the Internet as a global research communications infrastructure". At the first board meeting of the Internet Society, Cerf, representing CNRI, offered, "In the event a deficit occurs, CNRI has agreed to contribute up to USD$102,000 to offset it." In 1993, Cerf continued to support the formation of ISOC while working for CNRI, and the role of ISOC in "the official procedures for creating and documenting Internet Standards" was codified in the IETF's RFC 1602. In 1995, the IETF's RFC 2031 described ISOC's role in the IETF as being purely administrative, with ISOC having "no influence whatsoever on the Internet Standards process, the Internet Standards or their technical content". In 1998, CNRI established Foretec Seminars, Inc. (Foretec), a for-profit subsidiary, to take over providing secretariat services to the IETF. Foretec provided these services until at least 2004; by 2013, Foretec was dissolved. In 2003, the IETF's RFC 3677 described the IETF's role in appointing three members to ISOC's board of directors. In 2018, ISOC established the IETF Administration LLC, a separate LLC to handle the administration of the IETF. In 2019, the LLC issued a call for proposals to provide secretariat services to the IETF.
Meetings

The first IETF meeting was attended by 21 US federal government-funded researchers on 16 January 1986. It was a continuation of the work of the earlier GADS Task Force. Representatives from non-governmental entities (such as gateway vendors) were invited to attend starting with the fourth IETF meeting in October 1986. Since that time, all IETF meetings have been open to the public. Initially, the IETF met quarterly, but since 1991 it has met three times a year. The initial meetings were very small, with fewer than 35 people in attendance at each of the first five meetings. The maximum attendance during the first 13 meetings was only 120 attendees, at the twelfth meeting, held in January 1989. The meetings have grown a great deal in both participation and scope since the early 1990s; attendance peaked at 2,810 at the December 2000 meeting held in San Diego, California. Attendance declined with industry restructuring during the early 2000s and is currently around 1,200.

The locations for IETF meetings vary greatly. A list of past and future meeting locations is on the IETF meetings page. The IETF strives to hold its meetings near where most of the IETF volunteers are located. Meetings are held three times a year, with one meeting each in Asia, Europe, and North America; an occasional exploratory meeting is held outside of those regions in place of one of them. The IETF also organizes hackathons during its meetings, focused on implementing code that will improve standards in terms of quality and interoperability.

In an IETF context, a BoF (birds of a feather) session is an informal discussion meeting, often used to gauge interest in a topic before a working group is chartered. The first use of this term among computer specialists is uncertain, but it was employed during DECUS conferences and may have been used at SHARE user group meetings in the 1960s. BoFs can facilitate networking and partnership formation among subgroups, including functionally oriented groups such as CEOs or geographically oriented groups. BoFs generally allow for more audience interaction than the panel discussions typically seen at conventions; the discussions are not completely unguided, though, as there is still a discussion leader. The term is derived from the proverb "birds of a feather flock together"; the idiomatic phrase "birds of a feather" means "people having similar characters, backgrounds, interests, or beliefs". In old poetic English, "birds of a feather" means birds that have the same kind of feathers, so the proverb refers to the fact that birds congregate with birds of their own species.

Operations

The details of IETF operations have changed considerably as the organization has grown, but the basic mechanism remains publication of proposed specifications, development based on the proposals, review and independent testing by participants, and republication as a revised proposal, a draft proposal, or eventually as an Internet Standard. IETF standards are developed in an open, all-inclusive process in which any interested individual can participate. All IETF documents are freely available over the Internet and can be reproduced at will. Multiple, working, useful, interoperable implementations are the chief requirement before an IETF proposed specification can become a standard. Most specifications are focused on single protocols rather than tightly interlocked systems.
This has allowed the protocols to be used in many different systems, and its standards are routinely reused by bodies which create full-fledged architectures (e.g. 3GPP IMS). Because it relies on volunteers and uses "rough consensus and running code" as its touchstone, results can be slow whenever the number of volunteers is either too small to make progress or so large as to make consensus difficult, or when volunteers lack the necessary expertise. For protocols like SMTP, which is used to transport e-mail for a user community in the many hundreds of millions, there is also considerable resistance to any change that is not fully backward compatible (IPv6 being a notable exception). Work within the IETF on ways to improve the speed of the standards-making process is ongoing but, because the number of volunteers with opinions on it is very great, consensus on improvements has been slow to develop. The IETF cooperates with the W3C, ISO/IEC, ITU, and other standards bodies. Statistics are available that show who the top contributors by RFC publication are. While the IETF only allows for participation by individuals, and not by corporations or governments, sponsorship information is available from these statistics. In 2025, criticism intensified within the IETF that intelligence agencies were exerting undue influence, sparking concerns about a lack of transparency in the decision-making process.

Chairs

The IETF chairperson is selected by the NomCom process for a two-year renewable term. Before 1993, the IETF chair was selected by the IAB.

Topics of interest

The IETF works on a broad range of networking technologies that provide the foundation for the Internet's growth and evolution. It aims to improve the efficiency of network management as networks grow in size and complexity, and it is standardizing protocols for autonomic networking that enable networks to be self-managing. The Internet of things (IoT) is a network of physical objects that are embedded with electronics, sensors, and software, enabling objects to exchange data with their operator, manufacturer, and other connected devices; several IETF working groups are developing protocols that are directly relevant to IoT. Transport protocol development provides the ability of Internet applications to send data over the Internet; well-established transport protocols such as TCP (Transmission Control Protocol) and UDP (User Datagram Protocol) are continuously being extended and refined to meet the needs of the global Internet. The IETF also defines the Generic Security Services Application Programming Interface (GSSAPI), which provides security services to callers in a generic fashion; various implementations exist, and Java provides these features in its standard library package org.ietf.jgss.
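To make the GSSAPI mention concrete, here is a minimal client-side sketch using Java's org.ietf.jgss package. It is illustrative only: the service principal host@example.com is a hypothetical placeholder, and actually completing the handshake requires a configured Kerberos environment and a peer to exchange tokens with.

import org.ietf.jgss.GSSContext;
import org.ietf.jgss.GSSException;
import org.ietf.jgss.GSSManager;
import org.ietf.jgss.GSSName;
import org.ietf.jgss.Oid;

public class GssApiSketch {
    public static void main(String[] args) throws GSSException {
        // Obtain the platform's default GSS-API provider (usually Kerberos v5).
        GSSManager manager = GSSManager.getInstance();

        // The Kerberos v5 mechanism, identified by its IETF-registered OID.
        Oid krb5Mechanism = new Oid("1.2.840.113554.1.2.2");

        // Hypothetical peer; a real client would name an actual service principal.
        GSSName serverName =
                manager.createName("host@example.com", GSSName.NT_HOSTBASED_SERVICE);

        // Create a client-side security context using the caller's default credentials.
        GSSContext context = manager.createContext(
                serverName, krb5Mechanism, null, GSSContext.DEFAULT_LIFETIME);
        context.requestMutualAuth(true); // ask for mutual authentication
        context.requestConf(true);       // ask for confidentiality (encryption)

        // Produce the first handshake token. A real client would send this to
        // the server and loop, feeding reply tokens back into initSecContext()
        // until context.isEstablished() returns true.
        byte[] token = context.initSecContext(new byte[0], 0, 0);
        System.out.println("initial token bytes: " + (token == null ? 0 : token.length));

        context.dispose(); // release any resources held by the context
    }
}

The mechanism independence is the point of the design: the same calls would drive a different installed mechanism (SPNEGO, for example) without change, which is the "generic fashion" the specification describes.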
========================================
[SOURCE: https://en.wikipedia.org/wiki/New_Deal]
New Deal

The New Deal was a 1933–1938 series of economic, social, and political reforms in response to the Great Depression in the United States under President Franklin D. Roosevelt. He introduced the phrase when accepting the Democratic Party presidential nomination in the 1932 United States presidential election, winning in a landslide over incumbent Herbert Hoover, whose administration was widely viewed as ineffective. Roosevelt attributed the Depression to inherent market instability and inadequate aggregate demand (following the Keynesian economic model), and argued that stabilizing and rationalizing the economy required massive government intervention.

From his first hundred days in office in 1933 until 1935, Roosevelt introduced what historians refer to as the "First New Deal", which focused on the "3 R's": relief for the unemployed and for the poor, recovery of the economy back to normal levels, and reform of the financial system to prevent a repeat depression. Roosevelt signed the Emergency Banking Act, which authorized the Federal Reserve to insure deposits to restore confidence, and the 1933 Banking Act made this permanent with the Federal Deposit Insurance Corporation (FDIC). Other laws created the National Recovery Administration (NRA), which allowed industries to create "codes of fair competition"; the Securities and Exchange Commission (SEC), which protected investors from abusive stock market practices; and the Agricultural Adjustment Administration (AAA), which raised rural incomes by controlling production. Public works were undertaken in order to find jobs for the unemployed (25 percent of the workforce when Roosevelt took office): the Civilian Conservation Corps (CCC) enlisted young men for manual labor on government land, and the Tennessee Valley Authority (TVA) promoted electricity generation and other forms of economic development in the drainage basin of the Tennessee River. Although the First New Deal helped many find work and restored confidence in the financial system, by 1935 stock prices were still below pre-Depression levels and unemployment still exceeded 20 percent.

From 1935 to 1938, the "Second New Deal" introduced further legislation and additional agencies which focused on job creation and on improving the conditions of the elderly, workers, and the poor. The Works Progress Administration (WPA) supervised the construction of bridges, libraries, parks, and other facilities, while also investing in the arts; the National Labor Relations Act guaranteed employees the right to organize trade unions; and the Social Security Act introduced pensions for senior citizens and benefits for the disabled, mothers with dependent children, and the unemployed. The Fair Labor Standards Act prohibited "oppressive" child labor and enshrined a 40-hour work week and national minimum wage. In 1938, the Republican Party gained seats in Congress and joined with conservative Democrats to block further New Deal legislation, and some of it was declared unconstitutional by the Supreme Court. The New Deal produced a political realignment, reorienting the Democratic Party's base to the New Deal coalition of labor unions, blue-collar workers, big city machines, racial minorities (most importantly African-Americans), white Southerners, and intellectuals.
The realignment crystallized into a powerful liberal coalition which dominated presidential elections into the 1960s, while an opposing conservative coalition largely controlled Congress in domestic affairs from 1939 onwards. Historians still debate the effectiveness of the New Deal programs, although most accept that full employment was not achieved until World War II began in 1939. Contrary to some allegations, the New Deal legislation was in fact steered through Congress in large part by Roosevelt's influential vice president, John Nance Garner. In addition, the influential Senate majority leader Joseph T. Robinson, who served directly on the Senate floor and was regarded as the New Deal's "marshal", was credited with propelling New Deal legislation through the U.S. Senate early on.

Summary of First and Second New Deal programs

The First New Deal (1933–1934) dealt with the pressing banking crisis through the Emergency Banking Act and the 1933 Banking Act. The Federal Emergency Relief Administration (FERA) provided US$500 million (equivalent to $12.4 billion in 2025) for relief operations by states and cities, and the short-lived Civil Works Administration (CWA) gave localities money to operate make-work projects from 1933 to 1934. The Securities Act of 1933 was enacted to prevent a repeated stock market crash. The controversial work of the National Recovery Administration (NRA) was also part of the First New Deal.

The Second New Deal in 1935–1936 included the National Labor Relations Act to protect labor organizing, the Works Progress Administration (WPA) relief program (which made the federal government the largest employer in the nation), the Social Security Act, and new programs to aid tenant farmers and migrant workers. The final major items of New Deal legislation were the creation of the United States Housing Authority and the Farm Security Administration (FSA), which both occurred in 1937, and the Fair Labor Standards Act of 1938, which set maximum hours and minimum wages for most categories of workers. The FSA was also one of the oversight authorities of the Puerto Rico Reconstruction Administration, which administered relief efforts to Puerto Rican citizens affected by the Great Depression.

Roosevelt had built a New Deal coalition, but the economic downturn of 1937–1938 and the bitter split between the American Federation of Labor (AFL) and Congress of Industrial Organizations (CIO) labor unions led to major Republican gains in Congress in 1938. Conservative Republicans and Democrats in Congress joined the informal conservative coalition, which took control of both Houses of Congress following the 1938 midterm elections. By 1942–1943, they shut down relief programs such as the WPA and the CCC and blocked major progressive proposals. Noting the composition of the new Congress, one study argued:

The Congress that assembled in January 1939 was quite unlike any with which Roosevelt had to contend before. Since all Democratic losses took place in the North and the West, and particularly in states like Ohio and Pennsylvania, southerners held a much stronger position. The House contained 169 non-southern Democrats, 93 southern Democrats, 169 Republicans, and 4 third-party representatives. For the first time, Roosevelt could not form a majority without the help of some southerners or Republicans. In addition, the president had to contend with several senators who, having successfully resisted the purge, no longer owed him anything.
Most observers agreed, therefore, that the president could at best hope to consolidate, but certainly not to extend, the New Deal. James Farley thought that Roosevelt's wisest course would be "to clean up odds and ends, tighten up and improve things [he] already has but not try [to] start anything new." In any event, Farley predicted that Congress would discard much of Roosevelt's program. As noted by another study, "the 1938 elections proved a decisive point in the consolidation of the conservative coalition in Congress. The liberal bloc in the House had been halved, and conservative Democrats had escaped 'relatively untouched'". In the House elected in 1938 there were at least 30 anti-New Deal Democrats and another 50 who were "not at all enthusiastic". In addition, "The new Senate was split about evenly between pro- and anti-New Deal factions." The Fair Labor Standards Act of 1938 was the last major New Deal legislation that Roosevelt succeeded in enacting into law before the conservative coalition won control of Congress. Though he could usually use the veto to restrain Congress, Congress could block any Roosevelt legislation it disliked. Nonetheless, Roosevelt turned his attention to the war effort and won reelection in 1940 and 1944. Furthermore, the Supreme Court declared the NRA and the first version of the Agricultural Adjustment Act (AAA) unconstitutional, but the AAA was rewritten and then upheld.

Republican President Dwight D. Eisenhower (1953–1961) left the New Deal largely intact, even expanding it in some areas. In the 1960s, Lyndon B. Johnson's Great Society used the New Deal as inspiration for a dramatic expansion of progressive programs, which Republican Richard Nixon generally retained. However, after 1974 the call for deregulation of the economy gained bipartisan support. The New Deal regulation of banking (the Glass–Steagall Act) lasted until it was repealed in 1999. Several organizations created by New Deal programs remain active, and those operating under the original names include the Federal Deposit Insurance Corporation (FDIC), the Federal Crop Insurance Corporation (FCIC), the Federal Housing Administration (FHA), and the Tennessee Valley Authority (TVA). The largest programs still in existence are the Social Security System and the Securities and Exchange Commission (SEC).

Origins

From 1929 to 1933, manufacturing output decreased by one third, in what economist Milton Friedman later called the Great Contraction. Prices fell by 20%, causing deflation that made repaying debts much harder. Unemployment in the United States increased from 4% to 25%. Additionally, one-third of all employed persons were downgraded to working part-time on much smaller paychecks. In the aggregate, almost 50% of the nation's human work-power was going unused. Before the New Deal, U.S. bank deposits were not guaranteed by the government. When thousands of banks closed, depositors temporarily lost access to their money; most of the funds were eventually restored, but there was gloom and panic. The United States had no national safety net, no public unemployment insurance, and no Social Security. Relief for the poor was the responsibility of families, private charity, and local governments, but as conditions worsened year by year, demand skyrocketed while their combined resources increasingly fell far short. The depression had psychologically devastated the nation.
As Roosevelt took the oath of office at noon on March 4, 1933, all state governors had authorized bank holidays or restricted withdrawals—many Americans had little or no access to their bank accounts. Farm income had fallen by over 50% since 1929. Between 1930 and 1933, an estimated 844,000 non-farm mortgages were foreclosed on, out of a total of five million. Political and business leaders feared revolution and anarchy. Joseph P. Kennedy Sr., who remained wealthy during the Depression, recalled that "in those days I felt and said I would be willing to part with half of what I had if I could be sure of keeping, under law and order, the other half."

Accepting the 1932 Democratic nomination for president, Roosevelt promised "a new deal for the American people", declaring:

Throughout the nation men and women, forgotten in the political philosophy of the Government, look to us here for guidance and for more equitable opportunity to share in the distribution of national wealth... I pledge myself to a new deal for the American people. This is more than a political campaign. It is a call to arms.

The phrase "New Deal" was coined by an adviser to Roosevelt, Stuart Chase, who used A New Deal as the title for an article published in the progressive magazine The New Republic a few days before Roosevelt's speech. Speechwriter Samuel Rosenman added it to his draft of the presidential nomination acceptance speech at the last minute. In campaign speeches, Roosevelt committed to carrying out, if elected, several elements of what would become the New Deal, such as unemployment relief and public works programs.

First New Deal (1933–1934)

Roosevelt entered office with clear ideas for policies to address the Great Depression, though he remained open to experimentation as his administration began implementing them. Among Roosevelt's more famous advisers was an informal "Brain Trust", a group that tended to view pragmatic government intervention in the economy positively. His choice for Secretary of Labor, Frances Perkins, greatly influenced his initiatives. Her list of what her priorities would be if she took the job illustrates the scope: "a forty-hour workweek, a minimum wage, worker's compensation, unemployment compensation, a federal law banning child labor, direct federal aid for unemployment relief, Social Security, a revitalized public employment service and health insurance".

The New Deal policies drew from many different ideas proposed earlier in the 20th century. Assistant Attorney General Thurman Arnold led efforts that hearkened back to an anti-monopoly tradition rooted in American politics by figures such as Andrew Jackson and Thomas Jefferson. Supreme Court Justice Louis Brandeis, an influential adviser to many New Dealers, argued that "bigness" (referring, presumably, to corporations) was a negative economic force, producing waste and inefficiency. Other leaders, such as Hugh S. Johnson of the NRA, took ideas from the Woodrow Wilson Administration, advocating techniques used to mobilize the economy for World War I; they brought ideas and experience from the government controls and spending of 1917–1918. Other New Deal planners revived experiments suggested in the 1920s, such as the TVA. The "First New Deal" (1933–1934) encompassed the proposals offered by a wide spectrum of groups (not included was the Socialist Party, whose influence was all but destroyed).
This first phase of the New Deal was also characterized by fiscal conservatism (see the Economy Act, below) and experimentation with several different, sometimes contradictory, cures for economic ills. Roosevelt created dozens of new agencies, traditionally known to Americans by their alphabetic initials. U.S. Vice President John Nance Garner had a very prominent role in shaping the president's policies, with Roosevelt using Garner's knowledge and experience to pilot New Deal legislation through Congress. In 1933 alone, U.S. Senate Majority Leader Joseph T. Robinson also propelled legislation through the U.S. Senate. The American people were generally extremely dissatisfied with the crumbling economy, mass unemployment, declining wages and profits, and especially Herbert Hoover's policies, such as the Smoot–Hawley Tariff Act and the Revenue Act of 1932. Roosevelt entered office with enormous political capital. Americans of all political persuasions were demanding immediate action, and Roosevelt responded with a remarkable series of new programs in the "first hundred days" of the administration. During those 100 days of lawmaking, Congress granted every request Roosevelt made and passed a few programs (such as the Federal Deposit Insurance Corporation to insure bank accounts) that he opposed. Ever since, presidents have been judged against Roosevelt for what they accomplished in their first 100 days. Walter Lippmann famously noted:

At the end of February we were a congeries of disorderly panic-stricken mobs and factions. In the hundred days from March to June, we became again an organized nation confident of our power to provide for our own security and to control our own destiny.

The economy had hit bottom in March 1933 and then started to expand. Economic indicators show the economy reached its lowest point in the first days of March, then began a steady, sharp upward recovery. Thus the Federal Reserve Index of Industrial Production sank to its lowest point of 52.8 in July 1932 and was practically unchanged at 54.3 in March 1933. However, by July 1933 it reached 85.5, a dramatic rebound of 57% in four months. Recovery was steady and strong until 1937. Except for employment, the economy by 1937 surpassed the levels of the late 1920s. The Recession of 1937 was a temporary downturn. Private sector employment, especially in manufacturing, recovered to the level of the 1920s but failed to advance further until the war. The U.S. population was 124,840,471 in 1932 and 128,824,829 in 1937, an increase of 3,984,468; scaling 1932 employment up in proportion to this population growth implies that about 938,000 more jobs were needed in 1937 just to maintain the 1932 employment level.
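As a rough check on that figure, the arithmetic runs as follows (a back-of-the-envelope sketch; the 1932 employment count below is implied by the quoted numbers rather than stated in the source):

$$ \frac{P_{1937}}{P_{1932}} = \frac{128{,}824{,}829}{124{,}840{,}471} \approx 1.0319, \qquad \text{extra jobs needed} = (1.0319 - 1)\, J_{1932} \approx 938{,}000, $$

which corresponds to $J_{1932} \approx 938{,}000 / 0.0319 \approx 29.4$ million jobs in 1932.

The Economy Act, drafted by Budget Director Lewis Williams Douglas, was passed on March 15, 1933. The act proposed to balance the "regular" (non-emergency) federal budget by cutting the salaries of government employees and cutting pensions to veterans by fifteen percent. It saved $500 million per year and reassured deficit hawks, such as Douglas, that the new president was fiscally conservative. Roosevelt argued there were two budgets: the "regular" federal budget, which he balanced, and the emergency budget, which was needed to defeat the depression and was imbalanced on a temporary basis. Roosevelt initially favored balancing the budget, but soon found himself running spending deficits to fund his numerous programs.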
However, Douglas—rejecting the distinction between a regular and emergency budget—resigned in 1934 and became an outspoken critic of the New Deal. Roosevelt strenuously opposed the Bonus Bill that would give World War I veterans a cash bonus. Congress finally passed it over his veto in 1936, and the Treasury distributed $1.5 billion in cash as bonus welfare benefits to 4 million veterans just before the 1936 election. New Dealers never accepted the Keynesian argument for government spending as a vehicle for recovery. Most economists of the era, along with Henry Morgenthau of the Treasury Department, rejected Keynesian solutions and favored balanced budgets.

At the beginning of the Great Depression, the economy was destabilized by bank failures followed by credit crunches. The initial reasons were substantial losses in investment banking, followed by bank runs. Bank runs occur when a large number of customers withdraw their deposits because they believe the bank might become insolvent. As a bank run progresses, it generates a self-fulfilling prophecy: as more people withdraw their deposits, the likelihood of default increases, and this encourages further withdrawals. Milton Friedman and Anna Schwartz have argued that the drain of money out of the banking system caused the monetary supply to shrink, forcing the economy to likewise shrink. As credit and economic activity diminished, price deflation followed, causing further economic contraction with disastrous impact on banks. Between 1929 and 1933, 40% of all banks (9,490 out of 23,697 banks) failed. Much of the Great Depression's economic damage was caused directly by bank runs.

Herbert Hoover had already considered a bank holiday to prevent further bank runs but rejected the idea because he was afraid of inciting a panic. Roosevelt, however, gave a radio address, held in the atmosphere of a Fireside Chat, in which he explained to the public in simple terms the causes of the banking crisis, what the government would do, and how the population could help. He closed all the banks in the country and kept them closed until new legislation could be passed. On March 9, 1933, Roosevelt sent to Congress the Emergency Banking Act, drafted in large part by Hoover's top advisors. The act was passed and signed into law the same day. It provided for a system of reopening sound banks under Treasury supervision, with federal loans available if needed. Three-quarters of the banks in the Federal Reserve System reopened within the next three days. Billions of dollars in hoarded currency and gold flowed back into them within a month, thus stabilizing the banking system. By the end of 1933, 4,004 small local banks were permanently closed and merged into larger banks. Their deposits totaled $3.6 billion; depositors lost $540 million (equivalent to about $13.4 billion in 2025) and eventually received on average 85 cents on the dollar of their deposits.

The Glass–Steagall Act limited commercial bank securities activities and affiliations between commercial banks and securities firms in order to regulate speculation. It also established the Federal Deposit Insurance Corporation (FDIC), which insured deposits for up to $2,500, ending the risk of runs on banks. This banking reform offered unprecedented stability: throughout the 1920s more than five hundred banks had failed per year, whereas after 1933 it was fewer than ten per year. By contrast, Canada did not have a single bank failure during the Great Depression.
This was true in great part because Canada fostered stability by allowing banks to diversify by branching across provincial lines. Historian David T. Beito has criticized FDR for failing to oppose a filibuster by Senator Huey Long against a bill by Senator Carter Glass during the interregnum period permitting banks to branch across state lines. "Had FDR or Hoover vigorously pushed the Canadian branch banking model," writes Beito, "and done it much earlier, the course of the financial crisis during the transition might have been much less bleak."

Under the gold standard, the United States kept the dollar convertible to gold. The Federal Reserve would have had to execute an expansionary monetary policy to fight the deflation and to inject liquidity into the banking system to prevent it from crumbling—but lower interest rates would have led to a gold outflow. Under the gold standard's price–specie flow mechanism, countries that lost gold but nevertheless wanted to maintain the gold standard had to permit their money supply to decrease and the domestic price level to decline (deflation). As long as the Federal Reserve had to defend the gold parity of the dollar, it had to sit idle while the banking system crumbled. In March and April 1933, in a series of laws and executive orders, the government suspended the gold standard. Roosevelt stopped the outflow of gold by forbidding the export of gold except under license from the Treasury. Anyone holding significant amounts of gold coinage was mandated to exchange it for U.S. dollars at the existing fixed price. The Treasury no longer paid out gold for dollars, and gold would no longer be considered valid legal tender for debts in private and public contracts. The dollar was allowed to float freely on foreign exchange markets with no guaranteed price in gold. With the passage of the Gold Reserve Act in 1934, the nominal price of gold was changed from $20.67 per troy ounce to $35, a roughly 69% increase in the dollar price of gold (equivalently, the gold value of the dollar fell by about 41%, since 20.67/35 ≈ 0.59). These measures enabled the Federal Reserve to increase the amount of money in circulation to the level the economy needed. Markets immediately responded well to the suspension in the hope that the decline in prices would finally end. In her essay "What Ended the Great Depression?" (1992), Christina Romer argued that this policy had raised industrial production by 25% by 1937 and by 50% by 1942.

Before the Wall Street Crash of 1929, securities were unregulated at the federal level. Even firms whose securities were publicly traded published no regular reports or, even worse, published rather misleading reports based on arbitrarily selected data. To avoid another crash, the Securities Act of 1933 was passed. It required the disclosure of the balance sheet, profit and loss statement, and the names and compensations of corporate officers for firms whose securities were traded. Additionally, the reports had to be verified by independent auditors. In 1934, the U.S. Securities and Exchange Commission was established to regulate the stock market and prevent corporate abuses relating to corporate reporting and the sale of securities.

In a measure that garnered substantial popular support for his New Deal, Roosevelt moved to put to rest one of the most divisive cultural issues of the 1920s. He signed the bill to legalize the manufacture and sale of alcohol, an interim measure pending the repeal of Prohibition, for which a constitutional amendment of repeal (the 21st) was already in process. The repeal amendment was ratified later in 1933.
States and cities gained additional new revenue, and Roosevelt secured his popularity, especially in the cities and ethnic areas, by legalizing alcohol.

Relief was the immediate effort to help the one-third of the population that was hardest hit by the depression, aimed at providing temporary help to suffering and unemployed Americans. Local and state budgets were sharply reduced because of falling tax revenue, but New Deal relief programs were used not just to hire the unemployed but also to build needed schools, municipal buildings, waterworks, sewers, streets, and parks according to local specifications. While the regular Army and Navy budgets were reduced, Roosevelt juggled relief funds to provide for their claimed needs. All of the CCC camps were directed by army officers, whose salaries came from the relief budget. The PWA built numerous warships, including two aircraft carriers, with money from its own budget; it also built warplanes, while the WPA built military bases and airfields.

To prime the pump and cut unemployment, the NIRA created the Public Works Administration (PWA), a major program of public works, which organized and provided funds for the building of useful works such as government buildings, airports, hospitals, schools, roads, bridges, and dams. From 1933 to 1935, the PWA spent $3.3 billion with private companies to build 34,599 projects, many of them quite large. The NIRA also contained a provision for the "construction, reconstruction, alteration, or repair under public regulation or control of low-cost housing and slum-clearance projects". Many unemployed people were put to work under Roosevelt on a variety of government-financed public works projects, including the construction of bridges, airports, dams, post offices, hospitals, and hundreds of thousands of miles of road. Through reforestation and flood control, they reclaimed millions of hectares of soil from erosion and devastation. As noted by one authority, Roosevelt's New Deal "was literally stamped on the American landscape".

The rural U.S. was a high priority for Roosevelt and his energetic Secretary of Agriculture, Henry A. Wallace. Roosevelt believed that full economic recovery depended upon the recovery of agriculture, and raising farm prices was a major tool, even though it meant higher food prices for the poor living in cities. Many rural people lived in severe poverty, especially in the South. Major programs addressed to their needs included the Resettlement Administration (RA), the Rural Electrification Administration (REA), rural welfare projects sponsored by the WPA, the National Youth Administration (NYA), the Forest Service, and the Civilian Conservation Corps (CCC), covering school lunches, building new schools, opening roads in remote areas, reforestation, and the purchase of marginal lands to enlarge national forests. In 1933, the Roosevelt administration launched the Tennessee Valley Authority, a project involving dam construction planning on an unprecedented scale to curb flooding, generate electricity, and modernize poor farms in the Tennessee Valley region of the Southern United States. Under the Farmers' Relief Act of 1933, the government paid compensation to farmers who reduced output, thereby raising prices. Because of this legislation, the average income of farmers almost doubled by 1937.

In the 1920s, farm production had increased dramatically thanks to mechanization, more potent insecticides, and increased use of fertilizer.
Due to an overproduction of agricultural products, farmers faced severe and chronic agricultural depression throughout the 1920s. The Great Depression further worsened the agricultural crisis, and at the beginning of 1933 agricultural markets nearly faced collapse. Farm prices were so low that in Montana wheat was rotting in the fields because it could not be profitably harvested. In Oregon, sheep were slaughtered and left to rot because meat prices were not sufficient to warrant transportation to markets. Roosevelt was keenly interested in farm issues and believed that true prosperity would not return until farming was prosperous. Many different programs were directed at farmers. The first 100 days produced the Farm Relief Act, intended to raise farm incomes by raising the prices farmers received, which was achieved by reducing total farm output. The Agricultural Adjustment Act created the Agricultural Adjustment Administration (AAA) in May 1933. The act reflected the demands of leaders of major farm organizations (especially the Farm Bureau) and reflected debates among Roosevelt's farm advisers such as Secretary of Agriculture Henry A. Wallace, M.L. Wilson, Rexford Tugwell, and George Peek.

The AAA aimed to raise prices for commodities through artificial scarcity. It used a system of domestic allotments, setting total output of corn, cotton, dairy products, hogs, rice, tobacco, and wheat. The farmers themselves had a voice in the process of using the government to benefit their incomes. The AAA paid landowners subsidies for leaving some of their land idle, with funds provided by a new tax on food processing. To force up farm prices to the point of "parity", 10 million acres (40,000 km2) of growing cotton was plowed up, bountiful crops were left to rot, and six million piglets were killed and discarded. The idea was to give farmers a "fair exchange value" for their products in relation to the general economy ("parity level"). Farm incomes, and incomes for the general population, recovered quickly after the beginning of 1933, though food prices still remained well below the 1929 peak. The AAA established an important and long-lasting federal role in the planning of the entire agricultural sector of the economy and was the first program on such a scale for the troubled agricultural economy. The original AAA targeted landowners and therefore did not provide for sharecroppers, tenants, or farm laborers who might become unemployed. A Gallup poll printed in The Washington Post revealed that a majority of the American public opposed the AAA. In 1936, the Supreme Court declared the AAA to be unconstitutional, stating that "a statutory plan to regulate and control agricultural production, [is] a matter beyond the powers delegated to the federal government". The AAA was replaced by a similar program that did win Court approval: instead of paying farmers for letting fields lie barren, the new program subsidized them for planting soil-enriching crops such as alfalfa that would not be sold on the market. Federal regulation of agricultural production has been modified many times since then, but together with large subsidies it is still in effect.

A number of other measures affecting rural areas were introduced under Roosevelt. The National Industrial Recovery Act of 1933 included subsistence homestead provisions providing (as noted by one study) "100-acre farm plots and homes to the unemployed for non-commercial farming."
The Bankhead Cotton Control Act of 1934 placed mandatory limits on the number of bales a farmer could produce and received support from most farmers, who wanted to see higher prices. The Farm Credit Act of 1933 authorized farmers "to organize a nationwide system of local credit cooperatives -- production credit associations -- to make operating credit readily accessible to farmers throughout the country." The Farm Mortgage Foreclosure Act of 1934 provided for debt reduction and the redemption of foreclosed farms, and the Homestead Settler's Act of 1934 liberalized homestead residence requirements. The Farm Research Act of 1935 included various provisions such as the development of cooperative agricultural extension, and the Commodity Exchange Act of 1936 enabled "the Commodity Credit Corporation to better serve the needs of farmers in orderly marketing, and provided credit and facilities for carrying surpluses from season to season". The Farmers Mortgage Amendatory Act of 1936 authorized the Reconstruction Finance Corporation to make loans to drainage, levee, and irrigation districts, while under the Soil Conservation and Domestic Allotment Act of 1936 payments to farmers to encourage conservation were authorized. In 1937, the Water Facilities Act was enacted "to provide loans for individuals and association farm water systems in 17 Western states where drought and water shortage were familiar hardships." The Bankhead–Jones Farm Tenant Act of 1937 was the last major New Deal legislation that concerned farming. It created the Farm Security Administration (FSA), which replaced the Resettlement Administration.

The Food Stamp Plan, a major new welfare program for the urban poor, was established in 1939 to provide stamps to poor people who could use them to purchase food at retail outlets. The program ended during wartime prosperity in 1943 but was restored in 1961. It survived into the 21st century with little controversy because it was seen to benefit the urban poor, food producers, grocers, wholesalers, and farmers, so it gained support from both progressive and conservative Congressmen. In 2013, Tea Party activists in the House nonetheless tried to end the program, now known as the Supplemental Nutrition Assistance Program, while the Senate fought to preserve it.

Recovery was the effort in numerous programs to restore the economy to normal levels. By most economic indicators, this was achieved by 1937—except for unemployment, which remained stubbornly high until World War II began. Recovery was designed to help the economy bounce back from depression. Economic historians led by Price Fishback have examined the impact of New Deal spending on improving health conditions in the 114 largest cities from 1929 to 1937. They estimated that every additional $153,000 in relief spending (in 1935 dollars, or $1.95 million in year 2000 dollars) was associated with a reduction of one infant death, one suicide, and 2.4 deaths from infectious diseases.

From 1929 to 1933, the industrial economy suffered from a vicious cycle of deflation. Since 1931, the U.S. Chamber of Commerce, the voice of the nation's organized business, had promoted an anti-deflationary scheme that would permit trade associations to cooperate in government-instigated cartels to stabilize prices within their industries. Though existing antitrust laws clearly forbade such practices, the proposals of organized business were entertained by the Roosevelt Administration.
Roosevelt's advisors believed that excessive competition and technical progress had led to overproduction and lowered wages and prices, which they believed lowered demand and employment (deflation). They argued that government economic planning was necessary to remedy this. New Deal economists argued that cut-throat competition had hurt many businesses and that, with prices having fallen 20% and more, "deflation" exacerbated the burden of debt and would delay recovery. They rejected a strong move in Congress to limit the workweek to 30 hours. Instead, their remedy, designed in cooperation with big business, was the National Industrial Recovery Act (NIRA). It included stimulus funds for the PWA to spend and sought to raise prices, give more bargaining power to unions (so that workers could purchase more), and reduce harmful competition.

At the center of the NIRA was the National Recovery Administration (NRA), headed by former General Hugh S. Johnson, who had been a senior economic official in World War I. Johnson called on every business establishment in the nation to accept a stopgap "blanket code": a minimum wage of between 20 and 45 cents per hour, a maximum workweek of 35–45 hours, and the abolition of child labor. Johnson and Roosevelt contended that the "blanket code" would raise consumer purchasing power and increase employment. To mobilize political support for the NRA, Johnson launched the "NRA Blue Eagle" publicity campaign to boost what he called "industrial self-government". The NRA brought together leaders in each industry to design specific sets of codes for that industry—the most important provisions were anti-deflationary floors below which no company would lower prices or wages, and agreements on maintaining employment and production. In a remarkably short time, the NRA announced agreements from almost every major industry in the nation. By March 1934, industrial production was 45% higher than in March 1933.

NRA Administrator Hugh Johnson was showing signs of a mental breakdown due to the extreme pressure and workload of running the National Recovery Administration. Johnson lost power in September 1934 but kept his title, and Roosevelt replaced the position with a new National Industrial Recovery Board, of which Donald Richberg was named executive director. On May 27, 1935, the NRA was found to be unconstitutional by a unanimous decision of the U.S. Supreme Court in the case of A.L.A. Schechter Poultry Corp. v. United States. After the end of the NRA, quotas in the oil industry were fixed by the Railroad Commission of Texas with Tom Connally's federal Hot Oil Act of 1935, which guaranteed that illegal "hot oil" would not be sold. By the time the NRA ended in May 1935, well over 2 million employers had accepted the new standards laid down by the NRA, which had introduced a minimum wage and an eight-hour workday, together with abolishing child labor. These standards were reintroduced by the Fair Labor Standards Act of 1938. Historian William E. Leuchtenburg argued in 1963:

The NRA could boast some considerable achievements: it gave jobs to some two million workers; it helped stop a renewal of the deflationary spiral that had almost wrecked the nation; it did something to improve business ethics and civilize competition; it established a national pattern of maximum hours and minimum wages; and it all but wiped out child labor and the sweatshop. But this was all it did.
It prevented things from getting worse, but it did little to speed recovery, and probably actually hindered it by its support of restrictionism and price raising. The NRA could maintain a sense of national interest against private interests only so long as the spirit of national crisis prevailed. As it faded, restriction-minded businessmen moved into a decisive position of authority. By delegating power over price and production to trade associations, the NRA created a series of private economic governments. Other labor measures were carried out under the First New Deal. The Wagner-Peyser Act of 1933 established a national system of public employment offices, and the Anti-Kickback Act of 1934 "established penalties for employers on Government contracts who induce employees to return any part of pay to which they are entitled". That same year, the Railway Labor Act of 1926 was amended "to outlaw company unions and yellow dog contracts, and to provide that the majority of any craft or class of employees shall determine who shall represent them in collective bargaining". In July 1933, Secretary of Labor Frances Perkins held at the Department of Labor what was described as "a very successful conference of 16 state minimum wage boards (some of the states had minimum wage laws long before the Federal Government)". The following year she held a two-day conference on state labor legislation in which 39 states were represented. According to one study, "State officials in attendance were gratified that the U.S. Department of Labor was showing interest in their problems. They called on Perkins to make the labor legislation conferences an annual event. She did so and participated actively in them every year until she left office. The conferences continued under Labor Department auspices for another ten years, by which time they had largely accomplished their goal of improving and standardizing state labor laws and administration." As a means of institutionalizing the work she tried to achieve with these conferences, Perkins established the Division of Labor Standards (which was later redesignated a bureau) in 1934 as a service agency and informational clearinghouse for state governments and other federal agencies. Its goal was to promote (through voluntary means) improved conditions of work, and the Division "offered many services in addition to helping the states deal with administrative problems". It offered, for instance, training for factory inspectors, and drew national attention "to the area of workers' health with a series of conferences on silicosis. This wide-spread lung disease had been dramatized by the 'Gauley Bridge Disaster' in which hundreds of tunnel workers died from breathing silica-filled air. The Division also worked with unions, whose support was needed in passing labor legislation in the States." The Muscle Shoals Act contained various provisions of interest to labor, including prevailing wage rate and workmen's compensation. A resolution approved by the Senate, June 13, authorized the President to accept membership for the Government of the United States in the International Labor Organization, without assuming any obligation under the covenant of the League of Nations. The resolution was approved by the House, June 16, by a vote of 232 to 109. 
Public Act 448 amended the Federal Employees' Civil Service Retirement Act of 1930 by, as noted by one study, "giving to the employee the right to name a beneficiary irrespective of the amount to his credit without the need of an appointment of an administrator". Public Act No. 245 "provided for the development of vocational education in the States by appropriating funds for the fiscal years 1935, 1936 and 1937", and Public Act 296 amended the United States Bankruptcy Act with safeguards for labor. Public Act No. 349 provided for hourly rates of pay for substitute laborers in the mail service and time credits when appointed as regular laborers, and Public Act No. 461 authorized the President to create a "federal prison industries", in which inmates hereafter "receiving injuries while in the course of their employment will receive the benefits of compensation, limited however to that amount prescribed in the Federal Employees' Compensation Act". Public Act No. 467 created a Federal Credit Union Law, one of the main purposes of which was to make a system of credit for provident purposes available to people of small means. For those in the District of Columbia, an Act concerning fire escapes on certain buildings was amended by Public Act No. 284.

The New Deal had an important impact on the housing field, following and expanding on President Hoover's earlier measures. It sought to stimulate the private home building industry and increase the number of individuals who owned homes. The Public Works Administration of the Interior Department planned to construct public housing across the country, providing low-rent apartments for low-income families. However, resistance from the private housing sector was strong except in New York City, which welcomed the program. Furthermore, the White House reallocated most of the funding into relief projects, where each million federal dollars would create more jobs for the unemployed. As a result, by 1937 there were only 49 projects nationwide, containing about 21,800 apartments. The program was taken over in 1938 by the Federal Housing Administration (FHA). Starting in 1933, the New Deal operated the new Home Owners' Loan Corporation (HOLC), which helped finance mortgages on private houses. HOLC set uniform national appraisal methods and simplified the mortgage process, and the Federal Housing Administration (FHA) created national standards for home construction. In 1934, the Alley Dwelling Authority was established by Congress "to provide for the discontinuation of the use as dwellings of the buildings situated in alleys in the District of Columbia". That year, a National Housing Act was approved which was aimed at improving employment while making private credit available for repairing and homebuilding. In 1938 this act was amended and, as noted by one study, "provision was made renewing the insurance on repair loans, for insuring mortgages up to 90 percent of the value of small owner-occupied homes, and for insuring mortgages on rental property".

This also marked the beginning of discriminatory redlining within the United States under the HOLC. Its maps broadly determined which housing loans would be backed by the federal government. Though other criteria existed, the most important criterion was race: any neighborhood with "inharmonious racial groups" would be marked either red or yellow, depending on the proportion of Black residents. This was explicitly stated within the FHA underwriting manual that the HOLC used as a guideline for its maps.
Alongside other discriminatory housing policies, this meant in practice that Black Americans were denied federally backed mortgages, locking most of them out of the housing market, and that all Americans were denied backing for loans within black neighborhoods. Lastly, for the other policies in place for neighborhood building projects, the federal government required that developments be explicitly segregated in order to be backed. The federal government's financial backing also required the use of racially restrictive covenants that banned white homeowners from reselling their houses to any black buyers. Reform was based on the assumption that the depression was caused by the inherent instability of the market and that government intervention was necessary to rationalize and stabilize the economy and to balance the interests of farmers, business, and labor. Reforms targeted the causes of the depression and sought to prevent a crisis like it from happening again. In other words, they sought to rebuild the U.S. financially while ensuring that the collapse would not be repeated. Most economic historians assert that protectionist policies, culminating in the Smoot-Hawley Act of 1930, worsened the Depression. Roosevelt had already spoken against the act while campaigning for president in 1932. In 1934, the Reciprocal Tariff Act was drafted by Cordell Hull. It gave the president power to negotiate bilateral, reciprocal trade agreements with other countries. The act enabled Roosevelt to liberalize American trade policy around the globe and it is widely credited with ushering in the era of liberal trade policy that persists to this day. The Puerto Rico Reconstruction Administration oversaw a separate set of programs in Puerto Rico. It promoted land reform and helped small farms; it set up farm cooperatives, promoted crop diversification, and helped local industry. Second New Deal (1935–1936) In the spring of 1935, responding to the setbacks in the Court, a new skepticism in Congress, and the growing popular clamor for more dramatic action, New Dealers passed important new initiatives. Historians refer to them as the "Second New Deal" and note that it was more progressive and more controversial than the "First New Deal" of 1933–1934. Until 1935, only a dozen states had implemented old-age insurance, and these programs were woefully underfunded. Just one state (Wisconsin) had an unemployment insurance program. The United States was the only modern industrial country where people faced the Depression without any national system of social security. The work programs of the "First New Deal" such as CWA and FERA were designed for immediate relief, for a year or two. The most important program of 1935, and perhaps of the New Deal itself, was the Social Security Act. It established a permanent system of universal retirement pensions (Social Security), unemployment insurance and welfare benefits for the handicapped and needy children in families without a father present. It established the framework for the U.S. welfare system. Roosevelt insisted that it should be funded by payroll taxes rather than from the general fund—he said: "We put those payroll contributions there so as to give the contributors a legal, moral, and political right to collect their pensions and unemployment benefits. With those taxes in there, no damn politician can ever scrap my social security program". The National Labor Relations Act of 1935, also known as the Wagner Act, finally guaranteed workers the rights to collective bargaining through unions of their own choice.
The Act also established the National Labor Relations Board (NLRB) to facilitate wage agreements and to suppress recurrent labor disturbances. The Wagner Act did not compel employers to reach agreement with their employees, but it opened possibilities for American labor. The result was a tremendous growth of membership in the labor unions, especially in the mass-production sector, led by the older and larger American Federation of Labor and the new, more radical Congress of Industrial Organizations. Labor thus became a major component of the New Deal political coalition. However, the intense battle for members between the AFL and the CIO coalitions weakened labor's power. To help agricultural labor, the 1934 Jones-Costigan Act included provisions such as the prohibition of work by children under the age of 14, limiting the working hours of children aged 14–16, and the granting to the USDA "the authority to fix minimum wages, but only after holding public hearings 'at a place accessible to producers and workers'". In addition, the Act called for farmers "to pay their workers 'promptly' and 'in full' before collecting their benefit payments as a way to deal with the historic inequalities embedded in staggered payments and hold-back clauses". This Act was replaced by the 1937 Sugar Act after the Supreme Court ruled the AAA unconstitutional. In passing the Act, Congress not only followed Roosevelt's advice by continuing the previous Act's labor provisions but strengthened them. As noted by one study, the Act "once again prohibited child labor and made the 'fair, reasonable and equitable' minimum wage determinations mandatory". The Public Contracts (Walsh-Healey) Act of 1936 established labor standards on government contracts, "including minimum wages, overtime compensation for hours in excess of 8 a day or 40 a week, child and convict labor provisions, and health and safety requirements". The Anti-Strikebreaker (Byrnes) Act from that same year declared it unlawful "to transport or aid in transporting strikebreakers in interstate or foreign commerce". The Davis-Bacon Act Amendment (Public Act 403) was approved in August 1935, "Establishing prevailing wages for mechanics and laborers employed on public buildings and public works". Under the Miller Act of 1935, as noted by one study, "every construction worker or person who furnished material on a covered contract has the right to sue the contractor or surety if not fully paid within 90 days after performing labor or furnishing such material". The Motor Carrier Act of 1935, as noted by one study, "authorized the Interstate Commerce Commission to limit the hours of service and to prescribe other measures to safeguard motor carrier employees and passengers, as well as the users of highways generally". The Merchant Marine Act of 1936 directed the Maritime Commission "to investigate and specify suitable wage and manning scales and working conditions with respect to subsidized ships". Public Act 783 of March 1936 sought to extend "the facilities of the Public Health Service to seamen on Government vessels not in the military or Naval establishments". The Railway Labor Act Amendment (Public Act 487) was approved in April 1936, "Extending protection of Railway Labor Act to employees of air transportation companies engaged in interstate and foreign commerce". The Bituminous Coal Act of 1937 contained various labor provisions such as prohibiting "requiring an employee or applicant for employment to join a company union".
A national Railroad Retirement program was introduced that year, which in 1938 also introduced unemployment benefits. The Randolph-Sheppard Act provided for "licensing of blind persons to operate vending stands in Federal buildings". Public Law No. 814 of the 74th Congress, as noted by one study, conferred jurisdiction "upon each of the several states to extend the provisions of their State workmen's compensation laws to employments on Federal property and premises located within the respective States". The National Apprenticeship Act of 1937 established standards for apprenticeship programs. The Chandler Act of 1938 allowed wage earners "to extend debt payments over longer periods of time." That same year the Interstate Commerce Commission "issued an order regulating the hours of drivers of motor vehicles engaged in interstate commerce". The Wagner-O'Day Act in 1938 set up a program "designed to increase employment opportunities for persons who are blind so they could manufacture and sell their goods to the federal government". Public Act No. 702 provided an 8-hour day for officers and seamen on certain vessels that navigated the Great Lakes and adjacent waters, and the Second Deficiency Appropriation Act (Public, No. 723) contained an appropriation for investigating labor conditions in Hawaii. Public Act No. 706 provided for the preservation of the right of air carrier employees "to obtain higher compensation and better working conditions so as to conform to a decision of the National Labor Board of May 10, 1934 (No. 83)". Under Public Act No. 486 the provisions of section 13 of the Air Mail Act of 1934 "relating to pay, working conditions, and relations of pilots and other employees shall apply to all contracts awarded under the act". A number of laws affecting federal employees were also enacted. An act of 1936, for instance, provided vacations and accumulated leaves for Government employees, and another 1936 act provided for accumulated sick leave with pay for Government employees. The Fair Labor Standards Act of 1938 set maximum hours (44 per week) and minimum wages (25 cents per hour) for most categories of workers. Labor by children under the age of 16 was forbidden, and children under 18 were forbidden to work in hazardous employment. As a result, the wages of 300,000 workers, especially in the South, were increased and the hours of 1.3 million were reduced. Various laws were also passed to advance consumer rights. The Public Utility Holding Company Act of 1935 was passed "to protect consumers and investors from abuses by holding companies with interests in gas and electric utilities". The Federal Power Act of 1935 sought "to protect customers and to assure reasonableness in the provision of a service essential to life in modern society". The Natural Gas Act of 1938 sought to protect consumers "against exploitation at the hands of natural gas companies". The Food, Drug and Cosmetic Act of 1938 granted to the Food and Drug Administration "the power to test and license drugs and to test the safety of cosmetics, and to the Department of Agriculture the authority to set food quality standards." In addition, the Wheeler-Lea Act "gave the Federal Trade Commission, an old Progressive agency, the power to prohibit unfair and deceptive business acts or practices." Roosevelt nationalized unemployment relief through the Works Progress Administration (WPA), headed by close friend Harry Hopkins.
Roosevelt had insisted that the projects be labor-intensive and beneficial in the long term, and the WPA was forbidden to compete with private enterprises—therefore its workers had to be paid lower wages. The Works Progress Administration (WPA) was created to return the unemployed to the workforce. The WPA financed a variety of projects such as hospitals, schools, and roads, and employed more than 8.5 million workers who built 650,000 miles of highways and roads, 125,000 public buildings as well as bridges, reservoirs, irrigation systems, parks, playgrounds and so on. Prominent projects were the Lincoln Tunnel, the Triborough Bridge, the LaGuardia Airport, the Overseas Highway and the San Francisco–Oakland Bay Bridge. The Rural Electrification Administration used cooperatives to bring electricity to rural areas, many of which still operate. Between 1935 and 1940, the percentage of rural homes lacking electricity fell from 90% to 40%. The National Youth Administration was another semi-autonomous WPA program for youth. Its Texas director, Lyndon B. Johnson, later used the NYA as a model for some of his Great Society programs in the 1960s. The WPA was organized by states, but New York City had its own branch, Federal One, which created jobs for writers, musicians, artists and theater personnel. It became a hunting ground for conservatives searching for communist employees. The Federal Writers' Project operated in every state, where it created a famous guide book—it also catalogued local archives and hired many writers, including Margaret Walker, Zora Neale Hurston and Anzia Yezierska, to document folklore. Other writers interviewed elderly ex-slaves and recorded their stories. Under the Federal Theater Project, headed by charismatic Hallie Flanagan, actresses and actors, technicians, writers and directors put on stage productions. The tickets were inexpensive or sometimes free, making theater available to audiences unaccustomed to attending plays. One Federal Art Project paid 162 trained women artists on relief to paint murals or create statues for newly built post offices and courthouses. Many of these works of art can still be seen in public buildings around the country, along with murals sponsored by the Treasury Relief Art Project of the Treasury Department. During its existence, the Federal Theatre Project provided jobs for circus people, musicians, actors, artists, and playwrights, together with increasing public appreciation of the arts. In 1935, Roosevelt called for a tax program called the Wealth Tax Act (Revenue Act of 1935) to redistribute wealth. The bill imposed an income tax of 79% on incomes over $5 million. Since that was an extraordinarily high income in the 1930s, the highest tax rate actually covered just one individual—John D. Rockefeller. The bill was expected to raise only about $250 million in additional funds, so revenue was not the primary goal. Morgenthau called it "more or less a campaign document". In a private conversation with Raymond Moley, Roosevelt admitted that the purpose of the bill was "stealing Huey Long's thunder" by making Long's supporters his own. At the same time, it raised the bitterness of the rich, who called Roosevelt "a traitor to his class" and the wealth tax act a "soak the rich tax". A tax called the undistributed profits tax was enacted in 1936. This time the primary purpose was revenue, since Congress had enacted the Adjusted Compensation Payment Act, calling for payments of $2 billion to World War I veterans.
The bill established the persisting principle that retained corporate earnings could be taxed. Paid dividends were tax deductible by corporations. Its proponents intended the bill to replace all other corporation taxes—believing this would stimulate corporations to distribute earnings and thus put more cash and spending power in the hands of individuals. In the end, Congress watered down the bill, setting the tax rates at 7 to 27% and largely exempting small enterprises. In the face of widespread and fierce criticism, the tax deduction of paid dividends was repealed in 1938. The United States Housing Act of 1937 created the United States Housing Authority within the U.S. Department of the Interior. It was one of the last New Deal agencies created. The bill passed in 1937 with some Republican support to abolish slums. By 1936, the term "progressive" was typically used for supporters of the New Deal and "conservative" for its opponents.[page needed] Roosevelt was assisted in his endeavors by the election of a liberal Congress in 1932. According to one source: "We recognize that the best liberal legislation in American history was enacted following the election of President Roosevelt and a liberal Congress in 1932. After the midterm congressional election setbacks in 1938, labor was faced with a hostile congress until 1946. Only the presidential veto prevented the enactment of reactionary anti-labor laws." In noting the composition of the Seventy-Third Congress, one study has stated: "Though much of the Democratic congressional leadership remained old-guard, southern, agrarian, and conservative, the rank-and-file Democratic majorities in both houses were largely made up of fresh, northern, urban-industrial representatives of at least potentially liberal bent. At a minimum they were impatient with inaction, and not likely to be silenced by appeals to tradition. They were, as yet, an unformed and unreckoned force, one that Roosevelt might mould to his purposes of remaking his party – or one whose very strength and impetuosity might force the president's hand." As stated by another study, in regard to the gains the Democrats made in the 1932 elections, "The party gained ninety seats in the House and thirteen in the Senate. Even more significant, from the standpoint of potential support for urban programs, was that non-Southern Democrats represented a working majority in the House for the first of what would be only a few times in the twentieth century. Roosevelt's political instincts paralleled the mood of Congress, and he sought policies to tie the party's new urban supporters into a permanent majority coalition behind the Democratic Party." As noted by another study, "President Roosevelt's extraordinary legislative accomplishments between 1933 and 1938 owed much to his personal political qualities, but ideologically favourable large partisan majorities in the House and the Senate were a prerequisite of success." As one journal reflected in 1950: "Look back to the 1930's and you can see how winning in mid-term years affects the kind of laws that are passed. A tremendous liberal majority was swept in with Franklin Roosevelt in 1932. In the 1934 mid-term races that liberal majority was increased. After 1936 it went even higher." From 1934 to 1938, there existed a "pro-spender" majority in Congress (drawn from two-party, competitive, non-machine, progressive and left party districts).
In the 1938 midterm election, Roosevelt and his progressive supporters lost control of Congress to the bipartisan conservative coalition. Many historians distinguish between the First New Deal (1933–1934) and a Second New Deal (1935–1936), with the second one more progressive and more controversial. Franklin Delano Roosevelt had a magnetic appeal to the city dwellers—he brought relief and recognition of their ethnic leaders and ward bosses, as well as labor unions. Taxpayers, small business and the middle class voted for Roosevelt in 1936 but turned sharply against him after the recession of 1937–38 seemed to belie his promises of recovery. Roosevelt's New Deal Coalition discovered an entirely new use for city machines in his three reelection campaigns during the New Deal and the Second World War. Traditionally, local bosses minimized turnout so as to guarantee reliable control of their wards and legislative districts. To carry the electoral college, however, Roosevelt needed to carry the entire state, and thus needed massive majorities in the largest cities to overcome the hostility of suburbs and towns. With Harry Hopkins his majordomo, Roosevelt used the WPA as a national political machine. Men on relief could get WPA jobs regardless of their politics, but hundreds of thousands of well-paid supervisory jobs were given to the local Democratic machines. The 3.5 million voters on relief payrolls during the 1936 election cast 82% of their ballots for Roosevelt. The vibrant labor unions, heavily based in the cities, likewise did their utmost for their benefactor, voting 80% for him, as did Irish, Italian and Jewish voters. In all, the nation's 106 cities over 100,000 population voted 70% for FDR in 1936, compared to his 59% elsewhere. Roosevelt won reelection in 1940 thanks to the cities. In the North, the cities over 100,000 gave Roosevelt 60% of their votes, while the rest of the North favored Willkie 52%–48%. It was just enough to provide the critical electoral college margin. With the start of full-scale war mobilization in the summer of 1940, the cities revived. The new war economy pumped massive investments into new factories and funded round-the-clock munitions production, guaranteeing a job to anyone who showed up at the factory gate. Court-packing plan and jurisprudential shift When the Supreme Court started abolishing New Deal programs as unconstitutional, Roosevelt launched a surprise counter-attack in early 1937. He proposed adding up to six new justices, but conservative Democrats revolted, led by the Vice President. The Judiciary Reorganization Bill of 1937 failed—it never reached a vote. In addition, Senate Majority Leader and New Deal "marshal" Joseph T. Robinson died on July 14, 1937, depriving the Democratic-controlled Senate of an influential "towering leader." Momentum in Congress and public opinion shifted to the right and very little new legislation was passed expanding the New Deal. However, retirements allowed Roosevelt to put supporters on the Court and it stopped killing New Deal programs. Recession of 1937 and recovery The Roosevelt administration came under assault during Roosevelt's second term,[clarification needed] which saw a new dip in the Great Depression in the fall of 1937 that continued through most of 1938. Production and profits declined sharply. Unemployment jumped from 14.3% in May 1937 to 19.0% in June 1938.
The downturn could have been explained by the familiar rhythms of the business cycle, but until 1937 Roosevelt had claimed responsibility for the excellent economic performance. That backfired in the recession and the heated political atmosphere of 1937. John Maynard Keynes did not think that the New Deal under Roosevelt single-handedly ended the Great Depression: "It is, it seems, politically impossible for a capitalistic democracy to organize expenditure on the scale necessary to make the grand experiments which would prove my case—except in war conditions." World War II and full employment The U.S. reached full employment after entering World War II in December 1941. Under the special circumstances of war mobilization, massive war spending doubled the gross national product (GNP). Military Keynesianism brought full employment, and federal contracts were cost-plus: instead of competitive bidding to get lower prices, the government gave out contracts that promised to pay all the expenses plus a modest profit. Factories hired everyone they could find regardless of their lack of skills—they simplified work tasks and trained the workers, with the federal government paying all the costs. Millions of farmers left marginal operations, students quit school and housewives joined the labor force. Legacy According to the Encyclopædia Britannica, "perhaps the greatest achievement of the New Deal was to restore faith in American democracy at a time when many people believed that the only choice left was between communism and fascism". Analysts agree the New Deal produced a new political coalition that sustained the Democratic Party as the majority party in national politics into the 1960s. A 2013 study found that "an average increase in New Deal relief and public works spending resulted in a 5.4 percentage point increase in the 1936 Democratic voting share and a smaller amount in 1940. The estimated persistence of this shift suggests that New Deal spending increased long-term Democratic support by 2 to 2.5 percentage points. Thus, it appears that Roosevelt's early, decisive actions created long-lasting positive benefits for the Democratic party... The New Deal did play an important role in consolidating Democratic gains for at least two decades". However, there is disagreement about whether it marked a permanent change in values. Cowie and Salvatore in 2008 argued that it was a response to the Depression and did not mark a commitment to a welfare state because the U.S. has always been too individualistic. MacLean rejected the idea of a definitive political culture. She says they overemphasized individualism and ignored the enormous power that big capital wields, the Constitutional restraints on radicalism and the role of racism, antifeminism and homophobia. She warns that accepting Cowie and Salvatore's argument that conservatism's ascendancy is inevitable would dismay and discourage activists on the left. Klein responds that the New Deal did not die a natural death—it was killed off in the 1970s by a business coalition mobilized by such groups as the Business Roundtable, the Chamber of Commerce, trade organizations, conservative think tanks and decades of sustained legal and political attacks. Historians generally agree that during Roosevelt's 12 years in office there was a dramatic increase in the power of the federal government as a whole. Roosevelt also established the presidency as the prominent center of authority within the federal government.
Roosevelt created a large array of agencies protecting various groups of citizens—workers, farmers, and others—who suffered from the crisis and thus enabled them to challenge the powers of the corporations. In this way, the Roosevelt administration generated a set of political ideas—known as New Deal Progressivism—that remained a source of inspiration and controversy for decades. New Deal liberalism laid the foundation of a new consensus. Between 1940 and 1980, there was a progressive consensus about the prospects for the widespread distribution of prosperity within an expanding capitalist economy. Harry S. Truman's Fair Deal and, in the 1960s, Lyndon B. Johnson's Great Society especially used the New Deal as inspiration for a dramatic expansion of progressive programs. Recent historical scholarship emphasizes that the New Deal's policy design was shaped not only by Keynesian ideas but also by institutional economics. Scholars like Michael A. Bernstein argue that strict adherence to classical economic theory prolonged the Depression, prompting policymakers to explore alternative frameworks. Institutional economists, including John R. Commons and Thorstein Veblen, emphasized the importance of labor rights, market regulation, and legal structures, helping inspire reforms like the Wagner Act and the creation of the SEC. Economic historian Alexander J. Field notes that New Deal infrastructure investments stimulated long-term productivity growth, aligning with Keynesian demand-management principles even when not explicitly framed as such. Milton Friedman and Anna J. Schwartz, from a monetarist view, critique the Federal Reserve's failure to expand the money supply during the crisis, which they argue deepened the initial collapse into depression. The New Deal's enduring appeal to voters fostered its acceptance by moderate and progressive Republicans. As the first Republican president elected after Roosevelt, Dwight D. Eisenhower (1953–1961) built on the New Deal in a manner that embodied his thoughts on efficiency and cost-effectiveness. He sanctioned a major expansion of Social Security through a self-financed program. He supported such New Deal programs as the minimum wage and public housing—he greatly expanded federal aid to education and built the Interstate Highway System primarily as a defense program (rather than a jobs program). In a private letter, Eisenhower wrote: Should any party attempt to abolish social security and eliminate labor laws and farm programs, you would not hear of that party again in our political history. There is a tiny splinter group of course, that believes you can do these things [...] Their number is negligible and they are stupid. In 1964, Barry Goldwater, an unreconstructed anti–New Dealer, was the Republican presidential candidate on a platform that attacked the New Deal. The Democrats under Lyndon B. Johnson won a massive landslide and Johnson's Great Society programs extended the New Deal. However, the supporters of Goldwater formed the New Right, which helped to bring Ronald Reagan into the White House in the 1980 presidential election. Once an ardent supporter of the New Deal, Reagan turned against it, now viewing government as the problem rather than the solution, and, as president, moved the nation away from the New Deal model of government activism, shifting greater emphasis to the private sector.
A 2016 review study of the existing literature in the Journal of Economic Literature summarized the findings of the research as follows: The studies find that public works and relief spending had state income multipliers of around one, increased consumption activity, attracted internal migration, reduced crime rates, and lowered several types of mortality. The farm programs typically aided large farm owners but eliminated opportunities for share croppers, tenants, and farm workers. The Home Owners' Loan Corporation's purchases and refinancing of troubled mortgages staved off drops in housing prices and home ownership rates at relatively low ex-post cost to taxpayers. The Reconstruction Finance Corporation's loans to banks and railroads appear to have had little positive impact, although the banks were aided when the RFC took ownership stakes. Historiography and evaluation of New Deal policies Historians debating the New Deal have generally been divided between progressives who support it, conservatives who oppose it, and some New Left historians who complain it was too favorable to capitalism and did too little for minorities. There is consensus on only a few points, with most commentators favorable toward the CCC and hostile toward the NRA. Consensus historians of the 1950s, such as Richard Hofstadter, according to Lary May, viewed the New Deal favorably. Progressive historians argue that Roosevelt restored hope and self-respect to tens of millions of desperate people, built labor unions, upgraded the national infrastructure, and saved capitalism in his first term when he could have destroyed it and easily nationalized the banks and the railroads. Historians generally agree that apart from building up labor unions, the New Deal did not substantially alter the distribution of power within American capitalism. "The New Deal brought about limited change in the nation's power structure". The New Deal preserved democracy in the United States in a historic period of uncertainty and crises when in many other countries democracy failed. The most common arguments can be summarized as follows: Julian Zelizer (2000) has argued that fiscal conservatism was a key component of the New Deal. A fiscally conservative approach was supported by Wall Street and local investors and most of the business community—mainstream academic economists believed in it, as apparently did the majority of the public. Conservative southern Democrats, who favored balanced budgets and opposed new taxes, controlled Congress and its major committees. Even progressive Democrats at the time regarded balanced budgets as essential to economic stability in the long run, although they were more willing to accept short-term deficits. As Zelizer notes, public opinion polls consistently showed public opposition to deficits and debt. Throughout his terms, Roosevelt recruited fiscal conservatives to serve in his administration, most notably Lewis Douglas, the Director of the Budget in 1933–1934, and Henry Morgenthau Jr., Secretary of the Treasury from 1934 to 1945. They defined policy in terms of budgetary cost and tax burdens rather than needs, rights, obligations, or political benefits. Personally, Roosevelt embraced their fiscal conservatism, but politically he realized that fiscal conservatism enjoyed a wide base of support among voters, leading Democrats, and businessmen. On the other hand, there was enormous pressure to act by spending money on high-visibility work programs that issued millions of paychecks a week. Douglas proved too inflexible and he quit in 1934.
Morgenthau made it his highest priority to stay close to Roosevelt, no matter what. Douglas's position, like that of many of the Old Right, was grounded in a basic distrust of politicians and the deeply ingrained fear that government spending always involved a degree of patronage and corruption that offended his Progressive sense of efficiency. The Economy Act of 1933, passed early in the Hundred Days, was Douglas's great achievement. It reduced federal expenditures by $500 million, to be achieved by reducing veterans' payments and federal salaries. Douglas cut government spending through executive orders that cut the military budget by $125 million, $75 million from the Post Office, $12 million from Commerce, $75 million from government salaries and $100 million from staff layoffs. As Freidel concludes: "The economy program was not a minor aberration of the spring of 1933, or a hypocritical concession to delighted conservatives. Rather it was an integral part of Roosevelt's overall New Deal". Revenues were so low that borrowing was necessary (only the richest 3% paid any income tax between 1926 and 1940). Douglas, therefore, hated the relief programs, which he said reduced business confidence, threatened the government's future credit and had the "destructive psychological effects of making mendicants of self-respecting American citizens". Roosevelt was pulled toward greater spending by Hopkins and Ickes, and as the 1936 election approached he decided to gain votes by attacking big business. Morgenthau shifted with Roosevelt, but at all times tried to inject fiscal responsibility—he deeply believed in balanced budgets, stable currency, reduction of the national debt, and the need for more private investment. The Wagner Act met Morgenthau's requirement because it strengthened the party's political base and involved no new spending. In contrast to Douglas, Morgenthau accepted Roosevelt's double budget as legitimate—that is, a balanced regular budget and an "emergency" budget for agencies, like the WPA, PWA, and CCC, that would be temporary until full recovery was at hand. He fought against the veterans' bonus until Congress finally overrode Roosevelt's veto and gave out $2.2 billion in 1936. His biggest success was the new Social Security program as he managed to reverse the proposals to fund it from general revenue and insisted it be funded by new taxes on employees. It was Morgenthau who insisted on excluding farm workers and domestic servants from Social Security because workers outside industry would not be paying their way. While many Americans suffered economically during the Great Depression, African Americans also had to deal with social ills, such as racism, discrimination, and segregation. Black workers were especially vulnerable to the economic downturn since most of them worked the most marginal jobs such as unskilled or service-oriented work; they were therefore the first to be discharged, and in addition many employers preferred white workers. In all, African American workers were much more likely to receive public assistance or relief than white workers. Thus, in Detroit, blacks made up 4 percent of the population but accounted for 25 percent of the relief cases. In Chicago, one-half of all black families were on relief. Roosevelt appointed an unprecedented number of African Americans to second-level leadership positions in his administration—these appointees were collectively called the Black Cabinet.
The WPA, NYA, and CCC relief programs allocated 10% of their budgets to blacks (who comprised about 10% of the total population, and 20% of the poor). They operated separate all-black units with the same pay and conditions as white units. Some leading white New Dealers, especially Eleanor Roosevelt, Harold Ickes and Aubrey Williams, worked to ensure blacks received at least 10% of welfare assistance payments. However, these benefits were small in comparison to the economic and political advantages that whites received. Most unions excluded blacks from joining, and enforcement of anti-discrimination laws in the South was virtually impossible, especially since most blacks worked in the agricultural and hospitality sectors. The New Deal programs put millions of Americans immediately back to work or at least helped them to survive. The programs were not specifically targeted to alleviate the much higher unemployment rate of blacks. Some aspects of the programs were even unfavorable to blacks. The Agricultural Adjustment Acts, for example, helped land owners, who were predominantly white, but reduced the need of farmers to hire tenant farmers or sharecroppers, who were predominantly black. Though the AAA stipulated that a farmer had to share the payments with those who worked the land, this policy was never enforced. The Farm Security Administration (FSA), a government relief agency for tenant farmers created in 1937, made efforts to empower African Americans by appointing them to agency committees in the South. Senator James F. Byrnes of South Carolina raised opposition to the appointments because he stood for white land owners who were threatened by an agency that could organize and empower tenant farmers. Initially, the FSA stood behind its appointments, but after feeling national pressure the FSA was forced to release the African Americans from their positions. The goals of the FSA were notoriously progressive and at odds with the southern voting elite. Some New Deal measures inadvertently discriminated against blacks. Thousands of blacks were thrown out of work and replaced by whites on jobs where they were paid less than the NRA's wage minimums because some white employers considered the NRA's minimum wage "too much money for Negroes". By August 1933, blacks called the NRA the "Negro Removal Act". An NRA study found that the NIRA put 500,000 African Americans out of work. However, since blacks felt the sting of the depression even more severely than whites, they welcomed any help. In 1936, almost all African Americans (and many whites) shifted from the "Party of Lincoln" to the Democratic Party. This was a sharp realignment from 1932, when most African Americans who could vote chose the Republican ticket. New Deal policies helped establish a political alliance between blacks and the Democratic Party that survives into the 21st century. There was no attempt whatsoever to end segregation or to increase black civil rights in the South, and a number of leaders who promoted the New Deal were racist. The wartime Fair Employment Practices Commission (FEPC) executive orders that forbade job discrimination against African Americans, women, and ethnic groups were a major breakthrough that brought better jobs and pay to millions of minority Americans. Historians usually treat FEPC as part of the war effort and not part of the New Deal itself. The New Deal was racially segregated as blacks and whites rarely worked alongside each other in New Deal programs.
The largest relief program by far was the WPA—it operated segregated units, as did its youth affiliate, the NYA. Blacks were hired by the WPA as supervisors in the North, but of 10,000 WPA supervisors in the South only 11 were black. Historian Anthony Badger said, "New Deal programs in the South routinely discriminated against blacks and perpetuated segregation." In its first few weeks of operation, CCC camps in the North were integrated. By July 1935, practically all the camps in the United States were segregated, and blacks were strictly limited in the supervisory roles they were assigned. Kinker and Smith argue, "even the most prominent racial liberals in the New Deal did not dare to criticize Jim Crow." Secretary of the Interior Harold Ickes was one of the Roosevelt Administration's most prominent supporters of blacks and a former president of the Chicago chapter of the NAACP. In 1937, when Senator Josiah Bailey, Democrat of North Carolina, accused him of trying to break down segregation laws, Ickes wrote him to deny it. The New Deal's record came under attack by New Left historians in the 1960s for its pusillanimity in not attacking capitalism more vigorously and not helping blacks achieve equality. The critics emphasize the absence of a philosophy of reform to explain the failure of New Dealers to attack fundamental social problems. They demonstrate the New Deal's commitment to save capitalism and its refusal to strip away private property. They detect a remoteness from the people and indifference to participatory democracy and call instead for more emphasis on conflict and exploitation. At first, the New Deal created programs primarily for men, as it was assumed that the husband was the "breadwinner" (the provider) and that if husbands had jobs the whole family would benefit. It was the social norm for women to give up jobs when they married—in many states, there were laws that prevented both husband and wife holding regular jobs with the government. So too in the relief world, it was rare for both husband and wife to have a relief job on FERA or the WPA. This prevailing social norm of the breadwinner failed to take into account the numerous households headed by women, but it soon became clear that the government needed to help women as well. Many women were employed on FERA projects run by the states with federal funds. The first New Deal program to directly assist women was the Works Progress Administration (WPA), begun in 1935. It hired single women, widows, or women with disabled or absent husbands. The WPA employed about 500,000 women and they were assigned mostly to unskilled jobs. Some 295,000 worked on sewing projects that made 300 million items of clothing and bedding to be given away to families on relief and to hospitals and orphanages. Women also were hired for the WPA's school lunch program. Both men and women were hired for the small but highly publicized arts programs (such as music, theater, and writing). The New Deal expanded the role of the federal government, particularly to help the poor, the unemployed, youth, the elderly and stranded rural communities. The Hoover administration started the system of funding state relief programs, whereby the states hired people on relief. With the CCC in 1933 and the WPA in 1935, the federal government became involved in directly hiring people on relief, rather than simply granting direct relief or benefits.
Total federal, state and local spending on relief rose from 3.9% of GNP in 1929 to 6.4% in 1932 and 9.7% in 1934—the return of prosperity in 1944 lowered the rate to 4.1%. In 1935–1940, welfare spending accounted for 49% of the federal, state and local government budgets. In his memoirs, Milton Friedman said that the New Deal relief programs were an appropriate response. He and his wife were not on relief, but they were employed by the WPA as statisticians. Friedman said that programs like the CCC and WPA were justified as temporary responses to an emergency. Friedman said that Roosevelt deserved considerable credit for relieving immediate distress and restoring confidence. Roosevelt's New Deal recovery programs focused on stabilizing the economy by creating long-term employment opportunities, decreasing agricultural supply to drive prices up, and helping homeowners pay mortgages and stay in their homes, which also kept the banks solvent. In a survey of economic historians conducted by Robert Whaples, Professor of Economics at Wake Forest University, anonymous questionnaires were sent to members of the Economic History Association. Members were asked to disagree, agree, or agree with provisos with the statement that read: "Taken as a whole, government policies of the New Deal served to lengthen and deepen the Great Depression". While only 6% of economic historians who worked in the history department of their universities agreed with the statement, 27% of those who worked in the economics department agreed. An almost identical percentage of the two groups (21% and 22%) agreed with the statement "with provisos" (a conditional stipulation), while 74% of those who worked in the history department and 51% in the economics department disagreed with the statement outright. From 1933 to 1941, the economy expanded at an average rate of 7.7% per year. Despite high economic growth, unemployment rates fell slowly. John Maynard Keynes explained that situation as an underemployment equilibrium where skeptical business expectations prevent companies from hiring new employees. It was seen as a form of cyclical unemployment. There are different assumptions as well. According to Richard L. Jensen, cyclical unemployment was a grave matter primarily until 1935. Between 1935 and 1941, structural unemployment became the bigger problem. In particular, the unions' successes in demanding higher wages pushed management into introducing new efficiency-oriented hiring standards. This ended inefficient labor such as child labor, casual unskilled work for subminimum wages and sweatshop conditions. In the long term, the shift to efficiency wages led to high productivity, high wages and a high standard of living, but it necessitated a well-educated, well-trained, hard-working labor force. Not until wartime brought full employment did the supply of unskilled labor (which caused structural unemployment) shrink. At the beginning of the Great Depression, many economists traditionally argued against deficit spending. The fear was that government spending would "crowd out" private investment and would thus not have any effect on the economy, a proposition known as the Treasury view, but Keynesian economics rejected that view. Keynesians argued that by spending vastly more money—using fiscal policy—the government could provide the needed stimulus through the multiplier effect.
Without that stimulus, business simply would not hire more people, especially the low skilled and supposedly "untrainable" men who had been unemployed for years and lost any job skill they once had. Keynes visited the White House in 1934 to urge President Roosevelt to increase deficit spending. Roosevelt afterwards complained, "he left a whole rigmarole of figures—he must be a mathematician rather than a political economist." The New Deal tried public works, farm subsidies and other devices to reduce unemployment, but Roosevelt never completely gave up trying to balance the budget. Between 1933 and 1941, the average federal budget deficit was 3% per year. Roosevelt did not fully utilize[clarification needed] deficit spending. The effects of federal public works spending were largely offset by Herbert Hoover's large tax increase in 1932, whose full effects were felt for the first time in 1933, and were undercut by spending cuts, especially the Economy Act. According to Keynesians like Paul Krugman, the New Deal therefore was not as successful in the short run as it was in the long run. Following the Keynesian consensus (which lasted until the 1970s), the traditional view was that federal deficit spending associated with the war brought full-employment output while monetary policy was just aiding the process. In this view, the New Deal did not end the Great Depression, but halted the economic collapse and ameliorated the worst of the crises. More influential among economists has been the monetarist interpretation by Milton Friedman as put forth in A Monetary History of the United States,[citation needed] which includes a full-scale monetary history of what he calls the "Great Contraction". Friedman concentrated on the failures before 1933 and pointed out that between 1929 and 1932 the Federal Reserve allowed the money supply to fall by a third, which is seen as the major cause that turned a normal recession into the Great Depression. Friedman especially criticized the decisions of Hoover and the Federal Reserve not to save banks going bankrupt. Friedman's arguments got an endorsement from a surprising source when Fed Governor Ben Bernanke made this statement: Let me end my talk by abusing slightly my status as an official representative of the Federal Reserve. I would like to say to Milton and Anna: Regarding the Great Depression, you're right. We did it. We're very sorry. But thanks to you, we won't do it again. Monetarists state that the banking and monetary reforms were a necessary and sufficient response to the crises. They reject the approach of Keynesian deficit spending. In an interview in 2000, Friedman said: You have to distinguish between two classes of New Deal policies. One class of New Deal policies was reform: wage and price control, the Blue Eagle, the national industrial recovery movement. I did not support those. The other part of the new deal policy was relief and recovery ... providing relief for the unemployed, providing jobs for the unemployed, and motivating the economy to expand ... an expansive monetary policy. Those parts of the New Deal I did support. Ben Bernanke and Martin Parkinson declared in "Unemployment, Inflation, and Wages in the American Depression" (1989), "the New Deal is better characterized as having cleared the way for a natural recovery (for example, by ending deflation and rehabilitating the financial system) rather than as being the engine of recovery itself." Challenging the traditional view, monetarists and New Keynesians like J.
Bradford DeLong, Lawrence Summers and Christina Romer argued that recovery was essentially complete prior to 1942 and that monetary policy was the crucial source of pre-1942 recovery. The extraordinary growth in money supply beginning in 1933 lowered real interest rates and stimulated investment spending. According to Bernanke, there was also a debt-deflation effect of the depression, which was clearly offset by a reflation through the growth in money supply. However, before 1992 scholars did not realize that the New Deal provided for a huge aggregate demand stimulus through a de facto easing of monetary policy. While Milton Friedman and Anna Schwartz argued in A Monetary History of the United States (1963) that the Federal Reserve System had made no attempt to increase the quantity of high-powered money and thus failed to foster recovery, they did not investigate the impact of the monetary policy of the New Deal. In 1992, Christina Romer explained in "What Ended the Great Depression?" that the rapid growth in money supply beginning in 1933 can be traced back to a large unsterilized gold inflow to the U.S., which was partly due to political instability in Europe but to a larger degree to the revaluation of gold through the Gold Reserve Act. The Roosevelt administration had chosen not to sterilize the gold inflow precisely because it hoped that the growth of money supply would stimulate the economy. Replying to DeLong et al. in the Journal of Economic History, J. R. Vernon argues that deficit spending leading up to and during World War II still played a large part in the overall recovery; according to his study, "half or more of the recovery occurred during 1941 and 1942". According to Peter Temin, Barry Wigmore, Gauti B. Eggertsson and Christina Romer, the biggest primary impact of the New Deal on the economy, and the key to recovery and to ending the Great Depression, was brought about by a successful management of public expectations. The thesis is based on the observation that after years of deflation and a very severe recession important economic indicators turned positive just in March 1933, when Roosevelt took office. Consumer prices turned from deflation to mild inflation; industrial production bottomed out in March 1933; and investment doubled in 1933, with the turnaround beginning in March. There were no monetary forces to explain that turnaround. Money supply was still falling and short-term interest rates remained close to zero. Before March 1933, people expected a further deflation and recession so that even interest rates at zero did not stimulate investment. However, when Roosevelt announced major regime changes, people[who?] began to expect inflation and an economic expansion. With those expectations, interest rates at zero began to stimulate investment just as they were expected to do. Roosevelt's fiscal and monetary policy regime change helped to make his policy objectives credible. The expectation of higher future income and higher future inflation stimulated demand and investments. The analysis suggests that the elimination of the policy dogmas of the gold standard, a balanced budget in times of crises and small government led endogenously to a large shift in expectations that accounts for about 70–80 percent of the recovery of output and prices from 1933 to 1937. If the regime change had not happened and the Hoover policy had continued, the economy would have continued its free-fall in 1933, and output would have been 30 percent lower in 1937 than in 1933.
Followers of the real business-cycle theory believe that the New Deal caused the depression to persist longer than it would otherwise have. Harold L. Cole and Lee E. Ohanian say Roosevelt's policies prolonged the depression by seven years. According to their study, the "New Deal labor and industrial policies did not lift the economy out of the Depression", but the "New Deal policies are an important contributing factor to the persistence of the Great Depression". They claim that the New Deal "cartelization policies are a key factor behind the weak recovery". They say that the "abandonment of these policies coincided with the strong economic recovery of the 1940s". The study by Cole and Ohanian is based on a real business-cycle theory model. Laurence Seidman noted that under the assumptions of Cole and Ohanian, the labor market clears instantaneously, which leads to the implausible conclusion that the surge in unemployment between 1929 and 1932 (before the New Deal) was both optimal and solely based on voluntary unemployment. Additionally, Cole and Ohanian's argument does not count workers employed through New Deal programs. Such programs built or renovated 2,500 hospitals, 45,000 schools, 13,000 parks and playgrounds, 7,800 bridges, 700,000 miles (1,100,000 km) of roads, 1,000 airfields and employed 50,000 teachers through programs that rebuilt the country's entire rural school system. The economic reforms were mainly intended to rescue the capitalist system by providing a more rational framework in which it could operate. The banking system was made less vulnerable. The regulation of the stock market and the prevention of some corporate abuses relating to the sale of securities and corporate reporting addressed the worst excesses. Roosevelt allowed trade unions to take their place in labor relations and created the triangular partnership between employers, employees and government. David M. Kennedy wrote, "the achievements of the New Deal years surely played a role in determining the degree and the duration of the postwar prosperity." Paul Krugman stated that the institutions built by the New Deal remain the bedrock of the United States' economic stability. During the 2008 financial crisis, he explained that conditions would have been much worse if the New Deal's Federal Deposit Insurance Corporation had not insured most bank deposits, and that older Americans would have felt much more insecure without Social Security. Economist Milton Friedman, after 1960, attacked Social Security from a free-market view, stating that it had created welfare dependency. New Deal banking reform has been weakened since the 1980s. The repeal of the Glass-Steagall Act in 1999 allowed the shadow banking system to grow rapidly. Since it was neither regulated nor covered by a financial safety net, the shadow banking system was central to the 2008 financial crisis and the subsequent Great Recession. Though it is essentially a consensus among historians and academics that the New Deal brought about a large increase in the power of the federal government, there has been some scholarly debate concerning the results of this federal expansion. Historians like Arthur M. Schlesinger and James T. Patterson have argued that the augmentation of the federal government exacerbated tensions between the federal and state governments.
However, contemporary scholars such as Ira Katznelson have suggested that due to certain conditions on the allocation of federal funds, namely that the individual states got to control them, the federal government managed to avoid any tension with states over their rights. This is a prominent debate concerning the historiography of federalism in the United States and—as Schlesinger and Patterson have observed—the New Deal marked an era when the federal-state power balance shifted further in favor of the federal government, which heightened tensions between the two levels of government in the United States. Ira Katznelson has argued that although the federal government expanded its power and began providing welfare benefits on a scale previously unknown in the United States, it often allowed individual states to control the allocation of the funds provided for such welfare. This meant that the states controlled who had access to these funds, which in turn meant many Southern states were able to racially segregate the allocation of federal funds—or in some cases, like a number of counties in Georgia, completely exclude African-Americans from it. This enabled these states to continue to exercise their rights relatively unhindered and also to preserve the institutionalization of the racist order of their societies. Though Katznelson has conceded that the expansion of the federal government had the potential to lead to federal-state tension, he has argued it was avoided as these states managed to retain some control. As Katznelson has observed, "they [state governments in the South] had to manage the strain that potentially might be placed on local practices by investing authority in federal bureaucracies [...]. To guard against this outcome, the key mechanism deployed was a separation of the source of funding from decisions about how to spend the new monies". However, Schlesinger has disputed Katznelson's claim and has argued that the increase in the power of the federal government was perceived to come at the cost of states' rights, thereby aggravating state governments, which exacerbated federal-state tensions. Schlesinger has utilized quotes from the time to highlight this point and has observed, "the actions of the New Deal, [Ogden L.] Mills said, 'abolish the sovereignty of the States. They make of a government of limited powers one of unlimited authority over the lives of us all.'" Moreover, Schlesinger has argued that this federal-state tension was not a one-way street and that the federal government became just as aggravated with the state governments as they did with it. State governments were often guilty of inhibiting or delaying federal policies. Whether through intentional methods, like sabotage, or unintentional ones, like simple administrative overload, these problems aggravated the federal government and thus heightened federal-state tensions. Schlesinger has also noted, "students of public administration have never taken sufficient account of the capacity of lower levels of government to sabotage or defy even a masterful President." James T. Patterson has reiterated this argument, though he observes that this increased tension can be accounted for not just from a political perspective, but from an economic one too. Patterson has argued that the tension between the federal and state governments at least partly also resulted from the economic strain under which the states had been put by the federal government's various policies and agencies.
Some states were simply unable to cope with the federal government's demands and thus refused to work with it, while others resented the economic constraints and actively decided to sabotage federal policies. This was demonstrated, Patterson has noted, in the handling of federal relief money by Ohio governor Martin L. Davey. The case in Ohio became so detrimental to the federal government that Harry Hopkins, supervisor of the Federal Emergency Relief Administration, had to federalize Ohio relief. Although this argument differs somewhat from Schlesinger's, the source of federal-state tension remained the growth of the federal government. As Patterson has asserted, "though the record of the FERA was remarkably good—almost revolutionary—in these respects it was inevitable, given the financial requirements imposed on deficit-ridden states, that friction would develop between governors and federal officials". In this dispute, Katznelson on one side and Schlesinger and Patterson on the other have disagreed chiefly in their interpretation of the historical evidence. While both parties have agreed that the federal government expanded, and even that the states had a degree of control over the allocation of federal funds, they have disputed the consequences of these claims. Katznelson has asserted that the arrangement created mutual acquiescence between the levels of government, while Schlesinger and Patterson have suggested that it provoked contempt for the state governments on the part of the federal government and vice versa, thus exacerbating their relations. In short, whichever interpretation one favors, this era marked an important moment in the historiography of federalism and shaped the narrative of the legacy of federal-state relations. Criticism Worldwide, the Great Depression had the most profound impact in Germany and the United States. In both countries the pressure to reform and the perception of the economic crisis were strikingly similar. When Hitler came to power, he was faced with exactly the same task that faced Roosevelt: overcoming mass unemployment and the global Depression. The political responses to the crises were essentially different: while American democracy remained strong, Germany replaced democracy with fascism, a Nazi dictatorship. The initial perception of the New Deal was mixed. On the one hand, the eyes of the world were upon the United States because many American and European democrats saw in Roosevelt's reform program a positive counterweight to the seductive powers of the two great alternative systems, communism and fascism. As the historian Isaiah Berlin wrote in 1955: "The only light in the darkness was the administration of Mr. Roosevelt and the New Deal in the United States". By contrast, enemies of the New Deal sometimes called it "fascist", but they meant very different things. Communists denounced the New Deal in 1933 and 1934 as fascist in the sense that it was under the control of big business. They dropped that line of thought when Stalin switched to the "Popular Front" plan of cooperation with progressives. In 1934, Roosevelt defended himself against those critics in a "fireside chat": [Some] will try to give you new and strange names for what we are doing. Sometimes they will call it 'Fascism', sometimes 'Communism', sometimes 'Regimentation', sometimes 'Socialism'. But, in so doing, they are trying to make very complex and theoretical something that is really very simple and very practical.... 
Plausible self-seekers and theoretical die-hards will tell you of the loss of individual liberty. Answer this question out of the facts of your own life. Have you lost any of your rights or liberty or constitutional freedom of action and choice? After 1945, only a few observers continued to see similarities, and later some scholars, such as Kiran Klaus Patel, Heinrich August Winkler and John Garraty, concluded that comparisons of the alternative systems need not end in an apology for Nazism, since comparisons rely on the examination of both similarities and differences. Their preliminary studies on the origins of the fascist dictatorships and the American (reformed) democracy concluded that besides essential differences "the crises led to a limited degree of convergence" on the level of economic and social policy. The most important cause was the growth of state interventionism: in the face of the catastrophic economic situation, both societies no longer counted on the power of the market to heal itself. John Garraty wrote that the National Recovery Administration (NRA) was based on economic experiments in Nazi Germany and Fascist Italy, without establishing a totalitarian dictatorship. In contrast, historians such as Hawley have examined the origins of the NRA in detail, showing that the main inspiration came from Senators Hugo Black and Robert F. Wagner and from American business leaders such as the Chamber of Commerce. The model for the NRA was Woodrow Wilson's War Industries Board, in which NRA head Hugh S. Johnson had also been involved. Historians argue that direct comparisons between fascism and the New Deal are invalid, since there is no distinctive form of fascist economic organization. Gerald Feldman wrote that fascism has not contributed anything to economic thought and had no original vision of a new economic order replacing capitalism. His argument correlates with Mason's argument that economic factors alone are an insufficient approach for understanding fascism and that decisions taken by fascists in power cannot be explained within a logical economic framework. In economic terms, both ideas were within the general tendency of the 1930s to intervene in the free market capitalist economy, at the price of its laissez-faire character, "to protect the capitalist structure endangered by endogenous crises tendencies and processes of impaired self-regulation". Stanley Payne, a historian of fascism, examined possible fascist influences in the United States by looking at the KKK and its offshoots and movements led by Father Coughlin and Huey Long. He concluded, "the various populist, nativist, and rightist movements in the United States during the 1920s and 1930s fell distinctly short of fascism." According to Kevin Passmore, lecturer in history at Cardiff University, the failure of fascism in the United States was due to the social policies of the New Deal, which channelled anti-establishment populism into the left rather than the extreme right. The New Deal was generally held in very high regard in scholarship and textbooks. That changed in the 1960s when New Left historians began a revisionist critique calling the New Deal a band-aid for a patient that needed radical surgery to reform capitalism, put private property in its place and lift up workers, women and minorities. The New Left believed in participatory democracy and therefore rejected the autocratic machine politics typical of the big city Democratic organizations. In a 1968 essay, Barton J. 
Bernstein compiled a chronicle of missed opportunities and inadequate responses to problems. The New Deal may have saved capitalism from itself, Bernstein charged, but it had failed to help—and in many cases actually harmed—those groups most in need of assistance. In The New Deal (1967), Paul K. Conkin similarly chastised the government of the 1930s for its weak policies toward marginal farmers, its failure to institute sufficiently progressive tax reform, and its excessive generosity toward select business interests. In 1966, Howard Zinn criticized the New Deal for working actively to preserve the worst evils of capitalism. In a 1950s interview for the Columbia University Oral History Project, Paul H. Appleby, who served as assistant to Secretary of Agriculture Henry A. Wallace, described the New Deal's Agricultural Adjustment Act as "militantly for the larger farmers." Dorothy Day believed that by accepting the New Deal, labor was helping to create a servile state instead of winning ownership of the means of production; she was also critical of the bureaus created by the New Deal. By the 1970s, progressive historians were responding with a defense of the New Deal based on numerous local and microscopic studies. Praise increasingly focused on Eleanor Roosevelt, seen as a more appropriate crusading reformer than her husband. In a series of articles, political sociologist Theda Skocpol has emphasized the issue of "state capacity" as an often-crippling constraint. Ambitious reform ideas often failed, she argued, because of the absence of a government bureaucracy with significant strength and expertise to administer them.[citation needed] Other more recent works have stressed the political constraints that the New Deal encountered. Conservative skepticism about the efficacy of government was strong both in Congress and among many citizens. Thus some scholars have stressed that the New Deal was not just a product of its progressive backers, but also a product of the pressures of its conservative opponents.[citation needed] Some hard-right critics in the 1930s, including Charles Coughlin, Elizabeth Dilling, and Gerald L. K. Smith, claimed that Roosevelt was a state socialist or communist. The accusations generally targeted the New Deal. These conspiracy theories were grouped as the "red web" or "Roosevelt Red Record", based significantly on propaganda books by Dilling. There was significant overlap between these red-baiting accusations against Roosevelt and the isolationist America First Committee. Roosevelt was concerned enough about the accusations that in a September 29, 1936, speech in Syracuse he officially condemned communism. Other accusations of socialism or communism came from Republican representative Robert F. Rich and senators Simeon D. Fess and Thomas D. Schall. The accusations of communism were widespread enough to distract from the real Soviet espionage that was occurring, leading the Roosevelt administration to miss the infiltration of various spy rings. Most of the Soviet spy rings actually sought to undermine the Roosevelt administration. The Communist Party of the United States of America (CPUSA) had been quite hostile to the New Deal until 1935 but, acknowledging the danger of fascism worldwide, reversed its position and tried to form a "Popular Front" with the New Dealers. The Popular Front achieved modest popularity and relatively limited influence, and it declined with the Molotov–Ribbentrop Pact. 
From 1935, CPUSA head Earl Browder sought to avoid directly attacking the New Deal or Roosevelt. With the Soviet invasion of Poland in mid-September 1939, Browder was ordered by the Comintern to adjust his position to oppose FDR, which led to disputes within the CPUSA. During the New Deal, the communists established a network of a dozen or so members working for the government. They were low-level employees and had minor influence on policy. Harold Ware led the largest group, which worked in the Agricultural Adjustment Administration (AAA) until Secretary of Agriculture Wallace got rid of them all in a famous purge in 1935. Ware died in 1935, and some individuals, such as Alger Hiss, moved to other government jobs. Other communists worked for the National Labor Relations Board (NLRB), the National Youth Administration, the Works Progress Administration, the Federal Theatre Project, the Treasury and the Department of State. Works of art and music The Works Progress Administration subsidized artists, musicians, painters and writers on relief with a group of projects called Federal One. While the WPA program was by far the most widespread, it was preceded by three programs administered by the US Treasury, which hired commercial artists at standard commissions to add murals and sculptures to federal buildings. The first of these efforts was the short-lived Public Works of Art Project, organized by Edward Bruce, an American businessman and artist. Bruce also led the Treasury Department's Section of Painting and Sculpture (later renamed the Section of Fine Arts) and the Treasury Relief Art Project (TRAP). The Resettlement Administration (RA) and Farm Security Administration (FSA) had major photography programs. The New Deal arts programs emphasized regionalism, social realism, class conflict, proletarian interpretations and audience participation. The unstoppable collective power of the common man, contrasted with the failure of individualism, was a favorite theme. Post Office murals and other public art, painted by artists of this period, can still be found at many locations around the U.S. The New Deal particularly helped American novelists. For journalists and novelists who wrote non-fiction, the agencies and programs that the New Deal provided allowed these writers to describe what they really saw around the country. Many writers, among them Ruth McKenney, Edmund Wilson and Scott Fitzgerald, chose to write about the New Deal, arguing for or against it and asking whether it was helping the country. Another popular subject for novelists was the condition of labor, with subjects ranging from social protest to strikes. Under the WPA, the Federal Theatre Project flourished: countless theatre productions were staged around the country, employing thousands of actors and directors, among them Orson Welles and John Huston. The FSA photography project is most responsible for creating the image of the Depression in the U.S. Many of the images appeared in popular magazines. The photographers were under instruction from Washington as to the overall impression the New Deal wanted to convey. Director Roy Stryker's agenda focused on his faith in social engineering, the poor conditions among cotton tenant farmers and the very poor conditions among migrant farm workers—above all, he was committed to social reform through New Deal intervention in people's lives. 
Stryker demanded photographs that "related people to the land and vice versa" because these photographs reinforced the RA's position that poverty could be controlled by "changing land practices". Though Stryker did not dictate to his photographers how they should compose the shots, he did send them lists of desirable themes, such as "church", "court day", and "barns". Films of the late New Deal era such as Citizen Kane (1941) ridiculed so-called "great men", while the heroism of the common man appeared in numerous movies, such as The Grapes of Wrath (1940). Thus in Frank Capra's famous films, including Mr. Smith Goes to Washington (1939), Meet John Doe (1941) and It's a Wonderful Life (1946), the common people come together to battle and overcome villains who are corrupt politicians controlled by very rich, greedy capitalists. By contrast, there was also a smaller but influential stream of anti–New Deal art. Gutzon Borglum's sculptures on Mount Rushmore emphasized great men in history (his designs had the approval of Calvin Coolidge). Gertrude Stein and Ernest Hemingway disliked the New Deal and celebrated the autonomy of perfected written work, as opposed to the New Deal idea of writing as performative labor. The Southern Agrarians celebrated premodern regionalism and opposed the TVA as a modernizing, disruptive force. Cass Gilbert, a conservative who believed architecture should reflect historic traditions and the established social order, designed the new Supreme Court building (1935). Its classical lines and small size contrasted sharply with the gargantuan modernistic federal buildings going up on the Washington Mall, which he detested. Hollywood managed to synthesize liberal and conservative streams, as in Busby Berkeley's Gold Diggers musicals, where the storylines exalt individual autonomy while the spectacular musical numbers show abstract populations of interchangeable dancers securely contained within patterns beyond their control. New Deal programs The New Deal had many programs and new agencies, most of which were universally known by their initials. Most were abolished during World War II, while others remain in operation or were folded into other programs. Statistics "Most indexes worsened until the summer of 1932, which may be called the low point of the depression economically and psychologically". Economic indicators show the American economy reached its nadir between summer 1932 and February 1933, then began recovering until the recession of 1937–1938. Thus the Federal Reserve Industrial Production Index hit its low of 52.8 on July 1, 1932, and was practically unchanged at 54.3 on March 1, 1933, but by July 1, 1933, it reached 85.5 (with 1935–39 = 100; for comparison, 2005 = 1,342). In Roosevelt's 12 years in office, the economy had an 8.5% compound annual growth of GDP, the highest growth rate in the history of any industrial country, but recovery was slow: by 1939 the gross domestic product (GDP) per adult was still 27% below trend. |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Python_(programming_language)#cite_note-introducing_python-186] | [TOKENS: 4314] |
Contents Python (programming language) Python is a high-level, general-purpose programming language. Its design philosophy emphasizes code readability with the use of significant indentation. Python is dynamically type-checked and garbage-collected. It supports multiple programming paradigms, including structured (particularly procedural), object-oriented and functional programming. Guido van Rossum began working on Python in the late 1980s as a successor to the ABC programming language. Python 3.0, released in 2008, was a major revision and not completely backward-compatible with earlier versions. Beginning with Python 3.5, capabilities and keywords for typing were added to the language, allowing optional static typing. As of 2026[update], the Python Software Foundation supports Python 3.10, 3.11, 3.12, 3.13, and 3.14, following the project's annual release cycle and five-year support policy. Python 3.15 is currently in the alpha development phase, and the stable release is expected to come out in October 2026. Earlier versions in the 3.x series have reached end-of-life and no longer receive security updates. Python has gained widespread use in the machine learning community. It is widely taught as an introductory programming language. Since 2003, Python has consistently ranked in the top ten of the most popular programming languages in the TIOBE Programming Community Index, which ranks languages based on searches across 24 platforms. History Python was conceived in the late 1980s by Guido van Rossum at Centrum Wiskunde & Informatica (CWI) in the Netherlands. It was designed as a successor to the ABC programming language (itself inspired by SETL), capable of exception handling and of interfacing with the Amoeba operating system. Python implementation began in December 1989. Van Rossum first released it in 1991 as Python 0.9.0. Van Rossum assumed sole responsibility for the project, as the lead developer, until 12 July 2018, when he announced his "permanent vacation" from responsibilities as Python's "benevolent dictator for life" (BDFL); this title was bestowed on him by the Python community to reflect his long-term commitment as the project's chief decision-maker. (He has since come out of retirement and is self-titled "BDFL-emeritus".) In January 2019, active Python core developers elected a five-member Steering Council to lead the project. The name Python derives from the British comedy series Monty Python's Flying Circus. (See § Naming.) Python 2.0 was released on 16 October 2000, with many new features, including list comprehensions, cycle-detecting garbage collection, reference counting, and Unicode support. Python 2.7's end-of-life was initially set for 2015, and then postponed to 2020 out of concern that a large body of existing code could not easily be forward-ported to Python 3. It no longer receives security patches or updates. While Python 2.7 and older versions are officially unsupported, a different unofficial Python implementation, PyPy, continues to support Python 2, i.e., "2.7.18+" (plus 3.11), with the plus signifying (at least some) "backported security updates". Python 3.0 was released on 3 December 2008, and was a major revision, not completely backward-compatible with earlier versions, with some new semantics and changed syntax. Python 2.7.18, released in 2020, was the last release of Python 2. Several releases in the Python 3.x series have added new syntax to the language and made a few backward-incompatible changes that are considered very minor. 
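The following minimal sketch illustrates two of the best-known incompatibilities between Python 2 and Python 3; the snippets are illustrative and are not taken from the article or the release notes:

    # In Python 2, print was a statement; Python 3 makes it a function.
    #   print "spam"      # valid Python 2, but a SyntaxError in Python 3
    print("spam")         # the required form in Python 3

    # In Python 2, / truncated when both operands were integers (3 / 2 == 1).
    # In Python 3, / is true division and // is floor division.
    print(3 / 2)          # 1.5
    print(3 // 2)         # 1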
As of January 2026[update], Python 3.14.3 is the latest stable release. Older 3.x versions received final security updates, ending with Python 3.9.24 and then 3.9.25, the final release in the 3.9 series. Python 3.10 has been, since November 2025, the oldest supported branch. Python 3.15 has had an alpha release, and an official downloadable executable of Python 3.14 is available for Android. Releases receive two years of full support followed by three years of security support. Design philosophy and features Python is a multi-paradigm programming language. Object-oriented programming and structured programming are fully supported, and many of their features support functional programming and aspect-oriented programming – including metaprogramming and metaobjects. Many other paradigms are supported via extensions, including design by contract and logic programming. Python is often referred to as a 'glue language' because it is purposely designed to be able to integrate components written in other languages. Python uses dynamic typing and a combination of reference counting and a cycle-detecting garbage collector for memory management. It uses dynamic name resolution (late binding), which binds method and variable names during program execution. Python's design offers some support for functional programming in the "Lisp tradition". It has filter, map, and reduce functions; list comprehensions, dictionaries, sets, and generator expressions. The standard library has two modules (itertools and functools) that implement functional tools borrowed from Haskell and Standard ML. Python's core philosophy is summarized in the Zen of Python (PEP 20) written by Tim Peters, which includes aphorisms such as "Beautiful is better than ugly" and "Readability counts". However, Python has received criticism for violating these principles and adding unnecessary language bloat. Responses to these criticisms note that the Zen of Python is a guideline rather than a rule. The addition of some new features has been controversial: Guido van Rossum resigned as Benevolent Dictator for Life after conflict about adding the assignment expression operator in Python 3.8. Nevertheless, rather than building all functionality into its core, Python was designed to be highly extensible via modules. This compact modularity has made it particularly popular as a means of adding programmable interfaces to existing applications. Van Rossum's vision of a small core language with a large standard library and easily extensible interpreter stemmed from his frustrations with ABC, which represented the opposite approach. Python aims for a simpler, less-cluttered syntax and grammar while giving developers a choice in their coding methodology. Python lacks do...while loops, which Van Rossum considered harmful. In contrast to Perl's motto "there is more than one way to do it", Python advocates an approach where "there should be one – and preferably only one – obvious way to do it". In practice, however, Python provides many ways to achieve a given goal. There are at least three ways to format a string literal, with no certainty as to which one a programmer should use (see the sketch following this passage). Alex Martelli is a Fellow at the Python Software Foundation and a Python book author; he wrote that "To describe something as 'clever' is not considered a compliment in the Python culture." Python's developers typically prioritize readability over performance. 
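As an illustration of the three ways to format a string literal mentioned above, the following minimal sketch produces the same output three times (the variable names are illustrative):

    name, count = "spam", 3
    print("%s appears %d times" % (name, count))      # printf-style % operator
    print("{} appears {} times".format(name, count))  # the str.format() method
    print(f"{name} appears {count} times")            # f-string (Python 3.6+)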
In keeping with this preference for readability, Python's developers reject patches to non-critical parts of the CPython reference implementation when the increase in speed would not justify the cost in clarity and readability.[failed verification] Execution speed can be improved by moving speed-critical functions to extension modules written in languages such as C, or by using a just-in-time compiler like PyPy. It is also possible to transpile to other languages, but this approach either fails to achieve the expected speed-up, since Python is a very dynamic language, or compiles only a restricted subset of Python (with potential minor semantic changes). Python is meant to be a fun language to use. This goal is reflected in the name – a tribute to the British comedy group Monty Python – and in playful approaches to some tutorials and reference materials. For instance, some code examples use the terms "spam" and "eggs" (in reference to a Monty Python sketch), rather than the typical terms "foo" and "bar". A common neologism in the Python community is pythonic, which has a broad range of meanings related to program style: Pythonic code may use Python idioms well; be natural or show fluency in the language; or conform with Python's minimalist philosophy and emphasis on readability. Syntax and semantics Python is meant to be an easily readable language. Its formatting is visually uncluttered and often uses English keywords where other languages use punctuation. Unlike many other languages, it does not use curly brackets to delimit blocks, and semicolons after statements are allowed but rarely used. It has fewer syntactic exceptions and special cases than C or Pascal. Python uses whitespace indentation, rather than curly brackets or keywords, to delimit blocks. An increase in indentation comes after certain statements; a decrease in indentation signifies the end of the current block. Thus, the program's visual structure accurately represents its semantic structure. This feature is sometimes termed the off-side rule. Some other languages use indentation this way, but in most languages indentation has no semantic meaning. The recommended indent size is four spaces. Among Python's statements is the assignment statement (=), which binds a name as a reference to a separate, dynamically allocated object. Variables may subsequently be rebound at any time to any object. In Python, a variable name is a generic reference holder without a fixed data type; however, it always refers to some object with a type. This is called dynamic typing—in contrast to statically-typed languages, where each variable may contain only a value of a certain type. Python does not support tail call optimization or first-class continuations; according to Van Rossum, the language never will. However, better support for coroutine-like functionality is provided by extending Python's generators. Before 2.5, generators were lazy iterators; data was passed unidirectionally out of the generator. From Python 2.5 on, it is possible to pass data back into a generator function; and from version 3.3, data can be passed through multiple stack levels. In Python, the distinction between expressions and statements is rigidly enforced, in contrast to languages such as Common Lisp, Scheme, or Ruby. 
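A minimal sketch of this rigidly enforced distinction (the names are illustrative):

    # Assignment is a statement, so it cannot appear where an expression is
    # required, for example inside a lambda:
    #   f = lambda x: (y = x + 1)     # SyntaxError
    # A conditional *expression* is permitted where a statement is not:
    parity = "even" if 10 % 2 == 0 else "odd"
    # Since Python 3.8, the assignment expression (walrus) operator :=
    # allows a name to be bound inside an expression:
    if (n := len("spam")) > 3:
        print(n, parity)              # prints: 4 even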
This distinction leads to some duplicated functionality. For example, a statement cannot be part of an expression; because of this restriction, expressions such as list and dict comprehensions (and lambda expressions) cannot contain statements. As a particular case, an assignment statement such as a = 1 cannot be part of the conditional expression of a conditional statement. Python uses duck typing, and it has typed objects but untyped variable names. Type constraints are not checked at definition time; rather, operations on an object may fail at usage time, indicating that the object is not of an appropriate type. Despite being dynamically typed, Python is strongly typed, forbidding operations that are poorly defined (e.g., adding a number and a string) rather than quietly attempting to interpret them. Python allows programmers to define their own types using classes, most often for object-oriented programming. New instances of classes are constructed by calling the class, for example, SpamClass() or EggsClass(); the classes are instances of the metaclass type (which is an instance of itself), thereby allowing metaprogramming and reflection. Before version 3.0, Python had two kinds of classes, both using the same syntax: old-style and new-style. Current Python versions support the semantics of only the new style. Python supports optional type annotations. These annotations are not enforced by the language, but may be used by external tools such as mypy to catch errors. Python's standard library includes the typing module, which provides several type names for use in annotations. Also, mypy supports a Python compiler called mypyc, which leverages type annotations for optimization. Python includes conventional symbols for arithmetic operators (+, -, *, /), the floor-division operator //, and the modulo operator %. (With the modulo operator, a remainder can be negative, e.g., 4 % -3 == -2.) Also, Python offers the ** symbol for exponentiation, e.g. 5**3 == 125 and 9**0.5 == 3.0, and the matrix-multiplication operator @. These operators work as in traditional mathematics, with the same precedence rules; the infix operators + and - can also be unary, to represent positive and negative numbers respectively. Division between integers produces floating-point results. The behavior of division has changed significantly over time: in Python terms, the / operator now represents true division (or simply division), while the // operator represents floor division; before version 3.0, the / operator represented classic division. Rounding towards negative infinity, though a different method than in most languages, adds consistency to Python. For instance, this rounding implies that the equation (a + b)//b == a//b + 1 is always true. Also, the rounding implies that the equation b*(a//b) + a%b == a is valid for both positive and negative values of a. As expected, the result of a%b lies in the half-open interval [0, b), where b is a positive integer; however, maintaining the validity of the equation requires that the result must lie in the interval (b, 0] when b is negative. Python provides a round function for rounding a float to the nearest integer. For tie-breaking, Python 3 uses the round-to-even method: round(1.5) and round(2.5) both produce 2. Python versions before 3 used the round-away-from-zero method: round(0.5) is 1.0, and round(-0.5) is −1.0. Python allows Boolean expressions that contain multiple comparison relations, consistent with general usage in mathematics. 
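Before turning to chained comparisons, the following minimal sketch demonstrates the division, modulo, and rounding behaviour described above (every printed value follows from the stated rules):

    print(7 / 2)      # 3.5 -- true division always yields a float
    print(7 // 2)     # 3   -- floor division rounds toward negative infinity
    print(-7 // 2)    # -4
    print(4 % -3)     # -2  -- the remainder lies in (b, 0] for negative b
    assert 3 * (-7 // 3) + (-7 % 3) == -7  # b*(a//b) + a%b == a holds
    print(round(1.5), round(2.5))          # 2 2 -- ties round to even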
For example, the chained expression a < b < c tests whether a is less than b and b is less than c. C-derived languages interpret this expression differently: in C, the expression would first evaluate a < b, resulting in 0 or 1, and that result would then be compared with c. Python uses arbitrary-precision arithmetic for all integer operations. The Decimal type/class in the decimal module provides decimal floating-point numbers to a pre-defined arbitrary precision with several rounding modes. The Fraction class in the fractions module provides arbitrary precision for rational numbers. Due to Python's extensive mathematics library and the third-party library NumPy, the language is frequently used for scientific scripting in tasks such as numerical data processing and manipulation. Functions are created in Python by using the def keyword. A function is defined similarly to how it is called, by first providing the function name and then the required parameters. To assign a default value to a function parameter in case no actual value is provided at run time, variable-definition syntax can be used inside the function header. Code examples Typical first examples are a "Hello, World!" program and a program to calculate the factorial of a non-negative integer; both appear, together with a function taking a default parameter value, in the sketch following this passage. Libraries Python's large standard library is commonly cited as one of its greatest strengths. For Internet-facing applications, many standard formats and protocols such as MIME and HTTP are supported. The language includes modules for creating graphical user interfaces, connecting to relational databases, generating pseudorandom numbers, arithmetic with arbitrary-precision decimals, manipulating regular expressions, and unit testing. Some parts of the standard library are covered by specifications—for example, the Web Server Gateway Interface (WSGI) implementation wsgiref follows PEP 333—but most parts are specified by their code, internal documentation, and test suites. However, because most of the standard library is cross-platform Python code, only a few modules must be altered or rewritten for variant implementations. As of 13 March 2025,[update] the Python Package Index (PyPI), the official repository for third-party Python software, contains over 614,339 packages. Development environments Most[which?] Python implementations (including CPython) include a read–eval–print loop (REPL); this permits the environment to function as a command line interpreter, with which users enter statements sequentially and receive results immediately. Also, CPython is bundled with an integrated development environment (IDE) called IDLE, which is oriented toward beginners.[citation needed] Other shells, including IDLE and IPython, add further capabilities such as improved auto-completion, session-state retention, and syntax highlighting. Standard desktop IDEs include PyCharm, Spyder, and Visual Studio Code; web browser-based IDEs also exist. Implementations CPython is the reference implementation of Python. This implementation is written in C, meeting the C11 standard since version 3.11. Older versions use the C89 standard with several select C99 features, but third-party extensions are not limited to older C versions—e.g., they can be implemented using C11 or C++. CPython compiles Python programs into an intermediate bytecode, which is then executed by a virtual machine. CPython is distributed with a large standard library written in a combination of C and native Python. 
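The following minimal sketch reconstructs the examples referred to above (the "Hello, World!" program, a function with a default parameter value, and a factorial program) and then uses the standard-library dis module to display the intermediate bytecode mentioned in the preceding paragraph; the function bodies are illustrative reconstructions, not the article's original listings:

    print("Hello, World!")

    def greet(name, greeting="Hello"):   # default value in the function header
        print(greeting, name)

    greet("World")                       # Hello World
    greet("World", "Howdy")              # Howdy World

    def factorial(n):
        """Return n! for a non-negative integer n."""
        if n < 0:
            raise ValueError("n must be non-negative")
        result = 1
        for i in range(2, n + 1):
            result *= i
        return result

    print(factorial(5))                  # 120

    import dis
    dis.dis(factorial)                   # disassemble the CPython bytecode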
CPython is available for many platforms, including Windows and most modern Unix-like systems, including macOS (and Apple M1 Macs, since Python 3.9.1, using an experimental installer). Starting with Python 3.9, the Python installer deliberately fails to install on Windows 7 and 8; Windows XP was supported until Python 3.5, and there has been unofficial support for platforms such as VMS. Platform portability was one of Python's earliest priorities. During the development of Python 1 and 2, even OS/2 and Solaris were supported; since that time, support has been dropped for many platforms. All current Python versions (since 3.7) run only on operating systems that support multithreading, and far fewer operating systems are supported now than in the past, as many outdated platforms have been dropped. All alternative implementations have at least slightly different semantics; for example, an alternative implementation may use unordered dictionaries, in contrast to the insertion-ordered dictionaries of current CPython versions. As another example in the larger Python ecosystem, PyPy does not support the full CPython C API. Creating an executable with Python is often done by bundling an entire Python interpreter into the executable, which makes binaries massive even for small programs, although some implementations are capable of truly compiling Python. Among the alternative implementations, Stackless Python is a significant fork of CPython that implements microthreads; this implementation uses the call stack differently, thus allowing massively concurrent programs. PyPy also offers a stackless version. Just-in-time Python compilers have been developed but are now unsupported. There are several compilers and transpilers to high-level object languages, whose source language is unrestricted Python, a subset of Python, or a language similar to Python, as well as specialized compilers; some older projects existed as well, along with compilers not designed for use with Python 3.x and related syntax. A performance comparison among various Python implementations, using a non-numerical (combinatorial) workload, was presented at EuroSciPy '13. In addition, Python's performance relative to other programming languages is benchmarked by The Computer Language Benchmarks Game. There are several strategies and tools for optimizing Python performance, despite the inherent slowness of an interpreted language. Language Development Python's development is conducted mostly through the Python Enhancement Proposal (PEP) process; this process is the primary mechanism for proposing major new features, collecting community input on issues, and documenting Python design decisions. Python coding style is covered in PEP 8. Outstanding PEPs are reviewed and commented on by the Python community and the steering council. Enhancement of the language corresponds with development of the CPython reference implementation. The mailing list python-dev is the primary forum for the language's development. Specific issues were originally discussed in the Roundup bug tracker hosted by the foundation. In 2022, all issues and discussions were migrated to GitHub. Development originally took place on a self-hosted source-code repository running Mercurial, until Python moved to GitHub in January 2017. CPython's public releases come in three types, distinguished by which part of the version number is incremented. Many alpha, beta, and release candidates are also published as previews and for testing before final releases. 
Although there is a rough schedule for releases, they are often delayed if the code is not ready. Python's development team monitors the state of the code by running a large unit test suite during development. The major academic conference on Python is PyCon. There are also special Python mentoring programs, such as PyLadies. Naming Python's name is inspired by the British comedy group Monty Python, whom Python creator Guido van Rossum enjoyed while developing the language. Monty Python references appear frequently in Python code and culture; for example, the metasyntactic variables often used in Python literature are spam and eggs, rather than the traditional foo and bar. Also, the official Python documentation contains various references to Monty Python routines. Python users are sometimes referred to as "Pythonistas". |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Black_hole#cite_note-32] | [TOKENS: 13839] |
Contents Black hole A black hole is an astronomical body so compact that its gravity prevents anything, including light, from escaping. Albert Einstein's theory of general relativity predicts that a sufficiently compact mass will form a black hole. The boundary of no escape is called the event horizon. In general relativity, a black hole's event horizon seals an object's fate but produces no locally detectable change when crossed. General relativity also predicts that every black hole should have a central singularity, where the curvature of spacetime is infinite. In many ways, a black hole acts like an ideal black body, as it reflects no light. Quantum field theory in curved spacetime predicts that event horizons emit Hawking radiation, with the same spectrum as a black body of a temperature inversely proportional to its mass. This temperature is of the order of billionths of a kelvin for stellar black holes, making it essentially impossible to observe directly. Objects whose gravitational fields are too strong for light to escape were first considered in the 18th century by John Michell and Pierre-Simon Laplace. In 1916, Karl Schwarzschild found the first modern solution of general relativity that would characterise a black hole; the Schwarzschild metric is named after him. David Finkelstein, in 1958, first interpreted Schwarzschild's model as a region of space from which nothing can escape. Black holes were long considered a mathematical curiosity; it was not until the 1960s that theoretical work showed they were a generic prediction of general relativity. The first black hole known was Cygnus X-1, identified by several researchers independently in 1971. Black holes typically form when massive stars collapse at the end of their life cycle. After a black hole has formed, it can grow by absorbing mass from its surroundings. Supermassive black holes of millions of solar masses may form by absorbing other stars and merging with other black holes, or via direct collapse of gas clouds. There is consensus that supermassive black holes exist in the centres of most galaxies. The presence of a black hole can be inferred through its interaction with other matter and with electromagnetic radiation such as visible light. Matter falling toward a black hole can form an accretion disk of infalling plasma, heated by friction and emitting light. In extreme cases, this creates a quasar, among the brightest objects in the universe. Merging black holes can also be detected by observation of the gravitational waves they emit. If other stars are orbiting a black hole, their orbits can be used to determine the black hole's mass and location. Such observations can be used to exclude possible alternatives such as neutron stars. In this way, astronomers have identified numerous stellar black hole candidates in binary systems and established that the radio source known as Sagittarius A*, at the core of the Milky Way galaxy, contains a supermassive black hole of about 4.3 million solar masses. History The idea of a body so massive that even light could not escape was first proposed in the late 18th century by English astronomer and clergyman John Michell and independently by French scientist Pierre-Simon Laplace. Both scholars proposed very large stars, in contrast to the modern concept of an extremely dense object. 
In a short part of a letter published in 1784, Michell calculated that a star with the same density as the Sun but 500 times its radius would not let any emitted light escape; the surface escape velocity would exceed the speed of light.: 122 Michell correctly hypothesized that such supermassive but non-radiating bodies might be detectable through their gravitational effects on nearby visible bodies. In 1796, while speculating on the origin of the Solar System in his book Exposition du Système du Monde, Laplace mentioned that a star could be invisible if it were sufficiently large. Franz Xaver von Zach asked Laplace for a mathematical analysis, which Laplace provided and published in a journal edited by von Zach. In 1905, Albert Einstein showed that the laws of electromagnetism would be invariant under a Lorentz transformation: they would be identical for observers travelling at different velocities relative to each other. This discovery became known as the principle of special relativity. Although the laws of mechanics had already been shown to be invariant, gravity had yet to be incorporated.: 19 In 1907, Einstein published a paper proposing his equivalence principle, the hypothesis that inertial mass and gravitational mass have a common cause. Using the principle, Einstein predicted the redshift and half of the lensing effect of gravity on light; the full prediction of gravitational lensing required the development of general relativity.: 19 By 1915, Einstein refined these ideas into his general theory of relativity, which explained how matter affects spacetime, which in turn affects the motion of other matter. This formed the basis for black hole physics. Only a few months after Einstein published the field equations describing general relativity, astrophysicist Karl Schwarzschild set out to apply the idea to stars. He assumed spherical symmetry with no spin and found a solution to Einstein's equations.: 124 A few months after Schwarzschild, Johannes Droste, a student of Hendrik Lorentz, independently gave the same solution. At a certain radius from the center of the mass, the Schwarzschild solution became singular, meaning that some of the terms in the Einstein equations became infinite. The nature of this radius, which later became known as the Schwarzschild radius, was not understood at the time. Many physicists of the early 20th century were skeptical of the existence of black holes. In a 1926 popular science book, Arthur Eddington critiqued the idea of a star with mass compressed to its Schwarzschild radius as a flaw in the then-poorly-understood theory of general relativity.: 134 In 1939, Einstein himself used his theory of general relativity in an attempt to prove that black holes were impossible. His work relied on increasing pressure or increasing centrifugal force balancing the force of gravity so that the object would not collapse beyond its Schwarzschild radius. He missed the possibility that implosion would drive the system below this critical value.: 135 By the 1920s, astronomers had classified a number of white dwarf stars as too cool and dense to be explained by the gradual cooling of ordinary stars. 
In 1926, Ralph Fowler showed that quantum-mechanical degeneracy pressure was larger than thermal pressure at these densities.: 145 In 1931, Subrahmanyan Chandrasekhar calculated that a non-rotating body of electron-degenerate matter below a certain limiting mass is stable, and by 1934 he showed that this explained the catalog of white dwarf stars.: 151 When Chandrasekhar announced his results, Eddington pointed out that stars above this limit would radiate until they were sufficiently dense to prevent light from exiting, a conclusion he considered absurd. Eddington and, later, Lev Landau argued that some yet unknown mechanism would stop the collapse. In the 1930s, Fritz Zwicky and Walter Baade studied stellar novae, focusing on exceptionally bright ones they called supernovae. Zwicky promoted the idea that supernovae produced stars with the density of atomic nuclei—neutron stars—but this idea was largely ignored.: 171 In 1939, based on Chandrasekhar's reasoning, J. Robert Oppenheimer and George Volkoff predicted that neutron stars below a certain mass limit, later called the Tolman–Oppenheimer–Volkoff limit, would be stable due to neutron degeneracy pressure. Above that limit, they reasoned that either their model would not apply or that gravitational contraction would not stop.: 380 John Archibald Wheeler and two of his students resolved questions about the model behind the Tolman–Oppenheimer–Volkoff (TOV) limit. Harrison and Wheeler developed the equations of state relating density to pressure for cold matter all the way through electron degeneracy and neutron degeneracy. Masami Wakano and Wheeler then used the equations to compute the equilibrium curve for stars, relating mass to circumference. They found no additional features that would invalidate the TOV limit. This meant that the only thing that could prevent black holes from forming was a dynamic process ejecting sufficient mass from a star as it cooled.: 205 The modern concept of black holes was formulated by Robert Oppenheimer and his student Hartland Snyder in 1939.: 80 In the paper, Oppenheimer and Snyder solved Einstein's equations of general relativity for an idealized imploding star, in a model later called the Oppenheimer–Snyder model, then described the results from far outside the star. The implosion starts as one might expect: the star material rapidly collapses inward. However, as the density of the star increases, gravitational time dilation increases and the collapse, viewed from afar, seems to slow down further and further until the star reaches its Schwarzschild radius, where it appears frozen in time.: 217 In 1958, David Finkelstein identified the Schwarzschild surface as an event horizon, calling it "a perfect unidirectional membrane: causal influences can cross it in only one direction". In this sense, events that occur inside of the black hole cannot affect events that occur outside of the black hole. Finkelstein created a new reference frame to include the point of view of infalling observers.: 103 Finkelstein's new frame of reference allowed events at the surface of an imploding star to be related to events far away. By 1962 the two points of view were reconciled, convincing many skeptics that implosion into a black hole made physical sense.: 226 The era from the mid-1960s to the mid-1970s was the "golden age of black hole research", when general relativity and black holes became mainstream subjects of research.: 258 In this period, more general black hole solutions were found. 
In 1963, Roy Kerr found the exact solution for a rotating black hole. Two years later, Ezra Newman found the axisymmetric solution for a black hole that is both rotating and electrically charged. In 1967, Werner Israel found that the Schwarzschild solution was the only possible solution for a nonspinning, uncharged black hole, meaning that a Schwarzschild black hole would be defined by its mass alone. Similar identities were later found for Reissner–Nordström and Kerr black holes, defined only by their mass and their charge or spin respectively. Together, these findings became known as the no-hair theorem, which states that a stationary black hole is completely described by the three parameters of the Kerr–Newman metric: mass, angular momentum, and electric charge. At first, it was suspected that the strange mathematical singularities found in each of the black hole solutions only appeared due to the assumption that a black hole would be perfectly spherically symmetric, and therefore the singularities would not appear in generic situations where black holes would not necessarily be symmetric. This view was held in particular by Vladimir Belinski, Isaak Khalatnikov, and Evgeny Lifshitz, who tried to prove that no singularities appear in generic solutions, although they would later reverse their positions. However, in 1965, Roger Penrose proved that general relativity without quantum mechanics requires that singularities appear in all black holes. Astronomical observations also made great strides during this era. In 1967, Antony Hewish and Jocelyn Bell Burnell discovered pulsars, and by 1969 these were shown to be rapidly rotating neutron stars. Until that time, neutron stars, like black holes, were regarded as just theoretical curiosities, but the discovery of pulsars showed their physical relevance and spurred a further interest in all types of compact objects that might be formed by gravitational collapse. Based on observations in Greenwich and Toronto in the early 1970s, Cygnus X-1, a galactic X-ray source discovered in 1964, became the first astronomical object commonly accepted to be a black hole. Work by James Bardeen, Jacob Bekenstein, Carter, and Hawking in the early 1970s led to the formulation of black hole thermodynamics. These laws describe the behaviour of a black hole in close analogy to the laws of thermodynamics by relating mass to energy, area to entropy, and surface gravity to temperature. The analogy was completed: 442 when Hawking, in 1974, showed that quantum field theory implies that black holes should radiate like a black body with a temperature proportional to the surface gravity of the black hole, predicting the effect now known as Hawking radiation. While Cygnus X-1, a stellar-mass black hole, was generally accepted by the scientific community as a black hole by the end of 1973, it would be decades before a supermassive black hole would gain the same broad recognition. Although, as early as the 1960s, physicists such as Donald Lynden-Bell and Martin Rees had suggested that powerful quasars in the center of galaxies were powered by accreting supermassive black holes, little observational proof existed at the time. However, the Hubble Space Telescope, launched decades later, found that supermassive black holes were not only present in these active galactic nuclei, but were ubiquitous: almost every galaxy had a supermassive black hole at its center, many of which were quiescent. 
In 1999, David Merritt proposed the M–sigma relation, which relates the velocity dispersion of matter in the central bulge of a galaxy to the mass of the supermassive black hole at its core. Subsequent studies confirmed this correlation. Around the same time, based on telescope observations of the velocities of stars at the center of the Milky Way galaxy, independent working groups led by Andrea Ghez and Reinhard Genzel concluded that the compact radio source in the center of the galaxy, Sagittarius A*, was likely a supermassive black hole. On 11 February 2016, the LIGO Scientific Collaboration and Virgo Collaboration announced the first direct detection of gravitational waves, named GW150914, representing the first observation of a black hole merger. At the time of the merger, the black holes were approximately 1.4 billion light-years away from Earth and had masses of 30 and 35 solar masses.: 6 In 2017, Rainer Weiss, Kip Thorne, and Barry Barish, who had spearheaded the project, were awarded the Nobel Prize in Physics for their work. Since the initial discovery in 2015, hundreds more gravitational-wave events have been observed by LIGO and another interferometer, Virgo. On 10 April 2019, the first direct image of a black hole and its vicinity was published, following observations made by the Event Horizon Telescope (EHT) in 2017 of the supermassive black hole in Messier 87's galactic centre. In 2022, the Event Horizon Telescope collaboration released an image of the black hole in the center of the Milky Way galaxy, Sagittarius A*; the data had been collected in 2017. In 2020, the Nobel Prize in Physics was awarded for work on black holes. Andrea Ghez and Reinhard Genzel shared one-half for their discovery that Sagittarius A* is a supermassive black hole. Penrose received the other half for his work showing that the mathematics of general relativity requires the formation of black holes. Cosmologists lamented that Hawking's extensive theoretical work on black holes would not be honored, as he had died in 2018. In December 1967, a student reportedly suggested the phrase black hole at a lecture by John Wheeler; Wheeler adopted the term for its brevity and "advertising value", and Wheeler's stature in the field ensured it quickly caught on, leading some to credit Wheeler with coining the phrase. However, the term was used by others around that time. Science writer Marcia Bartusiak traces the term black hole to physicist Robert H. Dicke, who in the early 1960s reportedly compared the phenomenon to the Black Hole of Calcutta, notorious as a prison where people entered but never left alive. The term was used in print by Life and Science News magazines in 1963, and by science journalist Ann Ewing in her article "'Black Holes' in Space", dated 18 January 1964, which was a report on a meeting of the American Association for the Advancement of Science held in Cleveland, Ohio. Definition A black hole is generally defined as a region of spacetime from which no information-carrying signals or objects can escape. However, verifying an object as a black hole by this definition would require waiting an infinite time, at an infinite distance from the black hole, to confirm that nothing has escaped; the definition therefore cannot be used to identify a physical black hole. Broadly, physicists do not have a precisely-agreed-upon definition of a black hole. Among astrophysicists, a black hole is a compact object with a mass larger than four solar masses. 
A black hole may also be defined as a reservoir of information: 142 or a region where space is falling inwards faster than the speed of light. Properties The no-hair theorem postulates that, once it achieves a stable condition after formation, a black hole has only three independent physical properties: mass, electric charge, and angular momentum; the black hole is otherwise featureless. If the conjecture is true, any two black holes that share the same values for these properties, or parameters, are indistinguishable from one another. The degree to which the conjecture is true for real black holes is currently an unsolved problem. The simplest static black holes have mass but neither electric charge nor angular momentum. According to Birkhoff's theorem, these Schwarzschild black holes are the only vacuum solution that is spherically symmetric. Solutions describing more general black holes also exist. Non-rotating charged black holes are described by the Reissner–Nordström metric, while the Kerr metric describes a non-charged rotating black hole. The most general stationary black hole solution known is the Kerr–Newman metric, which describes a black hole with both charge and angular momentum. Contrary to the popular notion of a black hole "sucking in everything" in its surroundings, from far away the external gravitational field of a black hole is identical to that of any other body of the same mass. While a black hole can theoretically have any positive mass, the charge and angular momentum are constrained by the mass. The total electric charge Q and the total angular momentum J are expected to satisfy the inequality \( \frac{Q^{2}}{4\pi \epsilon_{0}} + \frac{c^{2}J^{2}}{GM^{2}} \leq GM^{2} \) for a black hole of mass M. Black holes with the maximum possible charge or spin satisfying this inequality are called extremal black holes. Solutions of Einstein's equations that violate the inequality exist, but they do not possess an event horizon. These are so-called naked singularities that can be observed from the outside. Because these singularities make the universe inherently unpredictable, many physicists believe they could not exist. The weak cosmic censorship hypothesis, proposed by Sir Roger Penrose, rules out the formation of such singularities when they would be created through the gravitational collapse of realistic matter. However, this hypothesis has not yet been proven, and some physicists believe that naked singularities could exist. It is also unknown whether black holes could even become extremal, forming naked singularities, since natural processes counteract increasing spin and charge when a black hole becomes near-extremal. The total mass of a black hole can be estimated by analyzing the motion of objects near the black hole, such as stars or gas. All black holes spin, often rapidly: the stellar black hole GRS 1915+105, for example, has been estimated to spin at over 1,000 revolutions per second. The Milky Way's central black hole Sagittarius A* rotates at about 90% of the maximum rate. The spin rate can be inferred from measurements of atomic spectral lines in the X-ray range. As gas near the black hole plunges inward, high-energy X-ray emission from electron-positron pairs illuminates the gas further out, appearing red-shifted due to relativistic effects. 
A newer way to estimate spin is based on the temperature of gases accreting onto the black hole. The method requires an independent measurement of the black hole's mass and of the inclination angle of the accretion disk, followed by computer modeling. Gravitational waves from coalescing binary black holes can also provide the spins of both progenitor black holes and of the merged hole, but such events are rare. The supermassive black hole at the center of the Messier 87 (M87) galaxy appears to have an angular momentum very close to the maximum theoretical value. That uncharged limit is

$J \leq \frac{GM^{2}}{c},$

allowing definition of a dimensionless spin magnitude such that

$0 \leq \frac{cJ}{GM^{2}} \leq 1.$

Most black holes are believed to have an approximately neutral charge. For example, Michal Zajaček, Arman Tursunov, Andreas Eckart, and Silke Britzen found the electric charge of Sagittarius A* to be at least ten orders of magnitude below the theoretical maximum. A charged black hole repels like charges just as any other charged object does. If a black hole were to become charged, particles with the opposite sign of charge would be pulled in by the extra electromagnetic force, while particles with the same sign of charge would be repelled, neutralizing the black hole. This effect may be weaker if the black hole is also spinning. The presence of charge can reduce the diameter of the black hole by up to 38%. The charge Q for a nonspinning black hole is bounded by

$Q \leq \sqrt{G}\,M,$

where G is the gravitational constant and M is the black hole's mass.

Classification

Black holes can have a wide range of masses. The minimum mass of a black hole formed by stellar gravitational collapse is governed by the maximum mass of a neutron star and is believed to be approximately two to four solar masses. However, theoretical primordial black holes, believed to have formed soon after the Big Bang, could be far smaller, with masses as little as 10^−5 grams at formation. These very small black holes are sometimes called micro black holes. Black holes formed by stellar collapse are called stellar black holes. Estimates of their maximum mass at formation vary, but generally range from 10 to 100 solar masses, with higher estimates for black holes formed from low-metallicity progenitor stars. The mass of a black hole formed via a supernova has a lower bound: if the progenitor star is too small, the collapse may be stopped by the degeneracy pressure of the star's constituents, allowing the condensation of matter into an exotic denser state. Degeneracy pressure arises from the Pauli exclusion principle: identical fermions resist being forced into the same quantum state in the same place. Smaller progenitor stars, with masses less than about 8 M☉, will be supported by the degeneracy pressure of electrons and will become white dwarfs. For more massive progenitor stars, electron degeneracy pressure is no longer strong enough to resist the force of gravity, and the star will instead be supported by neutron degeneracy pressure, which can occur at much higher densities, forming a neutron star.
If the star is still too massive, even neutron degeneracy pressure will not be able to resist the force of gravity, and the star will collapse into a black hole.: 5.8 Stellar black holes can also gain mass via accretion of nearby matter, often from a companion object such as a star. Black holes that are larger than stellar black holes but smaller than supermassive black holes are called intermediate-mass black holes, with masses of approximately 10^2 to 10^5 solar masses. These black holes seem to be rarer than their stellar and supermassive counterparts, with relatively few candidates having been observed. Physicists have speculated that such black holes may form from collisions in globular and star clusters or at the centers of low-mass galaxies. They may also form as the result of mergers of smaller black holes, with several LIGO observations finding merged black holes in the 110–350 solar mass range. The black holes with the largest masses are called supermassive black holes, with masses more than 10^6 times that of the Sun. These black holes are believed to exist at the centers of almost every large galaxy, including the Milky Way. Some scientists have proposed a subcategory of even larger black holes, called ultramassive black holes, with masses greater than 10^9–10^10 solar masses. Theoretical models predict that the accretion disc that feeds a black hole becomes unstable once the hole reaches 50–100 billion times the mass of the Sun, setting a rough upper limit to black hole mass.

Structure

While black holes are conceptually invisible sinks of all matter and light, in astronomical settings their enormous gravity alters the motion of surrounding objects and pulls nearby gas inward at near-light speed, making the regions around black holes some of the brightest objects in the universe. Some black holes have relativistic jets: thin streams of plasma travelling away from the black hole at more than one-tenth of the speed of light. A small fraction of the matter falling towards the black hole gets accelerated away along the hole's rotation axis. These jets can extend as far as millions of parsecs from the black hole itself. Black holes of any mass can have jets; however, they are typically observed around spinning black holes with strongly magnetized accretion disks. Relativistic jets were more common in the early universe, when galaxies and their corresponding supermassive black holes were rapidly gaining mass. All black holes with jets also have an accretion disk, but the jets are usually brighter than the disk. Quasars, typically found in other galaxies, are believed to be supermassive black holes with jets; microquasars are believed to be stellar-mass objects with jets, typically observed in the Milky Way. The mechanism by which jets form is not yet known, but several options have been proposed. One proposed way to fuel these jets is the Blandford–Znajek process, in which the dragging of magnetic field lines by a black hole's rotation could launch jets of matter into space. The Penrose process, which involves extraction of a black hole's rotational energy, has also been proposed as a potential mechanism of jet propulsion.
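Before turning to accretion disks, it helps to fix scales: the mass classes above correspond to wildly different horizon sizes via the Schwarzschild radius r_s = 2GM/c² (the formula is quoted later in the article). A minimal sketch; the masses are illustrative picks within each class, not measured values:

```python
# Horizon scale across the black hole mass classes: r_s = 2*G*M/c^2.
G, c, M_sun = 6.674e-11, 2.998e8, 1.989e30  # SI units

def schwarzschild_radius_m(mass_kg):
    return 2 * G * mass_kg / c**2

examples = [
    ("micro / primordial, 1e12 kg", 1e12),
    ("stellar, 10 M_sun", 10 * M_sun),
    ("intermediate, 1e3 M_sun", 1e3 * M_sun),
    ("supermassive, 4e6 M_sun (~Sgr A*)", 4e6 * M_sun),
    ("ultramassive, 1e10 M_sun", 1e10 * M_sun),
]
for label, mass in examples:
    print(f"{label}: r_s ~ {schwarzschild_radius_m(mass):.3g} m")
```

A 10 M☉ hole has a horizon of about 30 km, while a 10^10 M☉ hole's horizon is roughly 200 astronomical units across in radius.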
Due to conservation of angular momentum, gas falling into the gravitational well created by a massive object will typically form a disk-like structure around the object.: 242 As the disk's angular momentum is transferred outward by internal processes, its matter falls farther inward, converting gravitational energy into heat and releasing a large flux of X-rays. The temperature of these disks can range from thousands to millions of kelvins, and temperatures can differ throughout a single accretion disk. Accretion disks can also emit in other parts of the electromagnetic spectrum, depending on the disk's turbulence and magnetization and the black hole's mass and angular momentum. Accretion disks can be classified as geometrically thin or geometrically thick. Geometrically thin disks are mostly confined to the black hole's equatorial plane and have a well-defined edge at the innermost stable circular orbit (ISCO), while geometrically thick disks are supported by internal pressure and temperature and can extend inside the ISCO. Disks with high rates of electron scattering and absorption, appearing bright and opaque, are called optically thick; optically thin disks are more translucent and produce fainter images when viewed from afar. Accretion disks of black holes accreting beyond the Eddington limit are often referred to as Polish doughnuts because of their thick, toroidal shape. Quasar accretion disks are expected to usually appear blue in color. The disk of a stellar black hole, on the other hand, would likely look orange, yellow, or red, with its inner regions being the brightest. Theoretical research suggests that the hotter a disk is, the bluer it should be, although this is not always supported by observations of real astronomical objects. Accretion disk colors may also be altered by the Doppler effect, with the part of the disk travelling towards an observer appearing bluer and brighter and the part travelling away appearing redder and dimmer.

In Newtonian gravity, test particles can stably orbit at arbitrary distances from a central object. In general relativity, however, there exists a smallest radius at which a massive particle can orbit stably. Any infinitesimal inward perturbation to this orbit will lead to the particle spiraling into the black hole, and any outward perturbation will, depending on the energy, cause the particle to spiral in, move to a stable orbit further from the black hole, or escape to infinity. This orbit is called the innermost stable circular orbit, or ISCO. The location of the ISCO depends on the spin of the black hole and on the spin of the particle itself. In the case of a Schwarzschild black hole (spin zero) and a particle without spin, the location of the ISCO is

$r_{\rm {ISCO}} = 3\,r_{\text{s}} = \frac{6\,GM}{c^{2}},$

where $r_{\rm {ISCO}}$ is the radius of the ISCO, $r_{\text{s}}$ is the Schwarzschild radius of the black hole, G is the gravitational constant, and c is the speed of light. The radius of this orbit changes slightly with particle spin. For charged black holes, the ISCO moves inwards. For spinning black holes, the ISCO moves inwards for particles orbiting in the same direction as the black hole's spin (prograde) and outwards for particles orbiting in the opposite direction (retrograde). For example, the ISCO for a particle orbiting retrograde can be as far out as about $9r_{\text{s}}$, while the ISCO for a particle orbiting prograde can be as close as the event horizon itself.
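As a quick numerical sketch of the zero-spin formula above (spin moves these radii inward or outward, as just described; the masses are illustrative):

```python
# ISCO for a non-spinning black hole: r_isco = 6*G*M/c^2, i.e. three
# Schwarzschild radii.
G, c, M_sun = 6.674e-11, 2.998e8, 1.989e30

def r_isco_km(mass_kg):
    return 6 * G * mass_kg / c**2 / 1e3

print(f"stellar, 10 M_sun:   {r_isco_km(10 * M_sun):.0f} km")     # ~89 km
print(f"Sgr A*, 4.3e6 M_sun: {r_isco_km(4.3e6 * M_sun):.3g} km")  # ~3.8e7 km
```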
The photon sphere is a spherical boundary on which photons moving tangentially to the sphere are bent completely around the black hole, possibly orbiting multiple times. Light rays with impact parameters less than the radius of the photon sphere enter the black hole. For Schwarzschild black holes, the photon sphere has a radius 1.5 times the Schwarzschild radius; for non-Schwarzschild black holes, the radius is at least 1.5 times the radius of the event horizon. When viewed from a great distance, the photon sphere creates an observable black hole shadow. Since no light emerges from within the black hole, this shadow is the limit for possible observations.: 152 The shadows of colliding black holes should have characteristic warped shapes, allowing scientists to detect black holes that are about to merge. While light can still escape from the photon sphere, any light that crosses the photon sphere on an inbound trajectory will be captured by the black hole. Therefore, any light that reaches an outside observer from the photon sphere must have been emitted by objects between the photon sphere and the event horizon. Light emitted towards the photon sphere may also curve around the black hole and return to the emitter. For a rotating, uncharged black hole, the radius of the photon sphere depends on the spin parameter and on whether the photon orbits prograde or retrograde. For a photon orbiting prograde, the photon sphere lies 1–3 Schwarzschild radii from the center of the black hole, while for a photon orbiting retrograde it lies 3–5 Schwarzschild radii from the center; the exact location depends on the magnitude of the black hole's rotation. For a charged, nonrotating black hole, there is only one photon sphere, whose radius decreases with increasing black hole charge. For non-extremal, charged, rotating black holes, there are always two photon spheres, with the exact radii depending on the parameters of the black hole.

Near a rotating black hole, spacetime rotates like a vortex. The rotating spacetime drags any matter and light into rotation around the spinning black hole. This effect of general relativity, called frame dragging, grows stronger closer to the spinning mass. The region of spacetime in which it is impossible to stay still is called the ergosphere. The ergosphere of a black hole is a volume bounded by the black hole's event horizon and the ergosurface, which coincides with the event horizon at the poles but bulges out from it around the equator. Matter and radiation can escape from the ergosphere. Through the Penrose process, objects can emerge from the ergosphere with more energy than they entered with. The extra energy is taken from the rotational energy of the black hole, slowing its rotation.: 268 A variation of the Penrose process in the presence of strong magnetic fields, the Blandford–Znajek process, is considered a likely mechanism for the enormous luminosity and relativistic jets of quasars and other active galactic nuclei. The observable region of spacetime around a black hole closest to its event horizon is called the plunging region.
In this area it is no longer possible for free-falling matter to follow circular orbits or to halt its final descent into the black hole. Instead, it rapidly plunges toward the black hole at close to the speed of light, growing increasingly hot and producing a characteristic, detectable thermal emission. Light and radiation emitted from this region can, however, still escape the black hole's gravitational pull. For a nonspinning, uncharged black hole, the radius of the event horizon, or Schwarzschild radius, is proportional to the mass M through

$r_{\mathrm {s}} = \frac{2GM}{c^{2}} \approx 2.95\,\frac{M}{M_{\odot}}~\mathrm{km},$

where $r_{\mathrm {s}}$ is the Schwarzschild radius and M☉ is the mass of the Sun.: 124 For a black hole with nonzero spin or electric charge, the radius is smaller, until an extremal black hole has an event horizon close to

$r_{\mathrm {+}} = \frac{GM}{c^{2}},$

half the radius of a nonspinning, uncharged black hole of the same mass. Since the volume within the Schwarzschild radius increases with the cube of the radius, the average density of a black hole inside its Schwarzschild radius is inversely proportional to the square of its mass: supermassive black holes are much less dense than stellar black holes. The average density of a 10^8 M☉ black hole is comparable to that of water. The defining feature of a black hole is the existence of an event horizon, a boundary in spacetime through which matter and light can pass only inward towards the center of the black hole. Nothing, not even light, can escape from inside the event horizon. The event horizon is referred to as such because if an event occurs within the boundary, information from that event cannot reach or affect an outside observer, making it impossible to determine whether such an event occurred.: 179 For non-rotating black holes, the geometry of the event horizon is precisely spherical, while for rotating black holes the event horizon is oblate. To a distant observer, a clock near a black hole would appear to tick more slowly than one further from the black hole.: 217 This effect, known as gravitational time dilation, would also cause an object falling into a black hole to appear to slow as it approached the event horizon, never quite reaching the horizon from the perspective of an outside observer.: 218 All processes on this object would appear to slow down, and any light emitted by the object would appear redder and dimmer, an effect known as gravitational redshift. An object falling from half a Schwarzschild radius above the event horizon would fade away until it could no longer be seen, disappearing from view within one hundredth of a second. It would also appear to flatten onto the black hole, joining all other material that had ever fallen into the hole. On the other hand, an observer falling into a black hole would not notice any of these effects as they cross the event horizon. Their own clock appears to them to tick normally, and they cross the event horizon after a finite time without noting any singular behaviour. In general relativity, it is impossible to determine the location of the event horizon from local observations, due to Einstein's equivalence principle.: 222
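The inverse-square density scaling described above is easy to verify numerically; the claim that a 10^8 M☉ hole is roughly as dense as water comes out within a factor of two. A minimal sketch:

```python
import math

# Mean density inside the Schwarzschild radius: rho = M / (4/3 * pi * r_s^3),
# which scales as 1/M^2 because r_s is proportional to M.
G, c, M_sun = 6.674e-11, 2.998e8, 1.989e30

def mean_density_kg_m3(mass_kg):
    r_s = 2 * G * mass_kg / c**2
    return mass_kg / (4.0 / 3.0 * math.pi * r_s**3)

print(f"10 M_sun:  {mean_density_kg_m3(10 * M_sun):.2e} kg/m^3")   # ~2e17, nuclear-like
print(f"1e8 M_sun: {mean_density_kg_m3(1e8 * M_sun):.2e} kg/m^3")  # ~2e3, water-like
```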
Black holes that are rotating and/or charged have an inner horizon, often called the Cauchy horizon, inside the black hole. The inner horizon is divided into two segments: an ingoing section and an outgoing section. At the ingoing section of the Cauchy horizon, radiation and matter falling into the black hole would build up at the horizon, causing the curvature of spacetime there to grow to infinity. This would cause an infalling observer to experience tidal forces. The phenomenon is often called mass inflation, since it is associated with a parameter dictating the black hole's internal mass growing exponentially, and the buildup of tidal forces is called the mass-inflation singularity or Cauchy horizon singularity. Some physicists have argued that in realistic black holes, accretion and Hawking radiation would stop mass inflation from occurring. At the outgoing section of the inner horizon, infalling radiation would backscatter off the black hole's spacetime curvature and travel outward, building up at the outgoing Cauchy horizon. This would cause an infalling observer to experience a gravitational shock wave and tidal forces as the spacetime curvature at the horizon grew to infinity. This buildup of tidal forces is called the shock singularity. Both of these singularities are weak, meaning that an object crossing them would be deformed only a finite amount by tidal forces, even though the spacetime curvature is infinite at the singularity. This is in contrast to a strong singularity, where an object hitting the singularity would be stretched and squeezed by an infinite amount. They are also null singularities, meaning that a photon could travel parallel to them without ever being intercepted.

Ignoring quantum effects, every black hole contains a singularity: a region where the curvature of spacetime becomes infinite and geodesics terminate within a finite proper time.: 205 For a non-rotating black hole, this region takes the shape of a single point; for a rotating black hole it is smeared out to form a ring singularity that lies in the plane of rotation.: 264 In both cases, the singular region has zero volume. All of the mass of the black hole ends up in the singularity.: 252 Since the singularity contains nonzero mass in an infinitely small space, it can be thought of as having infinite density. Observers falling into a Schwarzschild black hole (i.e., non-rotating and uncharged) cannot avoid being carried into the singularity once they cross the event horizon. As they fall further in, they will be torn apart by the growing tidal forces, in a process sometimes referred to as spaghettification or the noodle effect. Eventually, they will reach the singularity and be crushed into an infinitely small point.: 182 However, any perturbations, such as those caused by matter or radiation falling in, would cause space to oscillate chaotically near the singularity. Any matter falling in would experience intense tidal forces rapidly changing in direction, all while being compressed into an ever smaller volume. Alternative forms of general relativity, including the addition of some quantum effects, can lead to regular, or nonsingular, black holes without singularities. For example, the fuzzball model, based on string theory, holds that black holes are actually made up of quantum microstates and need have neither a singularity nor an event horizon. The theory of loop quantum gravity proposes that the curvature and density at the center of a black hole are large but not infinite.

Formation

Black holes are formed by the gravitational collapse of massive stars, either by direct collapse or during a supernova explosion in a process called fallback.
Black holes can also result from the merger of two neutron stars, or of a neutron star and a black hole. Other, more speculative mechanisms include primordial black holes created from density fluctuations in the early universe, the collapse of dark stars (hypothetical objects powered by the annihilation of dark matter), and collapse of hypothetical self-interacting dark matter. Gravitational collapse occurs when an object's internal pressure is insufficient to resist the object's own gravity. At the end of a star's life, it runs out of hydrogen to fuse and starts fusing progressively heavier elements, up to iron. Since the fusion of elements heavier than iron would require more energy than it releases, nuclear fusion then ceases. If the iron core of the star is too massive, the star will no longer be able to support itself and will undergo gravitational collapse. While most of the energy released during gravitational collapse is emitted very quickly, an outside observer does not actually see the end of this process. Even though the collapse takes a finite amount of time in the reference frame of the infalling matter, a distant observer would see the infalling material slow and halt just above the event horizon, due to gravitational time dilation. Light from the collapsing material takes longer and longer to reach the observer, with the delay growing to infinity as the emitting material reaches the event horizon. Thus the external observer never sees the formation of the event horizon; instead, the collapsing material seems to become dimmer and increasingly red-shifted, eventually fading away.

Observations of quasars at redshift $z \sim 7$, less than a billion years after the Big Bang, have led to investigations of other ways to form black holes. The accretion process that builds supermassive black holes has a limiting rate of mass accumulation, and a billion years is not enough time for an ordinary stellar remnant to reach quasar masses. One suggestion is the direct collapse of nearly pure hydrogen (low-metallicity) gas clouds characteristic of the young universe, forming a supermassive star which then collapses into a black hole. It has been suggested that seed black holes with typical masses of ~10^5 M☉ could have formed in this way and then grown to ~10^9 M☉. However, the very large amount of gas required for direct collapse typically fragments into multiple stars instead. Thus another approach suggests massive star formation followed by collisions that seed massive black holes, which ultimately merge to create a quasar.: 85 A neutron star in a common envelope with a regular star can accrete sufficient material to collapse into a black hole, or two neutron stars can merge. These avenues for the formation of black holes are considered relatively rare.

In the current epoch of the universe, the conditions needed to form black holes are rare and are mostly found only in stars. In the early universe, however, conditions may have allowed black hole formation by other means. Fluctuations of spacetime soon after the Big Bang may have formed regions denser than their surroundings. Initially, these regions would not have been compact enough to form black holes, but eventually the curvature of spacetime within them could become large enough to cause them to collapse into black holes. Different models for the early universe vary widely in their predictions of the scale of these fluctuations.
Various models predict the creation of primordial black holes ranging from a Planck mass (~2.2×10^−8 kg) to hundreds of thousands of solar masses. Primordial black holes with masses less than 10^15 g would have evaporated by now due to Hawking radiation. Despite being extremely dense, the early universe did not re-collapse into a black hole during the Big Bang, since it was expanding rapidly and lacked the gravitational differential necessary for black hole formation. Models for the gravitational collapse of objects of relatively constant size, such as stars, do not necessarily apply in the same way to rapidly expanding space such as that of the Big Bang. In principle, black holes could be formed in high-energy particle collisions that achieve sufficient density, although no such events have been detected. These hypothetical micro black holes, which could form from collisions of cosmic rays with Earth's atmosphere or in particle accelerators like the Large Hadron Collider, would not be able to aggregate additional mass. Instead, they would evaporate in about 10^−25 seconds, posing no threat to the Earth.

Evolution

Black holes can also merge with other objects such as stars or even other black holes. This is thought to have been important especially in the early growth of supermassive black holes, which could have formed from the aggregation of many smaller objects. The process has also been proposed as the origin of some intermediate-mass black holes. Mergers of supermassive black holes may take a long time: as a binary of supermassive black holes approaches, most nearby stars are ejected, leaving little matter for the black holes to interact with gravitationally that would allow them to draw closer together. This has been called the final parsec problem, as the distance at which the stalling happens is usually around one parsec. When a black hole accretes matter, the gas in the inner accretion disk orbits at very high speeds because of its proximity to the black hole. The resulting friction heats the inner disk to temperatures at which it emits vast amounts of electromagnetic radiation (mainly X-rays) detectable by telescopes. By the time the matter of the disk reaches the ISCO, between 5.7% and 42% of its mass will have been converted to energy, depending on the black hole's spin. About 90% of this energy is released within about 20 black hole radii. In many cases, accretion disks are accompanied by relativistic jets emitted along the black hole's poles, which carry away much of the energy. The mechanism for the creation of these jets is currently not well understood, in part due to insufficient data. Many of the universe's most energetic phenomena have been attributed to the accretion of matter onto black holes. Active galactic nuclei and quasars are believed to be the accretion disks of supermassive black holes. X-ray binaries are generally accepted to be binary systems in which one of the two objects is a compact object accreting matter from its companion. Ultraluminous X-ray sources may be the accretion disks of intermediate-mass black holes. At a certain rate of accretion, the outward radiation pressure becomes as strong as the inward gravitational force, and the black hole should be unable to accrete any faster. This limit is called the Eddington limit. Nevertheless, many black holes accrete beyond this rate due to their non-spherical geometry or to instabilities in the accretion disk.
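For ionized hydrogen the Eddington limit has a simple closed form, L_Edd = 4πGM m_p c/σ_T, where m_p is the proton mass and σ_T the Thomson cross-section. A minimal sketch with illustrative masses:

```python
import math

# Eddington luminosity for ionized hydrogen: radiation pressure on
# electrons balances gravity at L_Edd = 4*pi*G*M*m_p*c / sigma_T.
G, c = 6.674e-11, 2.998e8
m_p = 1.673e-27        # proton mass, kg
sigma_T = 6.652e-29    # Thomson cross-section, m^2
M_sun, L_sun = 1.989e30, 3.828e26

def eddington_luminosity_W(mass_kg):
    return 4 * math.pi * G * mass_kg * m_p * c / sigma_T

for m in (10, 1e8):  # a stellar-mass and a supermassive example
    L = eddington_luminosity_W(m * M_sun)
    print(f"{m:g} M_sun: L_Edd ~ {L:.2e} W ~ {L / L_sun:.2e} L_sun")
```

For one solar mass this works out to about 1.3×10^31 W, so a 10^8 M☉ hole radiating at its Eddington limit already outshines an entire ordinary galaxy.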
Accretion beyond the limit is called super-Eddington accretion and may have been commonplace in the early universe. Stars have been observed being torn apart by tidal forces in the immediate vicinity of supermassive black holes in galaxy nuclei, in what is known as a tidal disruption event (TDE). Some of the material from the disrupted star forms an accretion disk around the black hole, which emits observable electromagnetic radiation. The correlation between the masses of supermassive black holes at the centres of galaxies and the velocity dispersion and mass of stars in their host bulges suggests that the formation of galaxies and of their central black holes are related. Black hole winds from rapid accretion, particularly when the galaxy itself is still accreting matter, can compress nearby gas, accelerating star formation. However, if the winds become too strong, the black hole may blow nearly all of the gas out of the galaxy, quenching star formation. Black hole jets may also energize nearby cavities of plasma and eject low-entropy gas out of the galactic core, causing gas in galactic centers to be hotter than expected.

If Hawking's theory of black hole radiation is correct, black holes are expected to shrink and evaporate over time as they lose mass through the emission of photons and other particles. The temperature of this thermal spectrum (the Hawking temperature) is proportional to the surface gravity of the black hole, which is inversely proportional to the mass. Hence, large black holes emit less radiation than small black holes.: Ch. 9.6 A stellar black hole of 1 M☉ has a Hawking temperature of 62 nanokelvins. This is far less than the 2.7 K temperature of the cosmic microwave background radiation. Stellar-mass and larger black holes receive more mass from the cosmic microwave background than they emit through Hawking radiation, and thus will grow instead of shrinking. To have a Hawking temperature above 2.7 K (and so be able to evaporate), a black hole would need a mass less than that of the Moon. Such a black hole would have a diameter of less than a tenth of a millimetre. The Hawking radiation of an astrophysical black hole is predicted to be very weak and would thus be exceedingly difficult to detect from Earth. A possible exception is the burst of gamma rays emitted in the last stage of the evaporation of primordial black holes. Searches for such flashes have proven unsuccessful and provide stringent limits on the possible existence of low-mass primordial black holes, with modern research predicting that primordial black holes must make up less than a fraction of about 10^−7 of the universe's total mass. NASA's Fermi Gamma-ray Space Telescope, launched in 2008, has searched for these flashes but has not yet found any.
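These figures follow from the standard formula T_H = ħc³/(8πGMk_B). A minimal numerical check; the Moon's mass below is an approximate textbook value:

```python
import math

# Hawking temperature, inversely proportional to mass:
# T_H = hbar * c^3 / (8 * pi * G * M * k_B)
hbar = 1.055e-34   # reduced Planck constant, J s
k_B = 1.381e-23    # Boltzmann constant, J/K
G, c, M_sun = 6.674e-11, 2.998e8, 1.989e30

def hawking_temperature_K(mass_kg):
    return hbar * c**3 / (8 * math.pi * G * mass_kg * k_B)

print(f"1 M_sun: {hawking_temperature_K(M_sun):.2e} K")  # ~6.2e-8 K, i.e. 62 nK

# Mass whose Hawking temperature equals the 2.7 K microwave background:
M_break_even = hbar * c**3 / (8 * math.pi * G * k_B * 2.7)
print(f"T_H = 2.7 K at M ~ {M_break_even:.2e} kg (the Moon is ~7.3e22 kg)")
```

The break-even mass comes out near 4.5×10^22 kg, comfortably below the Moon's mass, consistent with the statement above.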
The properties of a black hole are constrained and interrelated by the theories that predict them. When derived from general relativity, these relationships are called the laws of black hole mechanics. For a black hole that is not still forming or accreting matter, the zeroth law of black hole mechanics states that the black hole's surface gravity is constant across the event horizon. The first law relates changes in the black hole's surface area, angular momentum, and charge to changes in its energy. The second law states that the surface area of a black hole never decreases on its own. Finally, the third law states that the surface gravity of a black hole is never zero. These laws are mathematical analogs of the laws of thermodynamics. They are not equivalent, however, because according to general relativity without quantum mechanics, a black hole can never emit radiation, and thus its temperature must always be zero.: 11 Quantum mechanics predicts that a black hole will continuously emit thermal Hawking radiation, and therefore must always have a nonzero temperature. It also predicts that all black holes have entropy that scales with their surface area. When quantum mechanics is accounted for, the laws of black hole mechanics become equivalent to the classical laws of thermodynamics. However, these conclusions are derived without a complete theory of quantum gravity, although many candidate theories do predict that black holes have entropy and temperature. Thus, the true quantum nature of black hole thermodynamics continues to be debated.: 29

Observational evidence

Millions of black holes of around 30 solar masses, derived from stellar collapse, are expected to exist in the Milky Way. Even a dwarf galaxy like Draco should have hundreds. Only a few of these have been detected. By nature, black holes do not themselves emit any electromagnetic radiation other than the hypothetical Hawking radiation, so astrophysicists searching for black holes must generally rely on indirect observations. The defining characteristic of a black hole is its event horizon. The horizon itself cannot be imaged, so all other possible explanations for these indirect observations must be considered and eliminated before concluding that a black hole has been observed.: 11 The Event Horizon Telescope (EHT) is a global system of radio telescopes capable of directly observing a black hole's shadow. The angular resolution of a telescope is set by its aperture and the wavelengths it observes. Because the angular diameters of Sagittarius A* and Messier 87* in the sky are very small, a single telescope would need to be about the size of the Earth to clearly distinguish their horizons at radio wavelengths. By combining data from several radio telescopes around the world, the Event Horizon Telescope creates an effective aperture with the diameter of the Earth. The EHT team used imaging algorithms to compute the most probable image from the data in its observations of Sagittarius A* and M87*.

Gravitational-wave interferometry can be used to detect merging black holes and other compact objects. In this method, a laser beam is split and sent down two long perpendicular arms. The beams reflect off mirrors at the ends of the arms and recombine at the intersection, where they are arranged to cancel each other out. When a gravitational wave passes, however, it warps spacetime, changing the lengths of the arms themselves. Since each laser beam then travels a slightly different distance, the beams no longer cancel and produce a recognizable signal. Analysis of the signal gives scientists information about what caused the gravitational waves. Because gravitational waves are very weak, observatories such as LIGO must have arms several kilometers long and must carefully control for terrestrial noise in order to detect them. Since the first detection in 2015, multiple gravitational waves from black holes have been detected and analyzed.

The proper motions of stars near the centre of the Milky Way provide strong observational evidence that these stars are orbiting a supermassive black hole. Since 1995, astronomers have tracked the motions of 90 stars orbiting an invisible object coincident with the radio source Sagittarius A*.
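The underlying estimate is ordinary Keplerian dynamics: in units of astronomical units, years, and solar masses, the enclosed mass is simply a³/T². A toy calculation with rough orbital elements for the star S2 (approximate values, used here for illustration only):

```python
# Kepler's third law in convenient units:
# M (solar masses) = a^3 / T^2, with a in AU and T in years.
a_au = 970.0   # approximate semi-major axis of S2's orbit, AU
T_yr = 16.0    # approximate orbital period, years

enclosed_mass_msun = a_au**3 / T_yr**2
print(f"Enclosed mass ~ {enclosed_mass_msun:.2e} M_sun")  # ~3.6e6 M_sun
```

This lands at the right order of magnitude compared with the refined 4.3×10^6 M☉ value quoted below.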
In 1998, by fitting the motions of the stars to Keplerian orbits, astronomers were able to infer that a mass of 2.6×10^6 M☉ must be contained within a radius of 0.02 light-years. Since then, one of the stars, called S2, has completed a full orbit. From the orbital data, astronomers refined the mass of Sagittarius A* to 4.3×10^6 M☉, confined within a radius of less than 0.002 light-years. This upper-limit radius is still larger than the Schwarzschild radius for the estimated mass, so the measurement alone does not prove that Sagittarius A* is a black hole. Nevertheless, these observations strongly suggest that the central object is a supermassive black hole, as there are no other plausible scenarios for confining so much invisible mass in such a small volume. Additionally, there is some observational evidence that this object might possess an event horizon, a feature unique to black holes. The Event Horizon Telescope image of Sagittarius A*, released in 2022, provided further confirmation that it is indeed a black hole.

X-ray binaries are binary systems that emit most of their radiation in the X-ray part of the electromagnetic spectrum. These X-ray emissions result when a compact object accretes matter from an ordinary star. The presence of an ordinary star in such a system provides an opportunity for studying the central object and determining whether it might be a black hole. By measuring the orbital period of the binary, the distance to the binary from Earth, and the mass of the companion star, scientists can estimate the mass of the compact object. The Tolman–Oppenheimer–Volkoff limit (TOV limit) sets the largest possible mass of a nonrotating neutron star, estimated to be about two solar masses. While a rotating neutron star can be slightly more massive, if the compact object is much more massive than the TOV limit it cannot be a neutron star and is generally expected to be a black hole. The first strong candidate for a black hole, Cygnus X-1, was discovered in this way by Charles Thomas Bolton, Louise Webster, and Paul Murdin in 1972. Observations of the rotational broadening of the optical star, reported in 1986, led to a compact-object mass estimate of 16 solar masses, with 7 solar masses as the lower bound. In 2011, this estimate was updated to 14.1±1.0 M☉ for the black hole and 19.2±1.9 M☉ for the optical stellar companion. X-ray binaries can be categorized as either low-mass or high-mass; this classification is based on the mass of the companion star, not of the compact object itself. In a class of X-ray binaries called soft X-ray transients, the companion star is of relatively low mass, allowing more accurate estimates of the black hole mass. These systems actively emit X-rays for only several months once every 10–50 years. During the period of low X-ray emission, called quiescence, the accretion disk is extremely faint, allowing detailed observation of the companion star. Numerous black hole candidates have been measured by this method. Black holes are also sometimes found in binaries with other compact objects, such as white dwarfs, neutron stars, and other black holes.

The centre of nearly every galaxy contains a supermassive black hole. The close observational correlation between the mass of this hole and the velocity dispersion of the host galaxy's bulge, known as the M–sigma relation, strongly suggests a connection between the formation of the black hole and that of the galaxy itself.
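Returning to the X-ray binary method above: a standard way to turn such orbital measurements into a hard lower bound on the compact object's mass is the binary mass function, f = PK³/(2πG), where P is the orbital period and K the companion's radial-velocity semi-amplitude; f always underestimates the compact object's true mass. A sketch with rough, Cygnus X-1-like illustrative numbers (not the published measurement):

```python
import math

# Binary mass function: f = P * K^3 / (2*pi*G). It equals
# M_x^3 * sin^3(i) / (M_x + M_companion)^2, so f <= M_x always.
G, M_sun = 6.674e-11, 1.989e30

def mass_function_msun(period_days, K_km_s):
    P = period_days * 86400.0   # orbital period, s
    K = K_km_s * 1e3            # radial-velocity semi-amplitude, m/s
    return P * K**3 / (2 * math.pi * G) / M_sun

# Illustrative Cygnus X-1-like inputs (approximate literature-style values):
print(f"f ~ {mass_function_msun(5.6, 75.0):.2f} M_sun")  # ~0.25 M_sun
```

The full mass estimate then follows once the companion's mass and the orbital inclination are constrained, which is how the much larger values quoted above are obtained.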
Astronomers use the term active galaxy to describe galaxies with unusual characteristics, such as atypical spectral-line emission and very strong radio emission. Theoretical and observational studies have shown that the high levels of activity in the centers of these galaxies, regions called active galactic nuclei (AGN), may be explained by accretion onto supermassive black holes. These AGN consist of a central black hole that may be millions or billions of times more massive than the Sun, a disk of interstellar gas and dust called an accretion disk, and two jets perpendicular to the accretion disk. Although supermassive black holes are expected to be found in most AGN, only some galaxies' nuclei have been studied carefully enough to both identify and measure the actual masses of the central supermassive black hole candidates. Some of the most notable galaxies with supermassive black hole candidates include the Andromeda Galaxy, Messier 32, Messier 87, the Sombrero Galaxy, and the Milky Way itself.

Another way black holes can be detected is through observation of the effects of their strong gravitational field. One such effect is gravitational lensing: the deformation of spacetime around a massive object causes light rays to be deflected, making objects behind it appear distorted. When the lensing object is a black hole, this effect can be strong enough to create multiple images of a star or other luminous source. However, the angular separation between the lensed images may be too small for contemporary telescopes to resolve; this regime is called microlensing. Instead of seeing two images of a lensed star, astronomers see the star brighten slightly as the black hole moves towards the line of sight between the star and Earth, and then return to its normal luminosity as the black hole moves away. The turn of the millennium saw the first three candidate detections of black holes in this way, and in January 2022 astronomers reported the first confirmed detection of a microlensing event from an isolated black hole. This was also the first determination of an isolated black hole's mass: 7.1±1.3 M☉.

Alternatives

While there is a strong case for supermassive black holes, the model for stellar-mass black holes assumes an upper limit for the mass of a neutron star: objects observed to have more mass are assumed to be black holes. However, the properties of extremely dense matter are poorly understood, and new exotic phases of matter could allow other kinds of massive objects. Quark stars would be made up of quark matter and supported by quark degeneracy pressure, a form of degeneracy pressure even stronger than neutron degeneracy pressure; this would halt gravitational collapse at a higher mass than for a neutron star. Hypothetical electroweak stars would go further, converting quarks in their cores into leptons and providing additional pressure to stop the star from collapsing. If, as some extensions of the Standard Model posit, quarks and leptons are made up of even smaller fundamental particles called preons, a very compact star could be supported by preon degeneracy pressure.
While none of these hypothetical models can explain all of the observations of stellar black hole candidates, a Q star is the only alternative that could significantly exceed the mass limit for neutron stars and thus provide an alternative for supermassive black holes.: 12 A few theoretical objects have been conjectured to match observations of astronomical black hole candidates identically or near-identically while functioning via a different mechanism. A dark energy star would convert infalling matter into vacuum energy; this vacuum energy would be much larger than that of the surrounding space, exerting outward pressure and preventing a singularity from forming. A black star would be collapsing gravitationally slowly enough that quantum effects keep it just on the cusp of fully collapsing into a black hole. A gravastar would consist of a very thin shell and a dark-energy interior providing outward pressure to stop the collapse into a black hole or the formation of a singularity; it could even have another gravastar inside, called a 'nestar'.

Open questions

According to the no-hair theorem, a black hole is defined by only three parameters: its mass, charge, and angular momentum. This seems to mean that all other information about the matter that went into forming the black hole is lost, as there is no way to determine anything about the black hole from outside other than those three parameters. When black holes were thought to persist forever, this information loss was not problematic, as the information could be thought of as existing inside the black hole. However, black holes slowly evaporate by emitting Hawking radiation, and this radiation does not appear to carry any additional information about the matter that formed the black hole, meaning that the information is seemingly gone forever. This is called the black hole information paradox. Theoretical studies analyzing the paradox have led to both further paradoxes and new ideas about the intersection of quantum mechanics and general relativity. While there is no consensus on the resolution of the paradox, work on the problem is expected to be important for a theory of quantum gravity.: 126

Observations of faraway galaxies have found that ultraluminous quasars, powered by supermassive black holes, existed in the early universe at redshifts of $z \geq 7$. These black holes have been assumed to be the products of the gravitational collapse of large Population III stars. However, such stellar remnants were not massive enough to produce the quasars observed at early times without accreting beyond the Eddington limit, the theoretical maximum rate of black hole accretion. Physicists have suggested a variety of mechanisms by which these supermassive black holes may have formed. Smaller black holes may have undergone mergers to produce the observed supermassive black holes. They may also have been seeded by direct-collapse black holes, in which a large cloud of hot gas avoids, due to low angular momentum or heating from a nearby galaxy, the fragmentation that would otherwise produce multiple stars; given the right circumstances, a single supermassive star forms and collapses directly into a black hole without undergoing typical stellar evolution. Additionally, these early supermassive black holes may be high-mass primordial black holes, which could have accreted further matter in the centers of galaxies.
Finally, certain mechanisms may allow black holes to grow faster than the theoretical Eddington limit, for example if dense gas in the accretion disk traps the outward radiation pressure that would otherwise throttle accretion; however, the formation of bipolar jets may prevent sustained super-Eddington rates.

In fiction

Black holes have been portrayed in science fiction in a variety of ways. Even before the advent of the term itself, objects with the characteristics of black holes appeared in stories such as the 1928 novel The Skylark of Space, with its "black Sun", and the 1935 short story Starship Invincible, with its "hole in space". As black holes grew in public recognition in the 1960s and 1970s, they began to be featured in films as well as novels, such as Disney's The Black Hole. Black holes have also been used in works of the 21st century, such as Christopher Nolan's science fiction epic Interstellar. Authors and screenwriters have exploited the relativistic effects of black holes, particularly gravitational time dilation. For example, Interstellar features a planet near a black hole with a time dilation factor of over 60,000:1, while the 1977 novel Gateway depicts a spaceship that, from the perspective of an outside observer, approaches but never crosses the event horizon of a black hole, due to time dilation. Black holes have also been portrayed as wormholes or other means of faster-than-light travel, as in the 1974 novel The Forever War, where a network of black holes is used for interstellar travel. Additionally, black holes can feature as hazards to spacefarers and planets: a black hole threatens a deep-space outpost in the 1978 short story The Black Hole Passes, and a binary black hole dangerously alters the orbit of a planet in the 2018 Netflix reboot of Lost in Space.
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Ethnic_groups_in_the_Middle_East] | [TOKENS: 558] |
Ethnic groups in the Middle East

Ethnic groups in the Middle East are ethnolinguistic groupings in the "transcontinental" region commonly designated by the geopolitical term Middle East: West Asia (including Cyprus) without the South Caucasus, together with Egypt in North Africa. The Middle East has historically been a crossroads of different cultures and languages. Since the 1960s, changes in political and economic factors (especially the enormous oil wealth in the region and its conflicts) have significantly altered the ethnic composition of groups in the region. While some ethnic groups have been present in the region for millennia, others have arrived fairly recently through immigration. The largest ethnic groups in the region are Arabs, Turks, Persians, Kurds, and Azerbaijanis, but there are dozens of other ethnic groups with hundreds of thousands, and sometimes millions, of members. Other indigenous, religious, or minority ethnic groups include: Antiochians, Armenians, Assyrians, Arameans in the Qalamoun Mountains, Baloch, Copts, Druze, Gilaks, Greeks (including Cypriots and Pontians), Jews, Kawliya, Laz, Lurs, Mandaeans, Maronites, Mazanderanis, Mhallami, Nawar, Samaritans, Shabaks, Talysh, Tats, Yazidis and Zazas. Diaspora ethnic groups living in the region include: Albanians, Bengalis, Britons, Bosniaks, Chechens, Chinese, Circassians, Crimean Tatars, Filipinos, French people, Georgians, Indians, Indonesians, Italians, Malays, Malayali, Pakistanis, Pashtuns, Punjabis, Romanians, Romani, Serbs, Sikhs, Sindhis, Somalis, Sri Lankans, Turkmens, and sub-Saharan Africans.

Demographics

Diaspora populations

Because of the low populations of many of the Arab States of the Persian Gulf and the demand for labor created by the large discoveries of oil in these countries, there has been a steady stream of immigration to the region, mainly from South Asia. Ethnic groups that make up the largest portions of this immigration include Afghans, Albanians, Armenians, Bengalis, Bosniaks, Britons, Chinese, Filipinos, Greeks, Indians, Indonesians, Italians, Malays, Nepalis, Pakistanis, Punjabis, Sikhs, Sindhis, Somalis, Sri Lankans, and Sub-Saharan Africans. Many of these people are denied certain political and legal rights in the countries in which they live and frequently face mistreatment by native-born citizens of the host countries.
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/IPv6] | [TOKENS: 7341] |
IPv6

Internet Protocol version 6 (IPv6) is the most recent version of the Internet Protocol (IP), the communications protocol that provides an identification and location system for computers on networks and routes traffic across the Internet. IPv6 was developed by the Internet Engineering Task Force (IETF) to deal with the long-anticipated problem of IPv4 address exhaustion, and was intended to replace IPv4. In December 1998, IPv6 became a Draft Standard for the IETF, which subsequently ratified it as an Internet Standard on 14 July 2017. Devices on the Internet are assigned a unique IP address for identification and location definition. With the rapid growth of the Internet after commercialization in the 1990s, it became evident that far more addresses would be needed to connect devices than the 4 billion (2^32) addresses IPv4 made available. By 1998, the IETF had formalized the successor protocol, IPv6. IPv6 uses 128-bit addresses, yielding an address space of 2^128, or about 3.4×10^38 (340 undecillion), total addresses. Several transition mechanisms have been devised to allow IPv4 and IPv6 to interoperate, but IPv6 is not backwards-compatible with IPv4 and the two do not interoperate directly. IPv6 provides other technical benefits in addition to a larger addressing space. In particular, it permits hierarchical address allocation methods that facilitate route aggregation across the Internet, and thus limit the expansion of routing tables. The use of multicast addressing is expanded and simplified, and provides additional optimization for the delivery of services. Device mobility, security, and configuration aspects have been considered in the design of the protocol. IPv6 addresses are represented as eight groups of four hexadecimal digits each, separated by colons. The full representation may be shortened according to specific rules; for example, 2001:0db8:0000:0000:0000:8a2e:0370:7334 becomes 2001:db8::8a2e:370:7334.

Main features

IPv6 is an Internet Layer protocol for packet-switched internetworking and provides end-to-end datagram transmission across multiple IP networks, closely adhering to the design principles developed in the previous version of the protocol, Internet Protocol Version 4 (IPv4). In addition to offering more addresses, IPv6 implements features not present in IPv4. It simplifies aspects of address configuration, network renumbering, and router announcements when changing network connectivity providers. It simplifies packet processing in routers by placing the responsibility for packet fragmentation in the end points. The IPv6 subnet size is standardized by fixing the size of the host identifier portion of an address to 64 bits. The addressing architecture of IPv6 allows three different types of transmission: unicast, anycast and multicast.: 210 IPv6 does not implement broadcast, and therefore has no notion of a broadcast address.

Motivation and origin

Internet Protocol Version 4 (IPv4) was the first publicly used version of the Internet Protocol. IPv4 was developed as a research project by the Defense Advanced Research Projects Agency (DARPA), a United States Department of Defense agency, before becoming the foundation for the Internet and the World Wide Web. IPv4 includes an addressing system that uses numerical identifiers consisting of 32 bits. These addresses are typically displayed in dot-decimal notation as decimal values of four octets, each in the range 0 to 255, or 8 bits per number. Thus, IPv4 provides an addressing capability of 2^32, or approximately 4.3 billion, addresses.
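The correspondence between the dotted notation and the underlying 32-bit number can be seen with Python's standard ipaddress module. A quick sketch, using an address from the reserved documentation range:

```python
import ipaddress

# An IPv4 address is a 32-bit integer rendered as four 8-bit decimal octets.
addr = ipaddress.IPv4Address("192.0.2.1")  # 192.0.2.0/24 is a documentation range
print(int(addr))                           # 3221225985
print(ipaddress.IPv4Address(3221225985))   # 192.0.2.1
print(ipaddress.IPv4Address(2**32 - 1))    # 255.255.255.255, the last address
```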
Address exhaustion was not initially a concern in IPv4, as this version was originally presumed to be a test of DARPA's networking concepts. During the first decade of operation of the Internet, it became apparent that methods had to be developed to conserve address space. In the early 1990s, even after the redesign of the addressing system using a classless network model, it became clear that this would not suffice to prevent IPv4 address exhaustion, and that further changes to the Internet infrastructure were needed. The last unassigned top-level address blocks of 16 million IPv4 addresses were allocated in February 2011 by the Internet Assigned Numbers Authority (IANA) to the five regional Internet registries (RIRs). However, each RIR still has available address pools and is expected to continue with standard address allocation policies until one /8 Classless Inter-Domain Routing (CIDR) block remains. After that, only blocks of 1,024 addresses (/22) will be provided from the RIRs to a local Internet registry (LIR). As of April 2025, all of Asia-Pacific Network Information Centre (APNIC), the Réseaux IP Européens Network Coordination Centre (RIPE NCC), Latin America and Caribbean Network Information Centre (LACNIC), African Network Information Centre (AFRINIC), and American Registry for Internet Numbers (ARIN) have reached this stage. RIPE NCC announced that it had fully run out of IPv4 addresses on 25 November 2019, and called for greater progress on the adoption of IPv6.

Comparison with IPv4

On the Internet, data is transmitted in the form of network packets. IPv6 specifies a new packet format, designed to minimize packet-header processing by routers. Because the headers of IPv4 packets and IPv6 packets are significantly different, the two protocols are not interoperable. However, most transport and application-layer protocols need little or no change to operate over IPv6; exceptions are application protocols that embed Internet-layer addresses, such as File Transfer Protocol (FTP) and Network Time Protocol (NTP), where the new address format may cause conflicts with existing protocol syntax. The main advantage of IPv6 over IPv4 is its larger address space. The size of an IPv6 address is 128 bits, compared to 32 bits in IPv4. The address space therefore has 2^128 addresses (340 undecillion, approximately 3.4×10^38). Some blocks of this space and some specific addresses are reserved for special uses. While this address space is very large, it was not the intent of the designers of IPv6 to assure geographical saturation with usable addresses. Rather, the longer addresses simplify allocation of addresses, enable efficient route aggregation, and allow implementation of special addressing features. In IPv4, complex Classless Inter-Domain Routing (CIDR) methods were developed to make the best use of the small address space. The standard size of a subnet in IPv6 is 2^64 addresses, about four billion times the size of the entire IPv4 address space. Thus, actual address space utilization will be small in IPv6, but network management and routing efficiency are improved by the large subnet space and hierarchical route aggregation.
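These magnitudes are easy to compute directly; a minimal sketch:

```python
# The scale of the IPv6 address space, computed from first principles.
ipv4_total = 2**32      # all IPv4 addresses
ipv6_total = 2**128     # all IPv6 addresses
subnet_hosts = 2**64    # host addresses in one standard /64 subnet

print(f"IPv4 total:     {ipv4_total:.3e}")    # ~4.295e9
print(f"IPv6 total:     {ipv6_total:.3e}")    # ~3.403e38
print(f"One /64 subnet: {subnet_hosts:.3e}")  # ~1.845e19 addresses
print(f"Number of /64s: {ipv6_total // subnet_hosts:.3e}")  # also ~1.845e19
```

The symmetry in the last two lines is the point of the design: there are as many /64 subnets as there are addresses within a single subnet.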
Multicasting, the transmission of a packet to multiple destinations in a single send operation, is part of the base specification in IPv6; in IPv4 it is an optional (although commonly implemented) feature. IPv6 multicast addressing has features and protocols in common with IPv4 multicast, but also provides changes and improvements by eliminating the need for certain protocols. IPv6 does not implement traditional IP broadcast, i.e. the transmission of a packet to all hosts on the attached link using a special broadcast address, and therefore does not define broadcast addresses. In IPv6, the same result is achieved by sending a packet to the link-local all-nodes multicast group at address ff02::1, which is analogous to IPv4 multicasting to address 224.0.0.1. IPv6 also provides for new multicast implementations, including embedding rendezvous point addresses in an IPv6 multicast group address, which simplifies the deployment of inter-domain solutions. In IPv4 it is very difficult for an organization to get even one globally routable multicast group assignment, and the implementation of inter-domain solutions is arcane. Unicast address assignments by a local Internet registry for IPv6 have at least a 64-bit routing prefix, yielding the smallest subnet size available in IPv6 (also 64 bits). With such an assignment it is possible to embed the unicast address prefix into the IPv6 multicast address format, while still providing a 32-bit block (the least significant bits of the address), or approximately 4.2 billion multicast group identifiers. Thus each user of an IPv6 subnet automatically has available a set of globally routable source-specific multicast groups for multicast applications.

IPv6 hosts configure themselves automatically. Every interface has a self-generated link-local address and, when connected to a network, conflict resolution is performed and routers provide network prefixes via router advertisements. Stateless configuration of routers can be achieved with a special router renumbering protocol. When necessary, hosts may configure additional stateful addresses via Dynamic Host Configuration Protocol version 6 (DHCPv6), or static addresses manually. Like IPv4, IPv6 supports globally unique IP addresses. The design of IPv6 intended to re-emphasize the end-to-end principle of network design that was originally conceived during the establishment of the early Internet, by rendering network address translation obsolete. Therefore, every device on the network is globally addressable directly from any other device. A stable, unique, globally addressable IP address would facilitate tracking a device across networks. Such addresses are therefore a particular privacy concern for mobile devices, such as laptops and cell phones. To address these privacy concerns, the SLAAC protocol includes what are typically called "privacy addresses" or, more correctly, "temporary addresses". Temporary addresses are random and unstable. A typical consumer device generates a new temporary address daily and will ignore traffic addressed to an old address after one week. Temporary addresses have been used by default by Windows since XP SP1, macOS since Mac OS X 10.7, Android since 4.0, and iOS since version 4.3. Use of temporary addresses by Linux distributions varies.

Renumbering an existing network for a new connectivity provider with different routing prefixes is a major effort with IPv4. With IPv6, however, changing the prefix announced by a few routers can in principle renumber an entire network, since the host identifiers (the least-significant 64 bits of an address) can be independently self-configured by a host. The SLAAC address generation method is implementation-dependent.
For SLAAC-generated addresses, the IETF recommends that interface identifiers be deterministic but semantically opaque. Internet Protocol Security (IPsec) was originally developed for IPv6, but found widespread deployment first in IPv4, for which it was re-engineered. IPsec was originally a mandatory part of all IPv6 protocol implementations, and Internet Key Exchange (IKE) was recommended, but with RFC 6434 the inclusion of IPsec in IPv6 implementations was downgraded to a recommendation, because it was considered impractical to require full IPsec implementation for all types of devices that may use IPv6. However, as of RFC 4301, IPv6 protocol implementations that do implement IPsec must implement IKEv2 and must support a minimum set of cryptographic algorithms. This requirement helps make IPsec implementations more interoperable between devices from different vendors. The IPsec Authentication Header (AH) and the Encapsulating Security Payload header (ESP) are implemented as IPv6 extension headers. The packet header in IPv6 is simpler than the IPv4 header. Many rarely used fields have been moved to optional header extensions. The IPv6 packet header simplifies the process of packet forwarding by routers. Although IPv6 packet headers are at least twice the size of IPv4 packet headers, router processing of packets that contain only the base IPv6 header may in some cases be more efficient, because the headers are aligned to match common word sizes. However, many devices implement IPv6 support in software (as opposed to hardware), resulting in poor packet-processing performance. Additionally, for many implementations, the use of extension headers causes packets to be processed by a router's CPU, leading to poor performance or even security issues. Moreover, an IPv6 header does not include a checksum. The IPv4 header checksum is calculated over the IPv4 header and has to be recalculated by routers every time the time to live (called hop limit in the IPv6 protocol) is reduced by one. The absence of a checksum in the IPv6 header furthers the end-to-end principle of Internet design, which envisioned that most processing in the network occurs in the leaf nodes. Integrity protection for the data that is encapsulated in the IPv6 packet is assumed to be assured both by the link layer and by error detection in higher-layer protocols, namely the Transmission Control Protocol (TCP) and the User Datagram Protocol (UDP) on the transport layer. Thus, while IPv4 allowed UDP datagram headers to have no checksum (indicated by 0 in the header field), IPv6 requires a checksum in UDP headers. IPv6 routers do not perform IP fragmentation. IPv6 hosts are required to do one of the following: perform Path MTU Discovery, perform end-to-end fragmentation, or send packets no larger than the default maximum transmission unit (MTU), which is 1280 octets. Unlike mobile IPv4, mobile IPv6 avoids triangular routing and is therefore as efficient as native IPv6. IPv6 routers may also allow entire subnets to move to a new router connection point without renumbering. The IPv6 packet header has a minimum size of 40 octets (320 bits). Options are implemented as extensions. This provides the opportunity to extend the protocol in the future without affecting the core packet structure. However, RFC 7872 notes that some network operators drop IPv6 packets with extension headers when they traverse transit autonomous systems. IPv4 limits packets to 65,535 (2^16 − 1) octets of payload.
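The mandatory UDP checksum illustrates the division of labor just described: since the IPv6 header protects nothing itself, the transport checksum is computed over a pseudo-header containing the source and destination addresses, the upper-layer packet length, and the next-header value (RFC 8200, section 8.1). A minimal sketch follows, with arbitrary documentation addresses and ports.

```python
import ipaddress
import struct

def ones_complement_sum(data: bytes) -> int:
    """Sum 16-bit words with end-around carry (RFC 1071 style)."""
    if len(data) % 2:
        data += b"\x00"                      # pad odd-length input
    total = 0
    for (word,) in struct.iter_unpack("!H", data):
        total += word
    while total >> 16:                       # fold carries back in
        total = (total & 0xFFFF) + (total >> 16)
    return total

def udp6_checksum(src: str, dst: str, sport: int, dport: int,
                  payload: bytes) -> int:
    """Checksum that IPv6 requires in every UDP header."""
    length = 8 + len(payload)                # UDP header is 8 octets
    pseudo = (ipaddress.IPv6Address(src).packed
              + ipaddress.IPv6Address(dst).packed
              + struct.pack("!I3xB", length, 17))   # 17 = UDP next header
    udp_header = struct.pack("!HHHH", sport, dport, length, 0)
    checksum = ~ones_complement_sum(pseudo + udp_header + payload) & 0xFFFF
    return checksum or 0xFFFF                # a computed 0 is sent as 0xffff

print(hex(udp6_checksum("2001:db8::1", "2001:db8::2", 5000, 53, b"hi")))
```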
An IPv6 node can optionally handle packets over the 65,535-octet limit, referred to as jumbograms, which can be as large as 4,294,967,295 (2^32 − 1) octets. The use of jumbograms may improve performance over high-MTU links. The use of jumbograms is indicated by the Jumbo Payload Option extension header. IPv6 packets An IPv6 packet has two parts: a header and a payload. The header consists of a fixed portion with minimal functionality required for all packets and may be followed by optional extensions to implement special features. The fixed header occupies the first 40 octets (320 bits) of the IPv6 packet. It contains the source and destination addresses, traffic class, hop count, and the type of the optional extension or payload which follows the header. This Next Header field tells the receiver how to interpret the data which follows the header. If the packet contains options, this field contains the option type of the next option. The "Next Header" field of the last option points to the upper-layer protocol that is carried in the packet's payload. The IPv6 Traffic Class field is currently divided between a 6-bit Differentiated Services Code Point (DSCP) and a 2-bit Explicit Congestion Notification (ECN) field. Extension headers carry options that are used for special treatment of a packet in the network, e.g., for routing, fragmentation, and for security using the IPsec framework. Without special options, a payload must be less than 64 kB. With a Jumbo Payload option (in a Hop-By-Hop Options extension header), the payload must be less than 4 GB. Unlike with IPv4, routers never fragment a packet. Hosts are expected to use Path MTU Discovery to make their packets small enough to reach the destination without needing to be fragmented. See IPv6 packet fragmentation. Addressing IPv6 addresses have 128 bits. The design of the IPv6 address space implements a different design philosophy than in IPv4, in which subnetting was used to improve the efficiency of utilization of the small address space. In IPv6, the address space is deemed large enough for the foreseeable future, and a local area subnet always uses 64 bits for the host portion of the address, designated as the interface identifier, while the most-significant 64 bits are used as the routing prefix.: 9 While a myth has existed that IPv6 subnets are impossible to scan, RFC 7707 notes that patterns resulting from some IPv6 address configuration techniques and algorithms allow address scanning in many real-world scenarios. The 128 bits of an IPv6 address are represented in 8 groups of 16 bits each. Each group is written as four hexadecimal digits (a group is sometimes called a hextet, more formally a hexadectet, and informally a quibble or quad-nibble) and the groups are separated by colons (:). An example of this representation is 2001:0db8:0000:0000:0000:ff00:0042:8329. For convenience and clarity, the representation of an IPv6 address may be shortened with the following rules: leading zeros in any group may be omitted, and one or more consecutive groups of all zeros may be replaced by a single double colon (::), which may be used at most once in an address. Applying both rules, the example above shortens to 2001:db8::ff00:42:8329. The loopback address is defined as 0000:0000:0000:0000:0000:0000:0000:0001 and is abbreviated to ::1 by using both rules. As an IPv6 address may have more than one representation, the IETF has issued a proposed standard for representing them in text. Because IPv6 addresses contain colons, and URLs use colons to separate the host from the port number, an IPv6 address used as the host part of a URL should be enclosed in square brackets, e.g. http://[2001:db8:4006:812::200e] or http://[2001:db8:4006:812::200e]:8080/path/page.html.
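These representation rules are implemented by Python's standard ipaddress module (which also follows the RFC 5952 canonical-form recommendations), so they are easy to demonstrate:

```python
import ipaddress

addr = ipaddress.IPv6Address("2001:0db8:0000:0000:0000:ff00:0042:8329")
print(addr)            # 2001:db8::ff00:42:8329 (both shortening rules applied)
print(addr.exploded)   # the full eight-group form, with leading zeros restored

# Different textual forms of the loopback address are the same address:
print(ipaddress.IPv6Address("::1") ==
      ipaddress.IPv6Address("0000:0000:0000:0000:0000:0000:0000:0001"))  # True

# In a URL, the address must be bracketed to disambiguate the port colon:
print(f"http://[{addr}]:8080/path/page.html")
```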
All interfaces of IPv6 hosts require a link-local address, which has the prefix fe80::/10. This prefix is followed by 54 bits that can be used for subnetting, although they are typically set to zeros, and a 64-bit interface identifier. The host can compute and assign the interface identifier by itself without the presence or cooperation of an external network component like a DHCP server, in a process called link-local address autoconfiguration.[citation needed] The lower 64 bits of the link-local address (the suffix) were originally derived from the MAC address of the underlying network interface card. As this method of assigning addresses would cause undesirable address changes when faulty network cards were replaced, and as it also suffered from a number of security and privacy issues, RFC 8064 has replaced the original MAC-based method with the hash-based method specified in RFC 7217.[citation needed] IPv6 uses a new mechanism for mapping IP addresses to link-layer addresses (e.g. MAC addresses), because it does not support the broadcast addressing method, on which the functionality of the Address Resolution Protocol (ARP) in IPv4 is based. IPv6 implements the Neighbor Discovery Protocol (NDP, ND) in the link layer, which relies on ICMPv6 and multicast transmission.: 210 IPv6 hosts verify the uniqueness of their IPv6 addresses in a local area network (LAN) by sending a neighbor solicitation message asking for the link-layer address of the IP address. If any other host in the LAN is using that address, it responds. A host bringing up a new IPv6 interface first generates a unique link-local address using one of several mechanisms designed to generate a unique address. Should a non-unique address be detected, the host can try again with a newly generated address. Once a unique link-local address is established, the IPv6 host determines whether the LAN is connected on this link to any router interface that supports IPv6. It does so by sending out an ICMPv6 router solicitation message to the all-routers multicast group with its link-local address as source. If there is no answer after a predetermined number of attempts, the host concludes that no routers are connected. If it does get a response, known as a router advertisement, from a router, the response includes the network configuration information to allow establishment of a globally unique address with an appropriate unicast network prefix. There are also two flag bits that tell the host whether it should use DHCP to get further information and addresses: the Managed (M) flag, which indicates that addresses are also available via DHCPv6, and the Other (O) flag, which indicates that other configuration information, such as DNS servers, is available via DHCPv6. The assignment procedure for global addresses is similar to local-address construction. The prefix is supplied from router advertisements on the network. Multiple prefix announcements cause multiple addresses to be configured. Stateless address autoconfiguration (SLAAC) requires a /64 address block. Local Internet registries are assigned at least /32 blocks, which they divide among subordinate networks. The initial recommendation, issued in September 2001, was the assignment of a /48 subnet to end-consumer sites. In March 2011 this recommendation was refined: the IETF "recommends giving home sites significantly more than a single /64, but does not recommend that every home site be given a /48 either". Blocks of /56 are specifically considered. It remains to be seen whether ISPs will honor this recommendation; for example, during initial trials, Comcast customers were given a single /64 network.
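The prefix arithmetic behind these recommendations is straightforward: each 8-bit step from /48 to /56 to /64 multiplies the number of delegable networks by 256. A short sketch using the documentation prefix (the specific prefixes are arbitrary examples):

```python
import ipaddress

site = ipaddress.ip_network("2001:db8:1200::/48")       # an end-site assignment
print(sum(1 for _ in site.subnets(new_prefix=56)))      # 256 /56s per /48

home = ipaddress.ip_network("2001:db8:1200:3400::/56")  # one home's delegation
print(sum(1 for _ in home.subnets(new_prefix=64)))      # 256 /64 LANs per /56,
                                                        # each usable for SLAAC
```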
IPv6 in the Domain Name System In the Domain Name System (DNS), hostnames are mapped to IPv6 addresses by AAAA ("quad-A") resource records. For reverse resolution, the IETF reserved the domain ip6.arpa, where the name space is hierarchically divided by the 1-digit hexadecimal representation of nibble units (4 bits) of the IPv6 address. When a dual-stack host queries a DNS server to resolve a fully qualified domain name (FQDN), the DNS client of the host sends two DNS requests, one querying AAAA records and the other querying A records, in that order, by default. If both types of addresses are returned by the DNS and a route is available, the IPv6 address is preferred over the IPv4 address. However, the host operating system may be configured with an alternate preference for address selection. An alternative record type was used in early DNS implementations for IPv6, designed to facilitate network renumbering. The A6 resource record was used for the forward lookup, complemented by a number of other innovations such as bit-string labels and DNAME records. After a discussion of the pros and cons of both schemes, the use of A6 resource records was deprecated to experimental status. Transition mechanisms IPv6 is not foreseen to supplant IPv4 instantaneously. Both protocols will continue to operate simultaneously for some time. Therefore, IPv6 transition mechanisms are needed to enable IPv6 hosts to reach IPv4 services and to allow isolated IPv6 hosts and networks to reach each other over IPv4 infrastructure. According to Silvia Hagen, a dual-stack implementation of IPv4 and IPv6 on devices is the easiest way to migrate to IPv6. Many other transition mechanisms use tunneling to encapsulate IPv6 traffic within IPv4 networks and vice versa. This is an imperfect solution, which reduces the maximum transmission unit (MTU) of a link, therefore complicates Path MTU Discovery, and may increase latency. Dual-stack IP implementations provide complete IPv4 and IPv6 protocol stacks in the operating system of a computer or network device on top of the common physical layer implementation, such as Ethernet. This permits dual-stack hosts to participate in IPv6 and IPv4 networks simultaneously. A device with a dual-stack implementation in the operating system has an IPv4 and an IPv6 address, and can communicate with other nodes in the LAN or the Internet using either IPv4 or IPv6. The DNS protocol is used by both IP protocols to resolve fully qualified domain names and IP addresses, but dual stack requires that the resolving DNS server can resolve both types of addresses. Such a dual-stack DNS server holds IPv4 addresses in the A records and IPv6 addresses in the AAAA records. Depending on the destination that is to be resolved, a DNS name server may return an IPv4 or IPv6 IP address, or both. A default address selection mechanism, or preferred protocol, needs to be configured either on hosts or on the DNS server. The IETF has published Happy Eyeballs to assist dual-stack applications, so that they can connect using both IPv4 and IPv6, but prefer an IPv6 connection if it is available. However, dual stack also needs to be implemented on all routers between the host and the service for which the DNS server has returned an IPv6 address. Dual-stack clients should be configured to prefer IPv6 only if the network is able to forward IPv6 packets using the IPv6 versions of routing protocols. When dual-stack network protocols are in place, the application layer can be migrated to IPv6.
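At the socket API level, the dual-stack lookup described above is typically a single resolver call that returns both record types. The sketch below sorts IPv6 candidates first; this is only a rough approximation of Happy Eyeballs, which additionally staggers and races the actual connection attempts, and most systems already order results by their own address-selection policy. The host name is a placeholder.

```python
import socket

def candidate_addresses(host: str, port: int = 443):
    """Resolve both A and AAAA records and list IPv6 candidates first."""
    infos = socket.getaddrinfo(host, port, type=socket.SOCK_STREAM)
    infos.sort(key=lambda info: 0 if info[0] == socket.AF_INET6 else 1)
    return [(info[0].name, info[4][0]) for info in infos]

for family, address in candidate_addresses("www.example.com"):
    print(family, address)   # AF_INET6 rows (if any) precede AF_INET rows
```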
While dual-stack is supported by major operating system and network device vendors, some legacy networking hardware and servers do not support IPv6. Internet service providers (ISPs) are increasingly providing their business and private customers with public-facing IPv6 global unicast addresses. If IPv4 is still used in the local area network (LAN), however, and the ISP can only provide one public-facing IPv6 address, the IPv4 LAN addresses are translated into the public-facing IPv6 address using NAT64, a network address translation (NAT) mechanism. Some ISPs cannot provide their customers with both public-facing IPv4 and IPv6 addresses, and thus cannot support dual-stack networking, because they have exhausted their globally routable IPv4 address pools. Meanwhile, their customers are still trying to reach IPv4 web servers and other destinations. A significant percentage of ISPs in all regional Internet registry (RIR) zones have obtained IPv6 address space. This includes many of the world's major ISPs and mobile network operators, such as Verizon Wireless, StarHub Cable, Chubu Telecommunications, Kabel Deutschland, Swisscom, T-Mobile, Internode and Telefónica. While some ISPs still allocate customers only IPv4 addresses, many allocate their customers only an IPv6 address, or dual-stack IPv4 and IPv6 addresses. ISPs report the share of IPv6 traffic from customers over their network to be anything between 20% and 40%, but by mid-2017 IPv6 traffic still accounted for only a fraction of total traffic at several large Internet exchange points (IXPs). AMS-IX reported it to be 2% and SeattleIX reported 7%. A 2017 survey found that many DSL customers served by a dual-stack ISP did not request DNS servers to resolve fully qualified domain names into IPv6 addresses. The survey also found that the majority of traffic from IPv6-ready web-server resources was still requested and served over IPv4, mostly because of ISP customers that did not use the dual-stack facility provided by their ISP and, to a lesser extent, because of customers of IPv4-only ISPs. The technical basis for tunneling, or encapsulating IPv6 packets in IPv4 packets, is outlined in RFC 4213. When the Internet backbone was IPv4-only, one of the frequently used tunneling protocols was 6to4. Teredo tunneling was also frequently used for integrating IPv6 LANs with the IPv4 Internet backbone. Teredo is outlined in RFC 4380 and allows IPv6 local area networks to tunnel over IPv4 networks, by encapsulating IPv6 packets within UDP. The Teredo relay is an IPv6 router that mediates between a Teredo server and the native IPv6 network. It was expected that 6to4 and Teredo would be widely deployed until ISP networks switched to native IPv6, but by 2014 Google statistics showed that the use of both mechanisms had dropped to almost zero. Hybrid dual-stack IPv6/IPv4 implementations recognize a special class of addresses, the IPv4-mapped IPv6 addresses.: §2.2.3 These addresses are typically written with a 96-bit prefix in the standard IPv6 format, and the remaining 32 bits are written in the customary dot-decimal notation of IPv4. Addresses in this group consist of an 80-bit prefix of zeros, the next 16 bits are ones, and the remaining, least-significant 32 bits contain the IPv4 address. For example, ::ffff:192.0.2.128 represents the IPv4 address 192.0.2.128. A previous format, called "IPv4-compatible IPv6 address", was ::192.0.2.128; however, this method is deprecated.
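Python's ipaddress module recognizes this special class of addresses directly, which makes the layout easy to inspect:

```python
import ipaddress

mapped = ipaddress.IPv6Address("::ffff:192.0.2.128")
print(mapped.ipv4_mapped)    # 192.0.2.128, recovered as an IPv4Address
print(mapped.exploded)       # 0000:0000:0000:0000:0000:ffff:c000:0280

# 80 zero bits, then 16 one bits, then the 32-bit IPv4 address:
print(hex(int(mapped)))      # 0xffffc0000280
```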
Because of the significant internal differences between the IPv4 and IPv6 protocol stacks, some of the lower-level functionality available to programmers in the IPv6 stack does not work the same when used with IPv4-mapped addresses. Some common IPv6 stacks do not implement the IPv4-mapped address feature, either because the IPv6 and IPv4 stacks are separate implementations (e.g., Microsoft Windows 2000, XP, and Server 2003), or because of security concerns (OpenBSD). On these operating systems, a program must open a separate socket for each IP protocol it uses. On some systems, e.g., the Linux kernel, NetBSD, and FreeBSD, this feature is controlled by the socket option IPV6_V6ONLY.: 22 The address prefix 64:ff9b::/96 is a class of IPv4-embedded IPv6 addresses for use in NAT64 transition methods. For example, 64:ff9b::192.0.2.128 represents the IPv4 address 192.0.2.128. Security A number of security implications may arise from the use of IPv6. Some of them may be related to the IPv6 protocols themselves, while others may be related to implementation flaws. The addition of nodes having IPv6 enabled by default by the software manufacturer may result in the inadvertent creation of shadow networks, with IPv6 traffic flowing into networks that have only IPv4 security management in place. This may also occur with operating system upgrades, when the newer operating system enables IPv6 by default, while the older one did not. Failing to update the security infrastructure to accommodate IPv6 can lead to IPv6 traffic bypassing it. Shadow networks have occurred on business networks in which enterprises are replacing Windows XP systems that do not have an IPv6 stack enabled by default, with Windows 7 systems, which do. Some IPv6 stack implementors have therefore recommended disabling IPv4-mapped addresses and instead using a dual-stack network where supporting both IPv4 and IPv6 is necessary. Research has shown that the use of fragmentation can be leveraged to evade network security controls, as in IPv4. As a result, it is now required that the first fragment of an IPv6 packet contain the entire IPv6 header chain, so that certain pathological fragmentation cases are forbidden. Additionally, as a result of research on the evasion of RA-Guard, the use of fragmentation is deprecated with Neighbor Discovery, and discouraged with Secure Neighbor Discovery (SEND). Standardization through RFCs Due to the anticipated global growth of the Internet, the Internet Engineering Task Force (IETF) in the early 1990s started an effort to develop a next-generation IP protocol.: 209 By the beginning of 1992, several proposals had appeared for an expanded Internet addressing system, and by the end of 1992 the IETF announced a call for white papers. In September 1993, the IETF created a temporary, ad hoc IP Next Generation (IPng) area to deal specifically with such issues. The new area was led by Allison Mankin and Scott Bradner, and had a directorate with 15 engineers from diverse backgrounds for direction-setting and preliminary document review; its members were J. Allard (Microsoft), Steve Bellovin (AT&T), Jim Bound (Digital Equipment Corporation), Ross Callon (Wellfleet), Brian Carpenter (CERN), Dave Clark (MIT), John Curran (NEARNET), Steve Deering (Xerox), Dino Farinacci (Cisco), Paul Francis (NTT), Eric Fleischmann (Boeing), Mark Knopper (Ameritech), Greg Minshall (Novell), Rob Ullmann (Lotus), and Lixia Zhang (Xerox).
The Internet Engineering Task Force adopted the IPng model on 25 July 1994, with the formation of several IPng working groups. By 1996, a series of RFCs had been released defining Internet Protocol version 6 (IPv6), starting with RFC 1883, published in December 1995. (Version 5 had been used by the experimental Internet Stream Protocol.) In 1998, RFC 2460 superseded RFC 1883 as the defining specification for IPv6.: 209 In July 2017, RFC 2460 was in turn superseded by RFC 8200, which elevated IPv6 to "Internet Standard", the highest maturity level for IETF protocols. RFC 8201 is a related IPv6 standard describing Path MTU Discovery, a method of discovering the largest packet size supported along the entire path from source to destination, so that IP packet fragmentation can be avoided and packet-transmission performance maintained. Deployment The 1993 introduction of Classless Inter-Domain Routing (CIDR) in the routing and IP address allocation for the Internet, and the extensive use of network address translation (NAT), delayed IPv4 address exhaustion and so allowed time for IPv6 deployment, which began in the mid-2000s. Universities were among the early adopters of IPv6. Virginia Tech deployed IPv6 at a trial location in 2004 and later expanded IPv6 deployment across the campus network. By 2016, 82% of the traffic on their network used IPv6. Imperial College London began experimental IPv6 deployment in 2003, and by 2016 the IPv6 traffic on their networks averaged between 20% and 40%. A significant portion of this IPv6 traffic was generated through their high-energy physics collaboration with CERN, which relies entirely on IPv6. The Domain Name System (DNS) has supported IPv6 since 2008. In the same year, IPv6 was first used in a major world event during the Beijing 2008 Summer Olympics. By 2011, all major operating systems in use on personal computers and server systems had production-quality IPv6 implementations. Cellular telephone systems presented a large deployment field for Internet Protocol devices as mobile telephone service made the transition from 3G to 4G technologies, in which voice is provisioned as a voice over IP (VoIP) service that would leverage IPv6 enhancements. In 2009, the US cellular operator Verizon released technical specifications for devices to operate on its "next-generation" networks. The specification mandated IPv6 operation according to the 3GPP Release 8 Specifications (March 2009), and relegated IPv4 to an optional capability. The deployment of IPv6 in the Internet backbone continued. In 2018, only 25.3% of the approximately 54,000 autonomous systems advertised both IPv4 and IPv6 prefixes in the global Border Gateway Protocol (BGP) routing database. A further 243 networks advertised only an IPv6 prefix. Internet backbone transit networks offering IPv6 support existed in every country globally, except in parts of Africa, the Middle East and China.: 6 By mid-2018, some major European broadband ISPs had deployed IPv6 for the majority of their customers. Sky UK provided over 86% of its customers with IPv6, Deutsche Telekom had 56% deployment of IPv6, XS4ALL in the Netherlands had 73% deployment, and in Belgium the broadband ISPs VOO and Telenet had 73% and 63% IPv6 deployment respectively.: 7 In the United States, the broadband ISP Xfinity had an IPv6 deployment of about 66%. In 2018, Xfinity reported an estimated 36.1 million IPv6 users, while AT&T reported 22.3 million IPv6 users.: 7–8 As of December 2025, Google's statistics show that approximately 44% of users access Google services over native IPv6 connectivity.
Adoption varies significantly by region, with countries such as India, France, and Germany reporting adoption rates exceeding 70%, while others lag behind. Peering issues There is an ongoing peering dispute between Hurricane Electric and Cogent Communications over IPv6, with the two network providers refusing to peer with each other.
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Works_Progress_Administration] | [TOKENS: 7179] |
Works Progress Administration The Works Progress Administration (WPA; known from 1939 to 1943 as the Work Projects Administration) was an American New Deal agency that employed millions of jobseekers (mostly men who were not formally educated) to carry out public works projects, including the construction of public buildings and roads. It was set up on May 6, 1935, by presidential order, as a key part of the Second New Deal. The WPA's first appropriation in 1935 was $4.9 billion (about $15 per person in the U.S., around 6.7 percent of the 1935 GDP). Headed by Harry Hopkins, the WPA supplied paid jobs to the unemployed during the Great Depression in the United States, while building up the public infrastructure of the US, such as parks, schools, roads, and drains. Most of the jobs were in construction, building more than 620,000 miles (1,000,000 km) of streets and over 10,000 bridges, in addition to many airports and much housing. In 1942, the WPA played a key role in both building and staffing internment camps to incarcerate Japanese Americans. At its peak in 1938, it supplied paid jobs for three million unemployed men and women, as well as youth in a separate division, the National Youth Administration. Between 1935 and 1943, the WPA employed 8.5 million people. Hourly wages were typically kept well below industry standards.: 196 Full employment, which was reached in 1942 and appeared as a long-term national goal around 1944, was not the goal of the WPA; rather, it tried to supply one paid job for all families in which the breadwinner suffered long-term unemployment.: 64, 184 In one of its most famous projects, Federal Project Number One, the WPA employed musicians, artists, writers, actors and directors in arts, drama, media, and literacy projects. The five projects dedicated to these were the Federal Writers' Project (FWP), the Historical Records Survey (HRS), the Federal Theatre Project (FTP), the Federal Music Project (FMP), and the Federal Art Project (FAP). In the Historical Records Survey, for instance, many former slaves in the South were interviewed; these documents are of immense importance to American history. Theater and music groups toured throughout the United States and gave more than 225,000 performances. Archaeological investigations under the WPA were influential in the rediscovery of pre-Columbian Native American cultures, and the development of professional archaeology in the US. The WPA was a federal program that ran its own projects in cooperation with state and local governments, which supplied 10–30% of the costs. Usually, the local sponsor provided land and often trucks and supplies, with the WPA responsible for wages (and for the salaries of supervisors, who were not on relief). The WPA sometimes took over state and local relief programs that had originated in the Reconstruction Finance Corporation (RFC) or Federal Emergency Relief Administration (FERA) programs.: 63 It was liquidated on June 30, 1943, because of low unemployment during World War II. Robert D. Leininger asserted: "millions of people needed subsistence incomes. Work relief was preferred over public assistance (the dole) because it maintained self-respect, reinforced the work ethic, and kept skills sharp.": 228 Establishment On May 6, 1935, FDR issued Executive Order 7034, establishing the Works Progress Administration. The WPA superseded the work of the Federal Emergency Relief Administration, which was dissolved.
Direct relief assistance was permanently replaced by a national work relief program, a major public works program directed by the WPA. The WPA was largely shaped by Harry Hopkins, supervisor of the Federal Emergency Relief Administration and close adviser to Roosevelt. Both Roosevelt and Hopkins believed that the route to economic recovery and the lessened importance of the dole would be in employment programs such as the WPA.: 56–57 Hallie Flanagan, national director of the Federal Theatre Project, wrote that "for the first time in the relief experiments of this country the preservation of the skill of the worker, and hence the preservation of his self-respect, became important.": 17 The WPA was organized into several divisions. Employment These ordinary men and women proved to be extraordinary beyond all expectation. They were golden threads woven in the national fabric. In this, they shamed the political philosophy that discounted their value and rewarded the one that placed its faith in them, thus fulfilling the founding vision of a government by and for its people. All its people. — Nick Taylor, American-Made: The Enduring Legacy of the WPA: 530 The goal of the WPA was to employ most of the unemployed people on relief until the economy recovered. Harry Hopkins testified to Congress in January 1935 about why he set the number at 3.5 million, using Federal Emergency Relief Administration data. Estimating costs at $1,200 per worker per year (equivalent to $28,000 in 2025), he asked for and received $4 billion (equivalent to $72 billion in 2025). Many women were employed, but they were few compared to men. In 1935 there were 20 million people on relief in the United States. Of these, 8.3 million were children under 16 years of age; 3.8 million were persons between the ages of 16 and 65 who were not working or seeking work. These included housewives, students in school, and incapacitated persons. Another 750,000 were persons aged 65 or over.: 562 Thus, of the total of 20 million persons then receiving relief, 13 million were not considered eligible for employment. This left a total of 7.15 million presumably employable persons between the ages of 16 and 65 inclusive. Of these, however, 1.65 million were said to be farm operators or persons who had some non-relief employment, while another 350,000, although already employed or seeking work, were considered incapacitated. Deducting these 2 million from the total of 7.15 million, there remained 5.15 million persons age 16 to 65, unemployed, looking for work, and able to work.: 562 Because of the assumption that only one worker per family would be permitted to work under the proposed program, this total of 5.15 million was further reduced by 1.6 million, the estimated number of workers who were members of families with two or more employable people. Thus, there remained a net total of 3.55 million workers in as many households for whom jobs were to be provided.: 562 The WPA reached its peak employment of 3,334,594 people in November 1938.: 547 To be eligible for WPA employment, an individual had to be an American citizen, 18 or older, able-bodied, unemployed, and certified as in need by a local public relief agency approved by the WPA. The WPA Division of Employment selected the worker's placement to WPA projects based on previous experience or training. Worker pay was based on three factors: the region of the country, the degree of urbanization, and the individual's skill.
It varied from $19 per month to $94 per month, with the average wage being about $52.50 (equivalent to $1,200 in 2025). The goal was to pay the local prevailing wage, but to limit the hours of work to 8 hours a day or 40 hours a week, with a stated minimum of 30 hours a week, or 120 hours a month.: 213 Being a voter or a Democrat was not a prerequisite for a relief job. Federal law specifically prohibited any political discrimination against WPA workers. Vague charges were bandied about at the time, but the consensus of experts is that "In the distribution of WPA project jobs as opposed to those of a supervisory and administrative nature politics plays only a minor and comparatively insignificant role." However, those who were hired were reminded at election time that FDR had created their jobs and the Republicans would take them away. The great majority voted accordingly. Projects WPA projects were administered by the Division of Engineering and Construction and the Division of Professional and Service Projects. Most projects were initiated, planned and sponsored by states, counties or cities. Nationwide projects were sponsored until 1939. The WPA built traditional New Deal infrastructure such as roads, bridges, schools, libraries, courthouses, hospitals, sidewalks, waterworks, and post offices, but also constructed museums, swimming pools, parks, community centers, playgrounds, coliseums, markets, fairgrounds, tennis courts, zoos, botanical gardens, auditoriums, waterfronts, city halls, gyms, and university unions. Most of these are still in use today.: 226 The WPA's infrastructure projects included 40,000 new and 85,000 improved buildings. These new buildings included 5,900 new schools; 9,300 new auditoriums, gyms, and recreational buildings; 1,000 new libraries; 7,000 new dormitories; and 900 new armories. In addition, infrastructure projects included 2,302 stadiums, grandstands, and bleachers; 52 fairgrounds and rodeo grounds; 1,686 parks covering 75,152 acres; 3,185 playgrounds; 3,026 athletic fields; 805 swimming pools; 1,817 handball courts; 10,070 tennis courts; 2,261 horseshoe pits; 1,101 ice-skating areas; 138 outdoor theatres; 254 golf courses; and 65 ski jumps.: 227 Expenditures on WPA projects through June 1941 totaled approximately $11.4 billion, the equivalent of $187 billion in 2024. Over $4 billion was spent on highway, road, and street projects; more than $1 billion on public buildings, including the Dock Street Theatre in Charleston, the Griffith Observatory in Los Angeles, and Timberline Lodge in Oregon's Mount Hood National Forest.: 252–253 More than $1 billion ($16.4 billion in 2024) was spent on publicly owned or operated utilities; and another $1 billion on welfare projects, including sewing projects for women, the distribution of surplus commodities, and school lunch projects.: 129 One construction project was the Merritt Parkway in Connecticut, the bridges of which were each designed to be architecturally unique. In its eight-year run, the WPA built 325 firehouses and renovated 2,384 of them across the United States. The 20,000 miles (32,000 km) of water mains that WPA workers installed also contributed to increased fire protection across the country.: 69 The direct focus of the WPA projects changed with need. In 1935, priority projects aimed to improve infrastructure: roads, extension of electricity to rural areas, water conservation, sanitation and flood control.
In 1936, as outlined in that year's Emergency Relief Appropriations Act, public facilities became a focus; parks and associated facilities, public buildings, utilities, airports, and transportation projects were funded. The following year saw the introduction of agricultural improvements, such as the production of marl fertilizer and the eradication of fungus pests. As the Second World War approached, and then eventually began, WPA projects became increasingly defense-related.: 70 One project of the WPA was funding state-level library service demonstration projects, intended to create new areas of library service to underserved populations and to extend rural service. Another project was the Household Service Demonstration Project, which trained 30,000 women for domestic employment. South Carolina had one of the larger statewide library service demonstration projects. At the end of the project in 1943, South Carolina had twelve publicly funded county libraries, one regional library, and a funded state library agency. A significant aspect of the Works Progress Administration was Federal Project Number One, which had five different parts: the Federal Art Project, the Federal Music Project, the Federal Theatre Project, the Federal Writers' Project, and the Historical Records Survey. The government wanted to provide new federal cultural support instead of just providing direct grants to private institutions. After only one year, over 40,000 artists and other talented workers had been employed through this project in the United States. Cedric Larson stated that "The impact made by the five major cultural projects of the WPA upon the national consciousness is probably greater in total than anyone readily realizes. As channels of communication between the administration and the country at large, both directly and indirectly, the importance of these projects cannot be overestimated, for they all carry a tremendous appeal to the eye, the ear, or the intellect—or all three.": 491 The Federal Art Project was directed by Holger Cahill, and in 1936 its employment peaked at over 5,300 artists. The Arts Service Division created illustrations and posters for the WPA writers, musicians, and theaters. The Exhibition Division held public exhibitions of artwork from the WPA, and artists from the Art Teaching Division were employed in settlement houses and community centers to give classes to an estimated 50,000 children and adults. They set up over 100 art centers around the country that served an estimated eight million individuals. Directed by Nikolai Sokoloff, former principal conductor of the Cleveland Orchestra, the Federal Music Project employed over 16,000 musicians at its peak. Its purpose was to create jobs for unemployed musicians. It established new ensembles such as chamber groups, orchestras, choral units, opera units, concert bands, military bands, dance bands, and theater orchestras. They gave 131,000 performances and programs to 92 million people each week. The Federal Music Project also performed plays and dances, as well as radio dramas.: 494 In addition, the Federal Music Project gave music classes to an estimated 132,000 children and adults every week, recorded folk music, served as copyists, arrangers, and librarians to expand the availability of music, and experimented in music therapy.
Sokoloff stated, "Music can serve no useful purpose unless it is heard, but these totals on the listeners' side are more eloquent than statistics as they show that in this country there is a great hunger and eagerness for music.": 494 In 1929, Broadway alone had employed upwards of 25,000 workers, onstage and backstage; in 1933, only 4,000 still had jobs. The Actors' Dinner Club and the Actors' Betterment Association were giving out free meals every day. Every theatrical district in the country suffered as audiences dwindled. The Federal Theatre Project was directed by playwright Hallie Flanagan and employed 12,700 performers and staff at its peak. They presented more than 1,000 performances each month to almost one million people, produced 1,200 plays in the four years of its existence, and introduced 100 new playwrights. Many performers later became successful in Hollywood, including Orson Welles, John Houseman, Burt Lancaster, Joseph Cotten, Canada Lee, Will Geer, Joseph Losey, Virgil Thomson, Nicholas Ray, E.G. Marshall and Sidney Lumet. The Federal Theatre Project was the first project to end; it was terminated in June 1939 after Congress zeroed out the funding. The Federal Writers' Project was directed by Henry Alsberg and employed 6,686 writers at its peak in 1936. The FWP created the American Guide Series which, when completed, consisted of 378 books and pamphlets providing a thorough analysis of the history, social life and culture for every state, city and village in the United States, including descriptions of towns, waterways, historic sites, oral histories, photographs, and artwork. Each book was sponsored by an association or group that put up the cost of publication, anywhere from $5,000 to $10,000. In almost all cases, the book sales were able to reimburse their sponsors.: 494 Additionally, another important part of this project was to record oral histories to create archives such as the Slave Narratives and collections of folklore. These writers also provided research and editorial services to other government agencies. The Historical Records Survey, the smallest part of Federal Project Number One, served to identify, collect, and conserve the United States' historical records. It is one of the biggest bibliographical efforts and was directed by Luther H. Evans. At its peak, this project employed more than 4,400 workers.: 494 Before the Great Depression, it was estimated that one-third of the population in the United States did not have reasonable access to public library services. With the onset of the Depression, local governments facing declining revenues were unable to maintain social services, including libraries. This lack of revenue exacerbated problems of library access. Acknowledgement of the need not only to maintain existing facilities but to expand library services led to the establishment of the WPA's Library Projects. In 1934, only two states, Massachusetts and Delaware, provided their total population access to public libraries. In many rural areas, there were no libraries, and where they did exist, reading opportunities were minimal. Sixty-six percent of the South's population did not have access to any public library. Libraries that did exist circulated one book per capita. The early emphasis of the WPA Library Services Project was on extending library services to rural populations by creating libraries in areas that lacked facilities. The program also greatly augmented reader services in metropolitan and urban centers.
By 1938, the WPA Library Services Project had established 2,300 new libraries, 3,400 reading rooms in existing libraries, and 53 traveling libraries for sparsely settled areas. Federal money for these projects could be spent only on worker wages; therefore, local municipalities had to provide upkeep on properties and purchase equipment and materials. At the local level, WPA libraries relied on funding from county or city officials or funds raised by local community organizations such as women's clubs. Due to limited funding, many WPA libraries were "little more than book distribution stations: tables of materials under temporary tents, a tenant home to which nearby readers came for their books, a school superintendents' home, or a crossroads general store." The public response to the WPA libraries was extremely positive. For many, "the WPA had become 'the breadline of the spirit.'" At its height in 1938, the WPA Library Programs employed 38,324 people, primarily women: 25,625 in library services and 12,696 in bookbinding and repair. Because book repair was an activity that could be taught to unskilled workers and, after training, could be conducted with little supervision, repair and mending became the main activity of the WPA Library Project. The basic rationale for this change was that the mending and repair projects saved public and school libraries thousands of dollars in acquisition costs while employing needy women who were often heads of households. By 1940, the WPA Library Project, now the Library Services Program, began to shift its focus as the entire WPA began to move operations towards goals of national defense. The program served those goals in two ways: (1) existing WPA libraries distributed materials to the public on the nature of an imminent national defense emergency and the need for national defense preparation, and (2) the project provided supplementary library services to military camps and defense-impacted communities. By December 1941, the number of people employed in WPA library work had declined to 16,717. In May 1942, all statewide Library Projects were reorganized as WPA War Information Services Programs, and by early 1943 the work of closing war information centers had begun. The last week of service for the remaining WPA library workers ended on March 15, 1943. While it is difficult to quantify the success or failure of the WPA Library Projects relative to other WPA programs, "what is incontestable is the fact that the library projects provided much-needed employment for mostly female workers, recruited many to librarianship in at least semiprofessional jobs, and retained librarians who may have left the profession for other work had employment not come through federal relief...the WPA subsidized several new ventures in readership services such as the widespread use of bookmobiles and supervised reading rooms – services that became permanent in post-depression and postwar American libraries." In extending library services to people who had lost their libraries or never had a library to begin with, the WPA Library Services Projects achieved phenomenal success, made significant permanent gains, and had a profound impact on library life in America. Incarceration of Japanese Americans in internment camps The WPA spent $4.47 million on removal and internment between March and November 1942, slightly more than the $4.43 million spent by the Army for that purpose during that period.
Jason Scott Smith observes that "the eagerness of many WPA administrators to place their organization in the forefront of this wartime enterprise is striking." The WPA was on the ground helping with removal and relocation even before the creation of the War Relocation Authority (WRA). On March 11, Rex L. Nicholson, the WPA's regional director, took charge of the "Reception and Induction" centers that controlled the first thirteen assembly centers. Nicholson's old WPA associates played key roles in the administration of the camps. WPA veterans involved in internment included Clayton E. Triggs, the first manager of the Manzanar Relocation Center in California, a facility that, according to one insider, was "manned just about 100% by the WPA." Drawing on experience derived from New Deal–era road building, Triggs supervised the installation of such features as guard towers and spotlights. Then-Secretary of Commerce Harry Hopkins praised his successor as WPA administrator, Howard O. Hunter, for the "building of those camps for the War Department for the Japanese evacuees on the West Coast." African Americans The share of Federal Emergency Relief Administration and WPA benefits going to African Americans exceeded their proportion of the general population. The FERA's first relief census reported that more than two million African Americans were on relief during early 1933, a proportion of the African-American population (17.8%) that was nearly double the proportion of white Americans on relief (9.5%). This was during the period of Jim Crow and racial segregation in the South, when black Americans were largely disenfranchised. By 1935, there were 3,500,000 African Americans (men, women and children) on relief, almost 35 percent of the African-American population; another 250,000 African-American adults were working on WPA projects. Altogether during 1938, about 45 percent of the nation's African-American families were either on relief or employed by the WPA. Civil rights leaders initially objected that African Americans were proportionally underrepresented. African-American leaders made such a claim with respect to WPA hires in New Jersey, stating, "In spite of the fact that Blacks indubitably constitute more than 20 percent of the State's unemployed, they composed 15.9% of those assigned to W.P.A. jobs during 1937.": 287 Nationwide in 1940, 9.8% of the population was African American. However, by 1941, the perception of discrimination against African Americans had changed to the point that the National Urban League magazine Opportunity hailed the WPA: It is to the eternal credit of the administrative officers of the WPA that discrimination on various projects because of race has been kept to a minimum and that in almost every community Negroes have been given a chance to participate in the work program. In the South, as might have been expected, this participation has been limited, and differential wages on the basis of race have been more or less effectively established; but in the northern communities, particularly in the urban centers, the Negro has been afforded his first real opportunity for employment in white-collar occupations.: 295 The WPA mostly operated segregated units, as did its youth affiliate, the National Youth Administration. Blacks were hired by the WPA as supervisors in the North; however, of 10,000 WPA supervisors in the South, only 11 were black. Historian Anthony Badger argues, "New Deal programs in the South routinely discriminated against blacks and perpetuated segregation."
People with physical disabilities The League of the Physically Handicapped in New York was organized in May 1935 to end discrimination by the WPA against the physically disabled unemployed. The city's Home Relief Bureau coded applications from physically disabled applicants as "PH" ("physically handicapped"), and thus they were not hired by the WPA. In protest, the League held two sit-ins in 1935. The WPA relented and created 1,500 jobs for physically disabled workers in New York City. Women About 15% of the household heads on relief were women, and youth programs were operated separately by the National Youth Administration. The average worker was about 40 years old (about the same as the average family head on relief). WPA policies were consistent with the strong belief of the time that husbands and wives should not both be working (because the second person working would take one job away from some other breadwinner). A study of 2,000 female workers in Philadelphia showed that 90% were married, but wives were reported as living with their husbands in only 18 percent of the cases. Only 2 percent of the husbands had private employment. All of the 2,000 women were responsible for one to five additional people in the household.: 283 In rural Missouri, 60% of the WPA-employed women were without husbands (12% were single; 25% widowed; and 23% divorced, separated or deserted). Thus, only 40% were married and living with their husbands; of those husbands, 59% were permanently disabled, 17% were temporarily disabled, 13% were too old to work, and the remaining 10% were either unemployed or disabled. Most of the women worked on sewing projects, where they were taught to use sewing machines and made clothing and bedding, as well as supplies for hospitals, orphanages, and adoption centers.: 283 One WPA-funded project, the Pack Horse Library Project, mainly employed women to deliver books to rural areas in eastern Kentucky. Many of the women employed by the project were the sole breadwinners for their families. Criticism The WPA had numerous critics. The strongest attacks were that it was the prelude to a national political machine on behalf of Roosevelt. Reformers secured the Hatch Act of 1939, which largely depoliticized the WPA. Others complained that far-left elements played a major role, especially in the New York City unit. Representative J. Parnell Thomas of the House Committee on Un-American Activities claimed in 1938 that divisions of the WPA were a "hotbed of Communists" and "one more link in the vast and unparalleled New Deal propaganda network." Much of the criticism of the distribution of projects and funding allotment stemmed from the view that the decisions were politically motivated. The South, despite being the poorest region of the United States, received 75% less in federal relief and public works funds per capita than the West. Critics pointed to the fact that Roosevelt's Democrats could be sure of voting support from the South, whereas the West was less of a sure thing; swing states took priority over the other states.: 70 There was a perception that WPA employees were not diligent workers, and that they had little incentive to give up their busy work in favor of productive jobs. Some employers said that the WPA instilled poor work habits and encouraged inefficiency. Some job applicants found that a WPA work history was viewed negatively by employers, who said they had formed poor work habits.
A Senate committee reported that, "To some extent the complaint that WPA workers do poor work is not without foundation. ... Poor work habits and incorrect techniques are not remedied. Occasionally a supervisor or a foreman demands good work." The WPA and its workers were ridiculed as being lazy. The organization's initials were said to stand for "We Poke Along" or "We Putter Along" or "We Piddle Around" or "Whistle, Piss and Argue." These were sarcastic references to WPA projects that were sometimes slowed down deliberately because foremen had an incentive to keep a project going rather than finish it. The WPA's Division of Investigation proved so effective in preventing political corruption "that a later congressional investigation couldn't find a single serious irregularity it had overlooked," wrote economist Paul Krugman. "This dedication to honest government wasn't a sign of Roosevelt's personal virtue; rather, it reflected a political imperative. FDR's mission in office was to show that government activism works. To maintain that mission's credibility he needed to keep his administration's record clean. And he did." Private industry complained at the time that the existence of WPA work programs made it difficult to hire new workers. The WPA claimed to counter this by keeping hourly wages well below private wages and by encouraging relief workers to actively seek private employment and accept job offers if they got them.: 196 Evolution On December 23, 1938, after leading the WPA for three and a half years, Harry Hopkins resigned and became the Secretary of Commerce. To succeed him, Roosevelt appointed Francis C. Harrington, a colonel in the Army Corps of Engineers and the WPA's chief engineer, who had been leading the Division of Engineering and Construction.: 417–420 Following the passage of the Reorganization Act of 1939 in April 1939, the WPA was grouped with the Bureau of Public Roads, the Public Buildings Branch of the Procurement Division, the Branch of Buildings Management of the National Park Service, the United States Housing Authority and the Public Works Administration under the newly created Federal Works Agency. Created at the same time, the Federal Security Agency assumed the WPA's responsibility for the National Youth Administration. "The name of the Works Progress Administration has been changed to Work Projects Administration in order to make its title more descriptive of its major purpose," President Roosevelt wrote when announcing the reorganization. As WPA projects became more subject to the states, local sponsors were called on to provide 25% of project costs. As the number of public works projects slowly diminished, more projects were dedicated to preparing for war.: 227 Having languished since the end of World War I, the American military services were depopulated and served by crumbling facilities; when Germany occupied Czechoslovakia in 1938, the U.S. Army numbered only 176,000 soldiers.: 494 On May 26, 1940, FDR delivered a fireside chat to the American people about "the approaching storm", and on June 6 Harrington reprioritized WPA projects, anticipating a major expansion of the U.S. military. "Types of WPA work to be expedited in every possible way to include, in addition to airports and military airfields, construction of housing and other facilities for enlarged military garrisons, camp and cantonment construction, and various improvements in navy yards," Harrington said.
He observed that the WPA had already made substantial contributions to national defense over its five years of existence, by building 85 percent of the new airports in the U.S. and making $420 million in improvements to military facilities. He predicted there would be 500,000 WPA workers on defense-related projects over the next 12 months, at a cost of $250 million.: 492–493 The estimated number of WPA workers needed for defense projects was soon revised to between 600,000 and 700,000. Vocational training for war industries was also begun by the WPA, with 50,000 trainees in the program by October 1940.: 494 "Only the WPA, having employed millions of relief workers for more than five years, had a comprehensive awareness of the skills that would be available in a full-scale national emergency," wrote journalist Nick Taylor. "As the country began its preparedness buildup, the WPA was uniquely positioned to become a major defense agency.": 494–495 Harrington died suddenly, aged 53, on September 30, 1940. Notably apolitical (he boasted that he had never voted), he had deflected Congressional criticism of the WPA by bringing attention to its building accomplishments and its role as an employer.: 504 Harrington's successor, Howard O. Hunter, served as head of the WPA until May 1, 1942.: 517 Termination Unemployment ended with war production for World War II, as millions of men joined the services and cost-plus contracts made it attractive for companies to hire unemployed men and train them.[page needed] Concluding that a national relief program was no longer needed, Roosevelt directed the Federal Works Administrator to end the WPA in a letter of December 4, 1942. "Seven years ago I was convinced that providing useful work is superior to any and every kind of dole. Experience had amply justified this policy," FDR wrote: By building airports, schools, highways, and parks; by making huge quantities of clothing for the unfortunate; by serving millions of lunches to school children; by almost immeasurable kinds and quantities of service the Work Projects Administration has reached a creative hand into every county in this Nation. It has added to the national wealth, has repaired the wastage of depression, and has strengthened the country to bear the burden of war. By employing eight millions of Americans, with thirty millions of dependents, it has brought to these people renewed hope and courage. It has maintained and increased their working skills; and it has enabled them once more to take their rightful places in public or in private employment. Roosevelt ordered a prompt end to WPA activities to conserve funds that had been appropriated. Operations in most states ended February 1, 1943. With no funds budgeted for the next fiscal year, the WPA ceased to exist after June 30, 1943. Legacy "The agencies of the Franklin D. Roosevelt administration had an enormous and largely unrecognized role in defining the public space we now use," wrote sociologist Robert D. Leighninger. "In a short period of ten years, the Public Works Administration, the Works Progress Administration, and the Civilian Conservation Corps built facilities in practically every community in the country. Most are still providing service half a century later. It is time we recognized this legacy and attempted to comprehend its relationship to our contemporary situation.": 226
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/XOTcl] | [TOKENS: 104] |
Contents XOTcl XOTcl is an object-oriented extension for the Tool Command Language created by Gustaf Neumann and Uwe Zdun. It is a derivative of MIT OTcl and is based on a dynamic object system with metaclasses influenced by CLOS. Class and method definitions in XOTcl are completely dynamic. The language also provides support for design patterns through filters and decorator mixins.
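The dynamic object system described above is easier to picture with a small example. The sketch below uses Python rather than XOTcl itself, since the two concepts named in the article (classes and methods defined entirely at runtime, and a mixin that decorates an existing method) have close analogues there; the Stack example and all names are illustrative, not taken from the XOTcl distribution.

```python
# Illustrative Python analogy for two XOTcl ideas; not XOTcl syntax.

# Dynamic class creation: build an empty class at runtime.
Stack = type("Stack", (object,), {})

# Methods can be attached (or replaced) after the class exists,
# mirroring XOTcl's fully dynamic method definitions.
def push(self, item):
    self._items = getattr(self, "_items", [])
    self._items.append(item)
    return item

def pop(self):
    return self._items.pop()

Stack.push = push
Stack.pop = pop

# A mixin that transparently wraps push with extra behaviour,
# loosely analogous to an XOTcl decorator mixin intercepting calls.
class LoggingMixin:
    def push(self, item):
        print(f"push({item!r})")
        return super().push(item)

# Compose a new class dynamically; the mixin precedes Stack in the
# method resolution order, so it decorates Stack's push.
LoggedStack = type("LoggedStack", (LoggingMixin, Stack), {})

s = LoggedStack()
s.push("a")            # prints: push('a')
assert s.pop() == "a"
```

In XOTcl itself, the corresponding mechanisms are per-class and per-object mixins and filters, which can likewise be registered and removed at runtime.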
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Muki_(mythology)] | [TOKENS: 1216] |
Contents Muki (mythology) The muki (Quechua for asphyxia, also for a goblin who lives in caves, also spelled muqui or mooqui) is a goblin-like creature in the mythology of the Central Andes in Bolivia, Peru, Ecuador and Colombia. He is known to be a miner and his existence is constrained to underground spaces: the muki lives inside the mines. Despite the distance and the isolation of the mining camps, the belief in the muki and his description are consistent throughout Peru, from the highlands of Puno in the south to Cajamarca in the north. Nonetheless, the names differ: chinchiliku (Moquegua and Arequipa), anchanchu or janchanchu (Puno), jusshi (Cajamarca) and muki (Pasco and Andean regions of Bolivia). Appearance The muki is considered to be a dwarf due to its height, since it is no taller than 2 feet (0.61 meters). In the traditions of Cerro de Pasco, the muki is a small brawny creature with a disproportionate body. His head is attached directly to his body, for he lacks a neck. His voice is deep and husky, not matching his appearance; his long hair is bright blonde, and his face is hairy and reddish, with a long white beard. His gaze is deep, aggressive and hypnotic, and his eyes reflect the light as if they were made of metal. In some mining traditions, he has two horns that are used to break the rocks and point at the mineral veins. His skin is very pale and he carries a mining lantern. Sometimes he is described as having pointy ears. As noted above, there is more than one type of muki in the legends. Just as there is diversity among mining elves universally, there are many varieties of muki in the underground world of the Andes. They are known by the places where they become visible, and the oral traditions of each mine help to identify them by region. Thus a muki can be from Cerro de Pasco, Ticlio, Huacracocha, Morococha, and Casapalca (in Huarochirí, Department of Lima), or Goyllar in central Peru, and more particularly from El Diamante, Excélsior, and Santander (of Cerro de Pasco, or the Cerro de Pasco Mining Company), or Mina Tentadora and Mina Julcani (Huancavelica region). Following the safety regulations of his work, the muki wears a helmet, a miner's outfit and studded boots. In other traditions, he is described as a small elf with a green outfit, sometimes with a very fine vicuña cape or with the waterproof outfit proper to a miner. He usually carries a lantern or a flashlight, depending on the technological level of the mine. He also walks like a duck because his feet are of abnormal size, and sometimes his legs can take the shape of a goose's or a crow's. But the description of the muki changes with time. Around the 1930s, he was said to wander the mines while holding a gas lantern and wearing a vicuña poncho. He was described as having two small shiny horns and speaking with a soft voice. Nowadays he has a more updated look: mining outfit, rain boots and a battery flashlight. Sometimes he shape-shifts into an animal or a blonde white man to appear to the miners and deceive them. Behavior The muki lives in lonely places, and his attacks inspire fear in his victims and adversaries. Mukis are known for stealing defenseless children. Elders advise that, when dealing with the muki, one should use one's belt to battle him without succumbing to fear. The fusion (syncretism) of the Andean and Christian cultures brought European beliefs into this myth, such that the main victims of these goblins were the morito (unbaptized) children.
In the southern regions of Peru, it is said instead that these unbaptized children are the ones who become mukis themselves. In some tales, the unbaptized children are kidnapped by the goblins, who live in fig or banana trees, and kept until they turn into goblins too. The skin of the children who spend time with these creatures turns very pale, and it is advised to take the victims to church at once so they can receive the sacrament. The belief in the muki comes from old Andean traditions about demons and small creatures who inhabit the Ukhu Pacha ("world of below"), and from the miners' need to explain many of the extraordinary daily occurrences of their lives. The muki can appear alone or in groups, but mukis are known to prefer living on their own. They live in a timeless world of eternal darkness and they do not age, as if they were unaffected by the passage of time. The muki likes to whistle loudly, warning the miners he favors of danger. The muki is a goblin with a lot of power: he can make the metal veins appear and disappear, sense the moods and emotions of the miners, help with the miners' work by softening or hardening the metal veins, and so on. He is known to help miners and sometimes to make pacts with them. He gravitates towards discreet and honest people who will fulfill their promises and not share the details of their interaction with him. Many stories agree that it is possible to capture the muki and make a pact with him. Very often he offers to do the miners' work in exchange for some coca, alcohol or the company of a woman, as that helps him feel less lonely. Yet the outcome tends to be tragic, as the miner is rarely able to do as promised. When this occurs, the muki takes the miner's life.
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Maor_Farid#cite_ref-9] | [TOKENS: 1458] |
Contents Maor Farid Dr. Maor Farid (Hebrew: מאור פריד; born April 20, 1992) is an Israeli scientist, engineer and artificial intelligence researcher at the Massachusetts Institute of Technology, social activist, and author. He is the founder and CEO of Learn to Succeed (Hebrew: ללמוד להצליח), an organization for empowering youths from the Israeli socio-economic periphery and youths at risk, a regional manager of the Israeli center of ScienceAbroad at MIT, and an activist in the American Technion Society. He is an alumnus of Unit 8200 and a fellow of the Fulbright Program and the Israel Scholarship Educational Foundation. Dr. Farid was named to the Forbes 30 Under 30 list of 2019 and won the Moskowitz Prize for Zionism. Early life Maor was born in Ness Ziona, a city in central Israel, the eldest son of parents from Mizrahi Jewish immigrant families from Iraq and Libya. Maor had attention deficit hyperactivity disorder (ADHD) from a young age and was classified as a problematic and violent student; his ADHD was diagnosed only after he began his university studies. However, inspired by his parents' background, he aspired to excel at school for the sake of a better future for his family. During elementary school, Maor participated in local quizzes about Jewish history and Zionism, which significantly shaped his identity and national perspective. Farid graduated from high school with the highest GPA in his school. He was later recruited to the Israel Defense Forces and drafted into the Brakim Program, an excellence program of the Israeli Intelligence Corps for training leading R&D officers for the Israeli military and defense industry. Maor graduated from the program with honors and was selected by the Israeli Prime Minister's Office and Unit 8200, where he served as an artificial intelligence researcher, officer, and commander. During his military service, he received various honors and awards, such as the Excellent Scientist Award, given to the top three academics serving in the Israel Defense Forces. In 2019, Farid completed his military service with the rank of captain. Education and academic career As part of the four-year Brakim Program, Maor completed his Bachelor's and Master's degrees in Mechanical Engineering at the Technion with honors. He then began his Ph.D. research in collaboration with the Israel Atomic Energy Commission (IAEC), in parallel with his military service. The main goals of his Ph.D. research were predicting the irreversible effects of major earthquakes on Israel's nuclear facilities and improving their seismic resistance using energy absorption technologies. The mathematical models developed by Farid were able to forecast earthquake effects on facilities with major hazard potential, and predicted the failure of liquid storage tanks in earthquakes that took place in Italy (2012) and Mexico (2017). The energy absorption technologies used increased the seismic resistance of those sensitive facilities by up to 90%. The research results were published in multiple papers in peer-reviewed academic journals and presented at international academic conferences. Later, this research expanded into an official collaboration between the Technion and the Shimon Peres Negev Nuclear Research Center, which aims to apply the findings to existing sensitive systems, and won funding of 1.5 million NIS from the Pazy Foundation of the Israel Atomic Energy Commission and the Council for Higher Education. In 2017, Farid completed his Ph.D.
as the youngest Technion graduate of that year, at the age of 24. At the graduation ceremonies, he honored his parents by having them receive the diploma on his behalf. That same year, he served as a lecturer at Ben-Gurion University, teaching an original course he had developed to address knowledge gaps he identified in the Israeli defense industry. In 2018, Dr. Farid served as an artificial intelligence researcher on a data science team of Unit 8200, where he developed machine learning-based solutions for military and operational needs. In 2019, Farid won the Fulbright and Israel Scholarship Educational Foundation scholarships and was accepted to a post-doctoral position at the Massachusetts Institute of Technology, where he develops real-time methods for predicting earthquake effects using machine learning techniques. In 2020, Farid was accepted to the Emerging Leaders Program at Harvard Kennedy School in Cambridge, Massachusetts. That same year, he received the research excellence grant of the Israel Academy of Sciences and Humanities for leading his research collaboration between MIT and the Technion. Social activism Farid's social activism focuses on empowering youths from disadvantaged backgrounds from an early age. From 2010 to 2015, he served as a mentor of a robotics team from Dimona in the FIRST Robotics Competition, a mathematics tutor in the "Aharai!" program for at-risk high-school students in Dimona and Be'er Sheva, and a mentor and private tutor of adolescents and reserve-duty soldiers from disadvantaged backgrounds. In 2010, he initiated the "Learn to Succeed" (Hebrew: ללמוד להצליח) project to mitigate social gaps in Israeli society by empowering youths from the social, economic, and geographic periphery toward excellence, self-fulfillment and formal education. In 2018, Learn to Succeed became an official non-profit organization. That same year, Farid led a 150,000 NIS crowdfunding campaign to expand the organization to a national scale. In 2019, he published the book "Learn to Succeed", in which he describes his struggle with ADHD, the violent environment in which he grew up, and the transformation he underwent from violent teenager to the youngest Ph.D. graduate at the Technion. The book was given to more than two thousand youths at risk and became a top seller in Israel shortly after its publication. Maor dedicated the book to his parents and to the memory of his friend Captain Tal Nachman, who was killed in operational activity during his military service in 2014. The organization consists of hundreds of volunteers; gives full scholarships to STEM students from the periphery who serve as mentors of youths, both Jews and Arabs, from disadvantaged backgrounds; runs a hotline that gives online practical and emotional support to hundreds of youths, parents and educators; initiates military-oriented inspirational activities to increase the motivation of its teenage members for meaningful military service; and gives inspirational lectures to more than 5,000 youths each year. In 2019, Maor initiated a collaboration with Unit 8200 in which dozens of the program's members are interviewed by the unit, an opportunity usually given to the students with the highest matriculation exam grades in each class. In 2020, Dr. Farid established the ScienceAbroad center at MIT, aiming to strengthen the connections between Israeli researchers at the institute and the State of Israel.
Moreover, he serves as a volunteer in the American Technion Society. Personal life Farid is married to Michal.
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Conker_(series)] | [TOKENS: 1379] |
Contents Conker (series) Conker is a series of platform video games created and produced by Rare. It chronicles the adventures of Conker the Squirrel, an anthropomorphic red squirrel who made his debut as a playable character in Diddy Kong Racing. While it debuted as a family-friendly series, starting with Conker's Pocket Tales, it shifted focus to mature audiences with the development and release of Conker's Bad Fur Day; during development, the game was modified to incorporate graphic violence, strong language, offensive humor, and other mature content. These changes resulted in the title receiving a Mature rating from the ESRB, accompanied by a content advisory displayed on the packaging and on the title screen at startup to caution unsuspecting players who might otherwise mistake it for a platformer intended for a younger audience. A graphically improved remake of Conker's Bad Fur Day, along with new multiplayer modes, was released as Conker: Live & Reloaded on June 21, 2005 in North America for the original Xbox. The uncensored Conker's Bad Fur Day was released on Rare Replay, and Live & Reloaded has been made backward compatible with the Xbox One and the Xbox Series X. Games Development Conker was introduced at the Electronic Entertainment Expo in 1997. The game Conker's Quest was presented by Rare as a 3D platformer aimed at a young audience for the Nintendo 64. Later the same year, Conker's inclusion in Diddy Kong Racing for the Nintendo 64 was confirmed. In early 1998, Conker's Quest was renamed Twelve Tales: Conker 64. In 1999, Conker made his solo debut in Conker's Pocket Tales for the Game Boy Color. During development of the Conker series, Rareware struggled to release Twelve Tales: Conker 64, formerly named Conker's Quest, citing issues with project management in addition to an oversaturation of Mario 64-style games in the gaming market at the time. Realizing that Conker 64 lacked any uniqueness as a platform game, the studio cancelled it and restarted the project. Multiple delays and a lack of updates led the press to believe that Twelve Tales had been quietly cancelled. In 2000, Twelve Tales: Conker 64 was retooled into Conker's Bad Fur Day with a large amount of scatological humour. Conker the Squirrel, who previously appeared as a family-friendly character, was retooled to be a foul-mouthed, fourth-wall-breaking alcoholic armed with guns, throwing knives, and a frying pan. After E3, Chris Seavor came on board as designer. The first level, the beehive, added machine guns shooting wasps, which Rare found funny; the team kept going with this idea to be raunchy and different. After two more years of development, the game emerged as Conker's Bad Fur Day, which targets adults rather than children with its mature content. According to Rare co-founder Chris Stamper: "When people grow up on games, they don't stop playing. There aren't games for people who grew up on the early systems". The game suffered from relatively poor sales but gained a cult following. After the release of Conker's Bad Fur Day, Rare began development of a new Conker game referred to as Conker's Other Bad Day. Designer Chris Seavor said that it was to be a direct sequel dealing with "Conker's somewhat unsuccessful tenure as King. He spends all the treasury money on beer, parties and hookers. Thrown into prison, Conker is faced with the prospect of execution and the game starts with his escape, ball and chain attached, from the Castle's highest tower".
It was never confirmed which console Conker's Other Bad Day was intended for, but it was likely the Nintendo GameCube, as with Donkey Kong Racing. In 2002, Microsoft purchased Rare from Nintendo, so instead of finishing and releasing the game, Rare remade Conker's Bad Fur Day for the Xbox in 2005, renaming it Conker: Live & Reloaded. It features improved graphics and minor alterations to gameplay, adds a new multiplayer adaptation for Xbox Live, and was also censored. After Live & Reloaded, Rare started development on Conker: Gettin' Medieval, an online multiplayer third-person shooter game, but it was ultimately cancelled. At E3 2014, Conker was announced as a character in Project Spark. In 2015, Conker returned in a new episodic campaign for Project Spark. The campaign, titled Conker's Big Reunion, is set ten years after the events of Bad Fur Day, and Seavor reprised his voice role. The first episode was released on 23 April the same year for Project Spark; however, before any additional episodes could be made, Project Spark's online services were shut down and the game was abandoned. In 2015, Conker's Bad Fur Day was included in the Rare Replay video game compilation for Xbox One. In 2016, Microsoft announced Young Conker as the next installment in the series, released for the Microsoft HoloLens. The trailer was released in February and was almost universally panned by the public, with many complaining that it lacked the humour and overall style of its predecessors. The trailer received more than 30,000 dislikes against just over 1,000 likes. A petition was created to cancel the game's release but failed. Some video game critics and general YouTube commentators have boycotted the game. Reception Reception of the Conker series has largely focused on the protagonist of the series, Conker the Squirrel, and the critical success of the games Conker's Bad Fur Day and Live & Reloaded. The contrast between Conker's innocent appearance and his coarse behavior has been well received by the public. Critics have noted that the storylines and variety of characters of Conker's Bad Fur Day and Live & Reloaded, in combination with the crude humour and seemingly innocent graphics, hold noteworthy appeal for mature audiences. Rare listed Conker fifth among its video game characters who most improved with age. Jordan Devore of Destructoid stated about Conker's appearance in Project Spark (the Conker's Big Reunion DLC) that there "was no getting around the disappointment of seeing a long abandoned (but never forgotten!) character return not in his own adventure, but in a DLC pack for a videogame about making games." Conker's appearance in the Microsoft HoloLens trailer for Young Conker received mostly negative reviews. Chris Plante of The Verge criticized it and said that "Young Conker doesn't feature the original Conker." Sam Loveridge of Digital Spy claimed that the scene of Conker and the bees is "weird."
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Social_network#cite_note-:0-80] | [TOKENS: 5247] |
Contents Social network A social network is a social structure consisting of a set of social actors (such as individuals or organizations), networks of dyadic ties, and other social interactions between actors. The social network perspective provides a set of methods for analyzing the structure of whole social entities along with a variety of theories explaining the patterns observed in these structures. The study of these structures uses social network analysis to identify local and global patterns, locate influential entities, and examine dynamics of networks. For instance, social network analysis has been used in studying the spread of misinformation on social media platforms or analyzing the influence of key figures in social networks. Social networks and their analysis constitute an inherently interdisciplinary academic field which emerged from social psychology, sociology, statistics, and graph theory. Georg Simmel authored early structural theories in sociology emphasizing the dynamics of triads and "web of group affiliations". Jacob Moreno is credited with developing the first sociograms in the 1930s to study interpersonal relationships. These approaches were mathematically formalized in the 1950s and theories and methods of social networks became pervasive in the social and behavioral sciences by the 1980s. Social network analysis is now one of the major paradigms in contemporary sociology, and is also employed in a number of other social and formal sciences. Together with other complex networks, it forms part of the nascent field of network science. Overview The social network is a theoretical construct useful in the social sciences to study relationships between individuals, groups, organizations, or even entire societies (social units, see differentiation). The term is used to describe a social structure determined by such interactions. The ties through which any given social unit connects represent the convergence of the various social contacts of that unit. This theoretical approach is, necessarily, relational. An axiom of the social network approach to understanding social interaction is that social phenomena should be primarily conceived and investigated through the properties of relations between and within units, instead of the properties of these units themselves. Thus, one common criticism of social network theory is that individual agency is often ignored although this may not be the case in practice (see agent-based modeling). Precisely because many different types of relations, singular or in combination, form these network configurations, network analytics are useful to a broad range of research enterprises. In social science, these fields of study include, but are not limited to anthropology, biology, communication studies, economics, geography, information science, organizational studies, social psychology, sociology, and sociolinguistics. History In the late 1890s, both Émile Durkheim and Ferdinand Tönnies foreshadowed the idea of social networks in their theories and research of social groups. Tönnies argued that social groups can exist as personal and direct social ties that either link individuals who share values and belief (Gemeinschaft, German, commonly translated as "community") or impersonal, formal, and instrumental social links (Gesellschaft, German, commonly translated as "society").
Durkheim gave a non-individualistic explanation of social facts, arguing that social phenomena arise when interacting individuals constitute a reality that can no longer be accounted for in terms of the properties of individual actors. Georg Simmel, writing at the turn of the twentieth century, pointed to the nature of networks and the effect of network size on interaction and examined the likelihood of interaction in loosely knit networks rather than groups. Major developments in the field came in the 1930s from several groups in psychology, anthropology, and mathematics working independently. In psychology, in the 1930s, Jacob L. Moreno began systematic recording and analysis of social interaction in small groups, especially classrooms and work groups (see sociometry). In anthropology, the foundation for social network theory is the theoretical and ethnographic work of Bronislaw Malinowski, Alfred Radcliffe-Brown, and Claude Lévi-Strauss. A group of social anthropologists associated with Max Gluckman and the Manchester School, including John A. Barnes, J. Clyde Mitchell and Elizabeth Bott Spillius, often are credited with performing some of the first fieldwork from which network analyses were performed, investigating community networks in southern Africa, India and the United Kingdom. Concomitantly, British anthropologist S. F. Nadel codified a theory of social structure that was influential in later network analysis. In sociology, the early (1930s) work of Talcott Parsons set the stage for taking a relational approach to understanding social structure. Later, drawing upon Parsons' theory, the work of sociologist Peter Blau provided a strong impetus for analyzing the relational ties of social units with his work on social exchange theory. By the 1970s, a growing number of scholars worked to combine the different tracks and traditions. One group consisted of sociologist Harrison White and his students at the Harvard University Department of Social Relations. Also independently active in the Harvard Social Relations department at the time were Charles Tilly, who focused on networks in political and community sociology and social movements, and Stanley Milgram, who developed the "six degrees of separation" thesis. Mark Granovetter and Barry Wellman are among the former students of White who elaborated and championed the analysis of social networks. Beginning in the late 1990s, social network analysis saw new work by sociologists, political scientists, and physicists such as Duncan J. Watts, Albert-László Barabási, Peter Bearman, Nicholas A. Christakis, James H. Fowler, and others, developing and applying new models and methods to emerging data available about online social networks, as well as "digital traces" regarding face-to-face networks. Levels of analysis In general, social networks are self-organizing, emergent, and complex, such that a globally coherent pattern appears from the local interaction of the elements that make up the system. These patterns become more apparent as network size increases. However, a global network analysis of, for example, all interpersonal relationships in the world is not feasible and is likely to contain so much information as to be uninformative. Practical limitations of computing power, ethics and participant recruitment and payment also limit the scope of a social network analysis.
The nuances of a local system may be lost in a large network analysis, hence the quality of information may be more important than its scale for understanding network properties. Thus, social networks are analyzed at the scale relevant to the researcher's theoretical question. Although levels of analysis are not necessarily mutually exclusive, there are three general levels into which networks may fall: micro-level, meso-level, and macro-level. At the micro-level, social network research typically begins with an individual, snowballing as social relationships are traced, or may begin with a small group of individuals in a particular social context. Dyadic level: A dyad is a social relationship between two individuals. Network research on dyads may concentrate on the structure of the relationship (e.g. multiplexity, strength), social equality, and tendencies toward reciprocity/mutuality. Triadic level: Add one individual to a dyad, and you have a triad. Research at this level may concentrate on factors such as balance and transitivity, as well as social equality and tendencies toward reciprocity/mutuality. In the balance theory of Fritz Heider the triad is the key to social dynamics. The discord in a rivalrous love triangle is an example of an unbalanced triad, likely to change to a balanced triad by a change in one of the relations. The dynamics of social friendships in society has been modeled by balancing triads. The study is carried forward with the theory of signed graphs. Actor level: The smallest unit of analysis in a social network is an individual in their social setting, i.e., an "actor" or "ego." Ego network analysis focuses on network characteristics such as size, relationship strength, density, centrality, prestige and roles such as isolates, liaisons, and bridges. Such analyses are most commonly used in the fields of psychology or social psychology, ethnographic kinship analysis or other genealogical studies of relationships between individuals. Subset level: Subset levels of network research problems begin at the micro-level, but may cross over into the meso-level of analysis. Subset level research may focus on distance and reachability, cliques, cohesive subgroups, or other group actions or behavior. In general, meso-level theories begin with a population size that falls between the micro- and macro-levels. However, meso-level may also refer to analyses that are specifically designed to reveal connections between micro- and macro-levels. Meso-level networks are low density and may exhibit causal processes distinct from interpersonal micro-level networks. Organizations: Formal organizations are social groups that distribute tasks for a collective goal. Network research on organizations may focus on either intra-organizational or inter-organizational ties in terms of formal or informal relationships. Intra-organizational networks themselves often contain multiple levels of analysis, especially in larger organizations with multiple branches, franchises or semi-autonomous departments. In these cases, research is often conducted at a work group level and organization level, focusing on the interplay between the two structures. Experiments with networked groups online have documented ways to optimize group-level coordination through diverse interventions, including the addition of autonomous agents to the groups. Randomly distributed networks: Exponential random graph models of social networks became state-of-the-art methods of social network analysis in the 1980s.
This framework has the capacity to represent social-structural effects commonly observed in many human social networks, including general degree-based structural effects as well as reciprocity and transitivity, and, at the node level, homophily and attribute-based activity and popularity effects, as derived from explicit hypotheses about dependencies among network ties. Parameters are given in terms of the prevalence of small subgraph configurations in the network and can be interpreted as describing the combinations of local social processes from which a given network emerges. These probability models for networks on a given set of actors allow generalization beyond the restrictive dyadic independence assumption of micro-networks, allowing models to be built from theoretical structural foundations of social behavior. Scale-free networks: A scale-free network is a network whose degree distribution follows a power law, at least asymptotically. In network theory a scale-free ideal network is a random network with a degree distribution that unravels the size distribution of social groups. Specific characteristics of scale-free networks vary with the theories and analytical tools used to create them; in general, however, scale-free networks share some common characteristics. One notable characteristic of a scale-free network is the relative commonness of vertices with a degree that greatly exceeds the average. The highest-degree nodes are often called "hubs", and may serve specific purposes in their networks, although this depends greatly on the social context. Another general characteristic of scale-free networks is the clustering coefficient distribution, which decreases as the node degree increases. This distribution also follows a power law. The Barabási model of network evolution is an example of a scale-free network. Rather than tracing interpersonal interactions, macro-level analyses generally trace the outcomes of interactions, such as economic or other resource transfer interactions over a large population. Large-scale networks: Large-scale network is a term somewhat synonymous with "macro-level." It is primarily used in social and behavioral sciences, and in economics. Originally, the term was used extensively in the computer sciences (see large-scale network mapping). Complex networks: Most larger social networks display features of social complexity, which involves substantial non-trivial features of network topology, with patterns of complex connections between elements that are neither purely regular nor purely random (see complexity science, dynamical system and chaos theory), as do biological and technological networks. Such complex network features include a heavy tail in the degree distribution, a high clustering coefficient, assortativity or disassortativity among vertices, community structure (see stochastic block model), and hierarchical structure. In the case of agency-directed networks these features also include reciprocity, triad significance profile (TSP, see network motif), and other features. In contrast, many of the mathematical models of networks that have been studied in the past, such as lattices and random graphs, do not show these features.
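To make the degree-distribution and clustering properties described above concrete, the following sketch grows a Barabási–Albert preferential-attachment network and inspects its hubs and clustering. It assumes Python with the networkx library; the network size and parameters are arbitrary choices for illustration.

```python
import networkx as nx

# Grow a Barabási-Albert preferential-attachment network: each new
# node attaches to 3 existing nodes, favouring high-degree nodes.
G = nx.barabasi_albert_graph(n=10_000, m=3, seed=42)

# Heavy-tailed degree distribution: a few hubs far above the average.
degrees = [d for _, d in G.degree()]
print("average degree:", sum(degrees) / len(degrees))
print("five largest hubs:", sorted(degrees, reverse=True)[:5])

# Clustering coefficient, averaged over all nodes.
print("average clustering:", nx.average_clustering(G))

# The article notes clustering tends to fall as degree rises;
# compare low-degree and high-degree nodes directly.
by_degree = sorted(G.degree(), key=lambda nd: nd[1])
low = [n for n, _ in by_degree[:100]]
high = [n for n, _ in by_degree[-100:]]
print("clustering (low-degree nodes):", nx.average_clustering(G, nodes=low))
print("clustering (high-degree nodes):", nx.average_clustering(G, nodes=high))
```

On a typical run the largest hubs have degrees far above the average, the heavy-tailed signature the article describes; the fixed seed only makes the run reproducible.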
Theoretical links Various theoretical frameworks have been imported for the use of social network analysis. The most prominent of these are graph theory, balance theory, social comparison theory, and, more recently, the social identity approach. Few complete theories have been produced from social network analysis; two exceptions are structural role theory and heterophily theory. The basis of heterophily theory was the finding in one study that more numerous weak ties can be important in seeking information and innovation, as cliques have a tendency to have more homogeneous opinions as well as share many common traits. This homophilic tendency was what drew the members of the clique together in the first place. However, being similar, each member of the clique would also know more or less what the other members knew. To find new information or insights, members of the clique have to look beyond the clique to their other friends and acquaintances. This is what Granovetter called "the strength of weak ties". Structural holes In the context of networks, social capital exists where people have an advantage because of their location in a network. Contacts in a network provide information, opportunities and perspectives that can be beneficial to the central player in the network. Most social structures tend to be characterized by dense clusters of strong connections. Information within these clusters tends to be rather homogeneous and redundant. Non-redundant information is most often obtained through contacts in different clusters. When two separate clusters possess non-redundant information, there is said to be a structural hole between them. Thus, a network that bridges structural holes will provide network benefits that are in some degree additive, rather than overlapping. An ideal network structure has a vine and cluster structure, providing access to many different clusters and structural holes. Networks rich in structural holes are a form of social capital in that they offer information benefits. The main player in a network that bridges structural holes is able to access information from diverse sources and clusters. For example, in business networks, this is beneficial to an individual's career because he is more likely to hear of job openings and opportunities if his network spans a wide range of contacts in different industries/sectors. This concept is similar to Mark Granovetter's theory of weak ties, which rests on the basis that having a broad range of contacts is most effective for job attainment. Structural holes have been widely applied in social network analysis, resulting in applications in a wide range of practical scenarios as well as machine learning-based social prediction. Research clusters Research has used network analysis to examine networks created when artists are exhibited together in museum exhibitions. Such networks have been shown to affect an artist's recognition in history and historical narratives, even when controlling for the individual accomplishments of the artist. Other work examines how network grouping of artists can affect an individual artist's auction performance. An artist's status has been shown to increase when associated with higher status networks, though this association has diminishing returns over an artist's career. In J.A. Barnes' day, a "community" referred to a specific geographic location and studies of community ties had to do with who talked, associated, traded, and attended church with whom. Today, however, there are extended "online" communities developed through telecommunications devices and social network services. Such devices and services require extensive and ongoing maintenance and analysis, often using network science methods.
Community development studies, today, also make extensive use of such methods. Complex networks require methods specific to modelling and interpreting social complexity and complex adaptive systems, including techniques of dynamic network analysis. Mechanisms such as dual-phase evolution explain how temporal changes in connectivity contribute to the formation of structure in social networks. The study of social networks is being used to examine the nature of interdependencies between actors and the ways in which these are related to outcomes of conflict and cooperation. Areas of study include cooperative behavior among participants in collective actions such as protests; promotion of peaceful behavior, social norms, and public goods within communities through networks of informal governance; the role of social networks in both intrastate conflict and interstate conflict; and social networking among politicians, constituents, and bureaucrats. In criminology and urban sociology, much attention has been paid to the social networks among criminal actors. For example, murders can be seen as a series of exchanges between gangs. Murders can be seen to diffuse outwards from a single source, because weaker gangs cannot afford to kill members of stronger gangs in retaliation, but must commit other violent acts to maintain their reputation for strength. Diffusion of ideas and innovations studies focus on the spread and use of ideas from one actor to another or from one culture to another. This line of research seeks to explain why some become "early adopters" of ideas and innovations, and links social network structure with facilitating or impeding the spread of an innovation. A case in point is the social diffusion of linguistic innovation such as neologisms. Experiments and large-scale field trials (e.g., by Nicholas Christakis and collaborators) have shown that cascades of desirable behaviors can be induced in social groups, in settings as diverse as Honduran villages, Indian slums, and the lab. Still other experiments have documented the experimental induction of social contagion of voting behavior, emotions, risk perception, and commercial products. In demography, the study of social networks has led to new sampling methods for estimating and reaching populations that are hard to enumerate (for example, homeless people or intravenous drug users). For example, respondent driven sampling is a network-based sampling technique that relies on respondents to a survey recommending further respondents. The field of sociology focuses almost entirely on networks of outcomes of social interactions. More narrowly, economic sociology considers behavioral interactions of individuals and groups through social capital and social "markets". Sociologists, such as Mark Granovetter, have developed core principles about the interactions of social structure, information, ability to punish or reward, and trust that frequently recur in their analyses of political, economic and other institutions. Granovetter examines how social structures and social networks can affect economic outcomes like hiring, price, productivity and innovation and describes sociologists' contributions to analyzing the impact of social structure and networks on the economy.
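Several of the strands above (diffusion of innovations, contagion experiments, and Granovetter's weak ties) share a computational core: spread over a graph. The sketch below, a minimal illustration assuming Python with the networkx library (all parameters are arbitrary), runs a simple susceptible-infected adoption process on a clustered ring lattice and on the same lattice with a few random long-range ties, echoing the weak-ties argument.

```python
import random
import networkx as nx

def si_spread(G, p=0.05, steps=30, seed=1):
    """Simple SI diffusion: each step, every adopter converts each
    susceptible neighbour independently with probability p."""
    rng = random.Random(seed)
    adopters = {next(iter(G.nodes))}
    for _ in range(steps):
        newly = set()
        for u in adopters:
            for v in G.neighbors(u):
                if v not in adopters and rng.random() < p:
                    newly.add(v)
        adopters |= newly
    return len(adopters) / G.number_of_nodes()

n = 2_000
# Clustered ring lattice: each node tied to its 6 nearest neighbours.
ring = nx.watts_strogatz_graph(n, k=6, p=0.0, seed=7)
# Same lattice with 10% of ties rewired into long-range "weak ties".
small_world = nx.watts_strogatz_graph(n, k=6, p=0.1, seed=7)

print("ring lattice adoption: ", si_spread(ring))
print("small-world adoption:  ", si_spread(small_world))
# The handful of rewired long-range ties typically lets the innovation
# reach far more of the network in the same number of steps.
```

This is not a model from the literature cited above, only a toy demonstration that the same behavior spreads much further once a few bridging ties exist.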
Analysis of social networks is increasingly incorporated into health care analytics, not only in epidemiological studies but also in models of patient communication and education, disease prevention, mental health diagnosis and treatment, and in the study of health care organizations and systems. Human ecology is an interdisciplinary and transdisciplinary study of the relationship between humans and their natural, social, and built environments. The scientific philosophy of human ecology has a diffuse history with connections to geography, sociology, psychology, anthropology, zoology, and natural ecology. In the study of literary systems, network analysis has been applied by Anheier, Gerhards and Romo, De Nooy, Senekal, and Lotker to study various aspects of how literature functions. The basic premise is that polysystem theory, which has been around since the writings of Even-Zohar, can be integrated with network theory, and the relationships between different actors in the literary network (e.g. writers, critics, publishers, literary histories) can be mapped using visualization from SNA. Network research also studies formal and informal organizational relationships, organizational communication, economics, economic sociology, and other resource transfers. Social networks have also been used to examine how organizations interact with each other, characterizing the many informal connections that link executives together, as well as associations and connections between individual employees at different organizations. Many organizational social network studies focus on teams. Within team network studies, research assesses, for example, the predictors and outcomes of centrality and power, density and centralization of team instrumental and expressive ties, and the role of between-team networks. Intra-organizational networks have been found to affect organizational commitment, organizational identification, and interpersonal citizenship behaviour. Social capital is a form of economic and cultural capital in which social networks are central, transactions are marked by reciprocity, trust, and cooperation, and market agents produce goods and services not mainly for themselves, but for a common good. Social capital is split into three dimensions: the structural, the relational and the cognitive dimension. The structural dimension describes how partners interact with each other and which specific partners meet in a social network. Also, the structural dimension of social capital indicates the level of ties among organizations. This dimension is highly connected to the relational dimension, which refers to trustworthiness, norms, expectations and identifications of the bonds between partners. The relational dimension explains the nature of these ties, which is mainly illustrated by the level of trust accorded to the network of organizations. The cognitive dimension analyses the extent to which organizations share common goals and objectives as a result of their ties and interactions. Social capital is a sociological concept about the value of social relations and the role of cooperation and confidence in achieving positive outcomes. The term refers to the value one can get from their social ties. For example, newly arrived immigrants can make use of their social ties to established migrants to acquire jobs they may otherwise have trouble getting (e.g., because of unfamiliarity with the local language). A positive relationship exists between social capital and the intensity of social network use.
In a dynamic framework, higher activity in a network feeds into higher social capital, which itself encourages more activity. Another research cluster focuses on brand image and promotional strategy effectiveness, taking into account the impact of customer participation on sales and brand image. This is gauged through techniques such as sentiment analysis, which rely on mathematical areas of study such as data mining and analytics. This area of research produces vast numbers of commercial applications, as the main goal of any study is to understand consumer behaviour and drive sales. In many organizations, members tend to focus their activities inside their own groups, which stifles creativity and restricts opportunities. A player whose network bridges structural holes has an advantage in detecting and developing rewarding opportunities. Such a player can mobilize social capital by acting as a "broker" of information between two clusters that otherwise would not have been in contact, thus providing access to new ideas, opinions and opportunities. British philosopher and political economist John Stuart Mill writes, "it is hardly possible to overrate the value of placing human beings in contact with persons dissimilar to themselves.... Such communication [is] one of the primary sources of progress." Thus, a player with a network rich in structural holes can add value to an organization through new ideas and opportunities. This, in turn, helps an individual's career development and advancement. A social capital broker also reaps control benefits from being the facilitator of information flow between contacts. Full communication with exploratory mindsets and information exchange generated by dynamically alternating positions in a social network promotes creative and deep thinking. In the case of consulting firm Eden McCallum, the founders were able to advance their careers by bridging their connections with former big three consulting firm consultants and mid-size industry firms. By bridging structural holes and mobilizing social capital, players can advance their careers by executing new opportunities between contacts. There has been research that both substantiates and refutes the benefits of information brokerage. A study of high-tech Chinese firms by Zhixing Xiao found that the control benefits of structural holes are "dissonant to the dominant firm-wide spirit of cooperation and the information benefits cannot materialize due to the communal sharing values" of such organizations. However, this study only analyzed Chinese firms, which tend to have strong communal sharing values. Information and control benefits of structural holes are still valuable in firms that are not quite as inclusive and cooperative on the firm-wide level. In 2004, Ronald Burt studied 673 managers who ran the supply chain for one of America's largest electronics companies. He found that managers who often discussed issues with other groups were better paid, received more positive job evaluations and were more likely to be promoted. Thus, bridging structural holes can be beneficial to an organization, and in turn, to an individual's career. Computer networks combined with social networking software produce a new medium for social interaction. A relationship over a computerized social networking service can be characterized by content, direction, and strength. The content of a relation refers to the resource that is exchanged.
In a computer-mediated communication context, social pairs exchange different kinds of information, including sending a data file or a computer program as well as providing emotional support or arranging a meeting. With the rise of electronic commerce, information exchanged may also correspond to exchanges of money, goods or services in the "real" world. Social network analysis methods have become essential to examining these types of computer-mediated communication. In addition, the sheer size and the volatile nature of social media have given rise to new network metrics. A key concern with networks extracted from social media is the lack of robustness of network metrics given missing data. Following the pattern of homophily, ties between people are most likely to occur between nodes that are most similar to each other; likewise, under neighbourhood segregation, individuals are most likely to inhabit the same regional areas as other individuals who are like them. Therefore, social networks can be used as a tool to measure the degree of segregation or homophily within a population. Social networks can be used both to simulate the process of homophily and to measure the level of exposure of different groups to each other within a current social network of individuals in a certain area.
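That degree of homophily can be quantified directly. One standard measure, Newman's attribute assortativity coefficient, is available in networkx; the sketch below (assuming Python with networkx, and an invented "group" attribute on a toy graph) shows the computation.

```python
import networkx as nx

# Toy network: two groups with dense within-group ties and a single
# bridging tie between groups ("group" is an invented attribute).
G = nx.Graph()
G.add_edges_from([(0, 1), (1, 2), (0, 2),   # group A clique
                  (3, 4), (4, 5), (3, 5),   # group B clique
                  (2, 3)])                  # one bridge
for node in (0, 1, 2):
    G.nodes[node]["group"] = "A"
for node in (3, 4, 5):
    G.nodes[node]["group"] = "B"

# +1 means ties occur only within groups (perfect homophily/segregation),
# 0 means random mixing, and negative values indicate heterophily.
r = nx.attribute_assortativity_coefficient(G, "group")
print(f"attribute assortativity: {r:.2f}")
```

With six of the seven ties inside groups, the coefficient comes out strongly positive, matching the intuition that this toy network is highly segregated.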
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Crackdown_(video_game_series)] | [TOKENS: 3176] |
Contents Crackdown (video game series) Crackdown is a series of open world action-adventure video games created by David Jones and published by Xbox Game Studios. The series takes place in futuristic dystopian cities controlled and enforced by a law enforcement organization called the Agency. The games center on the Agency's supersoldiers, known as "Agents", as they fight threats including various criminal syndicates, a terrorist group known as "Cell", zombie-like monsters called "Freaks", and a powerful megacorporation known as TerraNova. Games of the series have been developed by various game developers, with the first game, Crackdown, completed by Realtime Worlds and released on February 20, 2007, and a sequel, Crackdown 2, developed by Ruffian Games and released on July 6, 2010. Both games were released for the Xbox 360. A third installment, Crackdown 3, developed by Sumo Digital, was released on February 15, 2019, for the Xbox One and Microsoft Windows. Critics praised the sandbox-style third-person shooters for the freedom to cause massive destruction in non-linear gameplay; the series garnered mostly positive reception and commercial success and became influential in the sandbox superhero genre. Titles Crackdown takes place in Pacific City, a dystopian metropolis that is suffering from an increase in crime rate. Criminal syndicates - namely Los Muertos, The Volk, and The Shai-Gen Corporation - have taken control of its three main territories, and they are armed with military-grade weapons that make it difficult for law enforcement to combat them. A secret organization known as the Agency took it upon itself to eliminate the city's organized crime using its wide resources and genetically modified human beings called Agents. The Agent successfully brought down each criminal syndicate, but it was later revealed that the Agency itself had supplied the criminals with weapons. It had planned for the city to descend into anarchy so that it could step in, stop the criminals, and be hailed as heroes when it took over. The sequel takes place 10 years after the events of the first game. While organized crime has been quelled in Pacific City, a terrorist group calling itself 'Cell' started a revolt to overthrow the Agency, which has taken control of the city. Cell's leader, former Agency scientist Catalina Thorne, released a deadly strain of the "Freak" virus that infected many citizens and turned them into mindless zombie-like monsters called Freaks. The Agency tried to stop this epidemic by building Project Sunburst, a weapon that used sunlight to destroy Freaks. However, Cell took control of Project Sunburst's generators before the Agency could use the weapon. This forced the Agency to send out its newest and better-equipped Agent to combat both the terrorists and the Freaks. The third game takes place a decade after Crackdown 2, and features a new city called New Providence, a new supporting cast, and Agents with actual names and personalities, including Commander Isaiah Jaxon (played by Terry Crews). Crackdown 3 sees the Agency faced with a new, more technologically advanced and capitalistic adversary called TerraNova, which had taken over New Providence. The Agency's first foray into the city ended in near-disaster after TerraNova anticipated their arrival with an ambush. The surviving Agents were then forced to ally with a rebel group known as the Militia, specifically its tough young member Echo.
A combined assault by the Agency and the Militia led to a grand final battle involving the leader of TerraNova, Elizabeth Niemand, piloting a giant mechanized dragon. Development With the intent of going beyond the sandbox gameplay made popular by Grand Theft Auto III, developer Realtime Worlds spent time with various testers, as well as former developers from the Grand Theft Auto series, experimenting with and refining the genre through the use of additional content, items and rewards. Creator David Jones described the concept of the game as "how do we reward somebody for just having fun?" Crackdown was released on February 20, 2007, for the Xbox 360 console. The game was originally in development for the Xbox console in 2002, but Microsoft suggested in 2004 that Realtime Worlds release it for the then-upcoming Xbox 360. A demo was showcased at the 2006 E3 Convention. Due to waning interest among player testers during the game's late development, Microsoft decided to release it with access codes to the Halo 3 multiplayer beta to help its sales at release. Although Realtime Worlds confirmed that they would create a series to follow the success of the first Crackdown, delays with budgeting between Microsoft and Realtime resulted in the developer cancelling the sequel. Microsoft still owned the intellectual property of Crackdown, and they hired fellow Scottish development company Ruffian Games - who had members with previous experience developing the first game at Realtime Worlds - to make the sequel. A trailer for Crackdown 2 was released at the 2009 E3 Conference. A third game was developed by Sumo Digital, with direction from the original game's creator David Jones and assistance from previous developer Ruffian Games. The game was revealed as Crackdown 3 during Microsoft's Gamescom 2015 press conference on August 4, 2015. The game was originally set to be released for Xbox One and Microsoft Windows on November 7, 2017, but was delayed to 2019. Creative director Ken Lobb asserted that the game would be set in the future of the first game but would represent an alternate timeline from what Crackdown 2 provided, though this was changed prior to release. One of the major elements advertised by David Jones for Crackdown 3 was the inclusion of a near-fully destructible multiplayer map mode known as the Wrecking Zone. Half-way into development, however, Jones departed the project for Epic Games, pulling out said element and using it for the game Fortnite, much to the dismay of fans and other interested consumers. Besides Jones, original development teams Cloudgine and Reagent Games also left the game. These controversial changes hounded Crackdown 3 and resulted in multiple delays from an original 2016 release, to 2017, again to 2018, and a final release in 2019. Many critics commented on this abandonment and missed opportunity, including editor Alex Donaldson of gaming site VG247, who stated, "It's a neutered delivery of the promises made far too early in this console generation when, for a moment, it looked like it could be something truly revolutionary." Common elements In each game, players assume the role of super-powered law enforcers called Agents who protect the city with the use of high-tech vehicles and weapons. Players can choose different races and armors for their Agents. Using a third-person camera, the Agent can dispatch enemies by shooting them with firearms, blowing them up with explosives, or engaging in melee combat.
Being a genetically enhanced human being gives the Agent various skills, namely "Strength" (punching and lifting power), "Agility" (jumping and movement speed), "Driving" (handling vehicles), "Explosives" (creating explosions), and "Firearms" (shooting ability). These skills can be upgraded by collecting specific orbs or killing enemies. Agents are also covered in high-tech armors with rechargeable shields, which also evolve and unlock additional abilities such as shockwave creation and flight. The player can also enjoy various minigames such as on-foot and vehicle racing as well as street and aerial stunts. Multiplayer is also available in every game and uses the same gameplay elements as single-player. The first Crackdown game offered players cooperative gameplay of up to 2 players. The second Crackdown game improved the co-op mode to accommodate 4 players while also adding new modes such as Rocket Tag, Vehicle Tag, Capture the Orb, Deathmatch, and Team Deathmatch. Crackdown 3 further expanded the series' co-op by allowing over 8 players to participate. The series is known for its artistic use of cel-shaded visuals together with its rich color palettes, stylized ambience, and crisp, strong real-time shadows. Developer Realtime Worlds was heavily influenced by comic books in creating the first Crackdown game and used highlighted ink-like outlines to give it a comic feel. The game was also praised for its use of large draw distances that were seldom seen in other open-world games of its time. Crackdown 2 had a more dilapidated and post-apocalyptic setting but still used the same engine. Ruffian Games used a more advanced crowd system, which allowed more NPCs in the game without affecting its flow. Ruffian further extended Crackdown's draw distances, reworking the engine's rendering to display a larger vista of Pacific City. The third installment's New Providence setting offered a more neon-lit futuristic sandbox environment compared to the previous two. Each game's soundtrack is made up of licensed music from a variety of commercial, independent, and video game musicians. Crackdown's music supervisor Peter Davenport was allowed by Microsoft to select music from any source for the game. Deciding to give it an electronic "dark and ominous" vibe, he selected music from Amon Tobin, Atlas Plug, Celldweller, and Hybrid to fit each mission and premise. In Crackdown 2, music from Public Enemy, Bob Dylan, Johnny Cash, R.E.M., and Whodini was used to give the game a rebellious feel. The music for the third installment was created by Kristofor Mellroth, the Head of Audio for Microsoft Studios Global Publishing, together with composers Brian Trifon and Brian Lee White of the production group Finishing Move. Their hip-hop-inspired work included composing interactive music for the open-world setting, as well as detailed audio physics and mixing strategies for dialogue and sound effects. The third game also featured a Dolby Atmos soundtrack. Other media A webcomic titled the "Pacific City Archives" was also released by Microsoft to accompany the worldwide release of Crackdown 2. Running over 5 episodes, the webcomic series bridged the gap between the first and second Crackdown games by expanding character backstory and game lore. The Agent is also an unlockable character in the Xbox Live Arcade game Perfect Dark. Dynamite Comics released a four-issue comic book tie-in to Crackdown 3, simply entitled Crackdown, in May 2019.
The series was written by Jonathan Goff and drawn by Ricardo Jaime. It tells the story of several Agents sent to pacify a city riddled with crime and violence, with the team losing members in every issue before ultimately meeting its objectives. Besides comics, the series has also been featured or mentioned in several books. The games were cited in the non-fiction book "The Post-9/11 Video Game: A Critical Examination" by Marc A. Ouellette and Jason C. Thompson. The series was also mentioned in another non-fiction, video-game-themed book entitled "Games' Most Wanted: The Top 10 Book of Players, Pawns, and Power-ups" by Ben H. Rome. Legacy The first game was both a critical and commercial success, becoming the top-selling game for the week of February 22, 2007, its first week of release in North America, Japan, and the UK. The game was the top-selling game in North America for the month of February 2007, selling 427,000 units; by the end of 2007, it had sold 1.5 million copies worldwide. The game also won numerous awards, such as "Best Action and Adventure Game" and "Best Use of Audio" at the 2007 BAFTA Video Games Awards, the "Best Debut" award at the 2008 Game Developers Choice Awards, and the Innovation Award at the 2007 Develop Magazine Awards. Game Informer listed it as one of the top 50 games of 2007, citing its unique experience among other elements, while also listing the Agents as the number eight "Top Heroes of 2007" and climbing the tallest building in the city as the number nine "Top Moment of 2007." Various video game websites have considered the Crackdown games among the best open-world video games to date. Ron Whitaker from The Escapist included it in the site's "8 Awesome Open World Games" list, stating that "open world games have improved a lot since then, but Crackdown is still a stellar example of the genre." Games journalist James Alexander Callum from Pixel Bedlam dubbed Crackdown one of the most underrated video games of all time, adding that the game was "more than just a Grand Theft Auto clone on steroids." On the other hand, Ritwik Mitra of Game Rant ranked Crackdown 2 at #7 in his "Best Open-World Games Where Players Don't Need To Think Too Much" list, considering it "the best game in the series". The games left a large impact on the open-world genre. James Hunt of Den of Geek described the first Crackdown game as "the first in a line of original, postmodern superhero creations on games consoles, and great fun to boot." Its formula of controlling super-powered beings in a massive sandbox environment, using their abilities to cause mayhem and destruction, and collecting orbs in an open world to increase a character's abilities, has influenced other video game series such as Infamous, Prototype, Saints Row, The Saboteur, and Just Cause 2. Keiichiro Toyama cited Crackdown as a big influence in developing his award-winning game Gravity Rush, stating that he "really liked the aspect of unlocking skills and becoming more powerful, and achieving a higher level of freedom as you become more powerful". While the first game was highly praised for its innovation, Crackdown 2 and Crackdown 3 were considered to be among the most disappointing sequels in video game history. James Stephanie Sterling, during their time at Destructoid, reviewed the second game and called it one of "the most pointless, unnecessary, and insulting 'sequels' ever created."
John Almond from Gonevis ranked Crackdown 2 at #5 in his list of the most disappointing video game sequels, stating, "[It] was panned by critics and consumers because of missing gameplay features from the first game like transforming cars, strategy-building in taking out targets, unique weapons like invisibility, and simply being able to aim a sniper rifle through a scope. Making it worse was the reusage of the first game's engine and setting, making Crackdown 2 feel more like an expansion pack than a sequel." The third game was also met with disappointment. Writer Super Philip added the third installment to his own "Most Disappointing Video Game Sequels" list, describing its release as a "miracle" and adding, "[It] was in the making for so long and the end result is so similar to past games. What was deemed fresh and modern back when Crackdown originally released isn't so much in the present, over a decade later." Christopher Byrd of The Washington Post called it a "remnant from another console-era, a time in which open-world games were still a novelty". He further stated, "Arranged about the neon city of New Providence, where the game is set, are communication towers to scale, many enemy operations to assault, an unmemorable pack of bosses to kill and some side activities to participate in. None of these activities are particularly different than those found in any number of games." References |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Milky_Way_galaxy] | [TOKENS: 11693] |
Contents Milky Way The Milky Way or Milky Way Galaxy[c] is the galaxy that includes the Solar System, with the name describing the galaxy's appearance from Earth: a hazy band of light seen in the night sky formed from stars in other arms of the galaxy, which are so far away that they cannot be individually distinguished by the naked eye. The Milky Way is a barred spiral galaxy with a D25 isophotal diameter estimated at 26.8 ± 1.1 kiloparsecs (87,400 ± 3,600 light-years), but only about 1,000 light-years thick at the spiral arms (more at the bar). Recent simulations suggest that a dark matter area, also containing some visible stars, may extend up to a diameter of almost 2 million light-years (613 kpc). The Milky Way has several satellite galaxies and is part of the Local Group of galaxies, forming part of the Virgo Supercluster which is itself a component of the Laniakea Supercluster. It is estimated to contain 100–400 billion stars and at least that number of planets. The Solar System is located at a radius of about 27,000 light-years (8.3 kpc) from the Galactic Center, on the inner edge of the Orion Arm, one of the spiral-shaped concentrations of gas and dust. The stars in the innermost 10,000 light-years form a bulge and one or more bars that radiate from the bulge. The Galactic Center is an intense radio source known as Sagittarius A*, a supermassive black hole of 4.100 (± 0.034) million solar masses. The oldest stars in the Milky Way are nearly as old as the universe itself and thus probably formed shortly after the Dark Ages of the Big Bang. Galileo Galilei first resolved the band of light into individual stars with his telescope in 1610. Until the early 1920s, most astronomers thought that the Milky Way contained all the stars in the universe. Following the 1920 Great Debate between the astronomers Harlow Shapley and Heber Doust Curtis, observations by Edwin Hubble in 1923 showed that the Milky Way was just one of many galaxies. Mythology In the Babylonian epic poem Enūma Eliš, the Milky Way is created from the severed tail of the primeval salt water dragon Tiamat, set in the sky by Marduk, the Babylonian national god, after slaying her. This story was once thought to have been based on an older Sumerian version in which Tiamat is instead slain by Enlil of Nippur, but is now thought to be purely an invention of Babylonian propagandists with the intention of showing Marduk as superior to the Sumerian deities. Etymology In Greek mythology, Zeus places Heracles, his infant son born to Alcmene, on Hera's breast while she is asleep so the baby will drink her divine milk and become immortal. Hera wakes up while breastfeeding and then realizes she is nursing an unknown baby: she pushes the baby away, some of her milk spills, and it produces the band of light known as the Milky Way. In another Greek story, the abandoned Heracles is given by Athena to Hera for feeding, but Heracles' forcefulness causes Hera to rip him from her breast in pain. In Western culture, the name "Milky Way" is derived from its appearance as a dim unresolved "milky" glowing band arching across the night sky. The term is a translation of the Classical Latin via lactea, in turn derived from the Hellenistic Greek γαλαξίας, short for γαλαξίας κύκλος (galaxías kýklos), meaning "milky circle". The Ancient Greek γαλαξίας (galaxias) – from root γαλακτ-, γάλα ("milk") + -ίας (forming adjectives) – is also the root of "galaxy", the name for our, and later all such, collections of stars. 
The Milky Way, or "milk circle", was just one of 11 "circles" the Greeks identified in the sky, others being the zodiac, the meridian, the horizon, the equator, the tropics of Cancer and Capricorn, the Arctic Circle and the Antarctic Circle, and two colure circles passing through both poles. The English term can be traced back to a story by Geoffrey Chaucer c. 1380: See yonder, lo, the Galaxyë Which men clepeth the Milky Wey, For hit is whyt: and somme, parfey, — The House of Fame Appearance The Milky Way is visible as a hazy band of white light, some 30° wide, arching in the night sky. Although all the individual naked-eye stars in the entire sky are part of the Milky Way Galaxy, the term "Milky Way" is limited to this band of light. The light originates from the accumulation of unresolved stars and other material located in the direction of the galactic plane. Brighter regions around the band appear as soft visual patches known as star clouds. The most conspicuous of these is the Large Sagittarius Star Cloud, a portion of the central bulge of the galaxy. Dark regions within the band, such as the Great Rift and the Coalsack, are areas where interstellar dust blocks light from distant stars. Peoples of the southern hemisphere, including the Inca and Australian Aboriginals, identified these regions as dark cloud constellations. The area of sky that the Milky Way obscures is called the Zone of Avoidance. The Milky Way has a relatively low surface brightness. Its visibility can be greatly reduced by background light, such as light pollution or moonlight. The sky needs to be darker than about 20.2 magnitude per square arcsecond in order for the Milky Way to be visible. It should be visible if the limiting magnitude is approximately +5.1 or better and shows a great deal of detail at +6.1. This makes the Milky Way difficult to see from brightly lit urban or suburban areas, but very prominent when viewed from rural areas when the Moon is below the horizon.[d] Maps of artificial night sky brightness show that more than one-third of Earth's population cannot see the Milky Way from their homes due to light pollution. As viewed from Earth, the visible region of the Milky Way's galactic plane occupies an area of the sky that includes 30 constellations.[e] The Galactic Center lies in the direction of Sagittarius, where the Milky Way is brightest. From Sagittarius, the hazy band of white light appears to pass around to the galactic anticenter in Auriga. The band then continues the rest of the way around the sky, back to Sagittarius, dividing the sky into two roughly equal hemispheres. The galactic plane is inclined by about 60° to the ecliptic (the path of the Sun in the sky). It is tilted at an angle of 63° to the celestial equator. Astronomical history In Meteorologica, Aristotle (384–322 BC) states that the Greek philosophers Anaxagoras (c. 500–428 BC) and Democritus (460–370 BC) proposed that the Milky Way is the glow of stars not directly visible due to Earth's shadow, while other stars receive their light from the Sun, but have their glow obscured by solar rays. Aristotle himself believed that the Milky Way was part of the Earth's upper atmosphere, along with the stars, and that it was a byproduct of stars burning that did not dissipate because of its outermost location in the atmosphere, composing its great circle. He said that the milky appearance of the Milky Way Galaxy is due to the refraction of the Earth's atmosphere. The Neoplatonist philosopher Olympiodorus the Younger (c. 
495–570 AD) criticized this view, arguing that if the Milky Way were sublunary, it should appear different at different times and places on Earth, and that it should have parallax, which it does not. In his view, the Milky Way is celestial. This idea would be influential later in the Muslim world. The Persian astronomer Al-Biruni (973–1048) proposed that the Milky Way is "a collection of countless fragments of the nature of nebulous stars". The Andalusian astronomer Avempace (died 1138) proposed that the Milky Way was made up of many stars but appeared to be a continuous image in the Earth's atmosphere, citing his observation of a conjunction of Jupiter and Mars in 1106 or 1107 as evidence. The Persian astronomer Nasir al-Din al-Tusi (1201–1274) in his Tadhkira wrote: "The Milky Way, i.e. the Galaxy, is made up of a very large number of small, tightly clustered stars, which, on account of their concentration and smallness, seem to be cloudy patches. Because of this, it was likened to milk in color." Ibn Qayyim al-Jawziyya (1292–1350) proposed that the Milky Way is "a myriad of tiny stars packed together in the sphere of the fixed stars". Proof of the Milky Way consisting of many stars came in 1610 when Galileo Galilei used a telescope to study the Milky Way and discovered that it was composed of a huge number of faint stars, a finding that refuted the notion that its milky appearance was an atmospheric effect of refraction. In a treatise in 1755, Immanuel Kant, drawing on earlier work by Thomas Wright, speculated (correctly) that the Milky Way might be a rotating body of a huge number of stars, held together by gravitational forces akin to the Solar System but on much larger scales. The resulting disk of stars would be seen as a band in the sky from our perspective inside the disk. Wright and Kant also conjectured that some of the nebulae visible in the night sky might be separate "galaxies" themselves, similar to our own. Kant referred to both the Milky Way and the "extragalactic nebulae" as "island universes", a term still current up to the 1930s. The first attempt to describe the shape of the Milky Way and the position of the Sun within it was carried out by William Herschel in 1785 by carefully counting the number of stars in different regions of the visible sky. He produced a diagram of the shape of the Milky Way with the Solar System close to the center. In 1845, Lord Rosse constructed a new telescope and was able to distinguish between elliptical and spiral-shaped nebulae. He also managed to make out individual point sources in some of these nebulae, lending credence to Kant's earlier conjecture. In 1904, studying the proper motions of stars, Jacobus Kapteyn reported that these were not random, as had been believed at the time; stars could be divided into two streams, moving in nearly opposite directions. It was later realized that Kapteyn's data had been the first evidence of the rotation of the Milky Way, which ultimately led to the finding of galactic rotation by Bertil Lindblad and Jan Oort. In 1917, Heber Doust Curtis observed the nova S Andromedae within the Great Andromeda Nebula (Messier object 31). Searching the photographic record, he found 11 more novae. Curtis noticed that these novae were, on average, 10 magnitudes fainter than those that occurred within the Milky Way. As a result, he was able to come up with a distance estimate of 150,000 parsecs.
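The jump from "10 magnitudes fainter" to Curtis's distance estimate follows from the inverse-square law written as a distance modulus:
\[ \Delta m = 5\log_{10}\frac{d_{2}}{d_{1}} = 10 \;\Rightarrow\; \frac{d_{2}}{d_{1}} = 10^{10/5} = 100, \]
so the Andromeda novae had to be about 100 times more distant than comparable novae in the Milky Way; a representative Galactic nova distance of order 1,500 pc (an illustrative figure assumed here, not one given in the text) then scales up to his 150,000 parsecs.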
He became a proponent of the "island universes" hypothesis, which held that the spiral nebulae were independent galaxies. In 1920, the Great Debate took place between Harlow Shapley and Heber Curtis, concerning the nature of the Milky Way, spiral nebulae, and the dimensions of the Universe. To support his claim that the Great Andromeda Nebula is an external galaxy, Curtis noted the appearance of dark lanes resembling the dust clouds in the Milky Way, as well as the significant Doppler shift. The controversy was conclusively settled by Edwin Hubble in the early 1920s using the Mount Wilson Observatory's 2.5 m (100 in) Hooker telescope. With the light-gathering power of this new telescope, he was able to produce astronomical photographs that resolved the outer parts of some spiral nebulae as collections of individual stars. He was also able to identify some Cepheid variables that he could use as a benchmark to estimate the distance to the nebulae. He found that the Andromeda Nebula is 275,000 parsecs from the Sun, far too distant to be part of the Milky Way. The ESA spacecraft Gaia provides distance estimates by determining the parallax of a billion stars and is mapping the Milky Way. Data from Gaia has been described as "transformational". It has been estimated that Gaia has expanded the number of observations of stars from about 2 million stars as of the 1990s to 2 billion. It has expanded the measurable volume of space by a factor of 100 in radius and a factor of 1,000 in precision. A study in 2020 concluded that Gaia detected a wobbling motion of the galaxy, which might be caused by "torques from a misalignment of the disc's rotation axis with respect to the principal axis of a non-spherical halo, or from accreted matter in the halo acquired during late infall, or from nearby, interacting satellite galaxies and their consequent tides". In April 2024, initial studies and related maps of the Milky Way's magnetic fields were reported. Astrography The Sun is near the inner rim of the Orion Arm, within the Local Fluff of the Local Bubble, between the Radcliffe wave and Split linear structures (formerly Gould Belt). Based upon studies of stellar orbits around Sgr A* by Gillessen et al. (2016), the Sun lies at an estimated distance of 27.14 ± 0.46 kly (8.32 ± 0.14 kpc) from the Galactic Center. Boehle et al. (2016) found a smaller value of 25.64 ± 0.46 kly (7.86 ± 0.14 kpc), also using a star orbit analysis. The Sun is currently 5–30 parsecs (16–98 ly) above, or north of, the central plane of the Galactic disk. The distance between the local arm and the next arm out, the Perseus Arm, is about 2,000 parsecs (6,500 ly). The Sun, and thus the Solar System, is located in the Milky Way's galactic habitable zone. There are about 208 stars brighter than absolute magnitude 8.5 within a sphere with a radius of 15 parsecs (49 ly) from the Sun, giving a density of one star per 69 cubic parsecs, or one star per 2,360 cubic light-years (from List of nearest bright stars). On the other hand, there are 64 known stars (of any magnitude, not counting 4 brown dwarfs) within 5 parsecs (16 ly) of the Sun, giving a density of about one star per 8.2 cubic parsecs, or one per 284 cubic light-years (from List of nearest stars). This illustrates the fact that there are far more faint stars than bright stars: in the entire sky, there are about 500 stars brighter than apparent magnitude 4 but 15.5 million stars brighter than apparent magnitude 14.
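As a back-of-envelope check of the local density figures above (assuming uniform spherical volumes):
\[ V = \tfrac{4}{3}\pi r^{3},\qquad \frac{V(15\ \mathrm{pc})}{208} = \frac{1.41\times10^{4}\ \mathrm{pc}^{3}}{208} \approx 68\ \mathrm{pc}^{3}\ \mathrm{per\ star},\qquad \frac{V(5\ \mathrm{pc})}{64} = \frac{524\ \mathrm{pc}^{3}}{64} \approx 8.2\ \mathrm{pc}^{3}\ \mathrm{per\ star}, \]
matching the quoted values to rounding; since 1 pc ≈ 3.26 ly, 1 pc³ ≈ 34.7 ly³, which converts 68 pc³ into the quoted ≈ 2,360 cubic light-years.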
The apex of the Sun's way, or the solar apex, is the direction that the Sun travels through the Local standard of rest in the Milky Way. The general direction of the Sun's Galactic motion is towards the star Vega near the constellation of Hercules, at an angle of roughly 60 sky degrees to the direction of the Galactic Center. The Sun's orbit about the Milky Way is expected to be roughly elliptical with the addition of perturbations due to the Galactic spiral arms and non-uniform mass distributions. In addition, the Sun passes through the Galactic plane approximately 2.7 times per orbit. This is very similar to how a simple harmonic oscillator works with no drag force (damping) term. These oscillations were until recently thought to coincide with mass extinction periods on Earth. A reanalysis of the effects of the Sun's transit through the spiral structure based on CO data has failed to find a correlation. It takes the Solar System about 240 million years to complete one orbit of the Milky Way (a galactic year), so the Sun is thought to have completed 18–20 orbits during its lifetime and 1/1250 of a revolution since the origin of humans. The orbital speed of the Solar System about the center of the Milky Way is approximately 220 km/s (490,000 mph) or 0.073% of the speed of light. At this speed, it takes around 1,400 years for the Solar System to travel a distance of 1 light-year, or 8 days to travel 1 AU (astronomical unit). The Sun moves through the local interstellar medium at 84,000 km/h (52,000 mph), with the Solar System headed in the direction of the zodiacal constellation Scorpius, which follows the ecliptic. A galactic quadrant, or quadrant of the Milky Way, refers to one of four circular sectors in the division of the Milky Way. In astronomical practice, the delineation of the galactic quadrants is based upon the galactic coordinate system, which places the Sun as the origin of the mapping system. Quadrants are described using ordinals – for example, "1st galactic quadrant", "second galactic quadrant", or "third quadrant of the Milky Way". Viewing from the north galactic pole with 0° (zero degrees) as the ray that runs from the Sun through the Galactic Center, the quadrants are defined with the galactic longitude (ℓ) increasing in the counter-clockwise direction (positive rotation) as viewed from north of the Galactic Center (a view-point several hundred thousand light-years distant from Earth in the direction of the constellation Coma Berenices); if viewed from south of the Galactic Center (a view-point similarly distant in the constellation Sculptor), ℓ would increase in the clockwise direction (negative rotation). General characteristics The Milky Way is one of the two largest galaxies in the Local Group (the other being the Andromeda Galaxy), although the size of its galactic disc, and how well that size is captured by the isophotal diameter, is not well understood. It is estimated that the significant bulk of stars in the galaxy lies within the 26 kiloparsec (80,000 light-year) diameter, and that the number of stars beyond the outermost disc drops off dramatically relative to an extrapolation of the exponential disk with the scale length of the inner disc. Several methods are used in astronomy to define the size of a galaxy, and each of them can yield different results with respect to one another.
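The roughly 240-million-year galactic year quoted above follows directly from the Sun's orbital radius and speed, assuming a circular orbit (a simplification, since the actual orbit is perturbed):
\[ T = \frac{2\pi R}{v} = \frac{2\pi \times (8.3\ \mathrm{kpc})\times(3.086\times10^{16}\ \mathrm{km/kpc})}{220\ \mathrm{km/s}} \approx 7.3\times10^{15}\ \mathrm{s} \approx 230\ \mathrm{million\ years}. \]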
The most commonly employed method is the D25 standard – the isophote where the photometric brightness of a galaxy in the B-band (445 nm wavelength of light, in the blue part of the visible spectrum) reaches 25 mag/arcsec². An estimate from 1997 by Goodwin and others compared the distribution of Cepheid variable stars in 17 other spiral galaxies to those in the Milky Way, modelling the relationship to their surface brightnesses. This gave an isophotal diameter for the Milky Way of 26.8 ± 1.1 kiloparsecs (87,400 ± 3,600 light-years), assuming that the galactic disc is well represented by an exponential disc and adopting a central surface brightness of the galaxy (μ0) of 22.1 ± 0.3 B-mag/arcsec² and a disk scale length (h) of 5.0 ± 0.5 kpc (16,300 ± 1,600 ly). This is significantly smaller than the Andromeda Galaxy's isophotal diameter, and slightly below the mean isophotal size of the sampled galaxies, 28.3 kpc (92,000 ly). The paper concluded that the Milky Way and the Andromeda Galaxy are not overly large spiral galaxies, nor among the largest known, as previously widely believed, but rather average, ordinary spiral galaxies. To convey the relative physical scale of the Milky Way: if the Solar System out to Neptune were the size of a US quarter (24.3 mm (0.955 in)), the Milky Way would be approximately the size of the contiguous United States. An even older study from 1978 gave a lower diameter for the Milky Way of about 23 kpc (75,000 ly). A 2015 paper reported a ring-like filament of stars called the Triangulum–Andromeda Ring (TriAnd Ring) rippling above and below the relatively flat galactic plane; this and the Monoceros Ring were both suggested to be primarily the result of disk oscillations, wrapping around the Milky Way at a diameter of at least 50 kpc (160,000 ly), and may be part of the Milky Way's outer disk itself, which would enlarge the stellar disk to this size. A 2018 paper later somewhat ruled out this hypothesis, supporting the conclusion that the Monoceros Ring, A13, and the TriAnd Ring are stellar overdensities belonging to the halo rather than material kicked out from the main stellar disk, as the velocity dispersion of their RR Lyrae stars was found to be higher and consistent with halo membership. Another 2018 study revealed the very probable presence of disk stars at 26–31.5 kpc (84,800–103,000 ly) from the Galactic Center, or perhaps even farther, significantly beyond the approximately 13–20 kpc (40,000–70,000 ly) at which the stellar density of the disk was once believed to drop off abruptly, meaning that few or no stars had been expected beyond that limit, save for stars belonging to the old population of the galactic halo. A 2020 study predicted the edge of the Milky Way's dark matter halo to lie around 292 ± 61 kpc (952,000 ± 199,000 ly) from the center, which translates to a diameter of 584 ± 122 kpc (1.905 ± 0.398 Mly). The Milky Way's stellar disk is also estimated to be up to approximately 1.35 kpc (4,400 ly) thick. The Milky Way is approximately 0.88 trillion times the mass of the Sun in total (8.8×10¹¹ solar masses), using a cutoff of 200 kpc to define the galaxy. Estimates of the mass of the Milky Way vary, depending upon the method and data used. The low end of the estimate range is 5.8×10¹¹ solar masses (M☉), somewhat less than that of the Andromeda Galaxy.
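The Goodwin et al. isophotal diameter above can be roughly reconstructed from the quoted disc parameters, assuming a pure exponential surface-brightness profile (the factor 1.0857 = 2.5/ln 10 converts e-foldings into magnitudes):
\[ \mu(R) = \mu_{0} + 1.0857\,\frac{R}{h} = 25 \;\Rightarrow\; R_{25} = \frac{25 - 22.1}{1.0857}\times 5.0\ \mathrm{kpc} \approx 13.4\ \mathrm{kpc}, \]
giving D25 ≈ 26.7 kpc, in line with the quoted 26.8 ± 1.1 kpc.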
Measurements using the Very Long Baseline Array in 2009 found velocities as large as 254 km/s (570,000 mph) for stars at the outer edge of the Milky Way. Because the orbital velocity depends on the total mass inside the orbital radius, this suggests that the Milky Way is more massive, roughly equaling the mass of the Andromeda Galaxy at 7×10¹¹ M☉ within 160,000 ly (49 kpc) of its center. In 2010, a measurement of the radial velocity of halo stars found that the mass enclosed within 80 kiloparsecs is 7×10¹¹ M☉. In a 2014 study, the mass of the entire Milky Way was estimated to be 8.5×10¹¹ M☉, only about half the mass of the Andromeda Galaxy. A more recent 2019 mass estimate for the Milky Way is 1.29×10¹² M☉. Much of the mass of the Milky Way seems to be dark matter, an unknown and invisible form of matter that interacts gravitationally with ordinary matter. A dark matter halo is conjectured to spread out relatively uniformly to a distance beyond one hundred kiloparsecs (kpc) from the Galactic Center. Mathematical models of the Milky Way suggest that the mass of dark matter is 1–1.5×10¹² M☉. Studies from 2013 and 2014 indicate a range in mass, as large as 4.5×10¹² M☉ and as small as 8×10¹¹ M☉. By comparison, the total mass of all the stars in the Milky Way is estimated to be between 4.6×10¹⁰ M☉ and 6.43×10¹⁰ M☉. In addition to the stars, there is also interstellar gas, comprising 90% hydrogen and 10% helium by mass, with two-thirds of the hydrogen found in the atomic form and the remaining one-third as molecular hydrogen. The mass of the Milky Way's interstellar gas is equal to between 10% and 15% of the total mass of its stars. Interstellar dust accounts for an additional 1% of the total mass of the gas. In March 2019, astronomers reported that the virial mass of the Milky Way Galaxy is 1.54×10¹² solar masses within a radius of about 39.5 kpc (130,000 ly), over twice as much as was determined in earlier studies, suggesting that about 90% of the mass of the galaxy is dark matter. In September 2023, however, astronomers reported a virial mass of only 2.06×10¹¹ solar masses, about a tenth of earlier values; this mass was determined from data of the Gaia spacecraft. The stars and gas in the Milky Way rotate about its center differentially, meaning that the rotation period varies with location. As is typical for spiral galaxies, the orbital speed of most stars in the Milky Way does not depend strongly on their distance from the center. Away from the central bulge or outer rim, the typical stellar orbital speed is between 200 and 220 km/s. Hence the orbital period of the typical star is approximately proportional to the length of the path traveled. This is unlike the situation in the Solar System, where two-body gravitational dynamics dominate, and different orbits have significantly different velocities associated with them. The galaxy's rotation curve describes this rotation. If the Milky Way contained only the mass observed in stars, gas, and other baryonic (ordinary) matter, the rotational speed would decrease with distance from the center. However, the observed curve is relatively flat, indicating that there is additional mass that cannot be detected directly with electromagnetic radiation. This inconsistency is attributed to dark matter. The rotation curve of the Milky Way agrees with the universal rotation curve of spiral galaxies, the best evidence for the existence of dark matter in galaxies.
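The link between orbital velocity and enclosed mass invoked above comes from equating gravitational and centripetal acceleration; applying it to the VLBA figures (a rough spherical-mass approximation):
\[ M(<r) = \frac{v^{2} r}{G} = \frac{(254\ \mathrm{km/s})^{2}\times(49\ \mathrm{kpc})}{G} \approx 7\times10^{11}\ M_{\odot}, \]
reproducing the quoted mass within 160,000 ly of the center.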
Alternatively, a minority of astronomers propose that a modification of the law of gravity may explain the observed rotation curve. Although special relativity states that there is no "preferred" inertial frame of reference in space with which to compare the Milky Way, the Milky Way does have a velocity with respect to cosmological frames of reference. One such frame of reference is the Hubble flow, the apparent motions of galaxy clusters due to the expansion of space. Individual galaxies, including the Milky Way, have peculiar velocities relative to the average flow. Thus, to compare the Milky Way to the Hubble flow, one must consider a volume large enough so that the expansion of the Universe dominates over local, random motions. A large enough volume means that the mean motion of galaxies within this volume is equal to the Hubble flow. Astronomers believe the Milky Way is moving at approximately 630 km/s (1,400,000 mph) with respect to this local co-moving frame of reference. The Milky Way is moving in the general direction of the Great Attractor and other galaxy clusters, including the Shapley Supercluster, behind it. The Local Group, a cluster of gravitationally bound galaxies containing, among others, the Milky Way and the Andromeda Galaxy, is part of a supercluster called the Local Supercluster, centered near the Virgo Cluster: although they are moving away from each other at 967 km/s (2,160,000 mph) as part of the Hubble flow, this velocity is less than would be expected given the 16.8-million-parsec distance, due to the gravitational attraction between the Local Group and the Virgo Cluster. Another reference frame is provided by the cosmic microwave background (CMB), in which the CMB temperature is least distorted by Doppler shift (zero dipole moment). The Milky Way is moving at 552 ± 6 km/s (1,235,000 ± 13,000 mph) with respect to this frame, toward 10.5h right ascension, −24° declination (J2000 epoch, near the center of Hydra). This motion is observed by satellites such as the Cosmic Background Explorer (COBE) and the Wilkinson Microwave Anisotropy Probe (WMAP) as a dipole contribution to the CMB, as photons in equilibrium in the CMB frame get blue-shifted in the direction of the motion and red-shifted in the opposite direction. Contents The Milky Way contains between 100 and 400 billion stars and at least that many planets. An exact figure would depend on counting the number of very-low-mass stars, which are difficult to detect, especially at distances of more than 300 ly (90 pc) from the Sun. As a comparison, the neighboring Andromeda Galaxy contains an estimated one trillion (10¹²) stars. The Milky Way may contain ten billion white dwarfs, a billion neutron stars, and a hundred million stellar black holes.[f] Filling the space between the stars is a disk of gas and dust called the interstellar medium. This disk has at least a comparable extent in radius to the stars, whereas the thickness of the gas layer ranges from hundreds of light-years for the colder gas to thousands of light-years for the warmer gas. The disk of stars in the Milky Way does not have a sharp edge beyond which there are no stars. Rather, the concentration of stars decreases with distance from the center of the Milky Way. Beyond a radius of roughly 40,000 light years (13 kpc) from the center, the number of stars per cubic parsec drops much faster with radius.
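The dipole contribution mentioned above follows, to first order in v/c, from the Doppler shift of the CMB blackbody; taking the standard mean CMB temperature of 2.725 K (a value assumed here, not stated in the text):
\[ \frac{\Delta T}{T} \approx \frac{v}{c} \;\Rightarrow\; \Delta T \approx \frac{552\ \mathrm{km/s}}{2.998\times10^{5}\ \mathrm{km/s}}\times 2.725\ \mathrm{K} \approx 5\ \mathrm{mK}. \]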
Surrounding the galactic disk is a spherical galactic halo of stars and globular clusters that extends farther outward, but is limited in size by the orbits of two Milky Way satellites, the Large and Small Magellanic Clouds, whose closest approach to the Galactic Center is about 180,000 ly (55 kpc). At this distance or beyond, the orbits of most halo objects would be disrupted by the Magellanic Clouds. Hence, such objects would probably be ejected from the vicinity of the Milky Way. The integrated absolute visual magnitude of the Milky Way is estimated to be around −20.9.[g] Both gravitational microlensing and planetary transit observations indicate that there may be at least as many planets bound to stars as there are stars in the Milky Way, and microlensing measurements indicate that there are more rogue planets not bound to host stars than there are stars. The Milky Way contains an average of at least one planet per star, resulting in 100–400 billion planets, according to a January 2013 study of the five-planet star system Kepler-32 by the Kepler space observatory. A different January 2013 analysis of Kepler data estimated that at least 17 billion Earth-sized exoplanets reside in the Milky Way. In November 2013, astronomers reported, based on Kepler space mission data, that there could be as many as 40 billion Earth-sized planets orbiting in the habitable zones of Sun-like stars and red dwarfs within the Milky Way; 11 billion of these estimated planets may be orbiting Sun-like stars. The nearest exoplanet may be 4.2 light-years away, orbiting the red dwarf Proxima Centauri, according to a 2016 study. Such Earth-sized planets may be more numerous than gas giants, though harder to detect at great distances given their small size. Besides exoplanets, "exocomets", comets beyond the Solar System, have also been detected and may be common in the Milky Way. More recently, in November 2020, over 300 million habitable exoplanets were estimated to exist in the Milky Way Galaxy. Compared with other, more distant galaxies in the universe, the Milky Way has a below-average neutrino luminosity, making it a "neutrino desert". Structure The Milky Way consists of a bar-shaped core region surrounded by a warped disk of gas, dust and stars. The mass distribution within the Milky Way closely resembles the type Sbc in the Hubble classification, which represents spiral galaxies with relatively loosely wound arms. Astronomers first began to conjecture that the Milky Way is a barred spiral galaxy, rather than an ordinary spiral galaxy, in the 1960s. These conjectures were confirmed by Spitzer Space Telescope observations in 2005 that showed the Milky Way's central bar to be larger than previously thought. The Sun is 25,000–28,000 ly (7.7–8.6 kpc) from the Galactic Center. This value is estimated using geometric-based methods or by measuring selected astronomical objects that serve as standard candles, with different techniques yielding various values within this approximate range. In the inner few kiloparsecs (around 10,000 light-years radius) is a dense concentration of mostly old stars in a roughly spheroidal shape called the bulge. It has been proposed that the Milky Way lacks a bulge formed by a collision and merger between previous galaxies, and that instead it only has a pseudobulge formed by its central bar.
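The integrated absolute magnitude of about −20.9 quoted above can be translated into a visual luminosity, taking the Sun's absolute visual magnitude as +4.83 (a standard value assumed here, not given in the text):
\[ \frac{L_{V}}{L_{V,\odot}} = 10^{\,0.4\,(M_{V,\odot} - M_{V})} = 10^{\,0.4\,(4.83 + 20.9)} \approx 2\times10^{10}, \]
i.e., the Milky Way shines with the visual light of roughly twenty billion Suns.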
However, confusion in the literature between the (peanut shell)-shaped structure created by instabilities in the bar, versus a possible bulge with an expected half-light radius of 0.5 kpc, abounds. The Galactic Center is marked by an intense radio source named Sagittarius A* (pronounced Sagittarius A-star). The motion of material around the center indicates that Sagittarius A* harbors a massive, compact object. This concentration of mass is best explained as a supermassive black hole[h] (SMBH) with an estimated mass of 4.1–4.5 million times the mass of the Sun. The rate of accretion of the SMBH is consistent with an inactive galactic nucleus, being estimated at 1×10⁻⁵ M☉ per year. Observations indicate that there are SMBHs located near the center of most normal galaxies. The nature of the Milky Way's bar is actively debated, with estimates for its half-length and orientation spanning from 1 to 5 kpc (3,000–16,000 ly) and 10–50 degrees relative to the line of sight from Earth to the Galactic Center. Certain authors advocate that the Milky Way features two distinct bars, one nestled within the other. However, RR Lyrae-type stars do not trace a prominent Galactic bar. The bar may be surrounded by a ring called the "5 kpc ring" that contains a large fraction of the molecular hydrogen present in the Milky Way, as well as most of the Milky Way's star formation activity. Viewed from the Andromeda Galaxy, it would be the brightest feature of the Milky Way. X-ray emission from the core is aligned with the massive stars surrounding the central bar and the Galactic ridge. In June 2023, astronomers led by Naoko Kurahashi Neilson reported using a new cascade neutrino technique to detect, for the first time, the release of neutrinos from the galactic plane of the Milky Way galaxy, creating the first neutrino view of the Milky Way. Since 1970, various gamma-ray detection missions have discovered 511-keV gamma rays coming from the general direction of the Galactic Center. These gamma rays are produced by positrons (antielectrons) annihilating with electrons. In 2008, it was found that the distribution of the sources of the gamma rays resembles the distribution of low-mass X-ray binaries, seeming to indicate that these X-ray binaries are sending positrons (and electrons) into interstellar space, where they slow down and annihilate. The observations were made by both NASA and ESA satellites. In 1970, gamma-ray detectors found that the emitting region was about 10,000 light-years across with a luminosity of about 10,000 Suns. In 2010, two gigantic spherical bubbles of high-energy gamma emission were detected to the north and the south of the Milky Way core, using data from the Fermi Gamma-ray Space Telescope. The diameter of each of the bubbles is about 25,000 light-years (7.7 kpc) (or about 1/4 of the galaxy's estimated diameter); they stretch up to Grus and to Virgo in the night sky of the Southern Hemisphere. Subsequently, observations with the Parkes Telescope at radio frequencies identified polarized emission that is associated with the Fermi bubbles. These observations are best interpreted as a magnetized outflow driven by star formation in the central 640 ly (200 pc) of the Milky Way. Later, on January 5, 2015, NASA reported observing an X-ray flare 400 times brighter than usual, a record-breaker, from Sagittarius A*.
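The 511-keV line discussed above is simply the electron rest energy, which is why electron–positron annihilation of slow particles produces photon pairs at exactly this energy:
\[ E = m_{e}c^{2} = \frac{(9.109\times10^{-31}\ \mathrm{kg})\times(2.998\times10^{8}\ \mathrm{m/s})^{2}}{1.602\times10^{-16}\ \mathrm{J/keV}} \approx 511\ \mathrm{keV}. \]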
The unusual event may have been caused by the breaking apart of an asteroid falling into the black hole or by the entanglement of magnetic field lines within gas flowing into Sagittarius A*. Outside the gravitational influence of the Galactic bar, the structure of the interstellar medium and stars in the disk of the Milky Way is organized into four spiral arms. Spiral arms typically contain a higher density of interstellar gas and dust than the Galactic average as well as a greater concentration of star formation, as traced by H II regions and molecular clouds. The Milky Way's spiral structure is uncertain, and there is currently no consensus on the nature of the Milky Way's arms. Perfect logarithmic spiral patterns only crudely describe features near the Sun, because galaxies commonly have arms that branch, merge, twist unexpectedly, and feature a degree of irregularity. The possible scenario of the Sun lying within a spur (the Local Arm) emphasizes that point and indicates that such features are probably not unique, and exist elsewhere in the Milky Way. Estimates of the pitch angle of the arms range from about 7° to 25°. There are thought to be four spiral arms that all start near the Milky Way Galaxy's center. Two of these, the Scutum–Centaurus arm and the Carina–Sagittarius arm, have tangent points inside the Sun's orbit about the center of the Milky Way. If these arms contain an overdensity of stars compared to the average density of stars in the Galactic disk, it would be detectable by counting the stars near the tangent point. Two surveys of near-infrared light, which is sensitive primarily to red giants and not affected by dust extinction, detected the predicted overabundance in the Scutum–Centaurus arm but not in the Carina–Sagittarius arm: the Scutum–Centaurus Arm contains approximately 30% more red giants than would be expected in the absence of a spiral arm. This observation suggests that the Milky Way possesses only two major stellar arms: the Perseus arm and the Scutum–Centaurus arm. The rest of the arms contain excess gas but not excess old stars. In December 2013, astronomers found that the distribution of young stars and star-forming regions matches the four-arm spiral description of the Milky Way. Thus, the Milky Way appears to have two spiral arms as traced by old stars and four spiral arms as traced by gas and young stars. The explanation for this apparent discrepancy is unclear. The Near 3 kpc Arm (also called the Expanding 3 kpc Arm or simply the 3 kpc Arm) was discovered in the 1950s by astronomer van Woerden and collaborators through 21-centimeter radio measurements of H I (atomic hydrogen). It was found to be expanding away from the central bulge at more than 50 km/s. It is located in the fourth galactic quadrant at a distance of about 5.2 kpc from the Sun and 3.3 kpc from the Galactic Center. The Far 3 kpc Arm was discovered in 2008 by astronomer Tom Dame (Center for Astrophysics | Harvard & Smithsonian). It is located in the first galactic quadrant at a distance of 3 kpc (about 10,000 ly) from the Galactic Center. A simulation published in 2011 suggested that the Milky Way may have obtained its spiral arm structure as a result of repeated collisions with the Sagittarius Dwarf Elliptical Galaxy.
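For reference, the idealized logarithmic spiral mentioned above is defined by a constant pitch angle ψ (the quoted 7°–25° range); a sketch of the standard form, where r is galactocentric radius and θ the azimuthal angle:
\[ r(\theta) = r_{0}\,e^{\theta\tan\psi}, \]
with larger ψ giving more loosely wound arms. As the text notes, real arms branch, merge, and twist, so this form only crudely fits features near the Sun.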
It has been suggested that the Milky Way contains two different spiral patterns: an inner one, formed by the Sagittarius arm, that rotates fast and an outer one, formed by the Carina and Perseus arms, whose rotation velocity is slower and whose arms are tightly wound. In this scenario, suggested by numerical simulations of the dynamics of the different spiral arms, the outer pattern would form an outer pseudoring, and the two patterns would be connected by the Cygnus arm. Outside of the major spiral arms is the Monoceros Ring (or Outer Ring), a ring of gas and stars torn from other galaxies billions of years ago. However, several members of the scientific community recently restated their position affirming the Monoceros structure is nothing more than an over-density produced by the flared and warped thick disk of the Milky Way. The structure of the Milky Way's disk is warped along an "S" curve. The Galactic disk is surrounded by a spheroidal halo of old stars and globular clusters, of which 90% lie within 100,000 light-years (30 kpc) of the Galactic Center. However, a few globular clusters have been found farther, such as PAL 4 and AM 1 at more than 200,000 light-years from the Galactic Center. About 40% of the Milky Way's clusters are on retrograde orbits, which means they move in the opposite direction from the Milky Way rotation. The globular clusters can follow rosette orbits about the Milky Way, in contrast to the elliptical orbit of a planet around a star. Although the disk contains dust that obscures the view at some wavelengths, the halo component does not. Active star formation takes place in the disk (especially in the spiral arms, which represent areas of high density), but does not take place in the halo, as there is little cool gas to collapse into stars. Open clusters are also located primarily on the disk. Discoveries in the early 21st century have added dimension to the knowledge of the Milky Way's structure. With the discovery that the disk of the Andromeda Galaxy (M31) extends much farther than previously thought, the possibility of the disk of the Milky Way extending farther is apparent, and this is supported by evidence from the discovery of the Outer Arm extension of the Cygnus Arm and of a similar extension of the Scutum–Centaurus Arm. With the discovery of the Sagittarius Dwarf Elliptical Galaxy came the discovery of a ribbon of galactic debris as the polar orbit of the dwarf and its interaction with the Milky Way tears it apart. Upon the 2004 discovery of a ring of galactic debris in an in-plane orbit around the Milky Way, it was initially believed that the debris was the remnant of a system dubbed the Canis Major Dwarf Galaxy. Other scholars believed it to be due to the Galactic warp, a view which has been supported by more recent evidence as of 2021. The Sloan Digital Sky Survey of the northern sky shows a huge and diffuse structure (spread out across an area around 5,000 times the size of a full moon) within the Milky Way that does not seem to fit within current models. The collection of stars rises close to perpendicular to the plane of the spiral arms of the Milky Way. The proposed likely interpretation is that a dwarf galaxy is merging with the Milky Way. This galaxy is tentatively named the Virgo Stellar Stream and is found in the direction of Virgo about 30,000 light-years (9 kpc) away. 
In addition to the stellar halo, the Chandra X-ray Observatory, XMM-Newton, and Suzaku have provided evidence that there is also a gaseous halo containing a large amount of hot gas. This halo extends for hundreds of thousands of light-years, much farther than the stellar halo and close to the distance of the Large and Small Magellanic Clouds. The mass of this hot halo is nearly equivalent to the mass of the Milky Way itself. The temperature of this halo gas is between 1 and 2.5 million K (1.8 and 4.5 million °F). Observations of distant galaxies indicate that the Universe had about one-sixth as much baryonic (ordinary) matter as dark matter when it was just a few billion years old. However, only about half of those baryons are accounted for in the modern Universe based on observations of nearby galaxies like the Milky Way. If the finding that the mass of the halo is comparable to the mass of the Milky Way is confirmed, it could be the identity of the missing baryons around the Milky Way. Formation The Milky Way began as one or several small overdensities in the mass distribution in the Universe shortly after the Big Bang 13.61 billion years ago. Some of these overdensities were the seeds of globular clusters in which the oldest remaining stars in what is now the Milky Way formed. Nearly half the matter in the Milky Way may have come from other distant galaxies. These stars and clusters now comprise the stellar halo of the Milky Way. Within a few billion years of the birth of the first stars, the mass of the Milky Way was large enough so that it was spinning relatively quickly. Due to conservation of angular momentum, this led the gaseous interstellar medium to collapse from a roughly spheroidal shape to a disk. Therefore, later generations of stars formed in this spiral disk. Most younger stars, including the Sun, are observed to be in the disk. Since the first stars began to form, the Milky Way has grown through both galaxy mergers (particularly early in the Milky Way's growth) and accretion of gas directly from the Galactic halo. The Milky Way is currently accreting material from several small galaxies, including two of its largest satellite galaxies, the Large and Small Magellanic Clouds, through the Magellanic Stream. Direct accretion of gas is observed in high-velocity clouds like the Smith Cloud. Cosmological simulations indicate that, 11 billion years ago, it merged with a particularly large galaxy that has been labeled the Kraken. Properties of the Milky Way such as stellar mass, angular momentum, and metallicity in its outermost regions suggest it has undergone no mergers with large galaxies in the last 10 billion years. This lack of recent major mergers is unusual among similar spiral galaxies. Its neighbour the Andromeda Galaxy appears to have a more typical history shaped by more recent mergers with relatively large galaxies. According to recent studies, the Milky Way as well as the Andromeda Galaxy lie in what in the galaxy color–magnitude diagram is known as the "green valley", a region populated by galaxies in transition from the "blue cloud" (galaxies actively forming new stars) to the "red sequence" (galaxies that lack star formation). Star-formation activity in green valley galaxies is slowing as they run out of star-forming gas in the interstellar medium. 
In simulated galaxies with similar properties, star formation will typically have been extinguished within about five billion years from now, even accounting for the expected, short-term increase in the rate of star formation due to the collision between the Milky Way and the Andromeda Galaxy. Measurements of other galaxies similar to the Milky Way suggest it is among the reddest and brightest spiral galaxies that are still forming new stars, and it is just slightly bluer than the bluest red sequence galaxies. Globular clusters are among the oldest objects in the Milky Way and thus set a lower limit on the age of the Milky Way. The ages of individual stars in the Milky Way can be estimated by measuring the abundance of long-lived radioactive elements such as thorium-232 and uranium-238, then comparing the results to estimates of their original abundance, a technique called nucleocosmochronology. These yield values of about 12.5 ± 3 billion years for CS 31082-001 and 13.8 ± 4 billion years for BD +17° 3248. Once a white dwarf is formed, it begins to undergo radiative cooling and the surface temperature steadily drops. By measuring the temperatures of the coolest of these white dwarfs and comparing them to their expected initial temperatures, an age estimate can be made. With this technique, the age of the globular cluster M4 was estimated as 12.7 ± 0.7 billion years. Age estimates of the oldest of these clusters give a best-fit estimate of 12.6 billion years, and a 95% confidence upper limit of 16 billion years. In November 2018, astronomers reported the discovery of one of the oldest stars in the universe. About 13.5 billion years old, 2MASS J18082002-5104378 B is a tiny ultra metal-poor (UMP) star made almost entirely of materials released from the Big Bang, and is possibly one of the first stars. The discovery of the star in the Milky Way Galaxy suggests that the galaxy may be at least 3 billion years older than previously thought. Several individual stars have been found in the Milky Way's halo with measured ages very close to the 13.80-billion-year age of the Universe. In 2007, a star in the galactic halo, HE 1523-0901, was estimated to be about 13.2 billion years old. As the oldest known object in the Milky Way at that time, this measurement placed a lower limit on the age of the Milky Way. This estimate was made using the UV-Visual Echelle Spectrograph of the Very Large Telescope to measure the relative strengths of spectral lines caused by the presence of thorium and other elements created by the R-process. The line strengths yield abundances of different elemental isotopes, from which an estimate of the age of the star can be derived using nucleocosmochronology. Another star, HD 140283, has been estimated at either 13.7 ± 0.7 billion years, 12.2 ± 0.6 billion years, or 12.0 ± 0.5 billion years. According to observations utilizing adaptive optics to correct for Earth's atmospheric distortion, stars in the galaxy's bulge date to about 12.8 billion years old. The age of stars in the galactic thin disk has also been estimated using nucleocosmochronology. Measurements of thin disk stars yield an estimate that the thin disk formed 8.8 ± 1.7 billion years ago. These measurements suggest there was a hiatus of almost 5 billion years between the formation of the galactic halo and the thin disk.
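The nucleocosmochronology ages above rest on radioactive decay; a minimal sketch of the underlying relation, assuming the initial abundance N₀ can be estimated from r-process models (the dominant source of uncertainty):
\[ N(t) = N_{0}\,e^{-\lambda t} \;\Rightarrow\; t = \frac{t_{1/2}}{\ln 2}\,\ln\frac{N_{0}}{N}, \]
with half-lives of about 14.05 billion years for thorium-232 and 4.47 billion years for uranium-238 (standard values, not stated in the text).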
Recent analysis of the chemical signatures of thousands of stars suggests that stellar formation might have dropped by an order of magnitude at the time of disk formation, 10 to 8 billion years ago, when interstellar gas was too hot to form new stars at the same rate as before. The satellite galaxies surrounding the Milky Way are not randomly distributed but seem to be the result of a breakup of some larger system, producing a ring structure 500,000 light-years in diameter and 50,000 light-years wide. Close encounters between galaxies, like that expected in 4 billion years with the Andromeda Galaxy, can rip off huge tails of gas, which, over time, can coalesce to form dwarf galaxies in a ring at an arbitrary angle to the main disc. Intergalactic neighborhood The Milky Way and the Andromeda Galaxy are a binary system of giant spiral galaxies belonging to a group of 50 closely bound galaxies known as the Local Group, surrounded by a Local Void, itself being part of the Local Sheet and in turn the Virgo Supercluster. Surrounding the Virgo Supercluster are a number of voids, devoid of many galaxies: the Microscopium Void to the "north", the Sculptor Void to the "left", the Boötes Void to the "right" and the Canis Major Void to the "south". These voids change shape over time, creating filamentous structures of galaxies. The Virgo Supercluster, for instance, is being drawn towards the Great Attractor, which in turn forms part of a greater structure, called Laniakea. Two smaller galaxies and a number of dwarf galaxies in the Local Group orbit the Milky Way. The largest of these is the Large Magellanic Cloud with a diameter of 32,200 light-years. It has a close companion, the Small Magellanic Cloud. The Magellanic Stream is a stream of neutral hydrogen gas extending from these two small galaxies across 100° of the sky. The stream is thought to have been dragged from the Magellanic Clouds in tidal interactions with the Milky Way. Some of the dwarf galaxies orbiting the Milky Way are Canis Major Dwarf (the closest), Sagittarius Dwarf Elliptical Galaxy, Ursa Minor Dwarf, Sculptor Dwarf, Sextans Dwarf, Fornax Dwarf, and Leo I Dwarf. The smallest dwarf galaxies of the Milky Way are only 500 light-years in diameter. These include Carina Dwarf, Draco Dwarf, and Leo II Dwarf. There may still be undetected dwarf galaxies that are dynamically bound to the Milky Way, which is supported by the detection of nine new satellites of the Milky Way in a relatively small patch of the night sky in 2015. There are some dwarf galaxies that have already been absorbed by the Milky Way, such as the progenitor of Omega Centauri. In 2005, with further confirmation in 2012, researchers reported that most satellite galaxies of the Milky Way lie in a very large disk and orbit in the same direction. This came as a surprise: according to standard cosmology, satellite galaxies should form in dark matter halos, and they should be widely distributed and moving in random directions. This discrepancy is still not explained. In January 2006, researchers reported that the heretofore unexplained warp in the disk of the Milky Way had been mapped and found to be a ripple or vibration set up by the Large and Small Magellanic Clouds as they orbit the Milky Way, causing vibrations when they pass through its edges. Previously, these two galaxies, at around 2% of the mass of the Milky Way, were considered too small to influence the Milky Way.
However, in a computer model, the movement of these two galaxies creates a dark matter wake that amplifies their influence on the larger Milky Way. Current measurements suggest the Andromeda Galaxy is approaching the Milky Way at 100 to 140 km/s (220,000 to 310,000 mph). In 4.3 billion years, there may be an Andromeda–Milky Way collision, depending on the importance of unknown lateral components to the galaxies' relative motion. If they collide, the chance of individual stars colliding with each other is extremely low, but instead the two galaxies will merge to form a single elliptical galaxy or perhaps a large disk galaxy over the course of about six billion years. See also Notes References Further reading External links |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Nimerigar] | [TOKENS: 657] |
Contents Nimerigar The Nimerigar are a legendary race of little people found in the folklore of the Shoshone people of North America's Rocky Mountains. According to Shoshone tales, the Nimerigar were an aggressive people who would shoot poisoned arrows from tiny bows. Roughly translated from the Shoshone and Paiute languages, Nimerigar means "people eaters". They were believed to kill their own people with a blow to the head if they became too ill to be a participating member of their society. Archaeology Although the Nimerigar were long thought to be purely mythical, that assumption was called into question in 1932 with the discovery of the San Pedro Mountains mummy, a 14 in (36 cm)-tall mummy (6.5 in (17 cm) seated) found in a cave 60 miles south of Casper, Wyoming. Extensive tests were carried out on the mummy, with the initial belief that it was a hoax. Tests performed by the American Museum of Natural History, and certified as genuine by the Anthropology Department at Harvard University, estimated the mummy to be the body of a full-grown adult, approximately 65 years old. The mummy's damaged spine, broken collarbone, and smashed-in skull (exposing brain tissue and congealed blood) suggested that it had been violently killed. Adding to its strangeness, the mummy had a full set of canine teeth, all of which were overly pointed. When examined by the University of Wyoming, the body was found to be that of a deceased anencephalic infant "whose cranial deformity gave it the appearance of a miniature adult." A second mummy examined by University of Wyoming anthropologist George Gill and the Denver Children's Hospital in the 1990s was also shown to be an anencephalic infant. DNA testing showed it to be Native American and radiocarbon dating placed it at about 1700. Historical accounts Historical accounts from the missionary David Zeisberger in 1778 also point to the possible existence of Nimerigar or other little peoples in North America. Near Coshocton, Ohio, Zeisberger wrote of a burial ground that reportedly had numerous remains of a pygmy race, approximately 3 ft (91 cm) in height. "The long rows of graves of the pygmy race at Coshocton were regularly arranged with heads to the west, a circumstance which has given rise to the theory that these people were sun-worshippers, facing the daily approach of the sun god over the eastern hills. Acceptance of the sun-worship surmise does not necessarily imply a deduction that this pygmy race may have descended from the river-people of Hindustan or Egypt. Primeval man, wherever found, seems to have been a sun-worshipper". These burial grounds are no longer in existence as a result of extensive farming and modern inhabitation of the land. However, according to the missionary's observations, these primitive people understood the use of the stone ax, the making of good pottery, and the division of land areas into squares. Other than this small amount of information, the story of this strange race remains untold. See also References External links |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/HD_37017] | [TOKENS: 392] |
Contents HD 37017 HD 37017 is a binary star system in the equatorial constellation of Orion. It has the variable star designation V1046 Orionis; HD 37017 is the identifier from the Henry Draper Catalogue. The system is a challenge to view with the naked eye, being close to the lower limit of visibility with a combined apparent visual magnitude of 6.6. It is located at a distance of approximately 1,230 light years based on parallax, and is drifting further away with a radial velocity of +32 km/s. The system is part of the star cluster NGC 1981. The binary nature of this system was suggested by A. Blaauw and T. S. van Albada in 1963. It is a double-lined spectroscopic binary with an orbital period of 18.6556 days and an eccentricity of 0.31. The eccentricity is considered unusually large for such a close system. It has been suspected of being an eclipsing binary or rotating ellipsoidal variable, and the primary is also an SX Arietis variable. The primary is a helium-strong, magnetic chemically peculiar star with a stellar classification of B1.5 Vp. It has a magnetic field strength of 7,700 G, and the helium concentrations are located at the magnetic poles. V1046 Orionis was found to be a variable star by L. A. Balona in 1997, and is now classified as an SX Arietis variable. The star undergoes periodic changes in visual brightness, magnetic field strength, and spectral characteristics with a cycle time of 0.901175 days – the star's presumed rotation period. Radio emission has been detected that varies with the rotation period. The secondary component has an estimated 4.5 times the mass of the Sun. Its spectral class has been estimated as B6 III-IV. References External links |
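Since the distance above is described as parallax-based, a minimal sketch of the standard conversion may be useful. The parallax below is back-computed from the article's approximately 1,230 light-year figure purely for illustration; it is not taken from any catalogue.

# Distance from trigonometric parallax: d [parsecs] = 1 / p [arcseconds].
LY_PER_PARSEC = 3.2616

distance_ly = 1230                          # figure quoted above
distance_pc = distance_ly / LY_PER_PARSEC   # about 377 pc
parallax_mas = 1000 / distance_pc           # about 2.65 milliarcseconds

print(f"{distance_pc:.0f} pc, implied parallax {parallax_mas:.2f} mas")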
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Shehecheyanu] | [TOKENS: 537] |
Contents Shehecheyanu The Shehecheyanu berakhah (blessing) (Hebrew: בִּרְכַּת שֶׁהֶחֱיָנוּ, "Who has given us life") is a common Jewish prayer to celebrate special occasions. It expresses gratitude to God for new and unusual experiences or possessions. The blessing was recorded in the Talmud over 1500 years ago. Recitation The blessing of Shehecheyanu is recited in thanks or commemoration of a variety of special occasions. Some have the custom of saying it at the ceremony of the Birkat Hachama, which is recited once every 28 years in the month of Nisan/Adar II. When several reasons apply (such as the beginning of Passover, together with the mitzvot of matzah, marror, etc.), the blessing is only said once. It is not recited at a brit milah by Ashkenazim, since the circumcision involves pain, nor at the Counting of the Omer, since that is a task that does not give pleasure and causes sadness at the thought that the actual Omer ceremony cannot be performed because of the destruction of the Temple. However, it is recited by Sephardim at the berith milah ceremony. Text Although the most prevalent custom is to recite lazman in accordance with the usual rules of dikduk (Hebrew language grammar), some, including Chabad, have the custom to say lizman ("to [this] season"); this custom follows the ruling of the Mishnah Berurah and Aruch Hashulchan, following Magen Avraham, Mateh Moshe and Maharshal. Modern history Avshalom Haviv finished his speech in court on June 10, 1947, with the Shehecheyanu blessing. The Israeli Declaration of Independence was publicly read in Tel Aviv on May 14, 1948, before the expiration of the British Mandate at midnight. After the first Prime Minister of Israel, David Ben-Gurion, read the Declaration of Independence, Rabbi Yehuda Leib Maimon recited the Shehecheyanu blessing, and the Declaration of Independence was signed. The ceremony concluded with the singing of "Hatikvah." There is a common[according to whom?] musical rendition of the blessing composed by Meyer Machtenberg, an Eastern European choirmaster who composed it in the United States in the 19th century. Media See also References |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Rotational_symmetry#Rotational_symmetry_with_respect_to_any_angle] | [TOKENS: 1330] |
Contents Rotational symmetry Rotational symmetry, also known as radial symmetry in geometry, is the property a shape has when it looks the same after some rotation by a partial turn. An object's degree of rotational symmetry is the number of distinct orientations in which it looks exactly the same for each rotation. Certain geometric objects are partially symmetrical when rotated at certain angles, such as squares rotated 90°; however, the only geometric objects that are fully rotationally symmetric at any angle are spheres, circles and other spheroids. Formal treatment Formally, rotational symmetry is symmetry with respect to some or all rotations in m-dimensional Euclidean space. Rotations are direct isometries, i.e., isometries preserving orientation. Therefore, a symmetry group of rotational symmetry is a subgroup of E+(m) (see Euclidean group). Symmetry with respect to all rotations about all points implies translational symmetry with respect to all translations, so space is homogeneous, and the symmetry group is the whole E(m). With the modified notion of symmetry for vector fields, the symmetry group can also be E+(m). For symmetry with respect to rotations about a point, we can take that point as the origin. These rotations form the special orthogonal group SO(m), the group of m × m orthogonal matrices with determinant 1. For m = 3 this is the rotation group SO(3). In another definition of the word, the rotation group of an object is the symmetry group within E+(n), the group of direct isometries; in other words, the intersection of the full symmetry group and the group of direct isometries. For chiral objects it is the same as the full symmetry group. Laws of physics are SO(3)-invariant if they do not distinguish different directions in space. Because of Noether's theorem, the rotational symmetry of a physical system is equivalent to the angular momentum conservation law. Rotational symmetry of order n, also called n-fold rotational symmetry, or discrete rotational symmetry of the nth order, with respect to a particular point (in 2D) or axis (in 3D) means that rotation by an angle of 360°/n (180°, 120°, 90°, 72°, 60°, 51 3⁄7°, etc.) does not change the object. A "1-fold" symmetry is no symmetry (all objects look alike after a rotation of 360°). The notation for n-fold symmetry is Cn or simply n. The actual symmetry group is specified by the point or axis of symmetry, together with the n. For each point or axis of symmetry, the abstract group type is the cyclic group of order n, Zn. Although the notation Cn is also used for the latter, the geometric and abstract Cn should be distinguished: there are other symmetry groups of the same abstract group type which are geometrically different; see cyclic symmetry groups in 3D. The fundamental domain is a sector of 360°/n. Examples without additional reflection symmetry: Cn is the rotation group of a regular n-sided polygon in 2D and of a regular n-sided pyramid in 3D. If there is e.g. rotational symmetry with respect to an angle of 100°, then also with respect to one of 20°, the greatest common divisor of 100° and 360°. A typical 3D object with rotational symmetry (possibly also with perpendicular axes) but no mirror symmetry is a propeller. 
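A minimal sketch in Python of the n-fold definition above: rotating the vertex set of a regular n-gon by 360°/n maps the set to itself, while a non-symmetry angle does not. The tolerance-based point matching is an implementation convenience, not part of the formal definition.

import math

def rotate(points, degrees):
    # Rotate a list of 2D points about the origin.
    a = math.radians(degrees)
    c, s = math.cos(a), math.sin(a)
    return [(c * x - s * y, s * x + c * y) for x, y in points]

def is_invariant(points, degrees, tol=1e-9):
    # True if the rotated point set coincides with the original set.
    return all(any(math.dist(p, q) < tol for q in points)
               for p in rotate(points, degrees))

# Vertices of a regular pentagon, which has 5-fold symmetry (C5).
pentagon = [(math.cos(2 * math.pi * k / 5), math.sin(2 * math.pi * k / 5))
            for k in range(5)]
print(is_invariant(pentagon, 360 / 5))  # True: 72 degrees is a symmetry
print(is_invariant(pentagon, 90))       # False: 90 degrees is not

# Symmetry under 100 degrees implies symmetry under gcd(100, 360) = 20 degrees.
print(math.gcd(100, 360))               # 20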
For discrete symmetry with multiple symmetry axes through the same point, there are the following possibilities: In the case of the Platonic solids, the 2-fold axes are through the midpoints of opposite edges, and the number of them is half the number of edges. The other axes are through opposite vertices and through centers of opposite faces, except in the case of the tetrahedron, where the 3-fold axes are each through one vertex and the center of one face. Rotational symmetry with respect to any angle is, in two dimensions, circular symmetry. The fundamental domain is a half-line. In three dimensions we can distinguish cylindrical symmetry and spherical symmetry (no change when rotating about one axis, or for any rotation). That is, no dependence on the angle using cylindrical coordinates and no dependence on either angle using spherical coordinates. The fundamental domain is a half-plane through the axis, and a radial half-line, respectively. Axisymmetric and axisymmetrical are adjectives which refer to an object having cylindrical symmetry, or axisymmetry (i.e. rotational symmetry with respect to a central axis) like a doughnut (torus). An example of approximate spherical symmetry is the Earth (with respect to density and other physical and chemical properties). In 4D, continuous or discrete rotational symmetry about a plane corresponds to 2D rotational symmetry in every perpendicular plane, about the point of intersection. An object can also have rotational symmetry about two perpendicular planes, e.g. if it is the Cartesian product of two rotationally symmetric 2D figures, as in the case of e.g. the duocylinder and various regular duoprisms. 2-fold rotational symmetry together with single translational symmetry is one of the Frieze groups. A rotocenter is the fixed, or invariant, point of a rotation. There are two rotocenters per primitive cell. Together with double translational symmetry the rotation groups are the following wallpaper groups, with axes per primitive cell: Scaling of a lattice divides the number of points per unit area by the square of the scale factor. Therefore, the number of 2-, 3-, 4-, and 6-fold rotocenters per primitive cell is 4, 3, 2, and 1, respectively, again including 4-fold as a special case of 2-fold, etc. 3-fold rotational symmetry at one point and 2-fold at another one (or ditto in 3D with respect to parallel axes) implies rotation group p6, i.e. double translational symmetry and 6-fold rotational symmetry at some point (or, in 3D, parallel axis). The translation distance for the symmetry generated by one such pair of rotocenters is 2√3 times their distance. See also References External links |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Maor_Farid#cite_ref-10] | [TOKENS: 1458] |
Contents Maor Farid Dr. Maor Farid (Hebrew: מאור פריד; born April 20, 1992) is an Israeli scientist, engineer and artificial intelligence researcher at the Massachusetts Institute of Technology, social activist, and author. He is the founder and CEO of Learn to Succeed (Hebrew: ללמוד להצליח), an organization for empowering youths from the Israeli socio-economic periphery and youths at risk, a regional manager of the Israeli center of ScienceAbroad at MIT, and an activist in the American Technion Society. He is an alumnus of Unit 8200, and a fellow of the Fulbright Program and the Israel Scholarship Educational Foundation [he]. Dr. Farid was elected to the Forbes 30 Under 30 list of 2019, and won the Moskowitz Prize for Zionism. Early life Maor was born in Ness Ziona, a city in central Israel, the eldest son of parents from Mizrahi Jewish immigrant families from Iraq and Libya. Maor suffered from attention deficit hyperactivity disorder (ADHD) from a young age, and was classified as a problematic and violent student. His ADHD was diagnosed only after he began his university studies. However, inspired by his parents' background, he aspired to excel at school to secure a better future for his family. During elementary school, Maor competed in local quizzes about Jewish history and Zionism, which significantly shaped his identity and national perspective. Farid graduated from high school with the highest GPA in his school. He was later recruited to the Israel Defense Forces and drafted into the Brakim Program [he], an excellence program of the Israeli Intelligence Corps for training leading R&D officers for the Israeli military and defense industry. Maor graduated from the program with honors and was selected for the Israeli Prime Minister's Office and Unit 8200, where he served as an artificial intelligence researcher, officer, and commander. During his military service, he received various honors and awards, such as the Excellent Scientist Award, given to the top three academics serving in the Israel Defense Forces. In 2019, Farid completed his military service at the rank of captain. Education and academic career As part of the four-year Brakim Program, Maor completed his bachelor's and master's degrees at the Technion in mechanical engineering with honors. He then began his Ph.D. research as a collaboration with the Israel Atomic Energy Commission (IAEC), in parallel with his military service. The main goals of his Ph.D. research were predicting irreversible effects of major earthquakes on Israel's nuclear facilities and improving their seismic resistance using energy absorption technologies. The mathematical models developed by Farid were able to forecast earthquake effects on facilities with major hazard potential, and predicted the failure of liquid storage tanks in the earthquakes that took place in Italy (2012) and Mexico (2017). The energy absorption technologies used increased the seismic resistance of those sensitive facilities by up to 90%. The research results were published in multiple papers in peer-reviewed academic journals and presented at international academic conferences. Later, this research expanded into an official collaboration between the Technion and the Shimon Peres Negev Nuclear Research Center, which aims to apply the findings to existing sensitive systems, and won funding of 1.5 million NIS from the Pazy Foundation of the Israel Atomic Energy Commission and the Council for Higher Education. In 2017, Farid completed his Ph.D. 
as the youngest graduate at the Technion that year, at the age of 24. At the graduation ceremonies, he honored his parents by having them receive the diplomas on his behalf. In the same year, he served as a lecturer at Ben-Gurion University, teaching an original course he developed to address knowledge gaps he had identified in the Israeli defense industry. In 2018, Dr. Farid served as an artificial intelligence researcher on a data science team of Unit 8200, where he developed machine learning-based solutions for military and operational needs. In 2019, Farid won the Fulbright and the Israel Scholarship Educational Foundation scholarships, and was accepted to a post-doctoral position at the Massachusetts Institute of Technology, where he develops real-time methods for predicting earthquake effects using machine learning techniques. In 2020, Farid was accepted to the Emerging Leaders Program at Harvard Kennedy School in Cambridge, Massachusetts. In the same year, he received a research excellence grant from the Israel Academy of Sciences and Humanities for leading research conducted in collaboration between MIT and the Technion. Social activism Farid's social activism focuses on empowering youths from disadvantaged backgrounds from an early age. From 2010 to 2015, he served as a mentor of a robotics team from Dimona in the FIRST Robotics Competition, a mathematics tutor in the "Aharai!" [he] program for at-risk high-school students in Dimona and Be'er Sheva, and a mentor and private tutor of adolescents and reserve-duty soldiers from disadvantaged backgrounds. In 2010, he initiated the "Learn to Succeed" (Hebrew: ללמוד להצליח) project, aimed at narrowing social gaps in Israeli society by empowering youths from the social, economic, and geographic periphery toward excellence, self-fulfillment, and formal education. In 2018, Learn to Succeed became an official non-profit organization. In the same year, Farid led a crowdfunding campaign that raised 150,000 NIS to expand the organization to a national scale. In 2019, he published the book "Learn to Succeed", in which he describes his struggle with ADHD, the violent environment in which he grew up, and the transformation he underwent from violent teenager to the youngest Ph.D. graduate at the Technion. The book was given to more than two thousand youths at risk and became a top seller in Israel shortly after its publication. Maor dedicated the book to his parents and to the memory of his friend Captain Tal Nachman, who was killed in operational activity during his military service in 2014. The organization consists of hundreds of volunteers; gives full scholarships to STEM students from the periphery who serve as mentors to Jewish and Arab youths from disadvantaged backgrounds; runs a hotline that provides online practical and emotional support to hundreds of youths, parents, and educators; organizes inspirational activities with a military orientation to increase its teenage members' motivation for meaningful military service; and gives inspirational lectures to more than 5,000 youths each year. In 2019, Maor initiated a collaboration with Unit 8200 in which dozens of the program's members are interviewed for the unit, an opportunity usually reserved for the students with the highest matriculation exam grades in each class. In 2020, Dr. Farid established the ScienceAbroad center at MIT, aiming to strengthen the connections between Israeli researchers at the institute and the State of Israel. 
Moreover, he serves as a volunteer in the American Technion Society. Honors and awards Personal life Farid is married to Michal. Interviews and articles References External links |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Nodule_(geology)] | [TOKENS: 403] |
Contents Nodule (geology) In geology and particularly in sedimentology, a nodule is a small, irregularly rounded knot, mass, or lump of a mineral or mineral aggregate that typically has a contrasting composition from the enclosing sediment or sedimentary rock. Examples include pyrite nodules in coal, a chert nodule in limestone, or a phosphorite nodule in marine shale. Normally, a nodule has a warty or knobby surface and exists as a discrete mass within the host strata. In general, they lack any internal structure except for the preserved remnants of original bedding or fossils. Nodules are closely related to concretions and sometimes these terms are used interchangeably. Minerals that typically form nodules include calcite, chert, apatite (phosphorite), anhydrite, and pyrite. Nodular is used to describe a sediment or sedimentary rock composed of scattered to loosely packed nodules in a matrix of like or unlike character. It is also used to describe mineral aggregates that occur in the form of nodules, e.g. colloform mineral aggregate with a bulbed surface. Nodule is also used for widely scattered concretionary lumps of manganese, cobalt, iron, and nickel found on the floors of the world's oceans. This is especially true of manganese nodules. Manganese and phosphorite nodules form on the seafloor and are syndepositional in origin. Thus, technically speaking, they are concretions instead of nodules. Chert and flint nodules are often found in beds of limestone and chalk. They form from the redeposition of amorphous silica arising from the dissolution of siliceous spicules of sponges, or debris from radiolaria and the postdepositional replacement of either the enclosing limestone or chalk by this silica. See also References |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Crimson_Skies] | [TOKENS: 3693] |
Contents Crimson Skies Crimson Skies is a tabletop and a video game media franchise created by Jordan Weisman and Dave McCoy, first released as a board game in 1998 and then as a PC game in 2000. The series' intellectual property is currently owned by Microsoft Corporation through its Xbox Game Studios division. Weisman's former company, Smith & Tinker Inc., had announced in 2007 that it had licensed the electronic entertainment rights to the franchise, but no new titles were developed. The Crimson Skies trademark was renewed in 2013 by Microsoft for video games, although the trademark for other related media has been abandoned. The series is set within an alternate history of the 1930s invented by Weisman and McCoy. Within this divergent timeline, the United States has collapsed, and air travel has become the most popular mode of transportation in North America; as a result, air pirates thrive in the world of Crimson Skies. In describing the concept of Crimson Skies, Jordan Weisman stated he wanted to "take the idea of 16th century Caribbean piracy and translate into a 1930s American setting". Crimson Skies was first conceived as a PC game known as Corsairs!, but was released first as a board game from FASA. The franchise has since expanded to include a collectible miniatures game from Wizkids, a miniature wargame from Ral Partha, as well as a series of books. The series also includes two arcade flight-based video games published by Microsoft Game Studios – Crimson Skies for the PC and Crimson Skies: High Road to Revenge for the Xbox. Both games were well received by critics, though only the second was commercially successful. Crimson Skies is an example of the dieselpunk genre, though it predated the genre name. Development history According to series creator Jordan Weisman, the original idea for Crimson Skies came after he had completed research on the early years of aviation; the era and historical characters inspired him to create a game about the period. For their game, Weisman and Dave McCoy settled on a post-WWI European setting revolving around the "knights of the air". However, a game with a similar idea came out at the time; Weisman and McCoy subsequently moved the setting to the U.S. and changed the concept to placing air pirates in a modern setting. From there, they crafted an alternate history to simulate the conditions that gave rise to piracy in the Caribbean in a 1930s setting. Weisman later said about the development of the universe: Whenever I create different universes—MechWarrior, Shadowrun, Crimson Skies—to me, it's all about looking at 'What are the fantasies that excited us when we were 5?' And if we can find a new and more sophisticated way to tap into that fantasy [...] Crimson Skies is just combining two classic male fantasies: You get to be a pirate; you get to be a pilot. Work on Crimson Skies began under the name Corsairs!. Development started for Virtual World Entertainment, and was later moved to a PC game when Virtual World merged with FASA Corporation. Although the Corsairs! project was shelved, Weisman and a group of FASA employees worked outside of business hours to create the Crimson Skies board game. According to Weisman: "The board game was borne purely out of the fact that I needed to get this universe out of our heads and into the world, and it was the best venue to do so quickly". Developer John Howard has stated that the board game was built to "showcase the Crimson Skies property, with an eye towards expanding on it in other ways". 
When FASA Interactive became a part of Microsoft, Weisman and his team were able to start a new game, and work on the PC version of Crimson Skies began; the game was developed by Zipper Interactive. The game utilizes arcade flight mechanics, focusing on action, as opposed to a realistic portrayal of the physics of flight. The game's relaxed physics as well as its focus on barnstorming led GameSpot to comment that "Crimson Skies is very much based on a 'movie reality' where if it's fun and looks good, it works". The Xbox game Crimson Skies: High Road to Revenge was later developed as a first-party title for Microsoft Game Studios by FASA Studio. Like the previous game, arcade flight elements were incorporated in order to focus gameplay on action instead of flight mechanics. Early in the game's production, developers decided upon a "playable movie" concept, but found that gameplay would be restricted by this approach. Consequently, the game's release date was pushed back by approximately one year to allow the development team time to retool the game. The results of this extra development period include more open-ended gameplay features and Xbox Live support. After development concluded on High Road to Revenge, the developers moved to work on another Crimson Skies title for Microsoft; development, however, was cancelled shortly into the project. When FASA Studio was later shut down, Microsoft retained the video game rights to Crimson Skies, although it had no immediate plans for the IP. Weisman's latest company, Smith & Tinker, later "licensed from Microsoft the electronic entertainment rights" to Crimson Skies. Although the company has made no formal announcement as to its plans with the franchise, Weisman has assured fans that there will be a new entry in the series. Universe The Crimson Skies series takes place in an alternate 1930s in which the U.S. has broken apart into a number of independent nation states. According to series creator Jordan Weisman: I needed to create a geo-political situation that would result in air-pirates, so I looked at the real political situation that gave rise to the pirates of the Caribbean in the 16th and 17th centuries. We needed a balkanized era so that pirates could escape quickly into another country's territory, we needed things of value to be moved by air, and we needed a constantly churning political environment so that things did not settle down quickly. [...] It took only three little changes in the history of the United States to get us the dynamic world of Crimson Skies. This alternate timeline incorporates both fictional and actual historic events. According to the series' official backstory, the divergent timeline begins after World War I, when a "Regionalist movement" gains popularity in America following the Spanish influenza pandemic, rallying behind an isolationist platform. Meanwhile, President Wilson's authority was undercut when Prohibition failed as a constitutional amendment, leaving the matter to be decided at the state level. The nation soon became polarized between "wet" and "dry" states, and checkpoints became a common sight on state borders to stop the flow of alcohol into "dry" states. As the decade progressed, state governments seized more authority, encroaching into areas formerly the responsibility of the federal government, and formed regional power blocs. 
The optimism of the Roaring Twenties was upset in 1927 when an outbreak of a deadly strain of influenza in America prompted states to close their borders, further dividing the Union. Though not as deadly as the 1918 pandemic, the epidemic had immense political fallout, bolstering regionalist "strong state" views and decreasing voter turnout in the 1928 election. Shortly after the Wall Street crash of 1929, Texas seceded from the United States, reforming the Republic of Texas on January 1, 1930. New York was the next state to secede, and persuaded Pennsylvania and New Jersey to merge with it to form the Empire State. California followed suit, creating the Nation of Hollywood, as did Utah, which had already come into conflict with the federal government after the establishment of the Smith Law in 1928 that made Mormonism the state religion. Washington, D.C., essentially powerless, was unable to stop the country from falling apart. The federal government made its last stand against the "People's Revolt" of the bread basket states. When the US Army was defeated by the People's Collective (formerly the Midwest) forces in 1931, the fate of the United States was sealed, and the rest of the country dissolved into independent nations by the end of 1932, with the last legal remnant of the US being the neutral nation of Columbia, in whatever area around Washington could be seized. Though not directly affected by the Texas Secession, Canada found itself dragged down by the collapse of the U.S., with Quebec seceding in 1930 and the rest of the provinces siding with their nascent southern neighbors: New Brunswick and parts of Quebec joined the Maritime Provinces of Maine, New Hampshire, and Vermont; Newfoundland joined Quebec; Manitoba joined the People's Collective as did parts of Saskatchewan, with the Lakota nation laying claim to the rest; British Columbia merged with Oregon and Washington in the Pacific Northwest; and Alaska claimed the Yukon territories. The core of the former Canadian government established the Protectorate of Ontario. While Ottawa's authority technically extends to Alberta and the Northwest Territories, these areas are mostly no-man's land, while Nova Scotia and Prince Edward Island comprise a self-governing body, commonly referred to as the Northumberland Association. In 1931, the Territorial Government of Hawaii was left defenseless in the wake of the fragmenting country and was overthrown in favor of reestablishing the Hawaiian monarchy with Jonah Kūhiō as its king. Likewise, America's territorial holdings overseas were surrendered following the nation's formal collapse and the formation of the Federal Republic of Columbia on March 1, 1932. The resulting nation states were no longer unified—distrust between them strained diplomatic relations to the point that several small-scale wars broke out. After the dissolution of the United States, the country's interstate railroad and highway systems fell into disrepair or were sabotaged as they crossed hostile borders. Consequently, ground-based vehicles such as the locomotive and automobile were replaced by aircraft such as the airplane and the zeppelin as the leading mode of transportation in North America. Europe soon followed this fascination with aviation to make its own strides into the new, aerially-dominated market. Gangs of air pirates formed in turn to plunder airborne commerce. 
Although air militias formed to counter the threat, rivalries between the nations of North America reduced their capacity to effectively address this issue, and even encouraged the countries to sponsor pirates as privateers so as to direct their illegal operations against opposing nations. In Europe, privateers and other mercenary groups were widely adopted by nations that wished to avoid another world war, especially in the case of the Spanish Civil War. By the end of 1937, North America was a "hotbed of conflict", with multiple pirate gangs and air militias battling for control of the skies. Europe was no better, as Germany jockeyed for power while France and Britain looked the other way. The Russian States continued to fight their civil war, which threatened to spill over into the Eastern European nations and Alaska. Asia, too, was on the brink, with Japan's recent invasion of China and the continuation of the bloody civil war in Australia. The planes of Crimson Skies are fictional designs created to fit within the Crimson Skies universe. Although some planes were modeled after actual 1930s era experimental aircraft and other "bizarre and outlandish designs" from the early years of aviation, they still take significant departures from conventional aviation design. Jordan Weisman has stated that the planes in Crimson Skies are designed to be the "hot rods of the air". According to IGN, "the planes in CS are built for style and not function with their redundant wing positions and rear propellers". For example, the Devastator aircraft features a pusher propeller and a biplane design. Because of the history of the world of Crimson Skies, especially given that the nation states of North America are constantly at war with one another and that air travel is the primary means of transportation, advancements in both aircraft and weaponry technology would have proceeded at a faster pace than had actually happened in the same time period. Zeppelins with hangar launch bays that can accommodate escort fighters are featured prominently in Crimson Skies; in actuality, only a few zeppelin-based airborne aircraft carriers saw service. Zeppelins in Crimson Skies are also armed with broadside cannons and are heavily armored. Radio-controlled rockets, which can be steered remotely after launch, are also available in the Crimson Skies universe. Magnetic rockets have the ability to track planes or weapon emplacements over a short distance. Aerial torpedoes are similar to sea-based torpedoes, but are specifically designed to take out airships. Beeper/seeker rockets are designed to work in tandem. The "beeper" rocket attaches to a target and emits a homing signal; the "seeker" rocket follows the homing signal, destroying the target. The Choker rocket disables the target's engine by bursting into a fireball that burns all oxygen around it. The Tesla cannon is a tesla coil-style weapon that fires a bolt of electricity at a target, disabling it. Also featured in Crimson Skies is the wind turbine, a weather control mechanism designed to generate storms. Games The Crimson Skies board game was released by FASA in 1998. The base game included assemble-yourself card-stock airplanes; metal miniature planes were later offered separately. While the focus was on fantasy over fact, many of the planes in Crimson Skies were modeled after real experimental aircraft of the era. 
The complex universe of Crimson Skies earned many devoted fans, as dozens of different weapons, planes, nations, air forces, bands of pirates, and characters were all given detailed pasts, and several supplemental campaigns were published. The PC game Crimson Skies was developed by Zipper Interactive and released in 2000. The game's storyline is framed around a radio drama that chronicles the adventures of Nathan Zachary and the Fortune Hunters pirate gang during their rise to fame and fortune. Gameplay centers on the control of one of the game's playable aircraft, which the player can customize with different parts to alter performance. The game's flight mechanics were designed to be a compromise between realistic and arcade flight. One of Crimson Skies' unique gameplay features was the inclusion of "danger zones"—challenging areas through which the player can fly for various effects. The game's focus on barnstorming and relaxed flight physics led GameSpot to comment that "Crimson Skies is very much based on a 'movie reality' where if it's fun and looks good, it works". However, the game's original release was plagued with numerous technical problems, most notably the unreliability of the player's saved game files. Though a patch was released to remedy this problem, the game still retains many technical issues such as long loading times and sluggish menu screens. Crimson Skies: High Road to Revenge is an Xbox game developed by FASA Studio and released in 2003. The game centers on Nathan Zachary and the Fortune Hunters, in their crusade to avenge the death of a close friend, Dr. Fassenbiender, at the hands of the Die Spinne organization. Developers decided early on in the game's production cycle that the game would not simply be a port of the PC title, and by the end of the development cycle, many of the story elements that linked the game to the PC game had been excised. Although the game is similar to the PC game in that gameplay centers on controlling an aircraft, a new feature is the ability for the player to switch aircraft or man fixed weapon emplacements during a mission. The game's mission structure also features a number of other open-ended elements that have led to comparisons with the sandbox gameplay of the Grand Theft Auto games. The game additionally boasted a number of online gameplay modes over Xbox Live. In 2003, Wizkids released the Crimson Skies collectible miniatures game. The game utilizes collectible figures featuring both planes and pilots from the Crimson Skies universe. These miniatures use WizKids' Clix system, by which a character's or plane's statistics and abilities can be altered during gameplay by way of an adjustable dial located on the base of the figure. The Crimson Skies miniatures game comprises two separate games, each with its own set of rules. The gameplay in Crimson Skies: Aces revolves around pilots battling each other on the ground, while the gameplay in Crimson Skies: Air Action focuses on dogfighting between squadrons of aircraft. Figures were sold in "squadron packs" and "ace packs", which were formatted in blisterpacks as opposed to the random packaging format used in other Wizkids games. Books In addition to the tabletop and video games, the Crimson Skies series also features a number of tie-in books and short stories. Spicy Air Tales was published by FASA in 1999. 
The two-volume series featured short stories that originally appeared on the Crimson Skies website and supplemental material for using characters and planes from the stories with the board game. Wings of Fortune: Pirate's Gold, by Stephen Kenson, was published by FASA in November 2000. It introduced Nathan Zachary and his famous band of air pirates, the Fortune Hunters. It follows Zachary's air exploits and daring escapes during his early days as a war pilot, and recounts a climactic confrontation with his nemesis. Wings of Justice: Rogue Flyer, by Loren L. Coleman, was published by FASA in December 2000. It follows the transformation of Trevor Girard from a law-abiding security agent to a pirate with a heart of gold. Crimson Skies was published by Del Rey in October 2002 to promote the future release of the Xbox game. It features three novellas, two originally published on the Crimson Skies website, one previously unpublished. Each story is preceded by a brief history lesson about the Crimson Skies universe that acts as the prelude to the following story. See also Notes and references External links |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/XPL0] | [TOKENS: 531] |
Contents XPL0 XPL0 is a computer programming language that is essentially a cross between Pascal and C. It looks somewhat like Pascal but works more like C. It was created in 1976 by Peter J. R. Boyle, who wanted a high-level language for his microcomputer and wanted something more sophisticated than BASIC, which was the dominant language for personal computers at the time. XPL0 is based on PL/0, an example compiler in the book Algorithms + Data Structures = Programs by Niklaus Wirth. The first XPL0 compiler was written in ALGOL. It generated instructions for a pseudo-machine that was implemented as an interpreter on a Digital Group computer based on the 6502 microprocessor. The compiler was converted from ALGOL to XPL0 and was then able to compile itself and run on a microcomputer. XPL0 soon proved its worth in a variety of products based on the 6502. These embedded systems would otherwise have had their code written in assembly language, which is much more tedious to do. Boyle used XPL0 to write a disk operating system called Apex. Beginning in 1980 this was sold, along with XPL0, as an alternative to Apple DOS for the Apple II computer, which was based on the 6502. Since those early years XPL0 has been implemented on a dozen processors, and many features have been added. There are now optimizing native code compilers with 32-bit integers in place of the original 16-bit versions. Open source compilers for Windows and MS-DOS on PCs and Linux on the Raspberry Pi are available from the link below. Examples This is how the traditional Hello World program is coded in XPL0:
code Text=12;
Text(0, "Hello, World!")
Text is a built-in routine that outputs a string of characters. The zero (0) tells where to send the string. In this case it is sent to the display screen, but it could just as easily be sent to a printer, a file, or out a serial port by using a different number. In XPL0 all names must be declared before they can be used. The command word code associates the name Text with built-in routine number 12, which is the one that outputs strings. There are about a hundred of these built-in routines that provide capabilities such as input and output, graphics, and trig functions. The 32-bit versions of the compilers automatically insert code declarations, thus the program above can simply be written as:
Text(0, "Hello, World!")
The TPK algorithm provides an example that can be compared to other languages (a rendering in another language is sketched below for comparison). Graphics has been a feature of XPL0 since its days on the Apple II computer. References External links |
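As noted above, the TPK algorithm serves as a cross-language comparison point; the original XPL0 listing is not reproduced here, so what follows is a sketch of the same algorithm in Python instead (Knuth and Trabb Pardo's benchmark: read eleven numbers and, in reverse order, apply f(t) = sqrt(|t|) + 5t^3, flagging any result over 400). The sample input is arbitrary.

import math

def f(t):
    # The TPK function: sqrt(|t|) + 5*t^3.
    return math.sqrt(abs(t)) + 5 * t ** 3

def tpk(values):
    # Process the eleven inputs in reverse order, as TPK specifies.
    assert len(values) == 11
    for i in range(10, -1, -1):
        y = f(values[i])
        print(i, "TOO LARGE" if y > 400 else y)

tpk([float(x) for x in range(11)])  # arbitrary sample input 0.0 .. 10.0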
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Internet#cite_note-138] | [TOKENS: 9291] |
Contents Internet The Internet (or internet)[a] is the global system of interconnected computer networks that uses the Internet protocol suite (TCP/IP)[b] to communicate between networks and devices. It is a network of networks that comprises private, public, academic, business, and government networks of local to global scope, linked by electronic, wireless, and optical networking technologies. The Internet carries a vast range of information services and resources, such as the interlinked hypertext documents and applications of the World Wide Web (WWW), electronic mail, discussion groups, internet telephony, streaming media and file sharing. Most traditional communication media, including telephone, radio, television, paper mail, newspapers, and print publishing, have been transformed by the Internet, giving rise to new media such as email, online music, digital newspapers, news aggregators, and audio and video streaming websites. The Internet has enabled and accelerated new forms of personal interaction through instant messaging, Internet forums, and social networking services. Online shopping has also grown to occupy a significant market across industries, enabling firms to extend brick and mortar presences to serve larger markets. Business-to-business and financial services on the Internet affect supply chains across entire industries. The origins of the Internet date back to research that enabled the time-sharing of computer resources, the development of packet switching, and the design of computer networks for data communication. The set of communication protocols to enable internetworking on the Internet arose from research and development commissioned in the 1970s by the Defense Advanced Research Projects Agency (DARPA) of the United States Department of Defense in collaboration with universities and researchers across the United States and in the United Kingdom and France. The Internet has no single centralized governance in either technological implementation or policies for access and usage. Each constituent network sets its own policies. The overarching definitions of the two principal name spaces on the Internet, the Internet Protocol address (IP address) space and the Domain Name System (DNS), are directed by a maintainer organization, the Internet Corporation for Assigned Names and Numbers (ICANN). The technical underpinning and standardization of the core protocols is an activity of the non-profit Internet Engineering Task Force (IETF). Terminology The word internetted was used as early as 1849, meaning interconnected or interwoven. The word Internet was used in 1945 by the United States War Department in a radio operator's manual, and in 1974 as the shorthand form of Internetwork. Today, the term Internet most commonly refers to the global system of interconnected computer networks, though it may also refer to any group of smaller networks. The word Internet may be capitalized as a proper noun, although this is becoming less common. This reflects the tendency in English to capitalize new terms and move them to lowercase as they become familiar. The word is sometimes still capitalized to distinguish the global internet from smaller networks, though many publications, including the AP Stylebook since 2016, recommend the lowercase form in every case. In 2016, the Oxford English Dictionary found that, based on a study of around 2.5 billion printed and online sources, "Internet" was capitalized in 54% of cases. 
The terms Internet and World Wide Web are often used interchangeably; it is common to speak of "going on the Internet" when using a web browser to view web pages. However, the World Wide Web, or the Web, is only one of a large number of Internet services. It is the global collection of web pages, documents and other web resources linked by hyperlinks and URLs. History In the 1960s, computer scientists began developing systems for time-sharing of computer resources. J. C. R. Licklider proposed the idea of a universal network while working at Bolt Beranek & Newman and, later, leading the Information Processing Techniques Office at the Advanced Research Projects Agency (ARPA) of the United States Department of Defense. Research into packet switching,[c] one of the fundamental Internet technologies, started in the work of Paul Baran at RAND in the early 1960s and, independently, Donald Davies at the United Kingdom's National Physical Laboratory in 1965. After the Symposium on Operating Systems Principles in 1967, packet switching from the proposed NPL network was incorporated into the design of the ARPANET, an experimental resource sharing network proposed by ARPA. ARPANET development began with two network nodes which were interconnected between the University of California, Los Angeles and the Stanford Research Institute on 29 October 1969. The third site was at the University of California, Santa Barbara, followed by the University of Utah. By the end of 1971, 15 sites were connected to the young ARPANET. Thereafter, the ARPANET gradually developed into a decentralized communications network, connecting remote centers and military bases in the United States. Other user networks and research networks, such as the Merit Network and CYCLADES, were developed in the late 1960s and early 1970s. Early international collaborations for the ARPANET were rare. Connections were made in 1973 to Norway (NORSAR and, later, NDRE) and to Peter Kirstein's research group at University College London, which provided a gateway to British academic networks, the first internetwork for resource sharing. ARPA projects, the International Network Working Group and commercial initiatives led to the development of various protocols and standards by which multiple separate networks could become a single network, or a network of networks. In 1974, Vint Cerf at Stanford University and Bob Kahn at DARPA published a proposal for "A Protocol for Packet Network Intercommunication". Cerf and his graduate students used the term internet as a shorthand for internetwork in RFC 675. The Internet Experiment Notes and later RFCs repeated this use. The work of Louis Pouzin and Robert Metcalfe had important influences on the resulting TCP/IP design. National PTTs and commercial providers developed the X.25 standard and deployed it on public data networks. The ARPANET initially served as a backbone for the interconnection of regional academic and military networks in the United States to enable resource sharing. Access to the ARPANET was expanded in 1981 when the National Science Foundation (NSF) funded the Computer Science Network (CSNET). In 1982, the Internet Protocol Suite (TCP/IP) was standardized, which facilitated worldwide proliferation of interconnected networks. TCP/IP network access expanded again in 1986 when the National Science Foundation Network (NSFNet) provided access to supercomputer sites in the United States for researchers, first at speeds of 56 kbit/s and later at 1.5 Mbit/s and 45 Mbit/s. 
The NSFNet expanded into academic and research organizations in Europe, Australia, New Zealand and Japan in 1988–89. Although other network protocols such as UUCP and PTT public data networks had global reach well before this time, this marked the beginning of the Internet as an intercontinental network. Commercial Internet service providers emerged in 1989 in the United States and Australia. The ARPANET was decommissioned in 1990. The linking of commercial networks and enterprises by the early 1990s, as well as the advent of the World Wide Web, marked the beginning of the transition to the modern Internet. Steady advances in semiconductor technology and optical networking created new economic opportunities for commercial involvement in the expansion of the network in its core and for delivering services to the public. In mid-1989, MCI Mail and Compuserve established connections to the Internet, delivering email and public access products to the half million users of the Internet. Just months later, on 1 January 1990, PSInet launched an alternate Internet backbone for commercial use, one of the networks that added to the core of the commercial Internet of later years. In March 1990, the first high-speed T1 (1.5 Mbit/s) link between the NSFNET and Europe was installed between Cornell University and CERN, allowing much more robust communications than was possible with satellites. Later in 1990, Tim Berners-Lee began writing WorldWideWeb, the first web browser, after two years of lobbying CERN management. By Christmas 1990, Berners-Lee had built all the tools necessary for a working Web: the HyperText Transfer Protocol (HTTP) 0.9, the HyperText Markup Language (HTML), the first Web browser (which was also an HTML editor and could access Usenet newsgroups and FTP files), the first HTTP server software (later known as CERN httpd), the first web server, and the first Web pages that described the project itself. In 1991 the Commercial Internet eXchange was founded, allowing PSInet to communicate with the other commercial networks CERFnet and Alternet. Stanford Federal Credit Union was the first financial institution to offer online Internet banking services to all of its members in October 1994. In 1996, OP Financial Group, also a cooperative bank, became the second online bank in the world and the first in Europe. By 1995, the Internet was fully commercialized in the U.S. when the NSFNet was decommissioned, removing the last restrictions on use of the Internet to carry commercial traffic. As technology advanced and commercial opportunities fueled reciprocal growth, the volume of Internet traffic started experiencing similar characteristics as that of the scaling of MOS transistors, exemplified by Moore's law, doubling every 18 months. This growth, formalized as Edholm's law, was catalyzed by advances in MOS technology, laser light wave systems, and noise performance. Since 1995, the Internet has tremendously impacted culture and commerce, including the rise of near-instant communication by email, instant messaging, telephony (Voice over Internet Protocol or VoIP), two-way interactive video calls, and the World Wide Web. Increasing amounts of data are transmitted at higher and higher speeds over fiber optic networks operating at 1 Gbit/s, 10 Gbit/s, or more. The Internet continues to grow, driven by ever-greater amounts of online information and knowledge, commerce, entertainment and social networking services. 
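The doubling behavior described above compounds quickly; the following back-of-the-envelope sketch is illustrative arithmetic only, using no data beyond the growth rates quoted in the surrounding text.

# Compound growth implied by the doubling periods cited in the text.
months = 120  # one decade

# Traffic doubling every 18 months (the Moore's-law-like pattern above):
print(2 ** (months / 18))  # about 102x over ten years

# Doubling every 12 months, i.e. 100 percent growth per year
# (the late-1990s traffic estimate cited below):
print(2 ** (months / 12))  # 1024x over ten years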
During the late 1990s, it was estimated that traffic on the public Internet grew by 100 percent per year, while the mean annual growth in the number of Internet users was thought to be between 20% and 50%. This growth is often attributed to the lack of central administration, which allows organic growth of the network, as well as the non-proprietary nature of the Internet protocols, which encourages vendor interoperability and prevents any one company from exerting too much control over the network. In November 2006, the Internet was included on USA Today's list of the New Seven Wonders. As of 31 March 2011[update], the estimated total number of Internet users was 2.095 billion (30% of world population). It is estimated that in 1993 the Internet carried only 1% of the information flowing through two-way telecommunication. By 2000 this figure had grown to 51%, and by 2007 more than 97% of all telecommunicated information was carried over the Internet. Modern smartphones can access the Internet through cellular carrier networks, and internet usage by mobile and tablet devices exceeded desktop worldwide for the first time in October 2016. As of 2018[update], 80% of the world's population were covered by a 4G network. The International Telecommunication Union (ITU) estimated that, by the end of 2017, 48% of individual users regularly connect to the Internet, up from 34% in 2012. Mobile Internet connectivity has played an important role in expanding access in recent years, especially in Asia and the Pacific and in Africa. The number of unique mobile cellular subscriptions increased from 3.9 billion in 2012 to 4.8 billion in 2016, two-thirds of the world's population, with more than half of subscriptions located in Asia and the Pacific. The limits that users face on accessing information via mobile applications coincide with a broader process of fragmentation of the Internet. Fragmentation restricts access to media content and tends to affect the poorest users the most. One solution, zero-rating, is the practice of Internet service providers allowing users free connectivity to access specific content or applications without cost. Social impact The Internet has enabled new forms of social interaction, activities, and social associations, giving rise to the scholarly study of the sociology of the Internet. Between 2000 and 2009, the number of Internet users globally rose from 390 million to 1.9 billion. By 2010, 22% of the world's population had access to computers with 1 billion Google searches every day, 300 million Internet users reading blogs, and 2 billion videos viewed daily on YouTube. In 2014 the world's Internet users surpassed 3 billion or 44 percent of world population, but two-thirds came from the richest countries, with 78 percent of Europeans using the Internet, followed by 57 percent of the Americas. However, by 2018, Asia alone accounted for 51% of all Internet users, with 2.2 billion out of the 4.3 billion Internet users in the world. China's Internet users surpassed a major milestone in 2018, when the country's Internet regulatory authority, China Internet Network Information Centre, announced that China had 802 million users. China was followed by India, with some 700 million users, with the United States third with 275 million users. However, in terms of penetration, in 2022, China had a 70% penetration rate compared to India's 60% and the United States's 90%. 
In 2022, 54% of the world's Internet users were based in Asia, 14% in Europe, 7% in North America, 10% in Latin America and the Caribbean, 11% in Africa, 4% in the Middle East and 1% in Oceania. In 2019, Kuwait, Qatar, the Falkland Islands, Bermuda and Iceland had the highest Internet penetration by the number of users, with 93% or more of the population having access. As of 2022, it was estimated that 5.4 billion people use the Internet, more than two-thirds of the world's population. Early computer systems were limited to the characters in the American Standard Code for Information Interchange (ASCII), a subset of the Latin alphabet. After English (27%), the most requested languages on the World Wide Web are Chinese (25%), Spanish (8%), Japanese (5%), Portuguese and German (4% each), Arabic, French and Russian (3% each), and Korean (2%). Modern character encoding standards, such as Unicode, allow for development and communication in the world's widely used languages. However, some glitches such as mojibake (incorrect display of some languages' characters) still remain. Several neologisms refer to Internet users: netizen (as in "citizen of the net") refers to those actively involved in improving online communities, the Internet in general, or surrounding political affairs and rights such as free speech; Internaut refers to operators or technically highly capable users of the Internet; and digital citizen refers to a person using the Internet in order to engage in society, politics, and government participation. The Internet allows greater flexibility in working hours and location, especially with the spread of unmetered high-speed connections. The Internet can be accessed almost anywhere by numerous means, including through mobile Internet devices. Mobile phones, datacards, handheld game consoles and cellular routers allow users to connect to the Internet wirelessly.[citation needed] Educational material at all levels from pre-school (e.g. CBeebies) to post-doctoral (e.g. scholarly literature through Google Scholar) is available on websites. The Internet has facilitated the development of virtual universities and distance education, enabling both formal and informal education. The Internet allows researchers to conduct research remotely via virtual laboratories, with profound changes in reach and generalizability of findings as well as in communication between scientists and in the publication of results. By the late 2010s the Internet had been described as "the main source of scientific information" for the majority of the global North population. Wikis have also been used in the academic community for sharing and dissemination of information across institutional and international boundaries. In those settings, they have been found useful for collaboration on grant writing, strategic planning, departmental documentation, and committee work. The United States Patent and Trademark Office uses a wiki to allow the public to collaborate on finding prior art relevant to examination of pending patent applications. Queens, New York has used a wiki to allow citizens to collaborate on the design and planning of a local park. The English Wikipedia has the largest user base among wikis on the World Wide Web and ranks in the top 10 among all sites in terms of traffic.
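The mojibake glitch mentioned above can be reproduced in a few lines of standard-library Python; a minimal sketch (the sample string is my own, not from the source):

    # Mojibake in miniature: UTF-8 bytes decoded with the wrong legacy encoding.
    text = "résumé"
    raw = text.encode("utf-8")     # bytes as transmitted: b'r\xc3\xa9sum\xc3\xa9'

    print(raw.decode("latin-1"))   # wrong decoder -> 'rÃ©sumÃ©' (mojibake)
    print(raw.decode("utf-8"))     # correct decoder -> 'résumé'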
The Internet has been a major outlet for leisure activity since its inception, with entertaining social experiments such as MUDs and MOOs being conducted on university servers, and humor-related Usenet groups receiving much traffic. Many Internet forums have sections devoted to games and funny videos. Another area of leisure activity on the Internet is multiplayer gaming. This form of recreation creates communities, where people of all ages and origins enjoy the fast-paced world of multiplayer games. These range from MMORPG to first-person shooters, from role-playing video games to online gambling. While online gaming has been around since the 1970s, modern modes of online gaming began with subscription services such as GameSpy and MPlayer. Streaming media is the real-time delivery of digital media for immediate consumption or enjoyment by end users. Streaming companies (such as Netflix, Disney+, Amazon's Prime Video, Mubi, Hulu, and Apple TV+) now dominate the entertainment industry, eclipsing traditional broadcasters. Audio streamers such as Spotify and Apple Music also have significant market share in the audio entertainment market. Video sharing websites are also a major factor in the entertainment ecosystem. YouTube was founded on 15 February 2005 and is now the leading website for free streaming video with more than two billion users. It uses a web player to stream and show video files. YouTube users watch hundreds of millions, and upload hundreds of thousands, of videos daily. Other video sharing websites include Vimeo, Instagram and TikTok.[citation needed] Although many governments have attempted to restrict both Internet pornography and online gambling, this has generally failed to stop their widespread popularity. A number of advertising-funded ostensible video sharing websites known as "tube sites" have been created to host shared pornographic video content. Due to laws requiring the documentation of the origin of pornography, these websites now largely operate in conjunction with pornographic movie studios and their own independent creator networks, acting as de facto video streaming services. Major players in this field include the market leader Aylo, the operator of PornHub and numerous other branded sites, as well as other independent operators such as xHamster and Xvideos. As of 2023, Internet traffic to pornographic video sites rivalled that of mainstream video streaming and sharing services. Remote work is facilitated by tools such as groupware, virtual private networks, conference calling, videotelephony, and VoIP so that work may be performed from any location, such as the worker's home.[citation needed] The spread of low-cost Internet access in developing countries has opened up new possibilities for peer-to-peer charities, which allow individuals to contribute small amounts to charitable projects for other individuals. Websites, such as DonorsChoose and GlobalGiving, allow small-scale donors to direct funds to individual projects of their choice. A popular twist on Internet-based philanthropy is the use of peer-to-peer lending for charitable purposes. Kiva pioneered this concept in 2005, offering the first web-based service to publish individual loan profiles for funding. The low cost and nearly instantaneous sharing of ideas, knowledge, and skills have made collaborative work dramatically easier, with the help of collaborative software, which allows groups to form easily, communicate cheaply, and share ideas.
One example of such collaboration is the free software movement, which has produced, among other things, Linux, Mozilla Firefox, and OpenOffice.org (later forked into LibreOffice).[citation needed] Content management systems allow collaborating teams to work on shared sets of documents simultaneously without accidentally destroying each other's work.[citation needed] The internet also allows for cloud computing, virtual private networks, remote desktops, and remote work.[citation needed] The online disinhibition effect describes the tendency of many individuals to behave more stridently or offensively online than they would in person. A significant number of feminist women have been the target of various forms of harassment, ranging from insults and hate speech to, in extreme cases, rape and death threats, in response to posts they have made on social media. Social media companies have been criticized in the past for not doing enough to aid victims of online abuse. Children also face dangers online such as cyberbullying and approaches by sexual predators, who sometimes pose as children themselves. Due to naivety, they may also post personal information about themselves online, which could put them or their families at risk unless warned not to do so. Many parents choose to enable Internet filtering or supervise their children's online activities in an attempt to protect their children from pornography or violent content on the Internet. The most popular social networking services commonly forbid users under the age of 13. However, these policies can be circumvented by registering an account with a false birth date, and a significant number of children aged under 13 join such sites.[citation needed] Social networking services for younger children, which claim to provide better levels of protection for children, also exist. Internet usage has been correlated with users' loneliness. Lonely people tend to use the Internet as an outlet for their feelings and to share their stories with others, such as in the "I am lonely will anyone speak to me" thread.[citation needed] Cyberslacking can become a drain on corporate resources; employees spend a significant amount of time surfing the Web while at work. Internet addiction disorder is excessive computer use that interferes with daily life. Nicholas G. Carr believes that Internet use has other effects on individuals, for instance improving scan-reading skills while interfering with the deep thinking that leads to true creativity. Electronic business encompasses business processes spanning the entire value chain: purchasing, supply chain management, marketing, sales, customer service, and business relationships. E-commerce seeks to add revenue streams using the Internet to build and enhance relationships with clients and partners. According to International Data Corporation, the size of worldwide e-commerce, when global business-to-business and -consumer transactions are combined, equated to $16 trillion in 2013. A report by Oxford Economics added those two together to estimate the total size of the digital economy at $20.4 trillion, equivalent to roughly 13.8% of global sales. While much has been written of the economic advantages of Internet-enabled commerce, there is also evidence that some aspects of the Internet such as maps and location-aware services may serve to reinforce economic inequality and the digital divide.
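As a quick arithmetic check on the Oxford Economics figures above (my own calculation, not from the source), the two quoted numbers imply an estimate of total global sales:

    # Global sales implied by a $20.4 trillion digital economy at 13.8% of sales.
    digital_economy = 20.4e12   # USD, Oxford Economics estimate quoted above
    share_of_sales = 0.138      # 13.8% of global sales, as quoted above

    implied_global_sales = digital_economy / share_of_sales
    print(f"implied global sales: ${implied_global_sales / 1e12:.0f} trillion")  # ~$148 trillion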
Electronic commerce may be responsible for consolidation and the decline of mom-and-pop, brick and mortar businesses resulting in increases in income inequality. A 2013 Institute for Local Self-Reliance report states that brick-and-mortar retailers employ 47 people for every $10 million in sales, while Amazon employs only 14. Similarly, the 700-employee room rental start-up Airbnb was valued at $10 billion in 2014, about half as much as Hilton Worldwide, which employs 152,000 people. At that time, Uber employed 1,000 full-time employees and was valued at $18.2 billion, about the same valuation as Avis Rent a Car and The Hertz Corporation combined, which together employed almost 60,000 people. Advertising on popular web pages can be lucrative, and e-commerce, the sale of products and services directly via the Web, continues to grow. Online advertising is a form of marketing and advertising which uses the Internet to deliver promotional marketing messages to consumers. It includes email marketing, search engine marketing (SEM), social media marketing, many types of display advertising (including web banner advertising), and mobile advertising. In 2011, Internet advertising revenues in the United States surpassed those of cable television and nearly exceeded those of broadcast television. Many common online advertising practices are controversial and increasingly subject to regulation. The Internet has achieved new relevance as a political tool. The presidential campaign of Howard Dean in 2004 in the United States was notable for its success in soliciting donations via the Internet. Many political groups use the Internet to achieve a new method of organizing for carrying out their mission, giving rise to Internet activism. Social media websites, such as Facebook and Twitter, helped people organize the Arab Spring, by helping activists organize protests, communicate grievances, and disseminate information. Many have understood the Internet as an extension of the Habermasian notion of the public sphere, observing how network communication technologies provide something like a global civic forum. However, incidents of politically motivated Internet censorship have now been recorded in many countries, including western democracies. E-government is the use of technological communications devices, such as the Internet, to provide public services to citizens and other persons in a country or region. E-government offers opportunities for more direct and convenient citizen access to government and for government provision of services directly to citizens. Cybersectarianism is a new organizational form that involves: highly dispersed small groups of practitioners that may remain largely anonymous within the larger social context and operate in relative secrecy, while still linked remotely to a larger network of believers who share a set of practices and texts, and often a common devotion to a particular leader. Overseas supporters provide funding and support; domestic practitioners distribute tracts, participate in acts of resistance, and share information on the internal situation with outsiders. Collectively, members and practitioners of such sects construct viable virtual communities of faith, exchanging personal testimonies and engaging in collective study via email, online chat rooms, and web-based message boards.
In particular, the British government has raised concerns about the prospect of young British Muslims being indoctrinated into Islamic extremism by material on the Internet, being persuaded to join terrorist groups such as the so-called "Islamic State", and then potentially committing acts of terrorism on returning to Britain after fighting in Syria or Iraq.[citation needed] Applications and services The Internet carries many applications and services, most prominently the World Wide Web, including social media, electronic mail, mobile applications, multiplayer online games, Internet telephony, file sharing, and streaming media services. The World Wide Web is a global collection of documents, images, multimedia, applications, and other resources, logically interrelated by hyperlinks and referenced with Uniform Resource Identifiers (URIs), which provide a global system of named references. URIs symbolically identify services, web servers, databases, and the documents and resources that they can provide. HyperText Transfer Protocol (HTTP) is the main access protocol of the World Wide Web. Web services also use HTTP for communication between software systems, for information transfer, and for sharing and exchanging business data and logistics; HTTP is one of many protocols that can be used for communication on the Internet. World Wide Web browser software, such as Microsoft Edge, Mozilla Firefox, Opera, Apple's Safari, and Google Chrome, enables users to navigate from one web page to another via the hyperlinks embedded in the documents. These documents may also contain computer data, including graphics, sounds, text, video, multimedia and interactive content. Client-side scripts can include animations, games, office applications and scientific demonstrations. Email is an important communications service available via the Internet. The concept of sending electronic text messages between parties, analogous to mailing letters or memos, predates the creation of the Internet. Internet telephony is a common communications service realized with the Internet. The name of the principal internetworking protocol, the Internet Protocol, lends its name to voice over Internet Protocol (VoIP).[citation needed] VoIP systems now dominate many markets, being as easy and convenient as a traditional telephone, while having substantial cost savings, especially over long distances. File sharing is the practice of transferring large amounts of data in the form of computer files across the Internet, for example via file servers. The load of bulk downloads to many users can be eased by the use of "mirror" servers or peer-to-peer networks. Access to the file may be controlled by user authentication, the transit of the file over the Internet may be obscured by encryption, and money may change hands for access to the file. The price can be paid by the remote charging of funds from, for example, a credit card whose details are also passed—usually fully encrypted—across the Internet. The origin and authenticity of the file received may be checked by a digital signature. Governance The Internet is a global network that comprises many voluntarily interconnected autonomous networks. It operates without a central governing body. The technical underpinning and standardization of the core protocols (IPv4 and IPv6) is an activity of the Internet Engineering Task Force (IETF), a non-profit organization of loosely affiliated international participants that anyone may associate with by contributing technical expertise.
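The URI-to-HTTP flow described above can be sketched with Python's standard library; this is a minimal illustration (the URL is a stand-in, and real code would add error handling):

    # Fetch a document over HTTP, the Web's main access protocol.
    from urllib.request import urlopen

    with urlopen("http://example.com/") as response:   # issues an HTTP GET for the URI
        print(response.status)                         # e.g. 200
        print(response.headers["Content-Type"])        # e.g. text/html; charset=UTF-8
        print(response.read(80))                       # first bytes of the document itself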
While the hardware components in the Internet infrastructure can often be used to support other software systems, it is the design and the standardization process of the software that characterizes the Internet and provides the foundation for its scalability and success. The responsibility for the architectural design of the Internet software systems has been assumed by the IETF. The IETF conducts standard-setting working groups, open to any individual, on the various aspects of Internet architecture. The resulting contributions and standards are published as Request for Comments (RFC) documents on the IETF web site. The principal methods of networking that enable the Internet are contained in specially designated RFCs that constitute the Internet Standards. Other less rigorous documents are simply informative, experimental, or historical, or document the best current practices when implementing Internet technologies. To maintain interoperability, the principal name spaces of the Internet are administered by the Internet Corporation for Assigned Names and Numbers (ICANN). ICANN is governed by an international board of directors drawn from across the Internet technical, business, academic, and other non-commercial communities. The organization coordinates the assignment of unique identifiers for use on the Internet, including domain names, IP addresses, application port numbers in the transport protocols, and many other parameters. Globally unified name spaces are essential for maintaining the global reach of the Internet. This role of ICANN distinguishes it as perhaps the only central coordinating body for the global Internet. The National Telecommunications and Information Administration, an agency of the United States Department of Commerce, had final approval over changes to the DNS root zone until the IANA stewardship transition on 1 October 2016. Regional Internet registries (RIRs) were established for five regions of the world to assign IP address blocks and other Internet parameters to local registries, such as Internet service providers, from a designated pool of addresses set aside for each region: AFRINIC for Africa, ARIN for North America, APNIC for the Asia-Pacific region, LACNIC for Latin America and the Caribbean, and RIPE NCC for Europe, the Middle East, and Central Asia.[citation needed] The Internet Society (ISOC) was founded in 1992 with a mission to "assure the open development, evolution and use of the Internet for the benefit of all people throughout the world". Its members include individuals as well as corporations, organizations, governments, and universities. Among other activities ISOC provides an administrative home for a number of less formally organized groups that are involved in developing and managing the Internet, including: the Internet Engineering Task Force (IETF), Internet Architecture Board (IAB), Internet Engineering Steering Group (IESG), Internet Research Task Force (IRTF), and Internet Research Steering Group (IRSG). On 16 November 2005, the United Nations-sponsored World Summit on the Information Society in Tunis established the Internet Governance Forum (IGF) to discuss Internet-related issues.[citation needed] Infrastructure The communications infrastructure of the Internet consists of its hardware components and a system of software layers that control various aspects of the architecture. As with any computer network, the Internet physically consists of routers, media (such as cabling and radio links), repeaters, and modems. However, as an example of internetworking, many of the network nodes are not necessarily Internet equipment per se.
Internet packets are carried by other full-fledged networking protocols, with the Internet acting as a homogeneous networking standard, running across heterogeneous hardware, with the packets guided to their destinations by IP routers.[citation needed] Internet service providers (ISPs) establish worldwide connectivity between individual networks at various levels of scope. At the top of the routing hierarchy are the tier 1 networks, large telecommunication companies that exchange traffic directly with each other via very high speed fiber-optic cables, governed by peering agreements. Tier 2 and lower-level networks buy Internet transit from other providers to reach at least some parties on the global Internet, though they may also engage in peering. End-users who only access the Internet when needed to perform a function or obtain information represent the bottom of the routing hierarchy.[citation needed] An ISP may use a single upstream provider for connectivity, or implement multihoming to achieve redundancy and load balancing. Internet exchange points are major traffic exchanges with physical connections to multiple ISPs. Large organizations, such as academic institutions, large enterprises, and governments, may perform the same function as ISPs, engaging in peering and purchasing transit on behalf of their internal networks. Research networks tend to interconnect with large subnetworks such as GEANT, GLORIAD, Internet2, and the UK's national research and education network, JANET.[citation needed] Common methods of Internet access by users include broadband over coaxial cable, fiber optics or copper wires, Wi-Fi, satellite, and cellular telephone technology.[citation needed] Grassroots efforts have led to wireless community networks. Commercial Wi-Fi services that cover large areas are available in many cities, such as New York, London, Vienna, Toronto, San Francisco, Philadelphia, Chicago and Pittsburgh. Most servers that provide internet services are today hosted in data centers, and content is often accessed through high-performance content delivery networks. Colocation centers often host private peering connections between their customers, internet transit providers, cloud providers, meet-me rooms for connecting customers together, Internet exchange points, and landing points and terminal equipment for fiber optic submarine communication cables that connect the internet. Internet Protocol Suite The Internet standards describe a framework known as the Internet protocol suite (also called TCP/IP, based on its first two components). This is a suite of protocols that are ordered into a set of four conceptual layers by the scope of their operation, originally documented in RFC 1122 and RFC 1123: the application layer, transport layer, internet layer, and link layer.[citation needed] The most prominent component of the Internet model is the Internet Protocol. IP enables internetworking, essentially establishing the Internet itself. Two versions of the Internet Protocol exist, IPv4 and IPv6.[citation needed] Aside from the complex array of physical connections that make up its infrastructure, the Internet is facilitated by bi- or multi-lateral commercial contracts (e.g., peering agreements), and by technical specifications or protocols that describe the exchange of data over the network.[citation needed] For locating individual computers on the network, the Internet provides IP addresses. IP addresses are used by the Internet infrastructure to direct internet packets to their destinations.
They consist of fixed-length numbers, which are found within the packet. IP addresses are generally assigned to equipment either automatically via Dynamic Host Configuration Protocol (DHCP) or configured manually.[citation needed] The Domain Name System (DNS) converts user-entered domain names (e.g. "en.wikipedia.org") into IP addresses.[citation needed] Internet Protocol version 4 (IPv4) defines an IP address as a 32-bit number. IPv4 is the initial version used on the first generation of the Internet and is still in dominant use. It was designed in 1981 to address up to ≈4.3 billion (10⁹) hosts. However, the explosive growth of the Internet has led to IPv4 address exhaustion, which entered its final stage in 2011, when the global IPv4 address allocation pool was exhausted. Because of the growth of the Internet and the depletion of available IPv4 addresses, a new version of IP, IPv6, was developed in the mid-1990s, which provides vastly larger addressing capabilities and more efficient routing of Internet traffic. IPv6 uses 128 bits for the IP address and was standardized in 1998. IPv6 deployment has been ongoing since the mid-2000s and is growing around the world, as Internet address registries have urged all resource managers to plan rapid adoption and conversion. By design, IPv6 is not directly interoperable with IPv4. Instead, it establishes a parallel version of the Internet not directly accessible with IPv4 software. Thus, translation facilities exist for internetworking, and some nodes have duplicate networking software for both networks. Essentially all modern computer operating systems support both versions of the Internet Protocol.[citation needed] Network infrastructure, however, has been lagging in this development.[citation needed] A subnet or subnetwork is a logical subdivision of an IP network. Computers that belong to a subnet are addressed with an identical most-significant bit-group in their IP addresses. This results in the logical division of an IP address into two fields, the network number or routing prefix and the rest field or host identifier. The rest field is an identifier for a specific host or network interface.[citation needed] The routing prefix may be expressed in Classless Inter-Domain Routing (CIDR) notation written as the first address of a network, followed by a slash character (/), and ending with the bit-length of the prefix. For example, 198.51.100.0/24 is the prefix of the Internet Protocol version 4 network starting at the given address, having 24 bits allocated for the network prefix, and the remaining 8 bits reserved for host addressing. Addresses in the range 198.51.100.0 to 198.51.100.255 belong to this network. The IPv6 address specification 2001:db8::/32 is a large address block with 2⁹⁶ addresses, having a 32-bit routing prefix.[citation needed] For IPv4, a network may also be characterized by its subnet mask or netmask, which is the bitmask that when applied by a bitwise AND operation to any IP address in the network, yields the routing prefix. Subnet masks are also expressed in dot-decimal notation like an address. For example, 255.255.255.0 is the subnet mask for the prefix 198.51.100.0/24.[citation needed] Computers and routers use routing tables in their operating system to forward IP packets to reach a node on a different subnetwork. Routing tables are maintained by manual configuration or automatically by routing protocols.
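The prefix and netmask arithmetic described above can be checked against the article's own example network with Python's standard-library ipaddress module (a sketch, not part of the source):

    import ipaddress

    net = ipaddress.ip_network("198.51.100.0/24")
    print(net.netmask)                 # 255.255.255.0
    print(net.num_addresses)           # 256 = 2**8 (8 host bits)

    host = ipaddress.ip_address("198.51.100.14")
    print(host in net)                 # True: same 24-bit routing prefix

    # The netmask yields the routing prefix via a bitwise AND:
    print(ipaddress.ip_address(int(host) & int(net.netmask)))   # 198.51.100.0

    # The IPv6 example block above spans 2**96 addresses:
    print(ipaddress.ip_network("2001:db8::/32").num_addresses == 2**96)   # True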
End-nodes typically use a default route that points toward an ISP providing transit, while ISP routers use the Border Gateway Protocol to establish the most efficient routing across the complex connections of the global Internet.[citation needed] The default gateway is the node that serves as the forwarding host (router) to other networks when no other route specification matches the destination IP address of a packet. Security Internet resources, hardware, and software components are the target of criminal or malicious attempts to gain unauthorized control to cause interruptions, commit fraud, engage in blackmail or access private information. Malware is malicious software used and distributed via the Internet. It includes computer viruses which are copied with the help of humans, computer worms which copy themselves automatically, software for denial of service attacks, ransomware, botnets, and spyware that reports on the activity and typing of users.[citation needed] Usually, these activities constitute cybercrime. Defense theorists have also speculated about the possibilities of hackers using similar methods to wage cyber warfare on a large scale. Malware poses serious problems to individuals and businesses on the Internet. According to Symantec's 2018 Internet Security Threat Report (ISTR), the number of malware variants increased to 669,947,865 in 2017, twice as many as in 2016. Cybercrime, which includes malware attacks as well as other crimes committed by computer, was predicted to cost the world economy US$6 trillion in 2021, and is increasing at a rate of 15% per year. Since 2021, malware has been designed to target computer systems that run critical infrastructure such as the electricity distribution network. Malware can be designed to evade antivirus software detection algorithms. The vast majority of computer surveillance involves the monitoring of data and traffic on the Internet. In the United States for example, under the Communications Assistance For Law Enforcement Act, all phone calls and broadband Internet traffic (emails, web traffic, instant messaging, etc.) are required to be available for unimpeded real-time monitoring by Federal law enforcement agencies. Under the Act, all U.S. telecommunications providers are required to install packet sniffing technology to allow Federal law enforcement and intelligence agencies to intercept all of their customers' broadband Internet and VoIP traffic. The large amount of data gathered from packet capture requires surveillance software that filters and reports relevant information, such as the use of certain words or phrases, the access to certain types of web sites, or communicating via email or chat with certain parties. Agencies, such as the Information Awareness Office, NSA, GCHQ and the FBI, spend billions of dollars per year to develop, purchase, implement, and operate systems for interception and analysis of data. Similar systems are operated by Iranian secret police to identify and suppress dissidents. The required hardware and software were allegedly installed by German Siemens AG and Finnish Nokia. Some governments, such as those of Myanmar, Iran, North Korea, Mainland China, Saudi Arabia and the United Arab Emirates, restrict access to content on the Internet within their territories, especially to political and religious content, with domain name and keyword filters.
In Norway, Denmark, Finland, and Sweden, major Internet service providers have voluntarily agreed to restrict access to sites listed by authorities. While this list of forbidden resources is supposed to contain only known child pornography sites, the content of the list is secret. Many countries, including the United States, have enacted laws against the possession or distribution of certain material, such as child pornography, via the Internet but do not mandate filter software. Many free or commercially available software programs, called content-control software, are available to users to block access to offensive content on individual computers or networks, in order to limit children's access to pornographic material or depictions of violence.[citation needed] Performance As the Internet is a heterogeneous network, its physical characteristics, including, for example, the data transfer rates of connections, vary widely. It exhibits emergent phenomena that depend on its large-scale organization. [Graph: global Internet traffic volume in petabytes per month, 1990–2015.] The volume of Internet traffic is difficult to measure because no single point of measurement exists in the multi-tiered, non-hierarchical topology. Traffic data may be estimated from the aggregate volume through the peering points of the Tier 1 network providers, but traffic that stays local in large provider networks may not be accounted for.[citation needed] An Internet blackout or outage can be caused by local signaling interruptions. Disruptions of submarine communications cables may cause blackouts or slowdowns to large areas, such as in the 2008 submarine cable disruption. Less-developed countries are more vulnerable due to the small number of high-capacity links. Land cables are also vulnerable, as in 2011 when a woman digging for scrap metal severed most connectivity for the nation of Armenia. Internet blackouts affecting almost entire countries can be achieved by governments as a form of Internet censorship, as in the blockage of the Internet in Egypt, whereby approximately 93% of networks were without access in 2011 in an attempt to stop mobilization for anti-government protests. Estimates of the Internet's electricity usage have been the subject of controversy, according to a 2014 peer-reviewed research paper that found claims differing by a factor of 20,000 published in the literature during the preceding decade, ranging from 0.0064 kilowatt hours per gigabyte transferred (kWh/GB) to 136 kWh/GB. The researchers attributed these discrepancies mainly to the year of reference (i.e. whether efficiency gains over time had been taken into account) and to whether "end devices such as personal computers and servers are included" in the analysis. In 2011, academic researchers estimated the overall energy used by the Internet to be between 170 and 307 GW, less than two percent of the energy used by humanity. This estimate included the energy needed to build, operate, and periodically replace the estimated 750 million laptops, a billion smart phones and 100 million servers worldwide as well as the energy that routers, cell towers, optical switches, Wi-Fi transmitters and cloud storage devices use when transmitting Internet traffic. According to a non-peer-reviewed study published in 2018 by The Shift Project (a French think tank funded by corporate sponsors), nearly 4% of global CO2 emissions could be attributed to global data transfer and the necessary infrastructure.
The study also said that online video streaming alone accounted for 60% of this data transfer and therefore contributed to over 300 million tons of CO2 emissions per year, and argued for new "digital sobriety" regulations restricting the use and size of video files.
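To make the spread in the kWh/GB estimates above concrete, here is a back-of-the-envelope comparison (the 3 GB figure for roughly an hour of HD video is my own assumption, not from the source):

    # Energy implied by the two extreme intensity estimates cited above.
    LOW_KWH_PER_GB, HIGH_KWH_PER_GB = 0.0064, 136.0

    gigabytes = 3.0   # assumed size of roughly one hour of HD video
    print(f"low estimate:  {gigabytes * LOW_KWH_PER_GB:.4f} kWh")    # ~0.02 kWh
    print(f"high estimate: {gigabytes * HIGH_KWH_PER_GB:.1f} kWh")   # ~408 kWh
    print(f"spread: {HIGH_KWH_PER_GB / LOW_KWH_PER_GB:,.0f}x")       # ~21,250x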
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/V1149_Orionis] | [TOKENS: 669] |
Contents V1149 Orionis HD 37824 is a spectroscopic binary star system in the constellation of Orion. It has the variable-star designation V1149 Orionis (abbreviated to V1149 Ori). With an apparent magnitude of 6.59, it is near the limit for naked eye observation from Earth, faintly visible as an orange-hued dot of light under dark skies. It is located approximately 492 light-years (151 parsecs) distant according to Gaia DR3 parallax measurements, and is moving further away at a heliocentric radial velocity of 26.90 km/s. Stellar properties HD 37824 is a single-lined spectroscopic binary, meaning only the light from the luminous primary can be observed in the system's spectra. The two stars orbit each other in a circular orbit (eccentricity 0.0) with a period of 53.57 days. The star features prominent starspots, which are known to display the flip-flop effect; other stars that show this effect include FK Comae Berenices and HD 181809. The primary star (HD 37824 A) is a chromospherically active K-type giant star in the core helium burning phase. It has a radius of 12.6 R☉ and evolutionary models predict that its mass is 1.5–2.5 M☉. It is radiating 67±23 times the luminosity of the Sun from its photosphere. The unseen secondary, B, is estimated to have a mass of 1.10 M☉ if the orbital inclination is 90°, or 0.95–1.27 M☉ with an inclination of 60°, which makes it likely to be a late-F-type or G-type main-sequence star. Observational history In 1973, astronomers William P. Bidelman and Darrell Jack MacConnell reported the detection of Ca II H & K emission lines in the spectra of HD 37824. As such, Douglas S. Hall et al. suspected it to be an RS Canum Venaticorum variable. As expected, in 1983, the star was shown to vary in brightness by 0.11 magnitudes, with photometric and orbital periods of 52.6 and 53.6 days, respectively. It was given its variable star designation in 1985. The starspots on the surface of the primary star, which are thought to cause the variability, were analyzed using photometric data taken between late 1978 and early 1990. The results were published in 1991, identifying six starspots, which each made the star dim by about 0.1 to 0.3 magnitudes and lasted for several years. The same study refined the orbital period to 53.58±0.02 days. Observations in 1992 showed a large excess of Hα emission alongside strong Ca II H & K and Hε emission lines. A follow-up study in 1997 reported a lower but still strong Hα emission, as well as a clear emission line from singly ionized helium revealed by spectral subtraction. Additional observations in 2000 discovered high variability in the profile of the Hα line.
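As a rough consistency check on the radius and luminosity quoted above (my own arithmetic assuming the Stefan-Boltzmann law, not a value from the source), the implied effective temperature is indeed typical of a K-type giant:

    # T_eff from L = 4*pi*R^2*sigma*T^4, worked in solar units.
    T_SUN = 5772.0    # solar effective temperature in kelvin (IAU nominal value)

    L = 67.0          # luminosity in solar units (article value)
    R = 12.6          # radius in solar units (article value)

    T_eff = T_SUN * L**0.25 / R**0.5
    print(f"T_eff ≈ {T_eff:.0f} K")   # ≈ 4650 K, consistent with a K-type giant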
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Nordic_alien] | [TOKENS: 810] |
Contents Nordic aliens In ufology and the study of alleged extraterrestrial beings and lifeforms visiting Earth, "Nordics", "Nordic aliens" or "Tall Whites" are among the names given to one of several purported humanoid races hailing from the Pleiades star cluster (i.e., Pleiadians), as they reportedly share superficial similarities with "Nordic", Germanic, or Scandinavian humans. Alleged contactees describe Nordics as being somewhat taller than the average human, standing roughly 6–7 ft (1.8–2.1 m) in height (with proportionate weight), and showing stereotypically "European" or "White" features, such as long, straight blond hair, blue eyes, and fair skin. The skin tone has also been reported by individuals who say they have seen such beings as being a pale blue-grey or pastel purple.[citation needed] In the 1950s, George Adamski, a Polish-American ufologist, was among the first to publicly report his alleged contact with Nordic beings. Scholars note that the mythology of extraterrestrial visitations from such beings (with physical features superficially described as "Aryan") often makes mention of telepathy, benevolence, and physical beauty and grace; however, many purported alien and extraterrestrial encounters also involve some degree of telepathy as the primary means of communication with human beings. History Cultural historian David J. Skal wrote that early stories of Nordic-type aliens may have been partially inspired by the 1951 film The Day the Earth Stood Still, in which an extraterrestrial arrives on Earth to warn humanity about the dangers of atomic weapons. Bates College professor Stephanie Kelley-Romano described alien abduction beliefs as "a living myth", and notes that, among believers, Nordic aliens "are often associated with spiritual growth and love and act as protectors for the experiencers." In contactee and ufology literature, Nordic aliens are often described as benevolent or even "magical" beings who want to observe and communicate with humans and are concerned about the Earth's ecology or prospects for world peace. Believers also ascribe telepathic powers to Nordic aliens, and describe them as "paternal, watchful, smiling, affectionate, and youthful". During the 1950s, many people alleging to be contactees, especially those in Europe, claimed encounters with beings fitting this description. Such claims became relatively less common in subsequent decades, as the grey alien supplanted the Nordic in most alleged accounts of extraterrestrial encounters. Publications from people who claim to have been contacted and the topic in popular culture Books claiming personal contact with Nordic aliens include George Adamski's Flying Saucers Have Landed and Inside the Space Ships, Howard Menger's From Outer Space to You, Travis Walton's The Walton Experience, and Charles James Hall's Millennial Hospitality (adapted into the 2020 film Walking with the Tall Whites). The UFO religion Universe People describes a variety of such interactions, published as "Talks with Teachings from my Cosmic Friends". The Brazilian science fiction novella "Major Atlas - Uma Novela Sobre Alienígenas Nórdicos" stands out in the science fiction landscape for its deep and extensive exploration of the Nordic alien (Pleiadian) theme, arguably more so than any other work of fiction. The narrative centers on a police officer who undergoes an extraordinary transformation. After being exposed to a mysterious cosmic essence, he is endowed with superhuman abilities strikingly similar to those of Superman.
As the plot unfolds, the protagonist finds himself involved in a complex web of deceit, realizing he is a pawn in a larger, manipulated narrative, and slowly uncovers the secrets of the Pleiadian beings; the story highlights themes of manipulation, hidden power structures, and the nature of reality. The book uses the Nordic alien mythology not merely as a backdrop but as a fundamental, driving force of its central conflict and character development.
======================================== |