[SOURCE: https://en.wikipedia.org/wiki/CherryPy] | [TOKENS: 232] |
CherryPy

CherryPy is an object-oriented web application framework using the Python programming language. It is designed for rapid development of web applications by wrapping the HTTP protocol, but it stays at a low level and does not offer much more than what is defined in RFC 7231. CherryPy can be a web server itself, or one can launch it via any WSGI-compatible environment. It does not deal with tasks such as templating for output rendering or backend access. The framework is extensible with filters, which are called at defined points in the request/response processing.

Pythonic interface

One of the goals of the project founder, Remi Delon, was to make CherryPy as pythonic as possible. This allows the developer to use the framework as any regular Python module and to forget (from a technical point of view) that the application is for the web. For instance, the common Hello World program with CherryPy 3 would look like the example sketched below.

Features

CherryPy implements, among other features, a fast, HTTP/1.1-compliant, WSGI thread-pooled web server. CherryPy does not force you to use a specific object-relational mapper (ORM), template language, or JavaScript library. The CherryPy wiki helps with choosing a templating language.
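A minimal sketch of such a CherryPy 3 "Hello World" application (the class name and greeting text here are illustrative; cherrypy.expose and cherrypy.quickstart are the framework's standard means of publishing a page handler and starting the built-in server):

import cherrypy

class HelloWorld:
    # Methods decorated with @cherrypy.expose are published as URLs;
    # index() handles requests to the site root ("/").
    @cherrypy.expose
    def index(self):
        return "Hello World!"

if __name__ == "__main__":
    # Mount the application at "/" and start CherryPy's built-in HTTP
    # server (by default at http://127.0.0.1:8080).
    cherrypy.quickstart(HelloWorld())

Running the script and visiting http://127.0.0.1:8080/ returns the greeting; nothing in the class refers to HTTP directly, which is the pythonic quality described above.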
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/2nd_millennium_BC] | [TOKENS: 1305] |
2nd millennium BC

The 2nd millennium BC spanned the years 2000 BC to 1001 BC. In the Ancient Near East, it marks the transition from the Middle to the Late Bronze Age. The Ancient Near Eastern cultures are well within the historical era. The first half of the millennium is dominated by the Middle Kingdom of Egypt and Babylonia. The alphabet develops. At the center of the millennium, a new order emerges with Mycenaean Greek dominance of the Aegean and the rise of the Hittite Empire. The end of the millennium sees the Bronze Age collapse and the transition to the Iron Age. Other regions of the world are still in the prehistoric period. In Europe, the Beaker culture introduces the Bronze Age, presumably associated with Indo-European expansion. The Indo-Iranian expansion reaches the Iranian plateau and the Indian subcontinent (Vedic India), propagating the use of the chariot. Mesoamerica enters the Pre-Classic (Olmec) period. North America is in the late Archaic stage. In Maritime Southeast Asia, the Austronesian expansion reaches Micronesia. In Sub-Saharan Africa, the Bantu expansion begins. World population rose steadily, possibly surpassing the 100 million mark for the first time.

The world in the 2nd millennium BC

History

See the article on the chronology of the ancient Near East for a discussion of the accuracy and resolution of dates for events of the 2nd millennium BC in the Near East. Spending much of their energies trying to recover from the chaotic situation that existed at the turn of the millennium, the most powerful civilizations of the time, Egypt and Mesopotamia, turned their attention to more modest goals. The Pharaohs of the Middle Kingdom of Egypt and their contemporary Kings of Babylon, of Amorite origin, brought governance that was largely popular and approved of among their subjects, and favoured elegant art and architecture. Farther east, the Indus Valley Civilization was in a period of decline, possibly as a result of intense, ruinous flooding. Egyptian and Babylonian military tactics were still based on foot soldiers transporting their equipment on donkeys. Combined with a weak economy and difficulty in maintaining order, this was a fragile situation that crumbled under the pressure of external forces they could not oppose. About a century before the middle of the millennium, bands of Indo-European invaders came from the Central Asian plains and swept through Western Asia and Northeast Africa, riding fast two-wheeled chariots drawn by horses, a system of weaponry developed earlier in the context of plains warfare. This tool of war was unknown among the classical civilizations, and Egypt's and Babylonia's foot soldiers were unable to defend against the invaders: in 1630 BC, the Hyksos swept into the Nile Delta, and in 1595 BC, the Hittites swept into Mesopotamia. The peoples in place were quick to adapt to the new tactics, and a new international situation resulted from the change. Though during most of the second half of the 2nd millennium BC several regional powers competed relentlessly for hegemony, many developments occurred: there was a new emphasis on grandiose architecture, new clothing fashions, vivid diplomatic correspondence on clay tablets, and renewed economic exchanges, and the New Kingdom of Egypt played the role of the main superpower. Among the great states of the time, only Babylon refrained from taking part in battles, mainly due to its new position as the world's religious and intellectual capital.
The Bronze Age civilization, in its final period, displayed all its characteristic social traits: a low level of urbanization, small cities centered on temples or royal palaces, a strict separation of classes between an illiterate mass of peasants and craftsmen and a powerful military elite, knowledge of writing and education reserved to a tiny minority of scribes, and a pronounced aristocratic life. Near the end of the 2nd millennium BC, new waves of barbarians, this time riding on horseback, wholly destroyed the Bronze Age world, and were to be followed by waves of social changes that marked the beginning of different times. Also contributing to the changes were the Sea Peoples, seafaring raiders of the Mediterranean.

Prehistoric cultures

Europe is still entirely within the prehistoric era; much of Europe enters the Bronze Age early in the 2nd millennium. The desiccation of the Sahara is complete. The Neolithisation of Sub-Saharan Africa is initiated via expansion from the dried Sahara, reaching West and East Africa. Later in the 2nd millennium, pastoralism and iron metallurgy spread to Central Africa via the Bantu migration.

Languages

In the history of the Egyptian language, the early 2nd millennium saw a transition from Old Egyptian to Middle Egyptian. As the most used written form of the Ancient Egyptian language, it is frequently (incorrectly) referred to simply as "Hieroglyphics". The earliest attested Indo-European language, the Hittite language, first appears in cuneiform in the 16th century BC (Anitta text), before disappearing from records in the 13th century BC. Hittite is the best known and most studied language of the extinct Anatolian branch of the Indo-European languages. The first Northwest Semitic language, Ugaritic, is attested in the 14th century BC. The first fully phonemic script, Proto-Canaanite, developed from Egyptian hieroglyphs, becoming the Phoenician alphabet by 1200 BC. The Phoenician alphabet was spread throughout the Mediterranean by Phoenician maritime traders and became one of the most widely used writing systems in the world, and the parent of virtually all alphabetic writing systems. Phoenician is also the first of the Canaanite languages, the Northwest Semitic languages spoken by the ancient peoples of the Canaan region: the Israelites, Phoenicians, Amorites, Ammonites, Moabites and Edomites. Mycenaean Greek, the most ancient attested form of the Greek language, was used on the Greek mainland, Crete and Cyprus in the Mycenaean period.
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Geography_of_the_United_States] | [TOKENS: 5236] |
Geography of the United States

The term "United States," when used in the geographic sense, refers to the contiguous United States (sometimes referred to as the Lower 48, and including the District of Columbia, which is not a state), Alaska, Hawaii, the five insular territories (Puerto Rico, the Northern Mariana Islands, the U.S. Virgin Islands, Guam, and American Samoa), and minor outlying possessions. The United States shares land borders with Canada and Mexico and maritime borders with Russia, Cuba, the Bahamas, and many other countries, mainly in the Caribbean,[note 2] in addition to Canada and Mexico. The northern border of the United States with Canada is the world's longest bi-national land border. The state of Hawaii is physiographically and ethnologically part of the Polynesian subregion of Oceania. U.S. territories are located in the Pacific Ocean and the Caribbean.

Area

From 1989 through 1996, the total area of the US was listed as 9,372,610 km2 (3,618,780 sq mi) (land and inland water only). The listed total area changed to 9,629,091 km2 (3,717,813 sq mi) in 1997 (Great Lakes area and coastal waters added), to 9,631,418 km2 (3,718,711 sq mi) in 2004, to 9,631,420 km2 (3,718,710 sq mi) in 2006, and to 9,826,630 km2 (3,794,080 sq mi) in 2007 (territorial waters added). Currently, the CIA World Factbook gives 9,833,517 km2 (3,796,742 sq mi), the United Nations Statistics Division gives 9,629,091 km2 (3,717,813 sq mi), and the Encyclopedia Britannica gives 9,522,055 km2 (3,676,486 sq mi) (Great Lakes area included but not coastal waters). These sources consider only the 50 states and the Federal District and exclude overseas territories. The US has the world's second-largest Exclusive Economic Zone, of 11,351,000 km2 (4,383,000 sq mi). By total area (water as well as land), the United States is either slightly larger or smaller than the People's Republic of China, making it the world's third- or fourth-largest country. Both countries are smaller than Russia and Canada in total area but are larger than Brazil. By land area only (exclusive of waters), the United States is the world's third-largest country, after Russia and China, with Canada fourth. Whether the US or China is the third-largest country by total area depends on two factors: (1) the validity of China's claim on Aksai Chin and the Trans-Karakoram Tract (both of these territories are also claimed by India, so they are not counted); and (2) how the US calculates its surface area. Since the initial publishing of the World Factbook, the CIA has updated the total area of the United States several times. (The paired km2 and sq mi figures above are related by a fixed unit conversion; see the sketch at the end of this entry.)

General characteristics

The United States shares land borders with Canada to the north and Mexico to the south, a territorial water border with Russia in the northwest, and two territorial water borders in the southeast, between Florida and Cuba and between Florida and the Bahamas. The contiguous 48 states are otherwise bounded by the Pacific Ocean on the west, the Atlantic Ocean on the east, and the Gulf of Mexico to the southeast. Alaska borders the Pacific Ocean to the south and southwest, the Bering Strait to the west, and the Arctic Ocean to the north; Hawaii lies far to the southwest of the mainland in the Pacific Ocean. Forty-eight of the states are in the single region between Canada and Mexico. This group is referred to, with varying precision and formality, as the contiguous United States and as the "Lower 48". Alaska, which is included in the term "continental United States", is located at the northwestern end of North America.
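A minimal sketch of the unit conversion relating the area figures above (the constant's precision and the rounding are illustrative; small discrepancies against published figures reflect more precise factors):

# Relate the paired area figures: 1 square kilometre ~= 0.386102 square miles.
KM2_TO_SQMI = 0.386102

def km2_to_sqmi(area_km2: float) -> float:
    # Multiply by the conversion factor; callers round as needed.
    return area_km2 * KM2_TO_SQMI

if __name__ == "__main__":
    cia_total_km2 = 9_833_517  # CIA World Factbook total area, in km2
    # Prints ~3,796,741; the Factbook's published 3,796,742 sq mi
    # reflects a more precise conversion factor.
    print(f"{km2_to_sqmi(cia_total_km2):,.0f} sq mi")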
The nation's capital city, Washington, D.C., was established in 1800 after the capital was relocated there from Philadelphia. It was established as a federal district on land donated by the state of Maryland; Virginia also donated land, but it was returned in 1849. The United States also has overseas territories with varying levels of autonomy and organization, including the Caribbean territories of Puerto Rico and the U.S. Virgin Islands (formerly known as the Danish Virgin Islands and purchased by the United States from Denmark in 1917, during World War I), the Pacific territories of American Samoa, Guam, and the Northern Mariana Islands, and several uninhabited island territories. Some of the territories were acquired as part of the territorial evolution of the United States or as a product of the nation's effort to gain access to the east. Nearly all of the United States is in the Northern Hemisphere, with the exception of American Samoa and Jarvis Island.

Physiographic divisions

Within the continental U.S. there are eight distinct physiographic divisions. These major divisions are the Laurentian Upland, the Atlantic Plain, the Appalachian Highlands, the Interior Plains, the Interior Highlands, the Rocky Mountain System, the Intermontane Plateaus, and the Pacific Mountain System.

The eastern United States has a varied topography. A broad, flat coastal plain lines the Atlantic and Gulf shores from the Texas-Mexico border to New York City, and includes the Florida peninsula. This broad coastal plain and its barrier islands make up the widest and longest beaches in the United States, much of them composed of soft, white sand. The Florida Keys are a string of coral islands that reach the southernmost city on the United States mainland at Key West in South Florida. Areas further inland feature rolling hills, mountains, and a diverse collection of temperate and subtropical moist and wet forests. Parts of interior Florida and South Carolina are also home to sandhill communities. The Appalachian Mountains form a line of low mountains separating the eastern seaboard from the Great Lakes and the Mississippi basins. New England features rocky seacoasts and rugged mountains with peaks up to 6,200 feet and valleys dotted with rivers and streams. Offshore islands dot the Atlantic and Gulf coasts. A recent global remote sensing analysis suggested that there were 6,622 km2 of tidal flats in the United States, making it the 4th-ranked country in terms of tidal flat area. The five Great Lakes are located in the north-central portion of the country, four of them forming part of the border with Canada; only Lake Michigan is situated entirely within the United States. The southeast United States, generally stretching from the Ohio River southwards, includes a variety of warm temperate and subtropical moist and wet forests, as well as warm temperate and subtropical dry forests nearer the Great Plains in the west of the region. West of the Appalachians lies the lush Mississippi River basin and two large eastern tributaries, the Ohio River and the Tennessee River. The Ohio and Tennessee valleys and the Midwest consist largely of rolling hills, interior highlands and small mountains, jungle-like marsh and swampland near the Ohio River, and productive farmland, stretching south to the Gulf Coast. The Midwest also has a vast number of cave systems.

The Great Plains lie west of the Mississippi River and east of the Rocky Mountains. A large portion of the country's agricultural products are grown in the Great Plains. Before their general conversion to farmland, the Great Plains were noted for their extensive grasslands, from tallgrass prairie in the eastern plains to shortgrass steppe in the western High Plains.
Elevation rises gradually from less than a few hundred feet near the Mississippi River to more than a mile high in the High Plains. The generally low relief of the plains is broken in several places, most notably in the Ozark and Ouachita Mountains, which form the U.S. Interior Highlands, the only major mountainous region between the Rocky Mountains and the Appalachian Mountains. The Great Plains come to an abrupt end at the Rocky Mountains.

The Rocky Mountains form a large portion of the Western U.S., entering from Canada and stretching nearly to Mexico. The Rocky Mountain region is the highest region of the United States by average elevation. The Rocky Mountains generally contain fairly mild slopes and wider peaks compared to some of the other great mountain ranges, with a few exceptions, including the Teton Range in Wyoming and the Sawatch Range in Colorado. The highest peaks of the Rockies are found in Colorado, the tallest being Mount Elbert at 14,440 ft (4,400 m). Instead of being one generally continuous and solid mountain range, the Rockies are broken up into several smaller intermittent ranges, forming a large series of basins and valleys.

West of the Rocky Mountains lie the Intermontane Plateaus, also known as the Intermountain West, a large, arid desert region lying between the Rockies and the Cascade and Sierra Nevada ranges. The large southern portion, known as the Great Basin, consists of salt flats, drainage basins, and many small north-south mountain ranges. The Southwest is predominantly a low-lying desert region. A portion known as the Colorado Plateau, centered on the Four Corners region, is considered to have some of the most spectacular scenery in the world. It is accentuated in such national parks as Grand Canyon, Arches, Mesa Verde and Bryce Canyon, among others. Other smaller Intermontane areas include the Columbia Plateau, which covers eastern Washington state, western Idaho and northeast Oregon, and the Snake River Plain in southern Idaho.

The Intermontane Plateaus come to an end at the Cascade Range and the Sierra Nevada. The Cascades consist of largely intermittent, volcanic mountains, many rising prominently from the surrounding landscape. The Sierra Nevada, further south, is a high, rugged, and dense mountain range. It contains the highest point in the contiguous 48 states, Mount Whitney (14,505 ft or 4,421 m), located at the boundary between California's Inyo and Tulare counties, just 84.6 mi or 136.2 km west-northwest of the lowest point in North America, the Badwater Basin in Death Valley National Park, at 279 ft or 85 m below sea level. These areas contain some spectacular scenery as well, as evidenced by such national parks as Yosemite and Mount Rainier. West of the Cascades and Sierra Nevada is a series of valleys, such as the Central Valley in California and the Willamette Valley in Oregon. Along the coast is a series of low mountain ranges known as the Pacific Coast Ranges.

Alaska contains some of the most dramatic scenery in the country. Tall, prominent mountain ranges rise up sharply from broad, flat tundra plains. On the islands off the south and southwest coast are many volcanoes. Hawaii, far to the south of Alaska in the Pacific Ocean, is a chain of tropical, volcanic islands, popular as a tourist destination for many from East Asia and the mainland United States. The territories of Puerto Rico and the U.S. Virgin Islands encompass a number of tropical isles in the northeastern Caribbean Sea.
In the Pacific Ocean the territories of Guam and the Northern Mariana Islands occupy the limestone and volcanic isles of the Mariana archipelago, and American Samoa (the only populated US territory in the southern hemisphere) encompasses volcanic peaks and coral atolls in the eastern part of the Samoan Islands chain.[note 3]

The Atlantic coast of the United States is low, with minor exceptions. The Appalachian Highland owes its oblique northeast-southwest trend to crustal deformations which in very early geological time gave a beginning to what later came to be the Appalachian Mountain system. This system had its climax of deformation so long ago (probably in Permian time) that it has since been very generally reduced to moderate or low relief. It owes its present-day altitude either to renewed elevations along the earlier lines or to the survival of the most resistant rocks as residual mountains. The oblique trend of this coast would be even more pronounced but for a comparatively modern crustal movement, causing a depression in the northeast resulting in an encroachment of the sea upon the land. Additionally, the southeastern section has undergone an elevation resulting in the advance of the land upon the sea. While the Atlantic coast is relatively low, the Pacific coast is, with few exceptions, hilly or mountainous. This coast has been defined chiefly by geologically recent crustal deformations, and hence still preserves a greater relief than that of the Atlantic. The low Atlantic coast and the hilly or mountainous Pacific coast foreshadow the leading features in the distribution of mountains within the United States. The east coast Appalachian system, originally forest covered, is relatively low and narrow and is bordered on the southeast and south by an important coastal plain. The Cordilleran system on the western side of the continent is lofty, broad and complicated, having two branches, the Rocky Mountain System and the Pacific Mountain System. In between these mountain systems lie the Intermontane Plateaus. Both the Columbia River and Colorado River rise far inland near the easternmost members of the Cordilleran system, and flow through plateaus and intermontane basins to the ocean. Heavy forests cover the northwest coast, but elsewhere trees are found only on the higher ranges below the Alpine region. The intermontane valleys, plateaus and basins range from treeless to desert, with the most arid region being in the southwest.

Climate

Due to its large size and wide range of geographic features, the United States contains examples of nearly every global climate. The climate is subtropical in the Southern United States, continental in the north, tropical in Hawaii and southern Florida, polar in Alaska, semiarid in the Great Plains west of the 100th meridian, Mediterranean in coastal California, and arid in the Great Basin and the Southwest. Its comparatively favorable agricultural climate contributed (in part) to the country's rise as a world power, with infrequent severe drought in the major agricultural regions, a general lack of widespread flooding, and a mainly temperate climate that receives adequate precipitation. The main influence on U.S. weather is the polar jet stream, which migrates northward into Canada in the summer months and then southward into the US in the winter months. The jet stream brings in large low-pressure systems from the northern Pacific Ocean that enter the US mainland over the Pacific Northwest.
The Cascade Range, Sierra Nevada, and Rocky Mountains pick up most of the moisture from these systems as they move eastward (the orographic effect), and the systems are greatly diminished by the time they reach the High Plains. Once they move over the Great Plains, uninterrupted flat land allows them to reorganize and can lead to major clashes of air masses. In addition, moisture from the Gulf of Mexico is often drawn northward. When combined with a powerful jet stream, this can lead to violent thunderstorms, especially during spring and summer. Sometimes during winter, these storms can combine with another low-pressure system as they move up the East Coast and into the Atlantic Ocean, where they intensify rapidly. These storms are known as nor'easters and often bring widespread, heavy rain, wind, and snowfall to New England. The uninterrupted grasslands of the Great Plains also lead to some of the most extreme climate swings in the world. Temperatures can rise or drop rapidly, winds can be extreme, and heat waves or Arctic air masses often advance uninterrupted through the plains.

The Great Basin and Columbia Plateau (the Intermontane Plateaus) are arid or semiarid regions that lie in the rain shadow of the Cascades and Sierra Nevada. Precipitation averages less than 15 inches (38 cm). The Southwest is a hot desert, with temperatures exceeding 100 °F (37.8 °C) for several weeks at a time in summer. The Southwest and the Great Basin are also affected by the monsoon from the Gulf of California from July to September, which brings localized but often severe thunderstorms to the region. Much of California has a Mediterranean climate, with sometimes excessive rainfall from October to April and nearly no rain the rest of the year. In the Pacific Northwest rain falls year-round but is much heavier during winter and spring. The mountains of the west receive abundant precipitation and very heavy snowfall. The Cascades are one of the snowiest places in the world, with some places averaging over 600 inches (1,524 cm) of snow annually, but the lower elevations closer to the coast receive very little snow.

Florida has a subtropical climate in the northern part of the state and a tropical climate in the southern part. Summers are wet and winters are dry in Florida. Annually, much of Florida and the deep southern states are frost-free. The mild winters of Florida allow a massive tropical fruit industry to thrive in the central part of the state, making the US second only to Brazil in citrus production worldwide.

Another significant (but localized) weather effect is lake-effect snow, which falls south and east of the Great Lakes, especially in the hilly portions of the Upper Peninsula of Michigan and on the Tug Hill Plateau in New York. The lake effect dumped well over 5 feet (1.52 m) of snow in the area of Buffalo, New York, throughout the 2006-2007 winter. The Wasatch Front and Wasatch Range in Utah can also receive significant lake-effect accumulations from the Great Salt Lake.

In northern Alaska, tundra and arctic conditions predominate, and the temperature has fallen as low as −80 °F (−62.2 °C). On the other end of the spectrum, Death Valley, California, once reached 134 °F (56.7 °C), the highest temperature ever recorded on Earth. On average, the mountains of the western states receive the highest levels of snowfall on Earth.
The greatest annual snowfall level is at Mount Rainier in Washington, at 692 inches (1,758 cm); the record there was 1,122 inches (2,850 cm) in the winter of 1971-72. That record was broken by the Mt. Baker Ski Area in northwestern Washington, which reported 1,140 inches (2,896 cm) of snowfall for the 1998-99 snowfall season. Other places with significant snowfall outside the Cascade Range are the Wasatch Mountains in Utah, the San Juan Mountains in Colorado, and the Sierra Nevada in California. In the east, the region near the Great Lakes and the mountains of the Northeast receive the most snowfall, although they do not approach the snowfall levels of the western United States. Along the northwestern Pacific coast, rainfall is greater than anywhere else in the continental U.S., with the Quinault Rainforest in Washington having an average of 137 inches (348 cm). Hawaii receives even more, with 404 inches (1,026 cm) measured annually in the Big Bog, in Maui. Pago Pago Harbor in American Samoa is the rainiest harbor in the world (because of the 523-meter Rainmaker Mountain). The Mojave Desert, in the southwest, is home to the driest locale in the U.S. Yuma, Arizona, has an average of 2.63 inches (6.7 cm) of precipitation each year.

In central portions of the U.S., tornadoes are more common than anywhere else on Earth and touch down most commonly in the spring and summer. Deadly and destructive hurricanes occur almost every year along the Atlantic seaboard and the Gulf of Mexico. The Appalachian region and the Midwest experience the worst floods, though virtually no area in the U.S. is immune to flooding. The Southwest has the worst droughts; one is thought to have lasted over 500 years and to have harmed Ancestral Pueblo peoples. The West is affected by large wildfires each year.

Natural disasters

The United States is affected by a variety of natural disasters yearly. Although drought is rare, it has occasionally caused major economic and social disruption, such as during the Dust Bowl (1931-1942). Farmland failed throughout the Plains, entire regions were substantially depopulated, and dust storms, beginning in the southern Great Plains and reaching to the Atlantic Ocean, ravaged the land. According to a 2023 Gallup survey, around one in three Americans said that they directly experienced a severe weather condition over the previous two years. The Great Plains and Midwest, due to the contrasting air masses, see frequent severe thunderstorms and tornado outbreaks during spring and summer, with around 1,000 tornadoes occurring each year. The strip of land from north Texas north to Kansas and Nebraska and east into Tennessee is known as Tornado Alley, where many houses have tornado shelters and many towns have tornado sirens, due to the very frequent tornado formation in the region. Hurricanes are another natural disaster found in the US, and they can hit anywhere along the Gulf Coast or the Atlantic Coast, as well as Hawaii in the Pacific Ocean. Particularly at risk are the central and southern Texas coasts, the area from southeastern Louisiana east to the Florida Panhandle, peninsular Florida, and the Outer Banks of North Carolina, although any portion of the coast could be struck. The U.S. territories and possessions in the Caribbean, including Puerto Rico and the U.S. Virgin Islands, are also vulnerable to hurricanes due to their location in the Caribbean Sea. Hurricane season runs from June 1 to November 30, with a peak from mid-August through early October.
Some of the more devastating hurricanes have included the Galveston Hurricane of 1900, Hurricane Andrew in 1992, Hurricane Katrina in 2005, and Hurricanes Harvey and Maria in 2017. Hurricanes (known as cyclones elsewhere in the Pacific Ocean) do not make landfall on the Pacific Coast of the United States, because water temperatures there are too cool to sustain them. However, the remnants of tropical cyclones from the Eastern Pacific occasionally impact the western United States, bringing moderate to heavy rainfall.

Occasional severe flooding is experienced in the United States. Significant floods throughout history include the Great Mississippi Flood of 1927, the Great Flood of 1993, and the widespread flooding and mudslides caused by the 1982-83 El Niño event in the western United States. Flooding is still prevalent, mostly on the East Coast, during hurricanes or other inclement weather; for example, in 2012 Hurricane Sandy devastated the region. Localized flooding can, however, occur anywhere, and mudslides from heavy rain can cause problems in any mountainous area, particularly the Southwest. Large stretches of desert shrub in the west can fuel the spread of wildfires. The narrow canyons of many mountain areas in the west, combined with severe thunderstorm activity during the summer, also lead to sometimes devastating flash floods, while nor'easter snowstorms can bring activity to a halt throughout the Northeast (although heavy snowstorms can occur almost anywhere).

The West Coast of the continental United States makes up part of the Pacific Ring of Fire, an area of heavy tectonic and volcanic activity that is the source of 90% of the world's earthquakes. The American Northwest sees the highest concentration of active volcanoes in the United States, in Washington, Oregon, and northern California along the Cascade Mountains. There are several active volcanoes located in the islands of Hawaii, including Kilauea, in ongoing eruption since 1983, but they do not typically adversely affect the inhabitants of the islands. There has not been a major life-threatening eruption on the Hawaiian Islands since the 17th century. Volcanic eruptions can occasionally be devastating, such as in the 1980 eruption of Mount St. Helens in Washington. The Ring of Fire makes California and southern Alaska particularly vulnerable to earthquakes. Earthquakes can cause extensive damage, as in the 1906 San Francisco earthquake or the 1964 Good Friday earthquake near Anchorage, Alaska. California is well known for seismic activity and requires large structures to be earthquake-resistant to minimize loss of life and property. Outside of devastating earthquakes, California experiences minor earthquakes on a regular basis. There were about 100 significant earthquakes annually from 2010 to 2012, whereas past averages were 21 a year; this increase is believed to be due to the deep disposal of wastewater from fracking. None have exceeded a magnitude of 5.6, and no one has been killed. Other natural disasters include tsunamis around the Pacific Basin, mudslides in California, and forest fires in the western half of the contiguous U.S. According to a report by the U.S.
Census Bureau, in 2022 natural disasters led to the forced displacement of 3.3 million people, more than 1.3% of the U.S. adult population, with half of the displacements caused by hurricanes. The survey reported that in Florida, the devastation caused by Hurricanes Ian and Nicole resulted in the relocation of around 1 million people, or about one in every 17 adult residents. In Louisiana, where inhabitants were still dealing with the devastating results of Hurricane Ida the year before, more than 409,000 people, or almost one in every eight residents, were displaced. Despite this, Louisiana saw a relatively calm hurricane season in 2022.

Public lands

The United States holds many areas for the use and enjoyment of the public. These include national parks, national monuments, national forests, wilderness areas, and other areas.

Human geography

In terms of human geography, the United States is inhabited by a diverse set of ethnicities and cultures.
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Dairy_product] | [TOKENS: 2409] |
Dairy product

Dairy products or milk products are food products made from (or containing) milk.[a] The most common dairy animals are the cow, water buffalo, goat, and sheep. Dairy products consumed around the world include yogurt, cheese, milk, and butter. A facility that produces dairy products is a dairy.[b] Dairy products are consumed worldwide to varying degrees. Some people avoid some or all dairy products because of lactose intolerance, veganism, environmental concerns, or other reasons or beliefs.

Types of dairy product

Milk is produced after optional homogenization or pasteurization, in several grades after standardization of the fat level, and with the possible addition of the bacteria Streptococcus lactis and Leuconostoc citrovorum. Milk can be broken down into several different categories based on the type of product produced, including cream, butter, cheese, infant formula, and yogurt. Milk varies in fat content. Skim milk is milk with zero fat, while whole milk products contain fat. Milk is an ingredient in many confectioneries; milk can be added to chocolate to produce milk chocolate. Butter is mostly milk fat, produced by churning cream. Fermented milk products include yogurt and cheese. Yogurt is milk fermented by thermophilic bacteria, mainly Streptococcus salivarius ssp. thermophilus and Lactobacillus delbrueckii ssp. bulgaricus, sometimes with additional bacteria, such as Lactobacillus acidophilus. Cheese is produced by coagulating milk, separating the curds from the whey, and letting the curds ripen, generally with bacteria and sometimes also with certain molds.

History of dairy products

While cattle were domesticated as early as 12,000 years ago as a food source and as beasts of burden, the earliest evidence of using domesticated cows for dairy production is from the seventh millennium BC – the early Neolithic era – in northwestern Anatolia. Dairy farming developed elsewhere in the world in subsequent centuries: the sixth millennium BC in eastern Europe, the fifth millennium BC in Africa, and the fourth millennium BC in Britain and Northern Europe. In the last century or so, larger farms specialising in dairy alone have emerged. Large-scale dairy farming is only viable where either a large amount of milk is required for the production of more durable dairy products such as cheese and butter, or there is a substantial market of people with money to buy milk but no cows of their own. In the 1800s, the economist Johann Heinrich von Thünen argued that there was about a 100-mile radius surrounding a city within which a fresh milk supply was economically viable. Cool temperatures have been the main means of extending milk freshness. When windmills and well pumps were invented, one of their first uses on the farm, besides providing water for the animals themselves, was for cooling milk, to extend its storage life until it could be transported to the town market. The naturally cold underground water would be continuously pumped into a cooling tub or vat. Tall, ten-gallon metal containers filled with freshly obtained milk, which is naturally warm, were placed in this cooling bath. This method of milk cooling was popular before the arrival of electricity and refrigeration. Harold McGee writes that, for thousands of years, "the making of cheese, yogurt, and other fermented products was largely uncontrolled, with microbes from the air or left over from the previous batch, whether desirable or not, colonizing the milk....
By the turn of the [twentieth] century, purified bacterial cultures were being used to control the quality of cheese more closely."

Consumption patterns worldwide

Rates of dairy consumption vary widely worldwide. High-consumption countries consume more than 150 kilograms (330 lb) per capita per year: Argentina, Armenia, Australia, Costa Rica, most European countries, Israel, Kyrgyzstan, Canada, the United States, and Pakistan. Medium-consumption countries consume 30 kilograms (66 lb) to 150 kg per capita per year: India, Iran, Japan, Kenya, Mexico, Mongolia, New Zealand, North and Southern Africa, most of the Middle East, and most of Latin America and the Caribbean. Low-consumption countries consume under 30 kg per capita per year: Senegal, most of Central Africa, and most of East and Southeast Asia.

Lactose levels

For those with some degree of lactose intolerance, considering the amount of lactose in dairy products can be important to health.

Intolerance and health research

Dairy products may upset the digestive system in individuals with lactose intolerance or a milk allergy. People who experience lactose intolerance usually avoid milk and other lactose-containing dairy products, which may cause mild side effects such as abdominal pain, bloating, diarrhea, gas, and nausea. Such individuals may use non-dairy milk substitutes. There is no scientific evidence that consuming dairy products causes cancer. The British Dietetic Association has described the idea that milk promotes hormone-related cancerous tumour growth as a myth, stating that there is "no link between dairy containing diets and risk of cancer or promoting cancer growth as a result of hormones". In 2024, Cancer Research UK stated "there is no reliable evidence that casein or hormones in dairy causes cancer in people". The American Cancer Society (ACS) does not make specific recommendations on dairy food consumption for cancer prevention. Higher-quality research is needed to characterise valid associations between dairy consumption and the risk of cancer and/or cancer-related mortality. A 2023 review found no association between consumption of dairy products and breast cancer. Other recent reviews have found that low-fat dairy intake is associated with a decreased risk of breast cancer. The American Institute for Cancer Research (AICR), World Cancer Research Fund International (WCRF), Cancer Council Australia (CCA) and Cancer Research UK have stated that there is strong evidence that consumption of dairy products decreases the risk of colorectal cancer. A 2021 umbrella review found strong evidence that consumption of dairy products decreases the risk of colorectal cancer. Fermented dairy is associated with significantly decreased bladder cancer and colorectal cancer risk. A scoping review for the Nordic Nutrition Recommendations 2023 found a reduced risk of colorectal cancer from dairy intake. The AICR, WCRF, CCA and Prostate Cancer UK have stated that there is limited but suggestive evidence that dairy products increase the risk of prostate cancer. Cancer Research UK has stated that "research has not proven that milk or dairy increases the risk of prostate cancer" and that high-quality research is needed. It has been suggested that consumption of insulin-like growth factor 1 (IGF-1) in dairy products could increase cancer risk, particularly prostate cancer.
However, a 2018 review by the Committee on Carcinogenicity of Chemicals in Food, Consumer Products and the Environment (COC) concluded that there is "insufficient evidence to draw any firm conclusions as to whether exposure to dietary IGF-1 is associated with an increased incidence of cancer in consumers". The COC also stated it is unlikely that there would be absorption of intact IGF-1 from food by most consumers. The American Medical Association (AMA) recommends that people replace full-fat dairy products with nonfat and low-fat dairy products. In 2017, the AMA stated that there is no high-quality clinical evidence that cheese consumption lowers the risk of cardiovascular disease. In 2021, it stated that "taken together, replacing full-fat dairy products with nonfat and low-fat dairy products and other sources of unsaturated fat shifts the composition of dietary patterns toward higher unsaturated to saturated fat ratios that are associated with better cardiovascular health". In 2017, the National Heart Foundation of New Zealand published an umbrella review which found an "overall neutral effect of dairy on cardiovascular risk for the general population". Its position paper stated that "the evidence overall suggests dairy products can be included in a heart-healthy eating pattern and choosing reduced-fat dairy over full-fat dairy reduces risk for some, but not all, cardiovascular risk factors". In 2019, the National Heart Foundation of Australia published a position statement on full-fat dairy products: "Based on current evidence, there is not enough evidence to recommend full fat over reduced fat products or reduced fat over full fat products for the general population. For people with elevated cholesterol and those with existing coronary heart disease, reduced fat products are recommended." The position statement also noted that the "evidence for milk, yoghurt and cheese does not extend to butter, cream, ice-cream and dairy-based desserts; these products should be avoided in a heart healthy eating pattern". Recent reviews of randomized controlled trials have found that dairy intake from cheese, milk and yogurt does not have detrimental effects on markers of cardiometabolic health. A 2025 global analysis found that total dairy consumption is associated with a 3.7% reduced risk of cardiovascular disease and a 6% reduced risk of stroke. Consumption of dairy products such as low-fat and whole milk has been associated with an increased acne risk; however, as of 2022 there is no conclusive evidence. Fermented and low-fat dairy products are associated with a decreased risk of diabetes. Consumption of dairy products is also associated with a decreased risk of gout. A 2023 review found that higher intake of dairy products is significantly associated with a lower risk of inflammatory bowel disease. A 2025 review found that dairy product intake is associated with a lower incidence of tinnitus. A 2025 scoping review of systematic reviews found that dairy consumption is not associated with an increased risk of non-communicable diseases or mortality and may reduce the risk of several health outcomes.

Animal rights

Dairy production in factory farms has been criticized by animal welfare activists. Some are concerned about how often the dairy cattle must remain pregnant, how calves are separated from their mothers, how dairy cattle are housed, and how dairy production affects the environment.
In most modern dairy production systems, newborn calves are separated from their mothers within a few hours to a day after birth so that the cow's milk can be collected for human consumption rather than consumed by the calf. In conventional practice, calves receive maternal colostrum for the first hours of life, after which they are typically raised separately and fed milk replacer, a formulated substitute for the whole milk produced by the cow. The practice of early cow-calf separation is a major animal welfare concern among animal rights groups and segments of the public, because cattle are strongly social animals that form maternal bonds under natural conditions. Studies have shown that separation practices can cause acute behavioural distress responses in both cows and calves, including vocalisation and increased activity, and can interrupt normal maternal behaviours.

Avoidance on principle

Some groups avoid dairy products for non-health-related reasons. Some religions restrict or do not allow the consumption of dairy products. For example, some scholars of Jainism advocate not consuming any dairy products because dairy is perceived to involve violence against cows. Orthodox Judaism requires that meat and dairy products not be served at the same meal, served or cooked in the same utensils, or stored together, as prescribed in Deuteronomy 14:21. Veganism is the avoidance of all animal products, including dairy products, most often due to the ethics of how dairy products are produced. The ethical reasons for avoiding meat and dairy products include how dairy is produced, how the animals are handled, and the environmental effect of dairy production. According to a 2010 report of the United Nations' Food and Agriculture Organization, the dairy sector accounted for 4 percent of global human-made greenhouse gas emissions. Growing awareness of dairy products' environmental impact, specifically greenhouse gas emissions, has led many people to reduce or avoid dairy. In the EU, dairy is responsible for 27% of all diet-related emissions, on average, while plant-based milks cause 2.5-4.5 times fewer emissions.
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Constitution_of_the_United_States] | [TOKENS: 13955] |
Constitution of the United States

The Constitution of the United States is the supreme law of the United States of America. It superseded the Articles of Confederation, the nation's first constitution, on March 4, 1789. Originally comprising seven articles, the Constitution defined the foundational structure of the federal government. The drafting of the Constitution by many of the nation's Founding Fathers, often referred to as its framing, was completed at the Constitutional Convention, which assembled at Independence Hall in Philadelphia between May 25 and September 17, 1787. Influenced by English common law and the Enlightenment liberalism of philosophers like John Locke and Montesquieu, the Constitution's first three articles embody the doctrine of the separation of powers, in which the federal government is divided into the legislative, bicameral Congress;[d] the executive, led by the president;[e] and the judiciary, within which the Supreme Court has apex jurisdiction.[f] Articles IV, V, and VI embody concepts of federalism, describing the rights and responsibilities of state governments, the states in relationship to the federal government, and the process of constitutional amendment. Article VII establishes the procedure used to ratify the Constitution. Since the Constitution became operational in 1789, it has been amended 27 times. The first ten amendments, known collectively as the Bill of Rights, offer specific protections of individual liberty and justice and place restrictions on the powers of government within the U.S. states. Amendments 13-15 are known as the Reconstruction Amendments. The majority of the later amendments expand individual civil rights protections, with some addressing issues related to federal authority or modifying government processes and procedures. Amendments to the United States Constitution, unlike ones made to many constitutions worldwide, are appended to the document. The Constitution of the United States is the oldest and longest-standing written and codified national constitution in force in the world.[g] The first permanent constitution,[h] it has been interpreted, supplemented, and implemented by a large body of federal constitutional law and has influenced the constitutions of other nations.

History

From September 5, 1774, to March 1, 1781, the Second Continental Congress, convened in Philadelphia in what is now Independence Hall, functioned as the provisional government of the United States. Delegates to the First Continental Congress in 1774 and then the Second Continental Congress from 1775 to 1781 were chosen largely through the revolutionary committees of correspondence in the various colonies rather than through the colonial governments of the Thirteen Colonies. The Articles of Confederation and Perpetual Union was the first constitution of the United States. The document was drafted by a committee appointed by the Second Continental Congress in mid-June 1777 and was adopted by the full Congress in mid-November of that year. Ratification by the 13 colonies took more than three years and was completed on March 1, 1781. The Articles gave little power to the central government. While the Confederation Congress had some decision-making abilities, it lacked enforcement powers. The implementation of most decisions, including amendments to the Articles, required legislative approval by all 13 of the newly formed states.
Despite these limitations, based on the congressional authority granted in Article 9, the league of states was considered as strong as any similar republican confederation ever formed. The chief problem was, in the words of George Washington, "no money". The Confederation Congress could print money, but the currency was worthless, and while the Congress could borrow money, it could not pay it back. No state paid its share of taxes to support the government, and some paid nothing. A few states met the interest payments toward the national debt owed by their citizens, but nothing greater, and no interest was paid on debts owed to foreign governments. By 1786, the United States was facing default on its outstanding debts.

Under the Articles, the United States had little ability to defend its sovereignty. Most of the troops in the nation's 625-man army were deployed facing non-threatening British forts on American soil. Soldiers were not being paid, some were deserting, and others were threatening mutiny. Spain closed New Orleans to American commerce, despite the protests of U.S. officials. When Barbary pirates began seizing American merchant ships, the Treasury had no funds to pay ransom. If a military crisis required action, the Congress had no credit or taxing power to finance a response.

Domestically, the Articles of Confederation were failing to bring unity to the diverse sentiments and interests of the various states. Although the Treaty of Paris of 1783 was signed between Britain and the U.S. and named each of the American states, various states proceeded to violate it. New York and South Carolina repeatedly prosecuted Loyalists for wartime activity and redistributed their lands. Individual state legislatures independently laid embargoes, negotiated directly with foreign authorities, raised armies, and made war, all violating the letter and the spirit of the Articles. In September 1786, during the interstate Annapolis Convention to discuss and develop a consensus about reversing the protectionist trade barriers that each state had erected, James Madison questioned whether the Articles of Confederation were a binding compact or even a viable government. Connecticut paid nothing and "positively refused" to pay U.S. assessments for two years. A rumor at the time was that a seditious party of New York legislators had opened a conversation with the Viceroy of Canada. To the south, the British were said to be openly funding Creek Indian raids on Georgia, and the state was under martial law. Additionally, during Shays' Rebellion (August 1786 – June 1787) in Massachusetts, Congress could provide no money to support an endangered constituent state; General Benjamin Lincoln was obliged to raise funds from Boston merchants to pay for a volunteer army.

Congress could do nothing significant without nine states, and some legislation required all 13. When a state produced only one member in attendance, its vote was not counted. If a state's delegation was evenly divided, its vote could not be counted towards the nine-count requirement. The Congress of the Confederation had "virtually ceased trying to govern". The vision of a respectable nation among nations seemed to be fading in the eyes of revolutionaries such as George Washington, Benjamin Franklin, and Rufus King. Their dream of a republic, a nation without hereditary rulers, with power derived from the people in frequent elections, was in doubt.
On February 21, 1787, the Confederation Congress called a convention of state delegates in Philadelphia to propose revisions to the Articles. Unlike earlier attempts, the convention was not meant for new laws or piecemeal alterations, but for the "sole and express purpose of revising the Articles of Confederation". The convention was not limited to commerce but was intended to "render the federal constitution adequate to the exigencies of government and the preservation of the Union". On the appointed day, May 14, 1787, only the Virginia and Pennsylvania delegations were present, and the convention's opening meeting was postponed for lack of a quorum. A quorum of seven states met on May 25, and deliberations began. Eventually 12 states were represented, with Rhode Island refusing to participate. Of the 74 delegates appointed by the states, 55 attended. The convention's initial mandate was limited to amending the Articles of Confederation, which had proven highly ineffective in meeting the young nation's needs. Almost immediately, however, the delegates began considering measures to replace the Articles of Confederation.

Two plans for structuring the federal government arose shortly after the convention's outset: the Virginia Plan and the New Jersey Plan. On May 31, the Convention resolved itself into a Committee of the Whole, charged with considering the Virginia Plan. On June 13, the Virginia resolutions in amended form were reported out of committee. The New Jersey Plan was put forward in response to the Virginia Plan. On June 19, 1787, delegates rejected the New Jersey Plan, with three states voting in favor, seven against, and one divided. The plan's defeat led to a series of compromises centering primarily on two issues: slavery and proportional representation.[i]

Proposals by Madison (Virginia) and Wilson (Pennsylvania) called for a supreme court veto over national legislation. This proposal resembled the system in New York, where the Constitution of 1777 called for a "Council of Revision" consisting of the governor and justices of the state supreme court, which would review and could veto any passed legislation. Madison's proposal was defeated three times and replaced by a presidential veto with congressional override. Explicit justification for judicial review is found in the open ratification debates held in the states and reported in their newspapers. John Marshall in Virginia, James Wilson in Pennsylvania and Oliver Ellsworth of Connecticut all argued for Supreme Court judicial review of acts of state legislatures. In Federalist No. 78, Alexander Hamilton advocated the doctrine of a written document held as a superior enactment of the people: "A limited constitution can be preserved in practice no other way" than through courts which can declare void any legislation contrary to the Constitution. The preservation of the people's authority over legislatures rests "particularly with judges."[j]

The issue of proportional representation was of concern to less populous states, which under the Articles had the same power as larger states. From July 2 to 16, a Committee of Eleven, including one delegate from each state represented, met to work out a compromise on the issue of representation in the federal legislature. All agreed to a republican form of government grounded in representing the people in the states. For the legislature, two issues were to be decided: (i) how the votes were to be allocated among the states in the Congress, and (ii) how the representatives should be elected.
In its report, now known as the Connecticut Compromise (or "Great Compromise"), the committee proposed proportional representation for seats in the House of Representatives based on population (with the people voting for representatives), equal representation for each state in the Senate (with each state's legislators generally choosing their respective senators), and that all money bills would originate in the House. The Great Compromise ended the stalemate between patriots and nationalists, leading to numerous other compromises in a spirit of accommodation. The issue of slavery pitted the Northern states, where slavery was slowly being abolished, against the Southern states, whose agricultural economies depended on slave labor. To satisfy interests in the South, the delegates agreed to protect the slave trade for 20 years. Slavery was protected further by the Three-Fifths Compromise, which allowed states to count three-fifths of their slaves as part of their populations for the purpose of representation in the federal government, and by requiring the return of escaped slaves to their owners, even if captured in states where slavery had been abolished. Further compromises were also made on the presidential term, powers, and method of selection, as well as the jurisdiction of the federal judiciary. While these compromises held the Union together and aided the Constitution's ratification, slavery continued for eight more decades, and less populous states continue to have disproportional representation in the U.S. Senate and Electoral College.

On July 24, a Committee of Detail, including John Rutledge (South Carolina), Edmund Randolph (Virginia), Nathaniel Gorham (Massachusetts), Oliver Ellsworth (Connecticut), and James Wilson (Pennsylvania), was elected to draft a detailed constitution reflective of the resolutions passed by the convention up to that point. The convention recessed from July 26 to August 6 to await the report of this "Committee of Detail". Overall, the report of the committee conformed to the resolutions adopted by the convention, adding some elements; a twenty-three-article (plus preamble) constitution was presented. From August 6 to September 10, the report of the Committee of Detail was discussed, section by section and clause by clause. Details were attended to, and further compromises were effected. Toward the close of these discussions, on September 8, a Committee of Style and Arrangement, including Alexander Hamilton from New York, William Samuel Johnson from Connecticut, Rufus King from Massachusetts, James Madison from Virginia, and Gouverneur Morris from Pennsylvania, was appointed to distill a final draft constitution from the 23 approved articles. The final draft, presented to the convention on September 12, contained seven articles, a preamble, and a closing endorsement, of which Morris was the primary author. The committee also presented a proposed letter to accompany the constitution when delivered to Congress. The original U.S. Constitution was handwritten on five pages of parchment by Jacob Shallus. The final document was taken up on Monday, September 17, at the convention's final session. Several of the delegates were disappointed in the result, seeing it as a makeshift series of unfortunate compromises. Some delegates left before the ceremony, and three others refused to sign.
Benjamin Franklin, one of the thirty-nine signers, summed up, addressing the convention: "There are several parts of this Constitution which I do not at present approve, but I am not sure I shall never approve them." He would accept the Constitution, "because I expect no better and because I am not sure that it is not the best". The advocates of the Constitution were anxious to obtain the unanimous support of all twelve states represented in the convention. Their accepted formula for the closing endorsement was "Done in Convention, by the unanimous consent of the States present". At the end of the convention, the proposal was agreed to by eleven state delegations and the lone remaining delegate from New York, Alexander Hamilton. Within three days of its signing on September 17, 1787, the Constitution was submitted to the Congress of the Confederation, then sitting in New York City, the nation's temporary capital. The document, originally intended as a revision of the Articles of Confederation, instead introduced a completely new form of government. While members of Congress had the power to reject it, they voted unanimously on September 28 to forward the proposal to the thirteen states for their ratification. Under the process outlined in Article VII of the proposed Constitution, the state legislatures were tasked with organizing "Federal Conventions" to ratify the document. This process ignored the amendment provision of the Articles of Confederation, which required unanimous approval of all the states. Instead, Article VII called for ratification by just nine of the thirteen states, slightly more than two-thirds. Two factions soon emerged, one supporting the Constitution, the Federalists, and the other opposing it, the so-called Anti-Federalists. Over the ensuing months, the proposal was debated, criticized, and expounded upon clause by clause. In the state of New York, at the time a hotbed of anti-Federalism, Hamilton and Madison, both delegates to the Philadelphia Convention, joined by John Jay, published a series of commentaries, now known as The Federalist Papers, in support of ratification. Before year's end, ratifying conventions in three states voted in favor of ratification. Delaware was first, voting unanimously 30–0; Pennsylvania second, approving the measure 46–23; and New Jersey third, also recording a unanimous vote. As 1788 began, Connecticut and Georgia followed Delaware's lead with almost unanimous votes, but the outcome became less certain as leaders in key states such as Virginia, New York, and Massachusetts expressed concerns over the lack of protections for people's rights. Fearing the prospect of defeat, the Federalists relented, promising that if the Constitution was adopted, amendments would be added to secure individual liberties. With that, the Anti-Federalists' position collapsed. On June 21, 1788, New Hampshire became the ninth state to ratify. Nearly three months later, on September 13, the Congress of the Confederation certified the ratification of eleven states and passed resolutions setting dates for choosing the first senators and representatives, the first Wednesday of January (January 7, 1789); electing the first president, the first Wednesday of February (February 4); and officially starting the new government, the first Wednesday of March (March 4), when the first Congress would convene in New York City. As its final act, the Congress of the Confederation agreed to acquire 100 square miles of land from Maryland and Virginia for establishing a permanent capital.
North Carolina waited to ratify the Constitution until after the Bill of Rights was passed by the new Congress, and Rhode Island's ratification would only come after a threatened trade embargo. The Supreme Court was initially made up of jurists who had been intimately connected with the framing of the Constitution and the establishment of its government as law. John Jay (New York), a co-author of The Federalist Papers, served as chief justice for the first six years. The second chief justice, John Rutledge (South Carolina), was appointed by Washington in 1795 as a recess appointment, but was not confirmed by the Senate. Resigning later that year, he was succeeded in 1796 by the third chief justice, Oliver Ellsworth (Connecticut). Both Rutledge and Ellsworth were delegates to the Constitutional Convention. John Marshall (Virginia), the fourth chief justice, had served in the Virginia Ratification Convention in 1788. His 34 years of service on the Court would see some of the most important rulings to help establish the nation the Constitution had begun. Other early members of the Supreme Court who had been delegates to the Constitutional Convention included James Wilson (Pennsylvania) for ten years, and John Blair Jr. (Virginia) for five years.[citation needed] Article III, Section 1 provides that Congress can create lower (or "inferior") courts. The Judiciary Act of 1789 saw Congress's first exercise of that power. Currently, Title 28 of the U.S. Code describes judicial powers and administration. Beginning with the First Congress, the Supreme Court justices rode circuit to sit as panels to hear appeals from the district courts.[k] In 1891, Congress enacted a new system: district courts would have original jurisdiction; intermediate circuit courts of appeals with exclusive jurisdiction would hear regional appeals before consideration by the Supreme Court; and the Supreme Court would hold discretionary jurisdiction. No part of the Constitution expressly authorizes judicial review, but the framers did contemplate the idea, and precedent has since established that the courts could exercise judicial review over the actions of Congress or the executive branch. To establish a federal system of national law, considerable effort has gone into developing a spirit of comity between the federal government and the states. By the doctrine of res judicata, federal courts give "full faith and credit" to state courts.[l] The Supreme Court will decide constitutional issues of state law only on a case-by-case basis, and only by strict constitutional necessity, independent of state legislators' motives, their policy outcomes, or the law's national wisdom.[m] Two conflicting federal laws are under "pendent" jurisdiction if one presents a strict constitutional issue. Federal court jurisdiction is rare when a state legislature frames its enactment as a matter of federal jurisdiction.[n] Clauses 1 and 4 of Article One, Section 9 were explicitly shielded from constitutional amendment prior to 1808. Congress approved the prohibiting legislation in March 1807, and on January 1, 1808, the first day the Constitution permitted it, the importation of slaves into the country became illegal. On February 3, 1913, with ratification of the Sixteenth Amendment, Congress gained the authority to levy an income tax without apportioning it among the states or basing it on the United States Census. Influences The U.S. Constitution was a federal one and was greatly influenced by Magna Carta and by the study of other federations, both ancient and extant.
The Due Process Clause of the Constitution was partly based on common law and on Magna Carta (1215), which had become a foundation of English liberty against arbitrary power wielded by a ruler. The idea of separation of powers inherent in the Constitution was largely inspired by eighteenth-century Enlightenment philosophers, such as Montesquieu and John Locke. The influence of Montesquieu, Locke, Edward Coke, and William Blackstone was evident at the Constitutional Convention. Prior to and during the framing and signing of the Constitution, Blackstone, Hume, Locke, and Montesquieu were among the political philosophers most frequently referred to; James Madison, for example, made frequent reference to Blackstone, Locke, and Montesquieu. While the ideas of unalienable rights, the separation of powers, and the structure of the Constitution were largely influenced by European Enlightenment thinkers such as Montesquieu and John Locke, Benjamin Franklin and Thomas Jefferson still had reservations about the existing forms of government in Europe. In a speech at the Constitutional Convention Franklin stated, "We have gone back to ancient history for models of Government, and examined different forms of those Republics ... And we have viewed modern States all round Europe but find none of their Constitutions suitable to our circumstances." Jefferson maintained that most European governments were autocratic monarchies, not compatible with the egalitarian character of the American people. Historian Jack P. Greene maintains that by 1776 the founders drew heavily upon Magna Carta and the later writings of "Enlightenment rationalism" and English common law. In his Institutes of the Lawes of England, Coke interpreted Magna Carta protections and rights to apply not just to nobles, but to all British subjects. In writing the Virginia Charter of 1606, he enabled the King in Parliament to give those to be born in the colonies all rights and liberties as though they were born in England. William Blackstone's Commentaries on the Laws of England were the most influential books on law in the new republic. The English Bill of Rights (1689) was an inspiration for the American Bill of Rights. Both require jury trials, contain a right to keep and bear arms, prohibit excessive bail, and forbid "cruel and unusual punishments". Many liberties protected by state constitutions and the Virginia Declaration of Rights were incorporated into the Bill of Rights, and by the time of the American Revolution many of the rights later guaranteed by the federal Bill of Rights were recognized as being inspired by English law. A substantial body of thought had been developed from the literature of republicanism in the United States, typically demonstrated by the works of John Adams, who often quoted Blackstone and Montesquieu verbatim, and it was applied to the creation of state constitutions. Historian Herbert W. Schneider held that the Scottish Enlightenment was "probably the most potent single tradition in the American Enlightenment" and the advancement of personal liberties. Historian Daniel Walker Howe notes that Benjamin Franklin greatly admired David Hume, an eighteenth-century Scottish philosopher, and had studied many of his works while at Edinburgh in 1760.
Both embraced the idea that high-ranking public officials should receive no salary and that the lower class was a better judge of character when it came to choosing their representatives. Following the Glorious Revolution of 1688, British political philosopher John Locke was a major influence, expanding on the contract theory of government advanced by his contemporary Thomas Hobbes. Locke advanced the principle of consent of the governed in his Two Treatises of Government: government's duty under a social contract among the sovereign people was to serve the people by protecting their rights. These basic rights were life, liberty, and property. Montesquieu's influence on the framers is evident in Madison's Federalist No. 47 and Hamilton's Federalist No. 78; Thomas Jefferson, Adams, and Mason were known to read Montesquieu. Montesquieu emphasized the need for balanced forces pushing against each other to prevent tyranny (reflecting the influence of Polybius's 2nd-century BC treatise on the checks and balances of the Roman Republic). In The Spirit of the Laws, Montesquieu maintained that state powers should be separated into legislative, executive, and judicial branches in the service of the people's liberty, while also emphasizing that the idea of separation had for its purpose the even distribution of authority among the several branches of government. Supreme Court justices, the ultimate interpreters of the Constitution, have also cited Montesquieu throughout the Court's history. (See, e.g., Green v. Biddle, 21 U.S. 1, 36 (1823); United States v. Wood, 39 U.S. 430, 438 (1840); Myers v. United States, 272 U.S. 52, 116 (1926); Nixon v. Administrator of General Services, 433 U.S. 425, 442 (1977); Bank Markazi v. Peterson, 136 S. Ct. 1310, 1330 (2016).) American Indian history scholars Donald Grinde and Bruce Johansen claim there is "overwhelming evidence" that Iroquois Confederacy political concepts and ideas influenced the U.S. Constitution, and they are considered the most outspoken supporters of the Iroquois thesis. Views as to the extent of that influence on the founding, however, vary among historians, and the thesis has been questioned or criticized by various historians, including Samuel Payne, William Starna, George Hamell, and historian and archaeologist Philip Levy, who claims the evidence is largely coincidental and circumstantial. The most outspoken critic, anthropologist Elisabeth Tooker, claimed the Iroquois influence thesis is largely the product of "white interpretations of Indians" and "scholarly misapprehension". John Napoleon Brinton Hewitt, who was born on the Tuscarora Indian Reservation and was an ethnologist at the Smithsonian Institution's Bureau of Ethnology, is often cited by historians of Iroquois history. Hewitt, however, rejected the idea that the Iroquois League had a major influence on the Albany Plan of Union, Benjamin Franklin's ultimately rejected plan to create a unified government for the Thirteen Colonies. Structure The Constitution comprises four parts: an introductory paragraph titled the Preamble; seven Articles that define the government's framework; an untitled closing endorsement with the signatures of 39 framers; and 27 amendments, adopted under Article V.
The Preamble, the Constitution's introductory paragraph, lays out the purposes of the new government: We the People of the United States, in Order to form a more perfect Union, establish Justice, insure domestic Tranquility, provide for the common defence, promote the general Welfare, and secure the Blessings of Liberty to ourselves and our Posterity, do ordain and establish this Constitution for the United States of America. The opening words, "We the People", represented a new thought: the idea that the people, and not the states, were the source of the government's legitimacy. Coined by Gouverneur Morris of Pennsylvania, who chaired the convention's Committee of Style, the phrase is considered an improvement on the section's original draft, which followed the words We the People with a list of the 13 states. In place of the names of the states, Morris substituted "of the United States" and then listed the Constitution's six goals, none of which were mentioned originally. The signing of the United States Constitution occurred on September 17, 1787, when 39 delegates endorsed the constitution created during the convention. In addition to signatures, this closing endorsement, the Constitution's eschatocol, included a brief declaration that the delegates' work had been successfully completed and that those whose signatures appear on it subscribe to the final document. Included are a statement pronouncing the document's adoption by the states present, a formulaic dating of its adoption, and the delegates' signatures. Additionally, the convention's secretary, William Jackson, added a note to verify four amendments made by hand to the final document, and signed the note to authenticate its validity. The language of the concluding endorsement, conceived by Gouverneur Morris and presented to the convention by Benjamin Franklin, was made intentionally ambiguous in hopes of winning over the votes of dissenting delegates. Advocates for the new frame of government, realizing the impending difficulty of obtaining the consent of the states needed to make it operational, were anxious to obtain the unanimous support of the delegations from each state. It was feared that many of the delegates would refuse to give their individual assent to the Constitution. Therefore, in order that the action of the convention would appear to be unanimous, the formula, Done in convention by the unanimous consent of the states present ... was devised.[better source needed] Articles The Constitution's main provisions include seven articles that define the basic framework of the federal government. Articles that have been amended still include the original text, although provisions repealed by amendments under Article V are usually bracketed or italicized to indicate they no longer apply.[citation needed] Article I describes the Congress, the legislative branch of the federal government: Congress comprises both the Senate and the House of Representatives;[o] members of both houses are subject to age, citizenship, and state residency requirements;[p] and are elected by the people of a state.[q] Section 3, Clause 1 provides for equal representation of the states in the Senate. Section 8 enumerates the powers delegated to the legislature and includes broad provisions such as the General Welfare Clause (also known as the Taxing and Spending Clause), the Commerce Clause, and the Necessary and Proper Clause. Section 9 lists eight specific limits on congressional power. In McCulloch v.
Maryland (1819), the Supreme Court read the Necessary and Proper Clause to permit the federal government to take action that would "enable [it] to perform the high duties assigned to it [by the Constitution] in the manner most beneficial to the people," even if that action is not itself within the enumerated powers. Section 9, Clause 1 prevented Congress from passing any law that would restrict the importation of slaves into the United States prior to 1808. Clause 4 holds that direct taxes must be apportioned according to state populations. Article II describes the office, qualifications, and duties of the president of the United States and the vice president. The Article is modified by the 12th Amendment, which regulates presidential elections, and by the 25th Amendment, relating to office succession. The president is head of the executive branch of the federal government; the nation's head of state and head of government; and the Commander in Chief of the United States Armed Forces, as well as of state militias when they are mobilized. The president makes treaties with the advice and consent of the Senate. To administer the federal government, the president commissions all the officers of the federal government as Congress directs, and may require the opinions of its principal officers and make "recess appointments" for vacancies that may happen during the recess of the Senate. The president ensures the laws are faithfully executed and may grant reprieves and pardons for offenses against the United States, except in cases of impeachment. The president reports to Congress on the State of the Union and, by the Recommendation Clause, recommends "necessary and expedient" national measures. The president may convene and adjourn Congress under special circumstances. Section 4 provides for the removal of the president and other federal officers: the president is removed on impeachment for, and conviction of, treason, bribery, or other high crimes and misdemeanors. Article III describes the court system, including the Supreme Court. Section 1 vests the judicial power of the United States in federal courts and, with it, the authority to interpret and apply the law to particular cases. Also included is the power to punish, sentence, and direct future action to resolve conflicts. Implied powers under Article III include the enforcement of judicial decisions through criminal and civil contempt powers; injunctive relief and the habeas corpus remedy; and the ability to imprison for contumacy, bad-faith litigation, and failure to obey a writ of mandamus. Clause 1 of Section 2, known as the Case or Controversy Clause, authorizes the federal courts to hear actual cases and controversies only. Their judicial power does not extend to cases that are hypothetical, or which are proscribed due to standing, mootness, or ripeness issues. Generally, a case or controversy requires the presence of adverse parties who have a genuine interest at stake in the case.[r] Section 2 also protects the right to trial by jury in all criminal cases except impeachment. Section 3 bars Congress from changing or modifying federal law on treason by simple majority; it also defines treason as an overt act of making war on, or materially helping those at war with, the United States.[s] Article IV outlines the relations among the states and between each state and the federal government.
It also provides for such matters as admitting new states, border changes between the states, and extradition between the states, as well as laying down a legal basis for freedom of movement and travel among the states. The Full Faith and Credit Clause requires states to recognize the public acts, records, and court proceedings of the other states, and Congress is permitted to regulate the manner in which proof of such acts may be admitted. The Privileges and Immunities Clause prohibits state governments from discriminating against citizens of other states in favor of resident citizens; for instance, in criminal sentencing, a state may not increase a penalty on the grounds that the convicted person is a non-resident. The Territorial Clause gives Congress the power to make rules for disposing of federal property and governing non-state territories of the United States. Finally, the fourth section of Article Four requires the United States to guarantee to each state a republican form of government and to protect the states from invasion and violence. Article V outlines the process for amending the Constitution. The Articles of Confederation had provided that amendments were to be proposed by Congress and ratified by the unanimous vote of all 13 state legislatures. This proved to be a major flaw in the Articles, as it created an insurmountable obstacle to constitutional reform. The amendment process crafted during the Philadelphia Constitutional Convention was, according to The Federalist No. 43, designed to establish a balance between pliancy and rigidity.[better source needed] Article Five ends by shielding certain clauses in the Constitution from being amended. Article VI establishes that the Constitution and all federal laws and treaties made in accordance with it have supremacy over state laws, and that "the judges in every state shall be bound thereby, any thing in the laws or constitutions of any state notwithstanding". It validates the national debt created under the Articles of Confederation and requires that all federal and state legislators, officers, and judges take oaths or affirmations to support the Constitution. This means that the states' constitutions and laws should not conflict with the laws of the federal constitution and that, in case of a conflict, state judges are legally bound to honor the federal laws and constitution over those of any state. Article Six also states that "no religious Test shall ever be required as a Qualification to any Office or public Trust under the United States". Article VII describes the process for establishing the proposed new frame of government. Anticipating that the influence of many state politicians would be Antifederalist, delegates to the Philadelphia Convention provided for ratification of the Constitution by popularly elected ratifying conventions in each state. The convention method also made it possible for judges, ministers, and others ineligible to serve in state legislatures to be elected to a convention. Suspecting that Rhode Island, at least, might not ratify, delegates decided that the Constitution would go into effect as soon as nine states (two-thirds, rounded up) had ratified; a sketch of this arithmetic follows this passage. Each of the remaining four states could then join the newly formed union by ratifying. Amendments The procedure for amending the Constitution is outlined in Article V and is currently overseen by the archivist of the United States. Between 1949 and 1985, it was overseen by the administrator of General Services, and before that by the secretary of state.
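Both Article VII's nine-of-thirteen requirement just described and the Article V supermajorities described next reduce to the same arithmetic: a fraction of a whole number of states or legislators, rounded up. A minimal Python sketch, assuming only the counts given in the text (the helper name is mine, not from any source):

    import math

    def threshold(total, numerator, denominator):
        # Smallest whole number of votes reaching at least the given fraction.
        return math.ceil(total * numerator / denominator)

    print(threshold(13, 2, 3))   # -> 9, Article VII's nine of thirteen states
    print(threshold(50, 3, 4))   # -> 38, three-fourths of today's fifty states
    print(threshold(435, 2, 3))  # -> 290, two-thirds of the House's 435 seats, assuming full membership

Two-thirds of thirteen is about 8.67, so nine states was the smallest whole number satisfying the rule.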
Under Article V, a proposal for an amendment must be adopted either by two-thirds of both houses of Congress or by a national convention requested by two-thirds of the state legislatures. Following this, Congress decides whether the proposed amendment is to be ratified by state legislatures or by state ratifying conventions. The proposed amendment, along with the method of ratification, is sent to the Office of the Federal Register, which copies it in slip law format and submits it to the states. To date, the convention method of proposal has never been tried, and the convention method of ratification has been used only once, for the Twenty-first Amendment. A proposed amendment becomes an operative part of the Constitution as soon as it is ratified by three-fourths of the states (currently 38 of the 50 states); no additional action by Congress or anyone else after ratification is required. When the Office of the Federal Register verifies that it has received the required number of authenticated ratification documents, it drafts a formal proclamation for the archivist to certify that the amendment is valid. This certification is published in the Federal Register and United States Statutes at Large and serves as official notice to Congress and the nation that the ratification process has been completed. The Constitution has twenty-seven amendments. Structurally, the Constitution's original text and all prior amendments remain untouched. The precedent for this practice was set in 1789, when Congress considered and proposed the first several constitutional amendments. Among these, Amendments 1–10 are collectively known as the Bill of Rights, and Amendments 13–15 are known as the Reconstruction Amendments. Excluding the Twenty-seventh Amendment, which was pending before the states for 202 years, 225 days, the longest pending amendment that was successfully ratified was the Twenty-second Amendment, which took 3 years, 343 days. The Twenty-sixth Amendment was ratified in the shortest time, 100 days. The average ratification time for the first twenty-six amendments was 1 year, 252 days; for all twenty-seven, 9 years, 48 days. The first ten amendments, known collectively as the Bill of Rights, were added to the Constitution in 1791, as supporters of the Constitution had promised critics during the debates of 1788. The First Amendment prohibits Congress from obstructing the exercise of certain individual freedoms: freedom of religion, freedom of speech, freedom of the press, freedom of assembly, and the right to petition. Its Free Exercise Clause guarantees a person's right to hold whatever religious beliefs they want and to freely exercise those beliefs; its Establishment Clause prevents the federal government from creating an official national church or favoring one set of religious beliefs over another. The Second Amendment protects the right of individuals to keep and bear arms. The Supreme Court has ruled that this right applies to individuals, not merely to collective militias, while also holding that the government may regulate or place some limits on the manufacture, ownership, and sale of firearms or other weapons. Requested by several states during the constitutional ratification debates, the amendment followed the efforts of the British to confiscate the colonists' firearms at the outbreak of the Revolutionary War.
Patrick Henry had rhetorically asked whether the states would be stronger "when we are totally disarmed, and when a British Guard shall be stationed in every house?" The Third Amendment prohibits the federal government from forcing individuals to provide lodging to soldiers in their homes during peacetime without their consent. Requested by several states during the constitutional ratification debates, the amendment reflected the lingering resentment over the Quartering Acts passed by the British Parliament in the years before the Revolutionary War, which had allowed British soldiers to take over private homes for their own use. The Fourth Amendment protects people against unreasonable searches and seizures of either self or property by government officials. A search can mean everything from a frisking by a police officer, to a demand for a blood test, to a search of an individual's home or car. A seizure occurs when the government takes control of an individual or of something in the individual's possession; items that are seized are often used as evidence when the individual is charged with a crime. The amendment also imposes certain limitations on police investigating a crime and prevents the use of illegally obtained evidence at trial. The Fifth Amendment establishes the requirement that a trial for a major crime may commence only after an indictment has been handed down by a grand jury; protects individuals from double jeopardy; prohibits punishment without due process of law; and provides that an accused person may not be compelled to reveal to the police, prosecutor, judge, or jury any information that might incriminate or be used against him or her in a court of law. Additionally, the Fifth Amendment prohibits the government from taking private property for public use without "just compensation", the basis of eminent domain in the United States. The Sixth Amendment provides several protections and rights to an individual accused of a crime. The accused has the right to a fair, speedy, and public trial by a local and impartial jury. This right protects defendants from secret proceedings that might encourage abuse of the justice system; the amendment also enshrines a right to legal counsel if accused of a crime, guarantees that the accused may require witnesses to attend the trial and testify in the presence of the accused, and guarantees the accused a right to know the charges against them. In 1966, the Supreme Court ruled that, together with the Fifth Amendment, this amendment requires what has become known as the Miranda warning. The Seventh Amendment extends the right to a jury trial to federal civil cases and inhibits courts from overturning a jury's findings of fact. Although the Seventh Amendment itself says that it is limited to "suits at common law", meaning cases that triggered the right to a jury under English law, the amendment has been found to apply in lawsuits that are similar to the old common law cases. For example, the right to a jury trial applies to cases brought under federal statutes that prohibit race or gender discrimination in housing or employment. This amendment guarantees the right to a jury trial only in federal court, not in state court. The Eighth Amendment protects people from having bail or fines set at an amount so high that it would be impossible for all but the richest defendants to pay, and also protects people from being subjected to cruel and unusual punishment.
Although this phrase originally was intended to outlaw certain gruesome methods of punishment, it has been broadened over the years to protect against punishments that are grossly disproportionate to or too harsh for the particular crime. This provision has also been used to challenge prison conditions such as extremely unsanitary cells, overcrowding, insufficient medical care, and deliberate failure by officials to protect inmates from one another. The Ninth Amendment declares that individuals have other fundamental rights in addition to those stated in the Constitution. During the constitutional ratification debates, Anti-Federalists argued that a Bill of Rights should be added. The Federalists opposed it on the grounds that a list would necessarily be incomplete but would be taken as explicit and exhaustive, thus enlarging the power of the federal government by implication. The Anti-Federalists persisted, and several state ratification conventions refused to ratify the Constitution without a more specific list of protections, so the First Congress added what became the Ninth Amendment as a compromise. Because the rights protected by the Ninth Amendment are not specified, they are referred to as "unenumerated". The Supreme Court has found that unenumerated rights include such important rights as the right to travel, the right to vote, the right to privacy, and the right to make important decisions about one's health care or body. The Tenth Amendment (1791) was included in the Bill of Rights to further define the balance of power between the federal government and the states. The amendment states that the federal government has only those powers specifically granted by the Constitution. These powers include the power to declare war, to collect taxes, to regulate interstate business activities, and others that are listed in the articles or in subsequent constitutional amendments. Any power not listed is, says the Tenth Amendment, left to the states or the people. While there is no specific list of what these "reserved powers" may be, the Supreme Court has ruled that laws affecting family relations, commerce within a state's own borders, abortion, and local law enforcement activities are among those specifically reserved to the states or the people. The Eleventh Amendment specifically prohibits federal courts from hearing cases in which a state is sued by an individual from another state or another country, thus extending to the states sovereign immunity protection from certain types of legal liability. Article Three, Section 2, Clause 1 has been affected by this amendment, which also overturned the Supreme Court's decision in Chisholm v. Georgia (1793). The Twelfth Amendment modifies the way the Electoral College chooses the president and vice president. It stipulates that each elector must cast a distinct vote for president and vice president, instead of two votes for president. It also provides that at least one of the two candidates an elector votes for must not be an inhabitant of the elector's own state, which in practice discourages the president and vice president from coming from the same state. Article II, Section 1, Clause 3 is superseded by this amendment, which also extends the presidential eligibility requirements to the vice president. The Thirteenth Amendment (1865) abolished slavery and involuntary servitude, except as punishment for a crime, and authorized Congress to enforce abolition. Though millions of slaves had been declared free by the 1863 Emancipation Proclamation, their post-Civil War status was unclear, as was the status of the millions the Proclamation had not covered.
Congress intended the Thirteenth Amendment to be a proclamation of freedom for all slaves throughout the nation and to take the question of emancipation away from politics. This amendment rendered inoperative or moot several of the original parts of the Constitution. The Fourteenth Amendment (1868) granted United States citizenship to former slaves and to all persons "subject to U.S. jurisdiction". It also contained three new limits on state power: a state shall not violate a citizen's privileges or immunities; shall not deprive any person of life, liberty, or property without due process of law; and must guarantee all persons equal protection of the laws. These limitations dramatically expanded the protections of the Constitution. This amendment, according to the Supreme Court's doctrine of incorporation, makes most provisions of the Bill of Rights applicable to state and local governments as well. It superseded the mode of apportionment of representatives delineated in Article 1, Section 2, Clause 3, and also overturned the Supreme Court's decision in Dred Scott v. Sandford (1857). The Fifteenth Amendment (1870) prohibits the use of race, color, or previous condition of servitude in determining which citizens may vote. The last of the three post-Civil War Reconstruction Amendments, it sought to abolish one of the key vestiges of slavery and to advance the civil rights and liberties of former slaves. The Sixteenth Amendment removed existing constitutional constraints that limited the power of Congress to lay and collect taxes on income. Specifically, the apportionment constraints delineated in Article 1, Section 9, Clause 4 have been removed by this amendment, which also overturned an 1895 Supreme Court decision, Pollock v. Farmers' Loan & Trust Co., that had declared an unapportioned federal income tax on rents, dividends, and interest unconstitutional. This amendment has become the basis for all subsequent federal income tax legislation and has greatly expanded the scope of federal taxing and spending in the years since. The Seventeenth Amendment modifies the way senators are elected, stipulating that senators are to be elected by direct popular vote. The amendment supersedes Article 1, Section 3, Clauses 1 and 2, under which the two senators from each state were elected by the state legislature. It also allows state legislatures to permit their governors to make temporary appointments until a special election can be held. The Eighteenth Amendment (1919) prohibited the making, transporting, and selling of alcoholic beverages nationwide, and authorized Congress to enact legislation enforcing this prohibition. Adopted at the urging of a national temperance movement, its proponents believed that the use of alcohol was reckless and destructive and that prohibition would reduce crime and corruption, solve social problems, decrease the need for welfare and prisons, and improve the health of all Americans. During prohibition, it is estimated, alcohol consumption and alcohol-related deaths declined dramatically. But prohibition had other, more negative consequences: the amendment drove the lucrative alcohol business underground, giving rise to a large and pervasive black market, and it encouraged disrespect for the law and strengthened organized crime. Prohibition came to an end in 1933, when this amendment was repealed. The Twenty-first Amendment (1933) repealed the Eighteenth Amendment and returned the regulation of alcohol to the states.
Each state sets its own rules for the sale and importation of alcohol, including the drinking age. Because a federal law conditions a portion of federal highway funds on states prohibiting the sale of alcohol to persons under the age of twenty-one, all fifty states have set their minimum drinking age at twenty-one. Rules about how alcohol is sold vary greatly from state to state. The Nineteenth Amendment prohibits the government from denying women the right to vote on the same terms as men. Prior to the amendment's adoption, only a few states permitted women to vote and to hold office. The Twentieth Amendment changes the date on which a new president, vice president, and Congress take office, thus shortening the time between Election Day and the beginning of presidential, vice presidential, and congressional terms. Originally, the Constitution provided that the annual meeting was to be on the first Monday in December unless otherwise provided by law. This meant that, when a new Congress was elected in November, it did not come into office until the following March, with a "lame duck" Congress convening in the interim. By moving the beginning of the president's new term from March 4 to January 20 (and in the case of Congress, to January 3), proponents hoped to put an end to lame duck sessions, while allowing for a speedier transition for the new administration and legislators. The Twenty-second Amendment limits an elected president to two terms in office, a total of eight years. However, it remains possible for an individual to serve up to ten years: a person who serves no more than two years of a predecessor's term may still be elected twice (a sketch of this arithmetic follows this passage). Although nothing in the original frame of government limited how many presidential terms one could serve, the nation's first president, George Washington, declined to run for a third term, suggesting that two terms of four years were enough for any president. This precedent remained an unwritten rule of the presidency until it was broken by Franklin D. Roosevelt, who was elected to a third term in 1940 and to a fourth in 1944. The Twenty-third Amendment extends the right to vote in presidential elections to citizens residing in the District of Columbia by granting the District electors in the Electoral College, as if it were a state. When first established as the nation's capital in 1800, the District of Columbia had five thousand residents who had neither a local government nor the right to vote in federal elections. By 1960 the population of the District had grown to over 760,000. The Twenty-fourth Amendment prohibits a poll tax for voting. Although passage of the Thirteenth, Fourteenth, and Fifteenth Amendments helped remove many of the discriminatory laws left over from slavery, they did not eliminate all forms of discrimination. Along with literacy tests and durational residency requirements, poll taxes were used to keep low-income (primarily African American) citizens from participating in elections. The Supreme Court has since struck down these discriminatory measures. The Twenty-fifth Amendment clarifies what happens upon the death, removal, or resignation of the president or vice president, and how the presidency is temporarily filled if the president becomes disabled and cannot fulfill the responsibilities of the office. It supersedes the ambiguous succession rule established in Article II, Section 1, Clause 6. A concrete plan of succession has been needed on multiple occasions since 1789; however, for nearly 20% of U.S. history, there has been no vice president in office who could assume the presidency.
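The "more than eight years" possibility under the Twenty-second Amendment follows from its rule: a person who has served more than two years of a term to which someone else was elected may be elected only once; otherwise, twice. A small Python sketch of that rule (the helper is hypothetical, for illustration only):

    def max_total_service(inherited_years):
        # One elected term is allowed if more than two years of a
        # predecessor's term were served; otherwise two elected terms.
        elected_terms = 1 if inherited_years > 2 else 2
        return inherited_years + 4 * elected_terms

    print(max_total_service(0))  # -> 8, two full elected terms
    print(max_total_service(2))  # -> 10, the maximum possible service
    print(max_total_service(3))  # -> 7, only one elected term allowed

Lyndon Johnson, for example, served roughly fourteen months of Kennedy's term and so remained eligible to be elected twice.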
The Twenty-sixth Amendment prohibits the government from denying the right of United States citizens eighteen years of age or older to vote on account of age. The push to lower the voting age was driven in large part by the broader student activism movement protesting the Vietnam War, and it gained strength following the Supreme Court's decision in Oregon v. Mitchell (1970). The Twenty-seventh Amendment (1992) prevents members of Congress from granting themselves pay raises during the current session; rather, any raises that are adopted must take effect during the next session of Congress. Its proponents believed that federal legislators would be more likely to be cautious about increasing congressional pay if they had no personal stake in the vote. Article One, Section 6, Clause 1 has been affected by this amendment, which remained pending for over two centuries because it contained no time limit for ratification. Collectively, members of the House and Senate propose around 150 amendments during each two-year term of Congress. Most, however, never get out of the congressional committees in which they are proposed, and only a fraction of those approved in committee receive sufficient support to win congressional approval and actually enter the constitutional ratification process.[citation needed] Six amendments approved by Congress and proposed to the states for consideration have not been ratified by the required number of states to become part of the Constitution. Four of these are technically still pending, as Congress did not set a time limit for their ratification. The other two are no longer pending, as both had a time limit attached and in both cases the time period set for their ratification expired.[citation needed] Judicial review Courts established by the Constitution can regulate government under the Constitution, the supreme law of the land.[t] First, they have jurisdiction over actions by officers of government and over state law. Second, federal courts may rule on whether coordinate branches of national government conform to the Constitution. Until the twentieth century, the Supreme Court of the United States may have been the only high tribunal in the world to use judicial review for constitutional interpretation of fundamental law, others generally depending on their national legislature. The basic theory of American judicial review is summarized by constitutional legal scholars and historians as follows: the written Constitution is fundamental law within the states. It can change only by an extraordinary legislative process of national proposal, then state ratification. The powers of all departments are limited to enumerated grants found in the Constitution. Courts are expected (a) to enforce provisions of the Constitution as the supreme law of the land, and (b) to refuse to enforce anything in conflict with it. Judicial review relies on the jurisdictional authority in Article III and on the Supremacy Clause. When John Marshall followed Oliver Ellsworth as chief justice of the Supreme Court in 1801, the federal judiciary had been established by the Judiciary Act, but there were few cases. Review of state legislation and of appeals from state supreme courts was understood, but the Court's jurisdiction over state legislation was limited. The Marshall Court's landmark Barron v. Baltimore held that the Bill of Rights restricted only the federal government, and not the states. In the landmark Marbury v. Madison case, the Supreme Court asserted its authority of judicial review over Acts of Congress.
Its findings were that Marbury and the others had a right to their commissions as judges in the District of Columbia. Marshall, writing the opinion for the majority, announced that he had found a conflict between Section 13 of the Judiciary Act of 1789 and Article III.[u][v] In this case, both the Constitution and the statutory law applied to the particulars at the same time. "The very essence of judicial duty", according to Marshall, was to determine which of the two conflicting rules should govern. The Constitution enumerates powers of the judiciary to extend to cases arising "under the Constitution". Further, justices take a constitutional oath to uphold it as the "supreme law of the land". Therefore, since the United States government as created by the Constitution is a limited government, the federal courts were required to choose the Constitution over congressional law if there were deemed to be a conflict.[citation needed] "This argument has been ratified by time and by practice ..."[w][x] The Supreme Court did not declare another act of Congress unconstitutional until the controversial Dred Scott decision in 1857, handed down after the voided Missouri Compromise statute had already been repealed. In the eighty years from the Civil War to World War II, the Court voided congressional statutes in 77 cases, on average almost one a year. Salmon P. Chase was a Lincoln appointee, serving as chief justice from 1864 to 1873. In one of his first official acts, Chase admitted John Rock, the first African American to practice before the Supreme Court. The Chase Court is famous for Texas v. White, which asserted a permanent Union of indestructible states. Veazie Bank v. Fenno upheld the Civil War tax on state banknotes. Hepburn v. Griswold found parts of the Legal Tender Acts unconstitutional, though it was reversed under a later Supreme Court majority. The Civil Rights Cases, 109 U.S. 3 (1883), were a group of five landmark cases in which the Supreme Court of the United States held that the Thirteenth and Fourteenth Amendments did not empower Congress to outlaw racial discrimination by private individuals. The holding that the Thirteenth Amendment did not empower the federal government to punish racist acts done by private citizens would be overturned by the Supreme Court in the 1968 case Jones v. Alfred H. Mayer Co. The holding that the Fourteenth Amendment does not apply to private entities, however, is still valid precedent to this day. Although the Fourteenth Amendment-related decision has never been overturned, in the 1964 case of Heart of Atlanta Motel, Inc. v. United States, the Supreme Court held that Congress could prohibit racial discrimination by private actors under the Commerce Clause. As chief justice, William Taft advocated for the Judiciary Act of 1925, which brought the federal district courts under the administrative jurisdiction of the Supreme Court. In 1925, the Taft Court issued a ruling overturning a Marshall Court ruling on the Bill of Rights: in Gitlow v. New York, the Court established the doctrine of "incorporation", which applied portions of the Bill of Rights to the states. Important cases included Board of Trade of City of Chicago v. Olsen, which upheld congressional regulation of commerce; Olmstead v. United States, which declined to exclude evidence obtained by warrantless wiretapping, holding that the Fourth Amendment's proscription against unreasonable searches did not extend to it (a holding later overruled); and Wisconsin v.
Illinois, which ruled that the equitable power of the United States can impose positive action on a state to prevent its inaction from damaging another state. A crisis arose when, in 1935 and 1936, the Supreme Court handed down twelve decisions voiding acts of Congress relating to the New Deal. President Franklin D. Roosevelt then responded with his abortive "court-packing plan". Other proposals have suggested a Court super-majority to overturn congressional legislation, or a constitutional amendment requiring the justices to retire at a specified age set by law. To date, the Supreme Court's power of judicial review has persisted. Earl Warren was an Eisenhower nominee, chief justice from 1953 to 1969. In 1954, the Warren Court overturned a landmark Fuller Court ruling on the Fourteenth Amendment that had interpreted racial segregation as permissible in government and commerce providing "separate but equal" services. Warren built a coalition of justices after 1962 that developed the idea of natural rights as guaranteed in the Constitution. Brown v. Board of Education banned segregation in public schools. Baker v. Carr and Reynolds v. Sims established court-ordered "one man, one vote". Bill of Rights amendments were incorporated against the states. Due process was expanded in Gideon v. Wainwright and Miranda v. Arizona. First Amendment rights were addressed in Griswold v. Connecticut, concerning privacy, and in Engel v. Vitale, concerning official school prayer. Warren E. Burger was appointed by Richard Nixon; under his tenure, the Court decided the landmark cases of Roe v. Wade and Swann v. Charlotte-Mecklenburg Board of Education. William Rehnquist was a Reagan-appointed chief justice, serving from 1986 to 2005. While he would concur with overthrowing a state supreme court's decision, as in Bush v. Gore, he built a coalition of justices after 1994 that developed the idea of federalism as provided for in the Tenth Amendment. In the hands of the Supreme Court, the Constitution and its amendments were to restrain Congress, as in City of Boerne v. Flores. Nevertheless, the Rehnquist Court was noted in the contemporary "culture wars" for overturning state laws relating to privacy that prohibited late-term abortions, in Stenberg v. Carhart, and sodomy, in Lawrence v. Texas, and for ruling so as to protect free speech, in Texas v. Johnson, and affirmative action, in Grutter v. Bollinger. John Roberts was appointed chief justice in 2005. The Supreme Court has developed a system of doctrine and practice that limits its own power of judicial review. The Court controls almost all of its business by choosing what cases to consider, limiting decisions by defining what is a "justiciable question". The Court requires a personal interest, not one generally held, and a legally protected right must be immediately threatened by government action. Cases are not taken up if the litigant has no standing to sue, and the Court generally refuses to make advisory opinions in advance of actual cases.[ac] Further, friendly suits between parties with the same legal interest are not considered. The procedural ways by which the Court dismisses cases have led critics to charge that the Supreme Court delays decisions by unduly insisting on technicalities in its "standards of litigability". They say cases are left unconsidered which are in the public interest, involve genuine controversy, and result from good-faith action. The Supreme Court balances several pressures to maintain its roles in national government.
It seeks to be a co-equal branch of government, but its decrees must be enforceable. The Court seeks to minimize situations where it asserts itself superior to either the president or Congress, but federal officers must be held accountable. The Supreme Court assumes the power to declare acts of Congress unconstitutional, but it limits its own passing on constitutional questions. But the Court's guidance on basic problems of life and governance in a democracy is most effective when American political life reinforces its rulings. Justice Brandeis summarized four general guidelines that the Supreme Court uses to avoid constitutional decisions relating to Congress:[ad] the Court will not anticipate a question of constitutional law, nor decide open questions unless a case decision requires it; if a decision is required, a rule of constitutional law is formulated only as the precise facts in the case require; the Court will choose statutes or general law as the basis of its decision if it can do so without reaching constitutional grounds; and where a constitutional question is unavoidable, the Court will choose a construction of an act of Congress under which it remains constitutional, even if its constitutionality is seriously in doubt. Likewise with the executive department, Edward S. Corwin observed that the Court does sometimes rebuff presidential pretensions, but it more often tries to rationalize them. Against Congress, an act is merely "disallowed"; in the executive case, exercising judicial review produces "some change in the external world" beyond the ordinary judicial sphere. The "political question" doctrine especially applies to questions which present a difficult enforcement issue. Chief Justice Charles Evans Hughes addressed the Court's limitation where the political process allowed future policy change, but a judicial ruling would "attribute finality": political questions lack "satisfactory criteria for a judicial determination". John Marshall recognized that the president holds "important political powers" which, as executive privilege, allow great discretion. This doctrine was applied in Court rulings on President Grant's duty to enforce the law during Reconstruction, and it extends to the sphere of foreign affairs. Justice Robert Jackson explained that foreign affairs are inherently political, "wholly confided by our Constitution to the political departments of the government ... [and] not subject to judicial intrusion or inquiry". Critics of the Court object in two principal ways to its self-restraint in judicial review, deferring as it does as a matter of doctrine to acts of Congress and presidential actions. Its inaction is said to allow "a flood of legislative appropriations" which permanently create an imbalance between the states and the federal government. It has also been argued that the Supreme Court's deference to Congress and the executive compromises American protection of civil rights, political minority groups, and aliens. In anthropology and sociology There is a viewpoint that some Americans have come to see the documents of the Constitution, along with the Declaration of Independence and the Bill of Rights, as being a cornerstone of a type of civil religion. Some commentators depict the multi-ethnic, multi-sectarian United States as held together by political orthodoxy, in contrast with a nation-state of people having more "natural" ties. Worldwide influence The United States Constitution has been a notable model for governance worldwide, especially through the 1970s.
Its international influence is found in similarities of phrasing and borrowed passages in other constitutions, as well as in the principles of the rule of law, separation of powers, and recognition of individual rights. The American experience of fundamental law with amendments and judicial review has motivated constitutionalists at times when they were considering the possibilities for their nation's future. It informed Abraham Lincoln during the American Civil War,[ae] his contemporary and ally Benito Juárez of Mexico,[af] and the second generation of 19th-century constitutional nationalists, José Rizal of the Philippines[ag] and Sun Yat-sen of China.[ah] The framers of the Australian constitution integrated federal ideas from the U.S. and other constitutions. Since the 1980s, however, the influence of the United States Constitution has been waning as other countries have created new constitutions or updated older ones, a process of revision that Sanford Levinson believes to be more difficult in the United States than in any other country. Criticism The United States Constitution has faced various criticisms since its inception in 1787. The Constitution did not originally define who was eligible to vote, allowing each state to determine eligibility. In the early history of the U.S., most states allowed only white male adult property owners to vote; the notable exception was New Jersey, where women were able to vote on the same basis as men. Until the Reconstruction Amendments were adopted between 1865 and 1870, the five years immediately following the American Civil War, the Constitution did not abolish slavery, nor give citizenship and voting rights to former slaves. These amendments did not include a specific prohibition on discrimination in voting on the basis of sex; it took another amendment, the Nineteenth, ratified in 1920, for the Constitution to prohibit any United States citizen from being denied the right to vote on the basis of sex. According to a 2012 study by David Law and Mila Versteeg published in the New York University Law Review, the U.S. Constitution guarantees relatively few rights compared to the constitutions of other countries and contains fewer than half (26 of 60) of the provisions listed in the average bill of rights. It is also one of the few in the world today that still features the right to keep and bear arms; the other two are the constitutions of Guatemala and Mexico. Sanford Levinson wrote in 2006 that it has been the most difficult constitution in the world to amend since the fall of Yugoslavia. Levitsky and Ziblatt argue that the U.S. Constitution is the most difficult in the world to amend, and that this helps explain why the U.S. still has many undemocratic institutions that most or all other democracies have reformed, directly allowing significant democratic backsliding in the United States. Commemorations In 1937, the U.S. Post Office, at the prompting of President Franklin Delano Roosevelt, an avid stamp collector himself, released a commemorative postage stamp celebrating the 150th anniversary of the signing of the U.S. Constitution. The engraving on this issue, after an 1856 painting by Junius Brutus Stearns, shows Washington and the delegates signing the Constitution at the 1787 Convention. The following year another commemorative stamp was issued celebrating the 150th anniversary of the ratification of the Constitution. In 1987 the U.S. Mint issued commemorative coins in celebration of the 200th anniversary of the signing of the Constitution.
See also Notes Citations Bibliography Further reading External links |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Science_fiction] | [TOKENS: 7622] |
Contents Science fiction Science fiction (often shortened to sci-fi or abbreviated SF) is the genre of speculative, science-based fiction that imagines advanced and futuristic scientific or technological progress. The elements of science fiction have evolved over time: from space exploration, extraterrestrial life, time travel, and robotics; to parallel universes, dystopian societies, and biological manipulations; and, most recently, to information technology, transhumanism, posthumanism, and environmental challenges. Science fiction often specifically explores human responses to the consequences of these types of projected or imagined scientific advances. The precise definition of science fiction has long been disputed among authors, critics, scholars, and readers. It contains many subgenres, including hard science fiction, which emphasizes scientific accuracy, and soft science fiction, which focuses on social sciences. Other notable subgenres are cyberpunk, which explores the interface between technology and society; climate fiction, which addresses environmental issues; and space opera, which emphasizes pure adventure in a universe in which space travel is common. Precedents for science fiction are claimed to exist as far back as antiquity. Some books written in the Scientific Revolution and the Enlightenment Age were considered early science-fantasy stories. The modern genre arose primarily in the 19th and early 20th centuries, when popular writers began looking to technological progress for inspiration and speculation. Mary Shelley's Frankenstein, written in 1818, is often credited as the first true science fiction novel. Jules Verne and H. G. Wells are pivotal figures in the genre's development. In the 20th century, the genre grew during the Golden Age of Science Fiction; it expanded with the introduction of space operas, dystopian literature, and pulp magazines. Science fiction has come to influence not only literature, but also film, television, and culture at large. Science fiction can criticize present-day society and explore alternatives, as well as provide entertainment and inspire a sense of wonder. Definitions The Encyclopedia of Science Fiction, edited by John Clute and Peter Nicholls in 1993, contains an extensive discussion of the problem of defining the genre. American writer and professor of biochemistry Isaac Asimov wrote, "Science fiction can be defined as that branch of literature which deals with the reaction of human beings to changes in science and technology." Science fiction writer Robert A. Heinlein wrote, "A handy short definition of almost all science fiction might read: realistic speculation about possible future events, based solidly on adequate knowledge of the real world, past and present, and on a thorough understanding of the nature and significance of the scientific method." American science fiction author and editor Lester del Rey wrote, "Even the devoted aficionado or fan—has a hard time trying to explain what science fiction is," and no "full satisfactory definition" exists because "there are no easily delineated limits to science fiction." Another definition is provided in The Literature Book by the publisher DK: "scenarios that are at the time of writing technologically impossible, extrapolating from present-day science ... or that deal with some form of speculative science-based conceit, such as a society (on Earth or another planet) that has developed in wholly different ways from our own." 
There is a tendency among science fiction enthusiasts to be their own arbiters in deciding what constitutes science fiction. David Seed says that it may be more useful to talk about science fiction as the intersection of other more concrete subgenres. American science fiction author, editor, and critic Damon Knight summed up the difficulty, saying "Science fiction is what we point to when we say it." American magazine editor, science fiction writer, and literary agent Forrest J Ackerman has been credited with first using the term sci-fi (reminiscent of the then-trendy term hi-fi) in about 1954. The first known use in print was a description of Donovan's Brain by movie critic Jesse Zunser in January 1954. As science fiction entered popular culture, writers and fans in the field came to associate the term with low-quality pulp science fiction and with low-budget, low-tech B movies. By the 1970s, critics in the field, such as Damon Knight and Terry Carr, were using sci fi to distinguish hack-work from serious science fiction. Australian literary scholar and critic Peter Nicholls writes that SF (or sf) is "the preferred abbreviation within the community of sf writers and readers." Robert Heinlein found the term science fiction insufficient to describe certain types of works in this genre, and he suggested that the term speculative fiction be used instead for works that are more "serious" or "thoughtful". Literature Some scholars assert that science fiction had its beginnings in ancient times, when the distinction between myth and fact was blurred. Written in the 2nd century CE by the satirist Lucian, the novel A True Story contains many themes and tropes that are characteristic of modern science fiction, including travel to other worlds, extraterrestrial lifeforms, interplanetary warfare, and artificial life. Some consider it to be the first science fiction novel. Some stories from the folktale collection The Arabian Nights, along with the 10th-century fiction The Tale of the Bamboo Cutter and Ibn al-Nafis's 13th-century novel Theologus Autodidactus, are also argued to contain elements of science fiction. Several books written during the Scientific Revolution and later the Age of Enlightenment are considered true works of science-fantasy, among them Francis Bacon's New Atlantis (1627), Johannes Kepler's Somnium (1634), Athanasius Kircher's Itinerarium extaticum (1656), Cyrano de Bergerac's Comical History of the States and Empires of the Moon (1657) and The States and Empires of the Sun (1662), Margaret Cavendish's "The Blazing World" (1666), Jonathan Swift's Gulliver's Travels (1726), Ludvig Holberg's Nicolai Klimii Iter Subterraneum (1741), and Voltaire's Micromégas (1752). Isaac Asimov and Carl Sagan considered Johannes Kepler's 1634 novel Somnium to be the first science fiction story; it depicts a journey to the Moon and how the Earth's motion is seen from there. Kepler has been called the "father of science fiction". Following the 17th-century development of the novel as a literary form, Mary Shelley's Frankenstein (1818) and The Last Man (1826) helped to define the form of the science fiction novel. Brian Aldiss has argued that Frankenstein was the first work of science fiction. Edgar Allan Poe wrote several stories considered to be science fiction, including "The Unparalleled Adventure of One Hans Pfaall" (1835) about a trip to the Moon. Jules Verne was noted for his attention to detail and scientific accuracy, especially in the novel Twenty Thousand Leagues Under the Seas (1870). 
In 1887, the novel El anacronópete by Spanish author Enrique Gaspar y Rimbau introduced the first time machine. An early French/Belgian science fiction writer was J.-H. Rosny aîné (1856–1940). Rosny's masterpiece is Les Navigateurs de l'Infini (The Navigators of Infinity) (1925) in which the word astronaut (astronautique in French) was used for the first time. Many critics consider H. G. Wells to be one of science fiction's most important authors, or even "the Shakespeare of science fiction". His novels include The Time Machine (1895), The Island of Doctor Moreau (1896), The Invisible Man (1897), and The War of the Worlds (1898). His science fiction imagined alien invasion, biological engineering, invisibility, and time travel. In his non-fiction futurologist works, he predicted the advent of airplanes, military tanks, nuclear weapons, satellite television, space travel, and something like the World Wide Web. Edgar Rice Burroughs's novel A Princess of Mars, published in 1912, was the first of his thirty-year planetary romance series about the fictional Barsoom; the novels were set on Mars and featured John Carter as the hero. One of the first dystopian novels, We, was written by the Russian author Yevgeny Zamyatin and published in 1924. It describes a world of harmony and conformity within a united totalitarian state. In 1926, Hugo Gernsback published the first American science fiction magazine, Amazing Stories. In its first issue, he provided the definition: By 'scientifiction' I mean the Jules Verne, H. G. Wells and Edgar Allan Poe type of story—a charming romance intermingled with scientific fact and prophetic vision... Not only do these amazing tales make tremendously interesting reading—they are always instructive. They supply knowledge... in a very palatable form... New adventures pictured for us in the scientifiction of today are not at all impossible of realization tomorrow... Many great science stories destined to be of historical interest are still to be written... Posterity will point to them as having blazed a new trail, not only in literature and fiction, but progress as well. In 1928, E. E. "Doc" Smith's first published novel, The Skylark of Space (co-authored with Lee Hawkins Garby), appeared in Amazing Stories. It is often described as the first great space opera. That same year, Philip Francis Nowlan's original story about Buck Rogers, Armageddon 2419, also appeared in Amazing Stories. This story was followed by a Buck Rogers comic strip, the first serious science fiction comic. Last and First Men: A Story of the Near and Far Future is a future history novel written in 1930 by the British author Olaf Stapledon. A work of innovative scale in the science fiction genre, it describes the fictional history of humanity from the present forward across two billion years. In 1937, John W. Campbell became the editor of Astounding Science Fiction magazine; this event is sometimes considered the beginning of the Golden Age of Science Fiction, which was characterized by stories celebrating scientific achievement and progress. The "Golden Age" is often said to have ended in 1946, but sometimes the late 1940s and the 1950s are included in this period. In 1942, Isaac Asimov began the Foundation series of novels, which chronicles the rise and fall of galactic empires, and also introduces the concept of psychohistory. The series was later awarded a one-time Hugo Award for "Best All-Time Series". Theodore Sturgeon's novel More Than Human (1953) explored possible future human evolution. 
In 1957, the novel Andromeda: A Space-Age Tale by the Russian writer and paleontologist Ivan Yefremov presented a view of a future interstellar communist civilization; it is considered one of the most important Soviet science fiction novels. In 1959, Robert A. Heinlein's novel Starship Troopers marked a departure from his earlier juvenile stories and novels. It is one of the first and most influential examples of military science fiction, and it introduced the concept of powered armor exoskeletons. The German space opera series Perry Rhodan, written by various authors, started in 1961 with an account of the first Moon landing; the series has since expanded in space to multiple universes and in time by billions of years. It has become the most popular book series in science fiction to date. During the 1960s and 1970s, New Wave science fiction was known for embracing a high degree of experimentation (in both form and content), as well as a highbrow and self-consciously "literary" or "artistic" sensibility. In 1961, Stanisław Lem's novel Solaris was published in Poland. The novel dealt with the theme of human limitations, as its characters attempted to study a seemingly intelligent ocean on a newly discovered planet. Lem's work anticipated the creation of microrobots and micromachinery, nanotechnology, smartdust, virtual reality, and artificial intelligence (including swarm intelligence); his work also developed the ideas of necroevolution and artificial worlds. In 1965, the novel Dune by Frank Herbert imagined a more complex and detailed future society than had most previous science fiction. In 1967, Anne McCaffrey began a science fantasy series called Dragonriders of Pern. Two novellas included in the series' first novel, Dragonflight, led McCaffrey to become the first woman to win a Hugo or Nebula award. In 1968, Philip K. Dick's novel Do Androids Dream of Electric Sheep? was published. It is the literary source of the Blade Runner movie franchise. Published in 1969, the novel The Left Hand of Darkness by Ursula K. Le Guin is set on a planet where the inhabitants have no fixed gender. The novel is one of the most influential examples of social, feminist, or anthropological science fiction. In 1979, Science Fiction World magazine began publication in the People's Republic of China. It dominates the Chinese science fiction magazine market, at one time claiming a circulation of 300,000 copies per issue and an estimated 3–5 readers per copy, giving it a total readership of at least 1 million people—making it the world's most popular science fiction periodical. In 1984, William Gibson's first novel, Neuromancer, helped to popularize cyberpunk and the word cyberspace, a term he originally coined in the 1982 short story Burning Chrome. In the same year, Octavia Butler's short story "Speech Sounds" won the Hugo Award for Best Short Story. She went on to explore themes of racial injustice, global warming, women's rights, and political conflict. In 1995, she became the first science fiction author to receive a MacArthur Fellowship. In 1986, the novel Shards of Honor by Lois McMaster Bujold began her Vorkosigan Saga space opera. Neal Stephenson's 1992 novel Snow Crash predicted immense social upheaval due to the information revolution. In 2007, Liu Cixin's novel The Three-Body Problem was published in China. It was translated into English by Ken Liu and published by Tor Books in 2014; it won the Hugo Award for Best Novel in 2015, making Liu the first Asian writer to win the award. 
Emerging themes in late 20th- and early 21st-century science fiction include environmental issues, the implications of the Internet and the expanding information universe, questions about biotechnology, nanotechnology, and post-scarcity societies. Recent trends and subgenres include steampunk, biopunk, and mundane science fiction. Film One of the first recorded science fiction films is A Trip to the Moon from 1902, directed by French filmmaker Georges Méliès. It influenced later filmmakers, offering a different kind of creativity and fantasy. Méliès's innovative editing and special effects techniques were widely imitated, and they became important elements of the cinematic medium. The 1927 film Metropolis, directed by Fritz Lang, is the first feature-length science fiction film. Though not well received in its time, it is now ranked as one of the best films ever made. In 1954, Godzilla, directed by Ishirō Honda, started the kaiju subgenre of science fiction film; this subgenre features large creatures in any form, usually attacking a major city or engaging other monsters in battle. The 1968 film 2001: A Space Odyssey was directed by Stanley Kubrick and based on stories by Arthur C. Clarke. The film improved on the largely B-movie offerings to date in both scope and quality, and it influenced later science fiction films. The original Planet of the Apes movie, directed by Franklin J. Schaffner and based on the 1963 French novel La Planète des Singes by Pierre Boulle, was also released in 1968. The film vividly depicts a post-apocalyptic world in which intelligent apes dominate humans. The film received both popular and critical acclaim. In 1977, George Lucas began the Star Wars series with the film later called Star Wars: Episode IV – A New Hope. The series, often called a space opera, became a worldwide popular culture phenomenon and the third-highest-grossing film series of all time. Since the 1980s, science fiction films, along with fantasy, horror, and superhero films, have dominated Hollywood's big-budget productions. Science fiction films often cross over with other genres. Some examples include film noir (Blade Runner, 1982), family (E.T. the Extra-Terrestrial, 1982), war (Enemy Mine, 1985), comedy (Spaceballs, 1987; Galaxy Quest, 1999), animation (WALL-E, 2008; Big Hero 6, 2014), Western (Serenity, 2005), action (Edge of Tomorrow, 2014; The Matrix, 1999), adventure (Jupiter Ascending, 2015; Interstellar, 2014), mystery (Minority Report, 2002), thriller (Ex Machina, 2014), drama (Melancholia, 2011; Predestination, 2014), and romance (Eternal Sunshine of the Spotless Mind, 2004; Her, 2013). Television Science fiction and television have consistently had a close relationship. Television or similar technology often appeared in science fiction before television itself became widely available in the late 1940s and early 1950s. The first known science fiction television program was a 35-minute adapted excerpt of the play RUR, written by the Czech playwright Karel Čapek, broadcast live from the BBC's Alexandra Palace studios on 11 February 1938. The first popular science fiction program on American television was the children's adventure serial Captain Video and His Video Rangers, which ran from June 1949 to April 1955. The original The Twilight Zone series, produced and narrated by Rod Serling, ran from 1959 to 1964. (Serling also wrote or co-wrote most of the episodes.) 
The series featured fantasy, suspense, and horror as well as science fiction, with each episode being a complete story. Critics have ranked it as one of the best TV programs of any genre. The animated series The Jetsons, while intended as comedy and only running for one season (1962–1963), predicted many inventions now in common use: flat-screen televisions, newspapers on a computer-like screen, computer viruses, video chat, tanning beds, home treadmills, and more. In 1963, the series Doctor Who premiered on BBC Television with a time-travel theme. The original series ran until 1989 and was revived in 2005. It has been popular globally and has significantly influenced later science fiction TV. Other notable programs during the 1960s included The Outer Limits (1963–1965), Lost in Space (1965–1968), and The Prisoner (1967–1968). Other British sci-fi dramas broadcast in the 1970s were UFO (1970–1971), The Tomorrow People (1973–1979), Space: 1999 (1975–1977), and Blake's 7 (1978–1981). The original Star Trek series, created by Gene Roddenberry, premiered in 1966 on NBC Television and ran for three seasons. It combined elements of space opera and Space Western. Only mildly successful at first, the series gained popularity through syndication and strong fan interest. It became a popular and influential franchise with many films, television shows, novels, and other works and products. The series Star Trek: The Next Generation (1987–1994) led to six additional live-action Star Trek shows: Deep Space Nine (1993–1999), Voyager (1995–2001), Enterprise (2001–2005), Discovery (2017–2024), Picard (2020–2023), and Strange New Worlds (2022–present); additional shows are in some stage of development. The miniseries V premiered in 1983 on NBC. It depicted an attempted conquest of Earth by reptilian aliens. Red Dwarf, a comic science fiction series, aired on BBC Two between 1988 and 1999, and on Dave since 2009. The X-Files, which featured UFOs and conspiracy theories, was created by Chris Carter and broadcast by Fox Broadcasting Company from 1993 to 2002, and again from 2016 to 2018. Stargate, a film about ancient astronauts and interstellar teleportation, was released in 1994. The series Stargate SG-1 premiered in 1997 and ran for 10 seasons (1997–2007). Spin-off series included Stargate Infinity (2002–2003), Stargate Atlantis (2004–2009), and Stargate Universe (2009–2011). Other 1990s series included Quantum Leap (1989–1993) and Babylon 5 (1994–1999). The Syfy channel, launched in 1992 as The Sci-Fi Channel, specializes in science fiction, supernatural horror, and fantasy. The space-Western series Firefly premiered in 2002 on Fox. It is set in the year 2517, after humans arrive in a new star system, and it follows the adventures of the renegade crew of Serenity, a "Firefly-class" spaceship. The series Orphan Black began a five-season run in 2013, focusing on a woman who takes on the identity of one of her genetically identical clones. In late 2015, Syfy premiered The Expanse, an American series about humanity's colonization of the Solar System, to great critical acclaim. Its later seasons were aired through Amazon Prime Video. Social influence Science fiction's rapid increase in popularity during the first half of the 20th century was closely tied to public respect for science during that era, as well as the rapid pace of technological innovation and new inventions. Science fiction has often predicted scientific and technological progress. 
Some works imagine that this progress will tend to improve human life and society, for instance, the stories of Arthur C. Clarke and Star Trek. Other works, such as H. G. Wells's The Time Machine and Aldous Huxley's Brave New World, warn of possible negative consequences. In 2001, the National Science Foundation conducted a survey of "Public Attitudes and Public Understanding: Science Fiction and Pseudoscience". The survey found that people who read or prefer science fiction may think about or relate to science differently than other people. Such people also tend to support the space program and efforts to contact extraterrestrial civilizations. Carl Sagan wrote that "Many scientists deeply involved in the exploration of the solar system (myself among them) were first turned in that direction by science fiction." Science fiction has predicted several existing inventions, such as the atomic bomb, robots, and borazon. In the 2020 TV series Away, astronauts use a Mars rover called InSight to listen for a landing on Mars. In 2022, scientists actually used InSight to listen for the landing of a spacecraft. Science fiction can act as a vehicle for analyzing and recognizing a society's past, present, and potential future social relationships with the other. Science fiction offers a medium for and a representation of alterity and differences in social identity. Brian Aldiss described science fiction as "cultural wallpaper". This broad influence can be seen in the trend for writers to use science fiction as a tool for advocacy and generating cultural insights, as well as for educators who teach across a range of academic disciplines beyond the natural sciences. Scholar and science fiction critic George Edgar Slusser said that science fiction "is the one real international literary form we have today, and as such has branched out to visual media, interactive media and on to whatever new media the world will invent in the 21st century. Crossover issues between the sciences and the humanities are crucial for the century to come." Science fiction has sometimes been used as a means of social protest. George Orwell's novel Nineteen Eighty-Four (1949) is an important work of dystopian science fiction. The novel is often invoked in protests against governments and leaders who are seen as totalitarian. James Cameron's film Avatar (2009) was intended as a protest against imperialism, specifically the European colonization of the Americas. Science fiction in Latin America and Spain explores the concept of authoritarianism. Robots, artificial humans, human clones, intelligent computers, and their possible conflicts with human society have all been major themes of science fiction since the publication of Shelley's novel Frankenstein (or earlier). Some critics have seen this tendency as reflecting authors' concerns over the social alienation seen in modern society. Feminist science fiction poses questions about social issues such as how society constructs gender roles, the role reproduction plays in defining gender, and the inequitable political or personal power of one gender over others. Some works have illustrated these themes using utopias in which gender differences or gender power imbalances do not exist, or dystopias in which gender inequalities are intensified, thus asserting a need for feminist work to continue. Climate fiction (or cli-fi) deals with issues of climate change and global warming. 
University courses on literature and environmental issues may include climate change fiction in their syllabi, and these issues are often discussed by other media beyond science fiction fandom. Libertarian science fiction focuses on the politics and social order implied by right libertarian philosophies with an emphasis on individualism and private property, and in some cases anti-statism. Robert A. Heinlein is one of the most popular authors of this subgenre; examples include his novels The Moon Is a Harsh Mistress and Stranger in a Strange Land. Science fiction comedy often satirizes and criticizes present-day society, and it sometimes makes fun of the conventions and clichés of more serious science fiction. Science fiction is often said to inspire a sense of wonder. Science fiction editor, publisher, and critic David Hartwell wrote that "Science fiction's appeal lies in combination of the rational, the believable, with the miraculous. It is an appeal to the sense of wonder." Carl Sagan wrote about growing up with science fiction: One of the great benefits of science fiction is that it can convey bits and pieces, hints, and phrases, of knowledge unknown or inaccessible to the reader ... works you ponder over as the water is running out of the bathtub or as you walk through the woods in an early winter snowfall. In 1967, Isaac Asimov commented on changes occurring in the science fiction community: And because today's real life so resembles day-before-yesterday's fantasy, the old-time fans are restless. Deep within, whether they admit it or not, is a feeling of disappointment and even outrage that the outer world has invaded their private domain. They feel the loss of a 'sense of wonder' because what was once truly confined to 'wonder' has now become prosaic and mundane. Study The field of science fiction studies involves the critical assessment, interpretation, and discussion of science fiction literature, film, TV shows, new media, fandom, and fan fiction. Science fiction scholars study the genre to better understand it and its relationship to science, technology, politics, other genres, and culture at large. Science fiction studies began around the turn of the 20th century, but it was not until later that the field solidified as a discipline, with the publication of the academic journals Extrapolation (1959), Foundation: The International Review of Science Fiction (1972), and Science Fiction Studies (1973), and the establishment in 1970 of the oldest organizations devoted to the study of science fiction, the Science Fiction Research Association and the Science Fiction Foundation. The field has grown considerably since the 1970s with the establishment of more journals, organizations, and conferences, as well as science fiction degree-granting programs such as those offered by the University of Liverpool. Science fiction has historically been subdivided into hard and soft categories, with the division centering on the feasibility of the science. However, this distinction has come under increased scrutiny in the 21st century. Some authors, such as Tade Thompson and Jeff VanderMeer, have observed that stories focusing explicitly on physics, astronomy, mathematics, and engineering tend to be considered hard science fiction, while stories focusing on botany, mycology, zoology, and the social sciences tend to be considered soft science fiction (regardless of the relative rigor of the science). 
Max Gladstone defined hard science fiction as stories "where the math works", but he pointed out that this definition identifies stories that often seem "weirdly dated", as scientific paradigms shift over time. Michael Swanwick dismissed the traditional definition of hard science fiction altogether, instead stating that it was defined by characters striving to solve problems "in the right way–with determination, a touch of stoicism, and the consciousness that the universe is not on his or her side." Ursula K. Le Guin also criticized the traditional contrast between hard and soft science fiction: "The 'hard' science fiction writers dismiss everything except, well, physics, astronomy, and maybe chemistry. Biology, sociology, anthropology—that's not science to them, that's soft stuff. They're not that interested in what human beings do, really. But I am. I draw on the social sciences a great deal." Many critics remain skeptical of the literary value of science fiction and other forms of genre fiction, though some mainstream authors have written works that others have claimed as science fiction. Mary Shelley wrote a number of scientific romance novels in the Gothic literature tradition, including Frankenstein; or, The Modern Prometheus (1818). Kurt Vonnegut was a respected American author whose works have been argued by some to contain science fiction premises or themes. Other science fiction authors whose works are widely considered to be "serious" literature include Ray Bradbury (especially Fahrenheit 451 and The Martian Chronicles), Arthur C. Clarke (especially Childhood's End), and Paul Myron Anthony Linebarger (using the pseudonym Cordwainer Smith). Doris Lessing, who was later awarded the Nobel Prize in Literature, wrote a series of five science fiction novels, Canopus in Argos: Archives (1979–1983); these novels depict the efforts of more advanced species and civilizations to influence less advanced ones, including humans on Earth. David Barnett has indicated that some novels use recognizable science fiction tropes, but they are not classified by their authors and publishers as science fiction; such novels include The Road (2006) by Cormac McCarthy, Cloud Atlas (2004) by David Mitchell, The Gone-Away World (2008) by Nick Harkaway, The Stone Gods (2007) by Jeanette Winterson, and Oryx and Crake (2003) by Margaret Atwood. Atwood in particular argued against categorizing works such as The Handmaid's Tale as science fiction; instead she labeled this novel, Oryx and Crake, and The Testaments as speculative fiction, and she criticized science fiction as "talking squids in outer space." In his book The Western Canon, literary critic Harold Bloom includes the novels Brave New World, Stanisław Lem's Solaris, Kurt Vonnegut's Cat's Cradle, and The Left Hand of Darkness as culturally and aesthetically significant works of Western literature, though Lem actively spurned the label science fiction. In her 1976 essay "Science Fiction and Mrs Brown", Ursula K. Le Guin was asked, "Can a science fiction writer write a novel?" She answered, "I believe that all novels ... deal with character... The great novelists have brought us to see whatever they wish us to see through some character. Otherwise, they would not be novelists, but poets, historians, or pamphleteers." 
Orson Scott Card is best known for his 1985 science fiction novel Ender's Game; he has postulated that in science fiction, the message and intellectual significance of the work are contained within the story itself, and therefore the genre can omit accepted literary devices and techniques that he characterized as gimmicks or literary games. In 1998, Jonathan Lethem wrote an essay titled "Close Encounters: The Squandered Promise of Science Fiction" in the Village Voice. In this essay, he recalled the time in 1973 when Thomas Pynchon's novel Gravity's Rainbow was nominated for the Nebula Award and was passed over in favor of Arthur C. Clarke's novel Rendezvous with Rama; Lethem suggests that this point stands as "a hidden tombstone marking the death of the hope that SF was about to merge with the mainstream." In the same year, science fiction author and physicist Gregory Benford wrote that "SF is perhaps the defining genre of the twentieth century, although its conquering armies are still camped outside the Rome of the literary citadels." Community Science fiction has been written by authors from diverse cultural and geographical backgrounds. Among submissions to the science fiction publisher Tor Books, men account for 78% and women account for 22% (according to 2013 statistics from the publisher). A controversy about voting slates for the 2015 Hugo Awards highlighted a tension in the science fiction community between a trend toward increasingly diverse works and authors being honored by awards and a reaction from groups of authors and fans who preferred more "traditional" science fiction. Among the most significant and well-known awards for science fiction are the Hugo Award for literature, presented by the World Science Fiction Society at Worldcon and voted on by fans; the Nebula Award for literature, presented by the Science Fiction and Fantasy Writers of America and voted on by the community of authors; the John W. Campbell Memorial Award for Best Science Fiction Novel, presented by a jury of writers; and the Theodore Sturgeon Memorial Award for short fiction, presented by a jury. One notable award for science fiction films and TV programs is the Saturn Award, which is presented annually by The Academy of Science Fiction, Fantasy, and Horror Films. There are other national awards, like Canada's Prix Aurora Awards; regional awards, like the Endeavour Award presented at Orycon for works from the U.S. Pacific Northwest; and special interest or subgenre awards, such as the Chesley Award for art, presented by the Association of Science Fiction & Fantasy Artists, or the World Fantasy Award for fantasy. Magazines may organize reader polls, notably the Locus Award. Conventions (often abbreviated by fans as cons, such as Comic-Con) are held in cities around the world; these cater to a local, regional, national, or international membership. General-interest conventions cover all aspects of science fiction, while others focus on a particular interest such as media fandom or filk music. Most science fiction conventions are organized by volunteers in non-profit groups, though most media-oriented events are organized by commercial promoters. Science fiction fandom emerged from the letters column in Amazing Stories magazine. Fans began writing letters to each other, and then assembling their comments in informal publications that became known as fanzines. Once in regular communication, these fans wanted to meet in person, so they organized local clubs. 
During the 1930s, the first science fiction conventions gathered fans from a larger area. The earliest organized online fandom was the SF Lovers Community, originally a mailing list in the late 1970s, with a text archive file that was updated regularly. In the 1980s, Usenet groups greatly expanded the circle of fans online. In the 1990s, the development of the World Wide Web increased online fandom through websites devoted to science fiction and related genres in all media. The first science fiction fanzine, The Comet, was published in 1930 by the Science Correspondence Club in Chicago, Illinois. As of 2025, one of the best-known fanzines is Ansible, edited by David Langford, winner of numerous Hugo Awards. Other notable fanzines to win one or more Hugo Awards include File 770, Mimosa, and Plokta. Artists working for fanzines have often risen to prominence in the field, including Brad W. Foster, Teddy Harvia, and Joe Mayhew; the Hugo Awards include a category for Best Fan Artist. Elements International examples Subgenres While science fiction is a genre of fiction, a science fiction genre is a subgenre within science fiction. Science fiction may be divided along any number of overlapping axes. Gary K. Wolfe's Critical Terms for Science Fiction and Fantasy identifies over 30 subdivisions of science fiction, not including science fantasy (which is a mixed genre). Related genres See also References General and cited sources External links |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Christmas_Day] | [TOKENS: 12407] |
Contents Christmas Christmas is an annual festival commemorating the birth of Jesus Christ, observed primarily on December 25[a] as a religious and cultural celebration among billions of people around the world. A liturgical feast central to Christianity, Christmas is preceded by a season of preparation that begins on the First Sunday of Advent. This is followed by Christmastide, which historically in the West lasts twelve days and culminates on Twelfth Night. Christmas Day is a public holiday in many countries and is observed by a majority of Christians; it is also celebrated culturally by many non-Christians and forms an integral part of the annual holiday season. The traditional Christmas narrative recounted in the New Testament, known as the Nativity of Jesus, says that Jesus was born in Bethlehem, in accordance with messianic prophecies. When Joseph and Mary arrived in the city, the inn had no room, and so they were offered a stable where the Christ Child was soon born, with angels proclaiming this news to shepherds, who then spread the word. There are different hypotheses regarding the date of the birth of Jesus. In the early fourth century, the church fixed the date as December 25, the date of the winter solstice in the Roman Empire. It is nine months after the Annunciation on March 25, also the Roman date of the spring equinox. Most Christians celebrate on December 25 in the Gregorian calendar, which has been adopted almost universally in the civil calendars used in countries throughout the world. However, some Eastern Christian Churches celebrate Christmas on December 25 of the older Julian calendar, which currently corresponds to January 7 in the Gregorian calendar (see the conversion sketch below). For Christians, celebrating that God came into the world in the form of man to atone for the sins of humanity is more important than knowing Jesus Christ's exact birth date. The customs associated with Christmas in various countries have a mix of pre-Christian, Christian, and secular themes and origins. Popular holiday traditions include gift-giving; completing an Advent calendar or Advent wreath; Christmas music and carolling; watching Christmas films; viewing a Nativity play; an exchange of Christmas cards; attending church services; a special meal; and displaying various Christmas decorations, including Christmas trees, Christmas lights, nativity scenes, poinsettias, garlands, wreaths, mistletoe, and holly. Additionally, several related and often interchangeable figures, known as Santa Claus, Father Christmas, Saint Nicholas, and Christkind, are associated with bringing gifts to children during the Christmas season and have their own body of traditions and lore. Because gift-giving and many other aspects of the Christmas festival involve heightened economic activity, Christmas has become a significant event and a key sales period for retailers and businesses. Over the past few centuries, Christmas has had a steadily growing economic effect in many regions of the world. Etymology The English word Christmas is a shortened form of 'Christ's Mass'. The word is recorded as Crīstesmæsse in 1038 and Cristes-messe in 1131. Crīst (genitive Crīstes) is from the Greek Χριστός (Khrīstos, 'Christ'), a translation of the Hebrew מָשִׁיחַ (Māšîaḥ, 'Messiah'), meaning 'anointed'; and mæsse is from the Latin missa, the celebration of the Eucharist. The form Christenmas was also used during some periods, but is now considered archaic. The term derives from Middle English Cristenmasse. 
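The Julian-to-Gregorian correspondence mentioned above is ordinary calendar arithmetic: the Julian calendar currently runs 13 days behind the Gregorian, so Julian December 25 falls on Gregorian January 7. What follows is a minimal Python sketch, not part of the original article, using the standard Julian Day Number conversion algorithms; the helper function names are chosen for this illustration only.

def julian_to_jdn(year, month, day):
    # Julian Day Number of a date in the Julian calendar (standard algorithm).
    a = (14 - month) // 12
    y = year + 4800 - a
    m = month + 12 * a - 3
    return day + (153 * m + 2) // 5 + 365 * y + y // 4 - 32083

def jdn_to_gregorian(jdn):
    # Convert a Julian Day Number back to a (year, month, day) Gregorian date.
    a = jdn + 32044
    b = (4 * a + 3) // 146097
    c = a - 146097 * b // 4
    d = (4 * c + 3) // 1461
    e = c - 1461 * d // 4
    m = (5 * e + 2) // 153
    day = e - (153 * m + 2) // 5 + 1
    month = m + 3 - 12 * (m // 10)
    year = 100 * b + d - 4800 + m // 10
    return year, month, day

# Christmas on the Julian calendar (December 25, 2024) in Gregorian terms:
print(jdn_to_gregorian(julian_to_jdn(2024, 12, 25)))  # prints (2025, 1, 7)

The 13-day offset is not permanent: the Julian calendar falls another day behind at every century year divisible by 100 but not by 400, so from March 2100 the same Julian feast day will correspond to Gregorian January 8.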
Xmas is an abbreviation of Christmas, particularly in print, based on the initial letter chi (Χ) in the Greek Χριστός (Christ), although some style guides discourage its use. This abbreviation has a precedent in Middle English Χρ̄es masse (where Χρ̄ is another abbreviation of the Greek word). The Anglo-Saxons referred to Christmas as midwinter. The period corresponding to December and January was called Gēola ('Yule'), and this term was eventually equated with Christmastide; Old English Ġeōhel-dæg ('Yule Day') was sometimes used as a name for Christmas Day. Yule or Yuil survived as the main name for Christmastide in Scotland until the modern era. A rare name for Christmas in Old English was Nātiuiteð ('Nativity'), from the Latin nātīvitās meaning 'birth'. 'Noel' (also 'Nowel' or 'Nowell', as in "The First Nowell") entered English in the late 14th century and is from the Old French noël or naël, itself ultimately from the Latin nātālis (diēs) meaning 'birth (day)'. Koleda is the traditional Slavic name for Christmas and the period from Christmas to Epiphany or, more generally, to Slavic Christmastide rituals, some dating to pre-Christian times. During the late Qing dynasty, the Shanghai News referred to Christmas by a variety of terms. In 1872, it initially called Christmas "Jesus' birthday" (Chinese: 耶穌誕日; pinyin: yēsū dànrì). However, from 1873 to 1881, the paper used terms such as "Western countries' Winter Solstice" (Chinese: 西國冬至; pinyin: xīguó dōngzhì) and "Western peoples' Winter Solstice" (Chinese: 西人冬節; pinyin: xīrén dōngjié); it settled on "Foreign Winter Solstice" (Chinese: 外國冬至; pinyin: wàiguó dōngzhì) in 1882. This term was gradually replaced by the now standard term "Festival of the birth of the Holy One" (Chinese: 聖誕節; pinyin: shèngdàn jié) during the early twentieth century. Nativity The gospels of Luke and Matthew describe Jesus as being born in Bethlehem of Judea to the Virgin Mary. In the Gospel of Luke, Joseph and Mary travel from Nazareth to Bethlehem in order to be counted for a census, and Jesus is born there and placed in a manger. Angels proclaim him a savior for all people, and shepherds come to adore him. In the Gospel of Matthew, by contrast, magi follow a star to Bethlehem to bring gifts to Jesus. History In the 2nd century, the "earliest church records" indicate that "Christians were remembering and celebrating the birth of the Lord", an "observance [that] sprang up organically from the authentic devotion of ordinary believers"; although "they did not agree upon a set date". Though Christmas did not appear on the lists of festivals given by the early Christian writers Irenaeus and Tertullian, the early Church Fathers John Chrysostom, Augustine of Hippo, and Jerome attested to December 25 as the date of Christmas toward the end of the fourth century. The earliest document to place Jesus's birthday on December 25 is the Chronograph of 354 (also called the Calendar of Filocalus), which also names it as the birthday of the god Sol Invictus (the 'Invincible Sun'). Liturgical historians generally agree that this part of the text was written in Rome in AD 336. This is consistent with the assertion that the date was formally set by Pope Julius I, bishop of Rome from 337 to 352. December 25 was the traditional date of the winter solstice in the Roman Empire, where most Christians lived, and the Roman festival Dies Natalis Solis Invicti (birthday of Sol Invictus) had been held on this date since AD 274. 
In the East, the birth of Jesus was celebrated in connection with the Epiphany on January 6. This holiday was not primarily about Christ's birth, but rather his baptism. Christmas was promoted in the East as part of the revival of Orthodox Christianity that followed the death of the pro-Arian Emperor Valens at the Battle of Adrianople in 378. The feast was introduced in Constantinople in 379, in Antioch by John Chrysostom towards the end of the fourth century, probably in 388, and in Alexandria in the following century. There is evidence that Christmas was celebrated in Jerusalem by the sixth century. In the Early Middle Ages, Christmas Day was overshadowed by Epiphany, which in western Christianity focused on the visit of the magi. However, the medieval calendar was dominated by Christmas-related holidays. The forty days before Christmas became the "forty days of St. Martin" (which began on November 11, the feast of St. Martin of Tours), now known as Advent. In Italy, former Saturnalian traditions were attached to Advent. Around the 12th century, these traditions transferred again to the Twelve Days of Christmas (December 25 – January 5); a time that appears in the liturgical calendars as Christmastide or Twelve Holy Days. In 567, the Council of Tours put in place the season of Christmastide, proclaiming "the twelve days from Christmas to Epiphany as a sacred and festive season, and established the duty of Advent fasting in preparation for the feast". This was done in order to solve the "administrative problem for the Roman Empire as it tried to coordinate the solar Julian calendar with the lunar calendars of its provinces in the east". The prominence of Christmas Day increased gradually after Charlemagne was crowned Emperor on Christmas Day in 800. King Edmund the Martyr was anointed on Christmas in 855 and King William I of England was crowned on Christmas Day 1066. By the High Middle Ages, the holiday had become so prominent that chroniclers routinely noted where various magnates celebrated Christmas. King Richard II of England hosted a Christmas feast in 1377 at which 28 oxen and 300 sheep were eaten. The Yule boar was a common feature of medieval Christmas feasts. Caroling also became popular, and was originally performed by a group of dancers who sang. The group was composed of a lead singer and a ring of dancers that provided the chorus. Writers of the time condemned caroling as lewd, indicating that the unruly traditions of Saturnalia and Yule may have continued in this form. "Misrule"—drunkenness, promiscuity, gambling—was also an important aspect of the festival. In England, gifts were exchanged on New Year's Day (a custom at the royal court), and there was special Christmas ale. Christmas during the Middle Ages was a public festival that incorporated ivy, holly, and other evergreens. Christmas gift-giving during the Middle Ages was usually between people with legal relationships, such as tenant and landlord. The annual indulgence in eating, dancing, singing, sporting, and card playing escalated in England, and by the 17th century the Christmas season featured lavish dinners, elaborate masques, and pageants. In 1607, King James I insisted that a play be acted on Christmas night and that the court indulge in games. It was during the Reformation in 16th–17th-century Europe that many Protestants changed the gift bringer to the Christ Child or Christkindl, and the date of giving gifts changed from December 6 to Christmas Eve. 
Following the Protestant Reformation, many of the new denominations, including the Anglican Church and Lutheran Church, continued to celebrate Christmas. In 1629, the Anglican poet John Milton penned On the Morning of Christ's Nativity, a poem that has since been read by many during Christmastide. Donald Heinz, a professor at California State University, Chico, states that Martin Luther "inaugurated a period in which Germany would produce a unique culture of Christmas, much copied in North America". Among the congregations of the Dutch Reformed Church, Christmas was celebrated as one of the principal evangelical feasts. However, in 17th-century England, some groups such as the Puritans strongly condemned the celebration of Christmas, considering it a Catholic invention and the "trappings of popery" or the "rags of the Beast". In contrast, the established Anglican Church "pressed for a more elaborate observance of feasts, penitential seasons, and saints' days. The calendar reform became a major point of tension between the Anglican party and the Puritan party". The Catholic Church also responded, promoting the festival in a more religiously oriented form. King Charles I of England directed his noblemen and gentry to return to their landed estates in midwinter to keep up their old-style Christmas generosity. Following the Parliamentarian victory over Charles I during the English Civil War, England's Puritan rulers banned Christmas in 1647. Oliver Cromwell even ordered his troops to confiscate any special meals made on Christmas Day. Protests followed as pro-Christmas rioting broke out in several cities, and for weeks Canterbury was controlled by the rioters, who decorated doorways with holly and shouted royalist slogans. Football, among the sports the Puritans banned on a Sunday, was also used as a rebellious force: when Puritans outlawed Christmas in England in December 1647, the crowd brought out footballs as a symbol of festive misrule. The book The Vindication of Christmas (London, 1652) argued against the Puritans and made note of Old English Christmas traditions: dinner, roast apples on the fire, card playing, dances with "plow-boys" and "maidservants", old Father Christmas, and carol singing. During the ban, semi-clandestine religious services marking Christ's birth continued to be held, and people sang carols in secret. Christmas was restored as a legal holiday in England with the Restoration of King Charles II in 1660, when Puritan legislation was declared void, and Christmas was again freely celebrated in England. Many Calvinist clergymen disapproved of Christmas celebrations. Accordingly, in Scotland, the Presbyterian Church of Scotland discouraged the observance of Christmas, and though James VI commanded its celebration in 1618, attendance at church was scant. The Parliament of Scotland officially abolished the observance of Christmas in 1640, claiming that the church had been "purged of all superstitious observation of days". Whereas in England, Wales and Ireland Christmas Day is a common law holiday, having been a customary holiday since time immemorial, it was not until 1871 that it was designated a bank holiday in Scotland. The diary of James Woodforde, from the latter half of the 18th century, details the observance of Christmas and celebrations associated with the season over a number of years. As in England, Puritans in Colonial America staunchly opposed the observation of Christmas. 
The Pilgrims of New England pointedly spent their first December 25 in the New World working normally. Puritans such as Cotton Mather condemned Christmas both because scripture did not mention its observance and because Christmas celebrations of the day often involved boisterous behavior. Many non-Puritans in New England deplored the loss of the holidays enjoyed by the laboring classes in England. Christmas observance was outlawed in Boston in 1659. The ban on Christmas observance was revoked in 1681 by English governor Edmund Andros, but it was not until the mid-19th century that celebrating Christmas became fashionable in the Boston region. At the same time, Christian residents of Virginia and New York observed the holiday freely. Pennsylvania Dutch settlers, predominantly Moravian settlers of Bethlehem, Nazareth, and Lititz in Pennsylvania and the Wachovia settlements in North Carolina, were enthusiastic celebrators of Christmas. The Moravians in Bethlehem had the first Christmas trees in America as well as the first nativity scenes. Christmas fell out of favor in the United States after the American Revolution, when it was considered an English custom. With the atheistic Cult of Reason in power during the era of Revolutionary France, Christian Christmas religious services were banned and the three kings cake was renamed the "equality cake" under anticlerical government policies. In the early 19th century, Christmas festivities and services gradually spread with the rise of the Oxford Movement in the Church of England that emphasized the centrality of Christmas in Christianity and charity to the poor, along with Washington Irving, Charles Dickens, and other authors emphasizing family, children, kind-heartedness, gift-giving, and Santa Claus (for Irving), or Father Christmas (for Dickens). An indication that this increased recognition of Christmas came slowly, however, is the fact that "in twenty of the years between 1790 and 1835, The Times did not mention Christmas at all." In the early 19th century, writers imagined Tudor-period Christmas as a time of heartfelt celebration. In 1835, Thomas Hervey and Robert Seymour published The Christmas Book, in which they introduced what has been called a "national Christmas narrative." In his book, Hervey asserted: "the revels of merry England are fast subsiding into silence, and her many customs wearing gradually away." In 1843, Charles Dickens wrote the novel A Christmas Carol, which helped revive the "spirit" of Christmas and seasonal merriment. Its instant popularity played a major role in portraying Christmas as a holiday emphasizing family, goodwill, and compassion. Dickens sought to construct Christmas as a family-centered festival of generosity, linking "worship and feasting, within a context of social reconciliation". Superimposing his humanitarian vision of the holiday, in what has been termed "Carol Philosophy", Dickens influenced many aspects of Christmas that are celebrated today in Western culture, such as family gatherings, seasonal food and drink, dancing, games, and a festive generosity of spirit. It has been said that Dickens' breakthrough with A Christmas Carol was his "ingenious pairing of seasonal fiction and seasonal [book] sales." A prominent phrase from the tale, "Merry Christmas", was popularized following the appearance of the story. This coincided with the appearance of the Oxford Movement and the growth of Anglo-Catholicism, which led a revival in traditional rituals and religious observances. 
The term Scrooge became a synonym for miser, with the phrase "Bah! Humbug!" becoming emblematic of a dismissive attitude toward the festive spirit. In 1843, the first commercial Christmas card was produced by Sir Henry Cole. The revival of the Christmas carol began with William Sandys's Christmas Carols Ancient and Modern (1833), with the first appearance in print of "The First Noel", "I Saw Three Ships", "Hark the Herald Angels Sing" and "God Rest Ye Merry, Gentlemen", popularized in Dickens's A Christmas Carol. In Britain, the Christmas tree was introduced in the early 19th century by the German-born Queen Charlotte. In 1832, the future Queen Victoria wrote about her delight at having a Christmas tree, hung with lights and ornaments, with presents placed round it. After her marriage to her German cousin Prince Albert, the custom became more widespread throughout Britain by 1841. An image of the British royal family with their Christmas tree at Windsor Castle created a sensation when it was published in the Illustrated London News in 1848. A modified version of this image was published in Godey's Lady's Book, Philadelphia in 1850. By the 1870s, putting up a Christmas tree had become common in America. In America, interest in Christmas had been revived in the 1820s by several short stories by Washington Irving which appear in his The Sketch Book of Geoffrey Crayon, Gent. and "Old Christmas". Irving's stories depicted harmonious, warm-hearted English Christmas festivities that he experienced while staying in Aston Hall, Birmingham, England, and that had largely been abandoned; he used the tract Vindication of Christmas (1652), with its account of Old English Christmas traditions, which he had transcribed into his journal, as a format for his stories. In 1822, Clement Clarke Moore wrote the poem A Visit From St. Nicholas (popularly known by its first line, "'Twas the Night Before Christmas"). The poem helped popularize the tradition of exchanging gifts, and seasonal Christmas shopping began to assume economic importance. This also started the cultural conflict between the holiday's spiritual significance and its associated commercialism that some see as corrupting the holiday. In her 1850 book The First Christmas in New England, Harriet Beecher Stowe includes a character who complains that the true meaning of Christmas was lost in a shopping spree. While the celebration of Christmas was not yet customary in some regions in the U.S., Henry Wadsworth Longfellow detected "a transition state about Christmas here in New England" in 1856. "The old puritan feeling prevents it from being a cheerful, hearty holiday; though every year makes it more so". In Reading, Pennsylvania, a newspaper remarked in 1861, "Even our presbyterian friends who have hitherto steadfastly ignored Christmas—threw open their church doors and assembled in force to celebrate the anniversary of the Savior's birth." The First Congregational Church of Rockford, Illinois, "although of genuine Puritan stock", was 'preparing for a grand Christmas jubilee', a news correspondent reported in 1864. By 1860, fourteen states, including several from New England, had adopted Christmas as a legal holiday. In 1875, Louis Prang introduced the Christmas card to Americans. He has been called the "father of the American Christmas card". On June 28, 1870, Christmas was formally declared a United States federal holiday. During the First World War, and particularly in 1914, a series of informal truces took place for Christmas between opposing armies. 
The truces, which were organised spontaneously by fighting men, ranged from promises not to shoot (shouted at a distance in order to ease the pressure of war for the day) to friendly socializing, gift giving and even sport between enemies. These incidents became a well-known and semi-mythologised part of popular memory. They have been described as a symbol of common humanity even in the darkest of situations and used to demonstrate to children the ideals of Christmas.

Under the state atheism of the Soviet Union, after its foundation in 1917, Christmas celebrations, along with other Christian holidays, were prohibited in public. During the 1920s, 1930s, and 1940s, the League of Militant Atheists encouraged school pupils to campaign against Christmas traditions, such as the Christmas tree, as well as other Christian holidays, including Easter; the League established an antireligious holiday on the 31st of each month as a replacement. At the height of this persecution, on Christmas Day 1929, children in Moscow were encouraged to spit on crucifixes as a protest against the holiday. Instead, the importance of the holiday and all its trappings, such as the Christmas tree and gift-giving, was transferred to the New Year. It was not until the dissolution of the Soviet Union in 1991 that the persecution ended and Orthodox Christmas became a state holiday again in Russia for the first time in seven decades. In 1991, the Gubbio Christmas Tree in Italy, 650 metres (2,130 ft) high and decorated with over 700 lights, entered the Guinness Book of Records as the tallest Christmas tree in the world.

European history professor Joseph Perry wrote that likewise, in Nazi Germany, "because Nazi ideologues saw organized religion as an enemy of the totalitarian state, propagandists sought to deemphasize—or eliminate altogether—the Christian aspects of the holiday" and that "Propagandists tirelessly promoted numerous Nazified Christmas songs, which replaced Christian themes with the regime's racial ideologies".

Some Christians and organizations such as Pat Robertson's American Center for Law and Justice cite alleged attacks on Christmas, dubbing them a "war on Christmas". Such groups claim that any specific mention of the term "Christmas" or its religious aspects is being increasingly censored, avoided, or discouraged by a number of advertisers, retailers, government (prominently schools), and other public and private organizations. One controversy involves Christmas trees being renamed Holiday trees. In the U.S. there has been a tendency to replace the greeting Merry Christmas with Happy Holidays, which is considered inclusive at the time of the Jewish celebration of Hanukkah. In the U.S. and Canada, where the use of the term "Holidays" is most prevalent, opponents have denounced its usage and the avoidance of the term "Christmas" as being politically correct. In 1984, the U.S. Supreme Court ruled in Lynch v. Donnelly that a Christmas display (which included a Nativity scene) owned and displayed by the city of Pawtucket, Rhode Island, did not violate the First Amendment.

Observance and traditions

Christmas Day is celebrated as a major festival and public holiday in countries around the world, including many whose populations are mostly non-Christian. In some non-Christian areas, periods of former colonial rule introduced the celebration (e.g. Hong Kong); in others, Christian minorities or foreign cultural influences have led populations to observe the holiday.
Countries such as Japan, where Christmas is popular despite there being only a small number of Christians, have adopted many of the cultural aspects of Christmas, such as gift-giving, decorations, and Christmas trees. A similar example is Turkey, a Muslim-majority country with a small number of Christians, where Christmas trees and decorations tend to line public streets during the festival. Many popular customs associated with Christmas developed independently of the commemoration of Jesus's birth, with some claiming that certain elements are Christianized and have origins in pre-Christian festivals that were celebrated by pagan populations who were later converted to Christianity; other scholars reject these claims and affirm that Christmas customs largely developed in a Christian context. The prevailing atmosphere of Christmas has also continually evolved since the holiday's inception, ranging from a sometimes raucous, drunken, carnival-like state in the Middle Ages to a tamer, family-oriented and children-centered theme introduced in a 19th-century transformation. The celebration of Christmas was banned on more than one occasion within certain groups, such as the Puritans and Jehovah's Witnesses (who do not celebrate birthdays in general), due to concerns that it was too unbiblical.

Celtic winter herbs such as mistletoe and ivy, and the custom of kissing under a mistletoe, are common in modern Christmas celebrations in the English-speaking countries. The pre-Christian Germanic peoples, including the Anglo-Saxons and the Norse, celebrated a winter festival called Yule, held in the late December to early January period, yielding modern English yule, today used as a synonym for Christmas. In Germanic-language-speaking areas, numerous elements of modern Christmas folk custom and iconography may have originated from Yule, including the Yule log, Yule boar, and the Yule goat. Often leading a ghostly procession through the sky (the Wild Hunt), the long-bearded god Odin is referred to as "the Yule one" and "Yule father" in Old Norse texts, while other gods are referred to as "Yule beings". On the other hand, as there are no reliable existing references to a Christmas log prior to the 16th century, the burning of the Christmas block may have been an early modern invention by Christians unrelated to the pagan practice. Among countries with a strong Christian tradition, a variety of Christmas celebrations have developed that incorporate regional and local cultures. For example, in eastern Europe Christmas celebrations incorporated pre-Christian traditions such as the Koleda, which shares parallels with the Christmas carol.

Christmas Day (inclusive of its vigil, Christmas Eve) is a Festival in the Lutheran Churches, a solemnity in the Roman Catholic Church, and a Principal Feast of the Anglican Communion. Other Christian denominations do not rank their feast days but nevertheless place importance on Christmas Eve/Christmas Day, as with other Christian feasts like Easter, Ascension Day, and Pentecost. As such, for Christians, attending a Christmas Eve or Christmas Day church service plays an important part in the recognition of the Christmas season. Christmas, along with Easter, is the period of highest annual church attendance. A 2010 survey by LifeWay Christian Resources found that six in ten Americans attend church services during this time.
In the United Kingdom, the Church of England reported an estimated attendance of 2.5 million people at Christmas services in 2015.

Nativity scenes are known from 10th-century Rome. They were popularised by Saint Francis of Assisi from 1223, quickly spreading across Europe. Different types of decorations developed across the Christian world, dependent on local tradition and available resources, and can vary from simple representations of the crib to far more elaborate sets: renowned manger scene traditions include the colourful Kraków szopka in Poland, which imitates Kraków's historical buildings as settings, the elaborate Italian presepi (Neapolitan, Genoese and Bolognese), and the Provençal crèches in southern France, which use hand-painted terracotta figurines called santons. In certain parts of the world, notably Sicily, living nativity scenes following the tradition of Saint Francis are a popular alternative to static crèches. The first commercially produced decorations appeared in Germany in the 1860s, inspired by paper chains made by children. In countries where a representation of the Nativity scene is very popular, people are encouraged to compete and create the most original or realistic ones. Within some families, the pieces used to make the representation are considered a valuable family heirloom.

The traditional colours of Christmas decorations are red, green, and gold. Red symbolizes the blood of Jesus, which was shed in his crucifixion; green symbolizes eternal life, and in particular the evergreen tree, which does not lose its leaves in the winter; and gold is the first colour associated with Christmas, as one of the three gifts of the Magi, symbolizing royalty.

The Christmas tree was first used by German Lutherans in the 16th century, with records indicating that a Christmas tree was placed in the Cathedral of Strassburg in 1539, under the leadership of the Protestant Reformer Martin Bucer. In the United States, these "German Lutherans brought the decorated Christmas tree with them; the Moravians put lighted candles on those trees". When decorating the Christmas tree, many individuals place a star at the top of the tree symbolizing the Star of Bethlehem, a fact recorded by The School Journal in 1897. Professor David Albert Jones of Oxford University writes that in the 19th century, it became popular for people to also use an angel to top the Christmas tree in order to symbolize the angels mentioned in the accounts of the Nativity of Jesus. Additionally, in the context of a Christian celebration of Christmas, the Christmas tree, being evergreen in colour, is symbolic of Christ, who offers eternal life; the candles or lights on the tree represent the Light of the World, Jesus, born in Bethlehem. Christian services for family use and public worship have been published for the blessing of a Christmas tree after it has been erected. The Christmas tree is considered by some as a Christianisation of pagan tradition and ritual surrounding the Winter Solstice, which included the use of evergreen boughs, and an adaptation of pagan tree worship; according to the eighth-century biographer Æddi Stephanus, Saint Boniface (634–709), who was a missionary in Germany, took an axe to an oak tree dedicated to Thor and pointed out a fir tree, which he stated was a more fitting object of reverence because it pointed to heaven and had a triangular shape, which he said was symbolic of the Trinity.
The English-language phrase "Christmas tree" is first recorded in 1835 and represents an importation from the German language. Since the 16th century, the poinsettia, a plant native to Mexico, has been associated with Christmas, carrying the Christian symbolism of the Star of Bethlehem; in that country it is known in Spanish as the Flower of the Holy Night. Other popular holiday plants include holly, mistletoe, red amaryllis, and Christmas cactus. Along with a Christmas tree, the interior of a home may be decorated with these plants, along with garlands and evergreen foliage. The display of Christmas villages has also become a tradition in many homes during this season. The outside of houses may be decorated with lights and sometimes with illuminated Christmas figures.

Mistletoe features prominently in European myth and folklore (for example, the legend of Baldr). It is customary to hang a sprig of mistletoe in the house at Christmas, and anyone standing underneath it may be kissed. Other traditional decorations include bells, candles, candy canes, stockings, wreaths, and angels. Wreaths and candles in each window are a more traditional Christmas display; Christmas wreaths are made from a concentric assortment of leaves, usually from an evergreen, while candles in each window are meant to demonstrate that Christians believe that Jesus Christ is the ultimate light of the world. Christmas lights and banners may be hung along streets, music played by speakers, and Christmas trees placed in prominent places. It is common in many parts of the world for town squares and consumer shopping areas to sponsor and display decorations. Rolls of brightly colored paper with secular or religious Christmas motifs are manufactured to wrap gifts. In some countries, Christmas decorations are traditionally taken down on Twelfth Night.

The tradition of the Nativity scene comes from Italy. One of the earliest representations of the nativity in art was found in the early Christian Roman catacomb of Saint Valentine; it dates to about AD 380. Another, of similar date, is beneath the pulpit in Sant'Ambrogio, Milan. For the Christian celebration of Christmas, the viewing of the Nativity play is one of the oldest Christmastime traditions, with the first reenactment of the Nativity of Jesus taking place in AD 1223 in the Italian town of Greccio. In that year, Francis of Assisi assembled a Nativity scene outside of his church in Italy and children sang Christmas carols celebrating the birth of Jesus. Each year, this grew larger, and people travelled from afar to see Francis's depiction of the Nativity of Jesus, which came to feature drama and music. Nativity plays eventually spread throughout all of Europe, where they remained popular. Christmas Eve and Christmas Day church services often came to feature Nativity plays, as did schools and theatres. In France, Germany, Mexico, and Spain, Nativity plays are often reenacted outdoors in the streets. Many families decorate their houses with lights and, in modern times, inflatables, to create a festive environment. Christmas lights originated with candles on Christmas trees in 16th-century Germany, where they symbolized Christ as the light of the world.

The earliest extant specifically Christmas hymns appear in fourth-century Rome. Latin hymns such as "Veni redemptor gentium", written by Ambrose, Archbishop of Milan, were austere statements of the theological doctrine of the Incarnation in opposition to Arianism.
"Corde natus ex Parentis" ("Of the Father's love begotten") by the Spanish poet Prudentius (died 413) is still sung in some churches today. In the 9th and 10th centuries, the Christmas "Sequence" or "Prose" was introduced in North European monasteries, developing under Bernard of Clairvaux into a sequence of rhymed stanzas. In the 12th century the Parisian monk Adam of St. Victor began to derive music from popular songs, introducing something closer to the traditional Christmas carol. Christmas carols in English appear in a 1426 work of John Awdlay who lists twenty five "caroles of Cristemas", probably sung by groups of 'wassailers', who went from house to house. The songs now known specifically as carols were originally communal folk songs sung during celebrations such as "harvest tide" as well as Christmas. It was only later that carols began to be sung in church. Traditionally, carols have often been based on medieval chord patterns, and it is this that gives them their uniquely characteristic musical sound. Some carols like "Personent hodie", "Good King Wenceslas", and "In dulci jubilo" can be traced directly back to the Middle Ages. They are among the oldest musical compositions still regularly sung. "Adeste Fideles" (O Come all ye faithful) appeared in its current form in the mid-18th century. The singing of carols increased in popularity after the Protestant Reformation in the Lutheran areas of Europe, as the Reformer Martin Luther wrote carols and encouraged their use in worship, in addition to spearheading the practice of caroling outside the Mass. The 18th-century English reformer Charles Wesley, a founder of Methodism, understood the importance of music to Christian worship. In addition to setting many psalms to melodies, he wrote texts for at least three Christmas carols. The best known was originally entitled "Hark! How All the Welkin Rings", later renamed "Hark! The Herald Angels Sing". Christmas seasonal songs of a secular nature emerged in the late 18th century. The Welsh melody for "Deck the Halls" dates from 1794, with the lyrics added by Scottish musician Thomas Oliphant in 1862, and the American "Jingle Bells" was copyrighted in 1857. Other popular carols include "The First Noel", "God Rest Ye Merry, Gentlemen", "The Holly and the Ivy", "I Saw Three Ships", "In the Bleak Midwinter", "Joy to the World", "Once in Royal David's City" and "While Shepherds Watched Their Flocks". In the 19th and 20th centuries, African American spirituals and songs about Christmas, based in their tradition of spirituals, became more widely known. An increasing number of seasonal holiday songs were commercially produced in the 20th century, including jazz and blues variations. In addition, there was a revival of interest in early music, from groups singing folk music, such as The Revels, to performers of early medieval and classical music. One of the most ubiquitous festive songs is "We Wish You a Merry Christmas", which originates from the West Country of England in the 1930s. Radio has covered Christmas music from variety shows from the 1940s and 1950s, as well as modern-day stations that exclusively play Christmas music from late November through December 25. Hollywood movies have featured new Christmas music, such as "White Christmas" in Holiday Inn and Rudolph the Red-Nosed Reindeer. Traditional carols have also been included in Hollywood films, such as "Hark! The Herald Angels Sing" in It's a Wonderful Life (1946), and "Silent Night" in A Christmas Story. 
A special Christmas family meal is traditionally an important part of the holiday's celebration, and the food served varies greatly from country to country. Some regions have special meals for Christmas Eve, such as Sicily, where 12 kinds of fish are served. In the United Kingdom and countries influenced by its traditions, a standard Christmas meal includes turkey, goose or other large bird, gravy, potatoes, vegetables, sometimes bread, and cider. Special desserts are also prepared, such as Christmas pudding, mince pies, fruit cake and Yule log cake. In Poland and Scandinavia, fish is often used for the traditional main course, but richer meat such as lamb is increasingly served. In Sweden, a special variety of smörgåsbord is common, in which ham, meatballs, and herring play a prominent role. In Germany, France, and Austria, goose and pork are favored. Beef, ham, and chicken in various recipes are popular worldwide. The Maltese traditionally serve Imbuljuta tal-Qastan, a chocolate and chestnut beverage, after Midnight Mass and throughout the Christmas season. Slovenes prepare the traditional Christmas bread potica; the French, bûche de Noël; and Italians, panettone, along with elaborate tarts and cakes. Panettone, an Italian type of sweet bread and fruitcake originally from Milan, is usually prepared and enjoyed for Christmas and New Year in Western, Southern, and Southeastern Europe, as well as in South America, Eritrea, Australia and North America. The eating of sweets and chocolates has become popular worldwide, and sweeter Christmas delicacies include the German stollen, marzipan cake or candy, and Jamaican rum fruit cake. As one of the few fruits traditionally available to northern countries in winter, oranges have long been associated with special Christmas foods.

Eggnog is a sweetened dairy-based beverage traditionally made with milk, cream, sugar, and whipped eggs (which give it a frothy texture). Spirits such as brandy, rum, or bourbon are often added, and the finished serving is often garnished with a sprinkling of ground cinnamon or nutmeg.

Christmas cards are illustrated messages of greeting exchanged between friends and family members during the weeks preceding Christmas Day. The traditional greeting reads "wishing you a Merry Christmas and a Happy New Year", much like that of the first commercial Christmas card, produced by Sir Henry Cole in London in 1843. The custom of sending them has become popular among a wide cross-section of people, with the emergence of the modern trend towards exchanging e-cards. Christmas cards are purchased in considerable quantities and feature artwork, commercially designed and relevant to the season. The content of the design might relate directly to the Christmas narrative, with depictions of the Nativity of Jesus, or Christian symbols such as the Star of Bethlehem, or a white dove, which can represent both the Holy Spirit and Peace on Earth. Other Christmas cards are more secular and can depict Christmas traditions, figures such as Santa Claus, objects directly associated with Christmas such as candles, holly, and baubles, or a variety of images associated with the season, such as Christmastide activities, snow scenes, and the wildlife of the northern winter.

A number of nations have issued commemorative stamps at Christmastide. Postal customers will often use these stamps to mail Christmas cards, and they are popular with philatelists. These stamps are regular postage stamps, unlike Christmas seals, and are valid for postage year-round.
They usually go on sale sometime between early October and early December and are printed in considerable quantities. Christmas seals were first issued to raise funds for fighting tuberculosis and bringing awareness to the disease. The first Christmas seal was issued in Denmark in 1904, and since then other countries have issued their own Christmas seals.

The exchanging of gifts is one of the core aspects of the modern Christmas celebration, making it the most profitable time of year for retailers and businesses throughout the world. On Christmas, people exchange gifts based on the Christian tradition associated with Saint Nicholas, and the gifts of gold, frankincense, and myrrh which were given to the baby Jesus by the Magi. The practice of gift giving in the Roman celebration of Saturnalia may have influenced Christian customs, but on the other hand the Christian "core dogma of the Incarnation, however, solidly established the giving and receiving of gifts as the structural principle of that recurrent yet unique event", because it was the Biblical Magi, "together with all their fellow men, who received the gift of God through man's renewed participation in the divine life". However, Thomas J. Talley holds that the Roman Emperor Aurelian placed the alternate festival on December 25 in order to compete with the growing influence of the Christian Church, which had already been celebrating Christmas on that date.

A number of figures are associated with Christmas and the seasonal giving of gifts. Among these are Father Christmas, also known as Santa Claus (derived from the Dutch for Saint Nicholas), Père Noël, and the Weihnachtsmann; Saint Nicholas or Sinterklaas; the Christkind; Kris Kringle; Joulupukki; tomte/nisse; Babbo Natale; Saint Basil; and Ded Moroz. The Scandinavian tomte (also called nisse) is sometimes depicted as a gnome instead of Santa Claus. The best known of these figures today is the red-dressed Santa Claus, of diverse origins. The name 'Santa Claus' can be traced back to the Dutch Sinterklaas ('Saint Nicholas'). Nicholas was a 4th-century Greek bishop of Myra. Among other saintly attributes, he was noted for the care of children, generosity, and the giving of gifts. His feast day, December 6, came to be celebrated in many countries with the giving of gifts. Saint Nicholas traditionally appeared in bishop's attire, accompanied by helpers, inquiring about the behaviour of children during the past year before deciding whether they deserved a gift or not. By the 13th century, Saint Nicholas was well known in the Netherlands, and the practice of gift-giving in his name spread to other parts of central and southern Europe. At the Reformation in 16th- and 17th-century Europe, many Protestants changed the gift bringer to the Christ Child or Christkindl, corrupted in English to 'Kris Kringle', and the date of giving gifts changed from December 6 to Christmas Eve.

The modern popular image of Santa Claus, however, was created in the United States, and in particular in New York. The transformation was accomplished with the aid of notable contributors including Washington Irving and the German-American cartoonist Thomas Nast (1840–1902). Following the American Revolutionary War, some of the inhabitants of New York City sought out symbols of the city's non-English past. New York had originally been established as the Dutch colonial town of New Amsterdam, and the Dutch Sinterklaas tradition was reinvented as Saint Nicholas.
Current tradition in several Latin American countries holds that while Santa makes the toys, he then gives them to the Baby Jesus, who is the one who actually delivers them to the children's homes, a reconciliation between traditional religious beliefs and the iconography of Santa Claus imported from the United States. In Italy's South Tyrol, Austria, the Czech Republic, Southern Germany, Hungary, Liechtenstein, Slovakia, and Switzerland, the Christkind (Ježíšek in Czech, Jézuska in Hungarian and Ježiško in Slovak) brings the presents. Greek children get their presents from Saint Basil on New Year's Eve, the eve of that saint's liturgical feast. The German St. Nikolaus is not identical with the Weihnachtsmann (who is the German version of Santa Claus / Father Christmas). St. Nikolaus wears a bishop's dress and still brings small gifts (usually candies, nuts, and fruits) on December 6 and is accompanied by Knecht Ruprecht. Although many parents around the world routinely teach their children about Santa Claus and other gift bringers, some have come to reject this practice, considering it deceptive.

Multiple gift-giver figures exist in Poland, varying between regions and individual families. St Nicholas (Święty Mikołaj) dominates central and north-east areas, the Starman (Gwiazdor) is most common in Greater Poland, Baby Jesus (Dzieciątko) is unique to Upper Silesia, and the Little Star (Gwiazdka) and the Little Angel (Aniołek) are common in the south and the south-east. Grandfather Frost (Dziadek Mróz) is less commonly accepted in some areas of Eastern Poland. Across all of Poland, however, St Nicholas is the gift giver on Saint Nicholas Day, December 6.

Christmas during the Middle Ages was a public festival with annual indulgences, including sport. When Puritans outlawed Christmas in England in December 1647, the crowd brought out footballs as a symbol of festive misrule. The Orkney Christmas Day Ba' tradition continues. In the former top tier of English football, home and away Christmas Day and Boxing Day double headers were often played, guaranteeing football clubs large crowds by allowing many working people their only chance to watch a game. Champions Preston North End faced Aston Villa on Christmas Day 1889; the last December 25 fixture in England was in 1965, when Blackpool beat Blackburn Rovers 4–2. One of the most memorable images of the Christmas truce during World War I was the games of football played between the opposing sides on Christmas Day 1914.

Choice of date

There are several theories as to why December 25 was chosen as the date for Christmas. However, according to theology professor Susan Roll, liturgical historians agree that it was linked in some way with "the sun, the winter solstice and the popularity of solar worship in the later Roman Empire". In Roman tradition, December 25 was deemed to be the winter solstice and March 25 the spring equinox, dates that retained significance despite Julian calendar drift. Greco-Roman writers in the second and third centuries called December 25 the birth day of the Sun. The early Church linked Jesus to the Sun and referred to him as the 'true Sun' or 'Sun of Righteousness'. In the early fifth century, Augustine of Hippo and Maximus of Turin preached that it was fitting to celebrate Christ's birth at the winter solstice, because it marked the point when the hours of daylight begin to grow.
The use of solar imagery for Christ is understood as part of broader cosmological symbolism in late antiquity, and may not have been direct cultic borrowing. The 'history of religions' or 'substitution' theory proposes that the Church chose December 25 as Christ's birthday (dies Natalis Christi) to appropriate the Roman winter solstice festival dies Natalis Solis Invicti, the birthday of the god Sol Invictus (the 'Invincible Sun'). It had been held on this date since AD 274, before the earliest evidence of Christmas on December 25. Professor Gary Forsythe says that the Natalis Solis Invicti followed "the seven-day period of the Saturnalia (December 17–23), Rome's most joyous holiday season since Republican times, characterized by parties, banquets, and exchanges of gifts". Roll says that "the specific nature of the relation" between Christmas and the Natalis Solis Invicti has not yet been "conclusively proven from extant texts".

The 'calculation theory' proposes that the date arose from Christian chronography rather than from an effort to supplant a pagan festival. It was first proposed by Louis Duchesne. Some early Christians marked Jesus's death on the day before Passover, the 14th of Nisan. Several third-century sources correlate this with the spring equinox on March 25. Duchesne conjectured that early Christians believed Jesus was conceived and died on the same date, yielding a March 25 conception and a December 25 birth nine months later. However, he admitted that this belief is not found in any early Christian text. The earliest attestation of a December 25 Nativity may be contemporaneous with or earlier than clear attestations of a December 25 Sol Invictus festival.

Some jurisdictions of the Eastern Orthodox Church, including those of Russia, Georgia, North Macedonia, Montenegro, Serbia, and Jerusalem, mark feasts using the older Julian calendar. From Christmas 1900 until Christmas 2099 inclusive, there is a difference of 13 days between the Julian calendar and the modern Gregorian calendar. As a result, December 25 on the Julian calendar currently corresponds to January 7 on the calendar used by most governments and people in everyday life. Therefore, the aforementioned Orthodox Christians mark December 25 (and thus Christmas) on the day that is internationally considered to be January 7. However, following the Council of Constantinople in 1923, other Orthodox Christians, such as those belonging to the jurisdictions of Constantinople, Bulgaria, Greece, Romania, Antioch, Alexandria, Albania, Cyprus, Finland, and the Orthodox Church in America, among others, began using the Revised Julian calendar, which at present corresponds exactly to the Gregorian calendar. Therefore, these Orthodox Christians mark December 25 (and thus Christmas) on the same day that is internationally considered to be December 25.

The Armenian Apostolic Church continues the original ancient Eastern Christian practice of celebrating the birth of Christ not as a separate holiday, but on the same day as the celebration of his baptism (Theophany), which is on January 6. This is a public holiday in Armenia, and it is held on the same day that is internationally considered to be January 6, because the Armenian Church in Armenia has used the Gregorian calendar since 1923.
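The 13-day Julian-to-Gregorian offset described above can be checked mechanically. The following is a minimal Python sketch, not part of the article: it uses the standard Julian Day Number (JDN) conversion algorithms, and the function names are our own.

```python
# Sketch: verify the current 13-day gap between the Julian and Gregorian
# calendars using standard Julian Day Number (JDN) conversions.

def jdn_from_julian(year, month, day):
    # Convert a Julian-calendar date to a Julian Day Number.
    a = (14 - month) // 12
    y = year + 4800 - a
    m = month + 12 * a - 3
    return day + (153 * m + 2) // 5 + 365 * y + y // 4 - 32083

def gregorian_from_jdn(jdn):
    # Convert a Julian Day Number to a Gregorian-calendar (year, month, day).
    a = jdn + 32044
    b = (4 * a + 3) // 146097
    c = a - 146097 * b // 4
    d = (4 * c + 3) // 1461
    e = c - 1461 * d // 4
    m = (5 * e + 2) // 153
    day = e - (153 * m + 2) // 5 + 1
    month = m + 3 - 12 * (m // 10)
    year = 100 * b + d - 4800 + m // 10
    return year, month, day

# Julian December 25 currently lands on the international January 7:
print(gregorian_from_jdn(jdn_from_julian(2024, 12, 25)))  # (2025, 1, 7)
```

Running the same conversion for Julian December 25, 2100 would yield January 8, 2102's calendar position, illustrating why the 13-day correspondence holds only through 2099.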
However, there is also a small Armenian Patriarchate of Jerusalem, which maintains the traditional Armenian custom of celebrating the birth of Christ on the same day as Theophany (January 6), but uses the Julian calendar for the determination of that date. As a result, this church celebrates "Christmas" (more properly called Theophany) on the day that is considered January 19 on the Gregorian calendar in use by the majority of the world. Following the 2022 Russian invasion of Ukraine, Ukraine officially moved its Christmas date from January 7 to December 25, to distance itself from the Russian Orthodox Church that had supported Russia's invasion. This followed the Orthodox Church of Ukraine formally adopting the Revised Julian calendar for fixed feasts and solemnities.

In sum, four different dates are used by different Christian groups to mark the birth of Christ: December 25 on the Gregorian or Revised Julian calendar; December 25 on the Julian calendar (currently the international January 7); January 6 (Theophany); and January 6 on the Julian calendar (currently the international January 19). Among the churches following the Julian reckoning are the Syriac Orthodox Church, the Indian Orthodox Church, and the Ancient Church of the East, although the latter, while otherwise following the Julian calendar, decided in 2010 to celebrate Christmas according to the Gregorian calendar date, as do some Byzantine Rite Catholics and Byzantine Rite Lutherans. Most Protestants (P'ent'ay/Evangelicals) in the diaspora have the option of choosing the Ethiopian calendar (Tahsas 29/January 7) or the Gregorian calendar (December 25) for religious holidays, with this option being used when the corresponding eastern celebration is not a public holiday in the western world (with most diaspora Protestants celebrating both days).

Economy

Christmas is typically a peak selling season for retailers in many nations around the world; sales increase dramatically during this time as people purchase gifts, decorations, and supplies to celebrate. In the United States, the "Christmas shopping season" starts as early as October. In Canada, merchants begin advertising campaigns before Halloween (October 31) and step up their marketing following Remembrance Day on November 11. In the UK and Ireland, the Christmas shopping season starts from mid-November, around the time when high street Christmas lights are turned on. A concept devised by retail entrepreneur David Lewis, the first Christmas grotto opened in Lewis's department store in Liverpool, England, in 1879.

In the United States, it has been calculated that a quarter of all personal spending takes place during the Christmas/holiday shopping season. Figures from the US Census Bureau reveal that expenditure in department stores nationwide rose from $20.8 billion in November 2004 to $31.9 billion in December 2004, an increase of about 53 percent. In other sectors, the pre-Christmas increase in spending was even greater, with a November–December buying surge of 100 percent in bookstores and 170 percent in jewelry stores. In the same year, employment in American retail stores rose from 1.6 million to 1.8 million in the two months leading up to Christmas. Industries completely dependent on Christmas include Christmas cards, of which 1.9 billion are sent in the United States each year, and live Christmas trees, of which 20.8 million were cut in the US in 2002. In the UK in 2010, up to £8 billion was expected to be spent online at Christmas, approximately a quarter of total retail festive sales.
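As a quick arithmetic check of the Census Bureau figures quoted above (a sketch of our own; the dollar figures come from the text, the computation does not):

```python
# Percentage increase in US department-store expenditure, Nov -> Dec 2004,
# using the Census Bureau figures quoted in the text.
november, december = 20.8, 31.9  # billions of dollars
increase = (december - november) / november * 100
print(f"{increase:.1f}% increase")  # 53.4% increase
```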
In most Western nations, Christmas Day is the least active day of the year for business and commerce; almost all retail, commercial and institutional businesses are closed, and almost all industries cease activity (more than on any other day of the year), whether or not laws require it. In England and Wales, the Christmas Day (Trading) Act 2004 prevents all large shops from trading on Christmas Day. Similar legislation was approved in Scotland in 2007. Film studios release many high-budget movies during the holiday season, including Christmas films, fantasy movies and high-tone dramas with high production values, in hopes of maximizing the chance of nominations for the Academy Awards.

One economist's analysis calculates that, despite increased overall spending, Christmas is a deadweight loss under orthodox microeconomic theory, because of the effect of gift-giving. This loss is calculated as the difference between what the gift giver spent on the item and what the gift receiver would have paid for the item (a toy numeric sketch appears at the end of this section). It is estimated that in 2001, Christmas resulted in a $4 billion deadweight loss in the US alone. Because of complicating factors, this analysis is sometimes used to discuss possible flaws in current microeconomic theory. Other deadweight losses include the effects of Christmas on the environment and the fact that material gifts are often perceived as white elephants, imposing costs for upkeep and storage and contributing to clutter.

Prohibition

Christmas has been the subject of controversy and attacks from various sources, both Christian and non-Christian. Historically, it was prohibited by Puritans during their ascendancy in the Commonwealth of England (1647–1660) and in colonial New England, where the Puritans outlawed the celebration of Christmas in 1659 on the grounds that Christmas was not mentioned in Scripture and therefore violated the regulative principle of worship. The Parliament of Scotland, which was dominated by Presbyterians, passed a series of acts outlawing the observance of Christmas between 1637 and 1690; Christmas Day did not become a public holiday in Scotland until 1871. Today, some conservative Reformed denominations such as the Free Presbyterian Church of Scotland and the Reformed Presbyterian Church of North America likewise reject the celebration of Christmas based on the regulative principle and what they see as its non-Scriptural origin. Celebrating Christmas is banned among Jehovah's Witnesses, as the Governing Body believes that Christmas is originally pagan and without basis in Scripture. Christmas celebrations have also been prohibited by atheist states such as the Soviet Union and, in 2015, by the Muslim-majority states of Somalia, Tajikistan and Brunei. As Christmas celebrations began to spread globally even outside traditional Christian cultures, several Muslim-majority countries began to ban the observance of Christmas, claiming it undermined Islam. In December 2018, the Iraqi Council of Ministers amended the national holidays law to designate Christmas Day (December 25) an official nationwide holiday, to be celebrated by all Iraqis rather than only the country's Christian minority. In 2023, public Christmas celebrations were cancelled in Bethlehem, the city synonymous with the birth of Jesus. Palestinian leaders of various Christian denominations cited the ongoing Israel–Gaza war in their unanimous decision to cancel celebrations.
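The toy sketch promised under Economy above: a minimal illustration, with made-up numbers, of how the deadweight loss of gift-giving is tallied. None of these figures come from the economist's actual study.

```python
# Toy illustration of gift-giving deadweight loss: the gap between what
# givers paid and what recipients would have been willing to pay.
gifts = [
    # (price paid by giver, value to recipient) -- hypothetical numbers
    (50.0, 30.0),  # a sweater the recipient would only have paid $30 for
    (25.0, 25.0),  # a book valued exactly at its price: no loss
    (40.0, 10.0),  # an unwanted gadget, a "white elephant"
]

deadweight_loss = sum(paid - valued for paid, valued in gifts)
total_spent = sum(paid for paid, _ in gifts)
print(f"Spent ${total_spent:.2f}, deadweight loss ${deadweight_loss:.2f}")
# -> Spent $115.00, deadweight loss $50.00
```

Scaled up across an economy, sums of this kind are what produce estimates such as the $4 billion US figure for 2001 cited above.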
The government of the People's Republic of China officially espouses state atheism, and has conducted antireligious campaigns to this end. In December 2018, officials raided Christian churches prior to Christmastide and coerced them into closing; Christmas trees and Santa Claus figures were also forcibly removed. See also Notes References Further reading External links |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Cairo] | [TOKENS: 18482] |
Contents Cairo Cairo is the capital and largest city of Egypt and the Cairo Governorate. It is home to more than 10.5 million people. It is also part of the largest urban agglomeration in Africa, the Arab world, and the Middle East. The Greater Cairo metropolitan area is one of the largest in the world by population, with over 22 million people. Areas of what would become Cairo were inhabited in pre-dynastic and early-dynastic ancient Egypt c. 6000 years ago, and the Giza pyramid complex and the ancient cities of Memphis and Heliopolis lie within today's city. Located near the Nile Delta, the city's predecessor was Fustat, which itself followed the ancient fortress-town of Babylon. Cairo proper was subsequently founded in 969. It later superseded Fustat as the main urban centre during the Ayyubid and Mamluk periods (12th–16th centuries). Cairo has since become a longstanding centre of political and cultural life, and is titled "the city of a thousand minarets" for its preponderance of Islamic architecture. Cairo's historic center was awarded World Heritage Site status in 1979. The city is considered a regional center of finance and commerce, academics and the arts, and is home to the Cairo Symphony Orchestra and the Cairo Opera House, while the Academy of Arts provides visual arts education. Cairo is home to Egypt's oldest university, Al-Azhar University, one of the oldest universities in the world, as well as the oldest and largest film and music industry in Africa and the Arab world. Many international media, businesses, and organizations have regional headquarters in Cairo, such as the headquarters of the Arab League, the regional offices of the World Health Organization, Food and Agriculture Organization, International Civil Aviation Organization, United Nations Development Programme, African Space Agency, and also the headquarters of FIBA Africa. Cairo is a key global city and, like many other megacities, suffers from high levels of pollution and traffic. The Cairo Metro, opened in 1987, is the oldest metro system in Africa, and ranks amongst the fifteen busiest in the world, with over 1 billion annual passenger rides. The economy of Cairo was ranked first in the Middle East in 2005 on Foreign Policy's Global Cities Index, and first in Africa in 2025 according to the International Monetary Fund, and continues to be a major destination for foreign direct investment (FDI) due to its massive consumer market and strategic location.

Etymology

The name of Cairo is derived from the Arabic al-Qāhirah (القاهرة), meaning 'the Vanquisher' or 'the Conqueror', given by the Fatimid Caliph al-Mu'izz following the establishment of the city as the capital of the Fatimid dynasty. Its full, formal name was al-Qāhirah al-Mu'izziyyah (القاهرة المعزيّة), meaning 'the Vanquisher of al-Mu'izz'. The name is also supposedly due to the fact that the planet Mars, known in Arabic by names such as an-Najm al-Qāhir (النجم القاهر, 'the Conquering Star'), was rising at the time of the city's founding. Egyptians often refer to Cairo as Maṣr (IPA: [mɑsˤɾ]; مَصر), the Egyptian Arabic name for Egypt itself, emphasising the city's importance for the country.

There are a number of Coptic names for the city. Tikešrōmi (Coptic: Ϯⲕⲉϣⲣⲱⲙⲓ, Late Coptic: [dikɑʃˈɾoːmi]) is attested in the 1211 text The Martyrdom of John of Phanijoit and is either a calque meaning 'man breaker' (Ϯ-, 'the', ⲕⲁϣ-, 'to break', and ⲣⲱⲙⲓ, 'man'), akin to Arabic al-Qāhirah, or a derivation from Arabic قَصْر الرُوم (qaṣr ar-rūm, "the Roman castle"), another name of Babylon Fortress in Old Cairo.
The Arabic name is also calqued as ⲧⲡⲟⲗⲓⲥ ϯⲣⲉϥϭⲣⲟ, "the victor city", in the Coptic antiphonary. The form Khairon (Coptic: ⲭⲁⲓⲣⲟⲛ) is attested in the modern Coptic text Ⲡⲓⲫⲓⲣⲓ ⲛ̀ⲧⲉ ϯⲁⲅⲓⲁ ⲙ̀ⲙⲏⲓ Ⲃⲉⲣⲏⲛⲁ (The Tale of Saint Verina). Lioui (Ⲗⲓⲟⲩⲓ, Late Coptic: [lɪˈjuːj]) or Elioui (Ⲉⲗⲓⲟⲩⲓ, Late Coptic: [ælˈjuːj]) is another name, descended from the Greek name of Heliopolis (Ήλιούπολις). Some argue that Mistram (Ⲙⲓⲥⲧⲣⲁⲙ, Late Coptic: [ˈmɪstəɾɑm]) or Nistram (Ⲛⲓⲥⲧⲣⲁⲙ, Late Coptic: [ˈnɪstəɾɑm]) is another Coptic name for Cairo, although others think that it is rather a name for the Abbasid provincial capital al-Askar. Ⲕⲁϩⲓⲣⲏ (Kahi•ree) is a popular modern rendering of an Arabic name (others being Ⲕⲁⲓⲣⲟⲛ [Kairon] and Ⲕⲁϩⲓⲣⲁ [Kahira]) which by modern folk etymology is taken to mean 'land of sun'. Some argue that it was the name of an Egyptian settlement upon which Cairo was built, but this is rather doubtful, as the name is not attested in any hieroglyphic or Demotic source, although some researchers, like Paul Casanova, view it as a legitimate theory. Cairo is also referred to as Ⲭⲏⲙⲓ (Late Coptic: [ˈkɪmi]) or Ⲅⲩⲡⲧⲟⲥ (Late Coptic: [ˈɡɪpdos]), which means Egypt in Coptic, the same way it is referred to in Egyptian Arabic. Sometimes the city is informally referred to as Kayro (IPA: [ˈkæjɾo]; Egyptian Arabic: كايرو) by people from Alexandria.

History

The area around present-day Cairo had long been a focal point of Ancient Egypt due to its strategic location at the junction of the Nile Valley and the Nile Delta regions (roughly Upper Egypt and Lower Egypt), which also placed it at the crossing of major routes between North Africa and the Levant. Memphis, the capital of Egypt during the Old Kingdom and a major city up until the Ptolemaic period, was located a short distance southwest of present-day Cairo. Heliopolis, another important city and major religious center, was located in what are now the modern districts of Matariya and Ain Shams in northeastern Cairo. It was largely destroyed by the Persian invasions in 525 BC and 343 BC and partly abandoned by the late first century BC.

However, the origins of modern Cairo are generally traced back to a series of settlements in the first millennium AD. Around the turn of the fourth century, as Memphis was continuing to decline in importance, the Romans established a large fortress along the east bank of the Nile. The trading post of Babylon, first mentioned in 50 BC, became a fortress, built by the Roman emperor Diocletian (r. 285–305) at the entrance of a canal connecting the Nile to the Red Sea that had been created earlier by Emperor Trajan (r. 98–117). Further north of the fortress, near the present-day district of al-Azbakiya, was a port and fortified outpost known as Tendunyas (Coptic: ϯⲁⲛⲧⲱⲛⲓⲁⲥ) or Umm Dunayn. While no structures older than the 7th century have been preserved in the area aside from the Roman fortifications, historical evidence suggests that a sizeable city existed. The city was important enough that its bishop, Cyrus, participated in the Second Council of Ephesus in 449. The Byzantine–Sasanian War of 602–628 caused great hardship and likely caused much of the urban population to leave for the countryside, leaving the settlement partly deserted. The site today remains the nucleus of the Coptic Orthodox community, which separated from the Roman and Byzantine churches in the 5th century.
Cairo's oldest extant churches, such as the Church of Saint Barbara and the Church of Saints Sergius and Bacchus (from the late 7th or early 8th century), are located inside the fortress walls in what is now known as Old Cairo or Coptic Cairo.

The Muslim conquest of Byzantine Egypt was led by Amr ibn al-As from 639 to 642. Babylon Fortress was besieged in September 640 and fell in April 641. In 641 or early 642, after the surrender of Alexandria, the Egyptian capital at the time, he founded a new settlement next to Babylon Fortress. The city, known as Fustat (Arabic: الفسطاط, romanised: al-Fusṭāṭ, lit. 'the tent'), served as a garrison town and as the new administrative capital of Egypt. Historians such as Janet Abu-Lughod and André Raymond trace the genesis of present-day Cairo to the foundation of Fustat. The choice of founding a new settlement at this inland location, instead of using the existing capital of Alexandria on the Mediterranean coast, may have been due to the new conquerors' strategic priorities. One of the first projects of the new Muslim administration was to clear and re-open Trajan's ancient canal in order to ship grain more directly from Egypt to Medina, the capital of the caliphate in Arabia. Amr ibn al-As also founded a mosque for the city at the same time, now known as the Mosque of Amr Ibn al-As, the oldest mosque in Egypt and Africa (although the current structure dates from later expansions).

In 750, following the overthrow of the Umayyad Caliphate by the Abbasids, the new rulers created their own settlement to the northeast of Fustat, which became the new provincial capital. This was known as al-Askar (Arabic: العسكر, lit. 'the camp') as it was laid out like a military camp. A governor's residence and a new mosque were also added, with the latter completed in 786. The Red Sea canal re-excavated in the 7th century was closed by the Abbasid Caliph al-Mansur (r. 754–775), but a part of the canal, known as the Khalij, continued to be a major feature of Cairo's geography and of its water supply until the 19th century. In 861, on the orders of the Abbasid Caliph al-Mutawakkil, a Nilometer was built on Roda Island near Fustat. Although it was repaired and given a new roof in later centuries, its basic structure is still preserved today, making it the oldest preserved Islamic-era structure in Cairo.

In 868, a commander of Turkic origin named Bakbak was sent to Egypt by the Abbasid Caliph al-Mu'taz to restore order after a rebellion in the country. He was accompanied by his stepson, Ahmad ibn Tulun, who became the effective governor of Egypt. Over time, Ibn Tulun gained an army and accumulated influence and wealth, allowing him to become the de facto independent ruler of both Egypt and Syria by 878. In 870, he used his growing wealth to found a new administrative capital, al-Qata'i (Arabic: القطائـع, lit. 'the allotments'), to the northeast of Fustat and of al-Askar. The new city included a palace known as the Dar al-Imara, a parade ground known as al-Maydan, a bimaristan (hospital), and an aqueduct to supply water. Between 876 and 879, Ibn Tulun built a great mosque, now known as the Mosque of Ibn Tulun, at the center of the city, next to the palace. After his death in 884, Ibn Tulun was succeeded by his son and his descendants, who continued a short-lived dynasty, the Tulunids. In 905, the Abbasids sent general Muhammad Sulayman al-Katib to re-assert direct control over the country.
Tulunid rule was ended and al-Qata'i was razed to the ground, except for the mosque, which remains standing today.

In 969, the Fatimid Caliphate conquered Egypt after ruling from Ifriqiya. The Fatimid Caliph al-Mu'izz li-Din Allah instructed his courtier and general Jawhar al-Saqili to establish a new fortified city northeast of Fustat and of former al-Qata'i. It took four years to build the city, initially known as al-Manṣūriyyah, which was to serve as the new capital of the caliphate. During that time, the caliph ordered the construction of the al-Azhar Mosque, which developed into the third-oldest university in the world. Cairo would eventually become a centre of learning, with the library of Cairo containing hundreds of thousands of books. When Caliph al-Mu'izz arrived from the old Fatimid capital of Mahdia in Tunisia in 973, he gave the city its present name, Qāhirat al-Mu'izz ("The Vanquisher of al-Mu'izz"), from which the name "Cairo" (al-Qāhira) originates. The caliphs lived in a vast and lavish palace complex that occupied the heart of the city. Cairo remained a relatively exclusive royal city for most of this era, but during the tenure of Badr al-Gamali as vizier (1073–1094) the restrictions were loosened for the first time and richer families from Fustat were allowed to move into the city. Between 1087 and 1092, Badr al-Gamali also rebuilt the city walls in stone and constructed the city gates of Bab al-Futuh, Bab al-Nasr, and Bab Zuweila that still stand today.

During the Fatimid period, Fustat reached its apogee in size and prosperity, acting as a center of craftsmanship and international trade and as the area's main port on the Nile. Historical sources report that multi-story communal residences existed in the city, particularly in its center, which were typically inhabited by middle- and lower-class residents. Some of these were as high as seven stories and could house some 200 to 350 people. They may have been similar to Roman insulae and may have been the prototypes for the rental apartment complexes which became common in the later Mamluk and Ottoman periods. However, in 1168 the Fatimid vizier Shawar set fire to the unfortified Fustat to prevent its potential capture by Amalric, the Crusader king of Jerusalem. While the fire did not destroy the city and it continued to exist afterward, it did mark the beginning of its decline. Over the following centuries it was Cairo, the former palace-city, that became the new economic center and attracted migration from Fustat.

While the Crusaders did not capture the city in 1168, a continuing power struggle between Shawar, King Amalric, and the Zengid general Shirkuh led to the downfall of the Fatimid establishment. In 1169, Shirkuh's nephew Saladin was appointed as the new vizier of Egypt by the Fatimids, and two years later he seized power from the family of the last Fatimid caliph, al-'Āḍid. As the first Sultan of Egypt, Saladin established the Ayyubid dynasty, based in Cairo, and aligned Egypt with the Sunni Abbasids, who were based in Baghdad. In 1176, Saladin began construction on the Cairo Citadel, which was to serve as the seat of the Egyptian government until the mid-19th century. The construction of the Citadel definitively ended Fatimid-built Cairo's status as an exclusive palace-city and opened it up to common Egyptians and to foreign merchants, spurring its commercial development.
Along with the Citadel, Saladin also began the construction of a new 20-kilometre-long wall that would protect both Cairo and Fustat on their eastern side and connect them with the new Citadel. These construction projects continued beyond Saladin's lifetime and were completed under his Ayyubid successors.

In 1250, during the Seventh Crusade, the Ayyubid dynasty faced a crisis with the death of al-Salih, and power passed instead to the Mamluks, partly with the help of al-Salih's wife, Shajar ad-Durr, who ruled for a brief period around this time. Mamluks were soldiers who were purchased as young slaves and raised to serve in the sultan's army. Between 1250 and 1517, the throne of the Mamluk Sultanate passed from one mamluk to another in a system of succession that was generally non-hereditary, but also frequently violent and chaotic. The Mamluk Empire nonetheless became a major power in the region and was responsible for repelling the advance of the Mongols (most famously at the Battle of Ain Jalut in 1260) and for eliminating the last Crusader states in the Levant.

Despite their military character, the Mamluks were also prolific builders and left a rich architectural legacy throughout Cairo. Continuing a practice started by the Ayyubids, much of the land occupied by former Fatimid palaces was sold and replaced by newer buildings, becoming a prestigious site for the construction of Mamluk religious and funerary complexes. Construction projects initiated by the Mamluks pushed the city outward while also bringing new infrastructure to the centre of the city. Meanwhile, Cairo flourished as a centre of Islamic scholarship and a crossroads on the spice trade route among the civilisations in Afro-Eurasia. Under the reign of the Mamluk sultan al-Nasir Muhammad (1293–1341, with interregnums), Cairo reached its apogee in terms of population and wealth. By 1340, Cairo had a population of close to half a million, making it the largest city west of China.

Multi-story buildings occupied by rental apartments, known as a rab' (plural ribā' or urbu), became common in the Mamluk period and continued to be a feature of the city's housing during the later Ottoman period. These apartments were often laid out as multi-story duplexes or triplexes. They were sometimes attached to caravanserais, where the two lower floors were for commercial and storage purposes and the multiple stories above them were rented out to tenants. The oldest partially preserved example of this type of structure is the Wikala of Amir Qawsun, built before 1341. Residential buildings were in turn organised into close-knit neighbourhoods called harat, which in many cases had gates that could be closed off at night or during disturbances.

When the traveller Ibn Battuta first came to Cairo in 1326, he described it as the principal district of Egypt. When he passed through the area again on his return journey in 1348, the Black Death was ravaging most major cities. He cited reports of thousands of deaths per day in Cairo. Although Cairo avoided Europe's stagnation during the Late Middle Ages, it could not escape the Black Death, which struck the city more than fifty times between 1348 and 1517. During its initial and most deadly waves, approximately 200,000 people were killed by the plague, and by the 15th century Cairo's population had been reduced to between 150,000 and 300,000. The population decline was accompanied by a period of political instability between 1348 and 1412.
It was nonetheless in this period that the largest Mamluk-era religious monument, the Madrasa-Mosque of Sultan Hasan, was built. In the late 14th century, the Burji Mamluks replaced the Bahri Mamluks as rulers of the Mamluk state, but the Mamluk system continued to decline. Though the plagues returned frequently throughout the 15th century, Cairo remained a major metropolis and its population recovered in part through rural migration. More conscious efforts were made by rulers and city officials to redress the city's infrastructure and cleanliness. Its economy and politics also became more deeply connected with the wider Mediterranean. Some Mamluk sultans in this period, such as Barsbay (r. 1422–1438) and Qaytbay (r. 1468–1496), had relatively long and successful reigns. After al-Nasir Muhammad, Qaytbay was one of the most prolific patrons of art and architecture of the Mamluk era. He built or restored numerous monuments in Cairo, in addition to commissioning projects beyond Egypt. The crisis of Mamluk power and of Cairo's economic role deepened after Qaytbay. The city's status was diminished after Vasco da Gama discovered a sea route around the Cape of Good Hope between 1497 and 1499, thereby allowing spice traders to avoid Cairo.

Cairo's political influence diminished significantly after the Ottomans defeated Sultan al-Ghuri in the Battle of Marj Dabiq in 1516 and conquered Egypt in 1517. Ruling from Constantinople, Sultan Selim I relegated Egypt to a province, with Cairo as its capital. For this reason, the history of Cairo during Ottoman times is often described as inconsequential, especially in comparison to other time periods. During the 16th and 17th centuries, Cairo nevertheless remained an important economic and cultural centre. Although no longer on the spice route, the city facilitated the transportation of Yemeni coffee and Indian textiles, primarily to Anatolia, North Africa, and the Balkans. Cairene merchants were instrumental in bringing goods to the barren Hejaz, especially during the annual hajj to Mecca. It was during this same period that al-Azhar University reached the predominance among Islamic schools that it continues to hold today; pilgrims on their way to hajj often attested to the superiority of the institution, which had become associated with Egypt's body of Islamic scholars. The first printing press of the Middle East, printing in Hebrew, was established in Cairo c. 1557 by a scion of the Soncino family of printers, Italian Jews of Ashkenazi origin who operated a press in Constantinople. The existence of the press is known solely from two fragments discovered in the Cairo Geniza.

Under the Ottomans, Cairo expanded south and west from its nucleus around the Citadel. The city was the second-largest in the empire, behind Constantinople, and, although migration was not the primary source of Cairo's growth, twenty percent of its population at the end of the 18th century consisted of religious minorities and foreigners from around the Mediterranean. Still, when Napoleon arrived in Cairo in 1798, the city's population was less than 300,000, forty percent lower than it was at the height of Mamluk (and Cairene) influence in the mid-14th century. The French occupation was short-lived, as British and Ottoman forces, including a sizeable Albanian contingent, recaptured the country in 1801. Cairo itself was besieged by a British and Ottoman force, culminating in the French surrender on 22 June 1801.
The British vacated Egypt two years later, leaving the Ottomans, the Albanians, and the long-weakened Mamluks jostling for control of the country. Continued civil war allowed an Albanian named Muhammad Ali Pasha to ascend to the role of commander and eventually, with the approval of the religious establishment, viceroy of Egypt in 1805. Until his death in 1848, Muhammad Ali Pasha instituted a number of social and economic reforms that earned him the title of founder of modern Egypt. However, while Muhammad Ali initiated the construction of public buildings in the city, those reforms had minimal effect on Cairo's landscape. Bigger changes came to Cairo under Isma'il Pasha (r. 1863–1879), who continued the modernisation processes started by his grandfather. Drawing inspiration from Paris, Isma'il envisioned a city of maidans and wide avenues; due to financial constraints, only some of them, in the area now composing Downtown Cairo, came to fruition. Isma'il also sought to modernise the city, which was merging with neighbouring settlements, by establishing a public works ministry, bringing gas and lighting to the city, and opening a theatre and opera house. The immense debt resulting from Isma'il's projects provided a pretext for increasing European control, which culminated with the British invasion in 1882. The city's economic centre quickly moved west toward the Nile, away from the historic Islamic Cairo section and toward the contemporary, European-style areas built by Isma'il. Europeans accounted for five percent of Cairo's population at the end of the 19th century, by which point they held most top governmental positions. In 1906, the Heliopolis Oasis Company, headed by the Belgian industrialist Édouard Empain and his Egyptian counterpart Boghos Nubar, built a suburb called Heliopolis (Greek for "city of the sun") ten kilometres from the centre of Cairo. In 1905–1907 the northern part of Gezira Island was developed by the Baehler Company into Zamalek, which would later become Cairo's upscale "chic" neighbourhood. In 1906 construction began on Garden City, a neighbourhood of urban villas with gardens and curved streets, which would become the seat of British rule in Egypt in 1914. The British occupation was intended to be temporary, but it lasted well into the 20th century. Nationalists staged large-scale demonstrations in Cairo in 1919, five years after Egypt had been declared a British protectorate. The unrest led to Egypt's independence in 1922. The King Fuad I Edition of the Qur'an was first published on 10 July 1924 in Cairo under the patronage of King Fuad. The goal of the government of the newly formed Kingdom of Egypt was not to delegitimise the other variant Quranic texts ("qira'at"), but to eliminate errors found in Qur'anic texts used in state schools. A committee of teachers chose to preserve a single one of the canonical qira'at "readings", namely that of the "Ḥafṣ" version, an 8th-century Kufic recitation. This edition has become the standard for modern printings of the Quran for much of the Islamic world. The publication has been called a "terrific success", and the edition has been described as one "now widely seen as the official text of the Qur'an", so popular among both Sunni and Shi'a that the common belief among less well-informed Muslims is "that the Qur'an has a single, unambiguous reading". Minor amendments were made later in 1924 and again in 1936, in the "Faruq edition", named in honour of the then ruler, King Faruq. British troops remained in the country until 1956.
During this time, urban Cairo, spurred by new bridges and transport links, continued to expand to include the upscale neighbourhoods of Garden City, Zamalek, and Heliopolis. Between 1882 and 1937, the population of Cairo more than tripled—from 347,000 to 1.3 million—and its area increased from 10 to 163 km2 (4 to 63 sq mi). The city was devastated during the 1952 riots known as the Cairo Fire or Black Saturday, which saw the destruction of nearly 700 shops, movie theatres, casinos and hotels in downtown Cairo. The British departed Cairo following the Egyptian Revolution of 1952, but the city's rapid growth showed no signs of abating. Seeking to accommodate the increasing population, President Gamal Abdel Nasser redeveloped Tahrir Square and the Nile Corniche, and improved the city's network of bridges and highways. Meanwhile, additional controls of the Nile fostered development within Gezira Island and along the city's waterfront. The metropolis began to encroach on the fertile Nile Delta, prompting the government to build desert satellite towns and devise incentives for city-dwellers to move to them. In the second half of the 20th century, Cairo continued to grow enormously in both population and area. Between 1947 and 2006, the population of Greater Cairo went from 2,986,280 to 16,292,269. The population explosion also drove the rise of "informal" housing ('ashwa'iyyat), meaning housing built without any official planning or control. The exact form of this type of housing varies considerably but usually has a much higher population density than formal housing. By 2009, over 63% of the population of Greater Cairo lived in informal neighbourhoods, even though these occupied only 17% of the total area of Greater Cairo. According to economist David Sims, informal housing has the benefits of providing affordable accommodation and vibrant communities to huge numbers of Cairo's working classes, but it also suffers from government neglect, a relative lack of services, and overcrowding. The "formal" city also expanded. The most notable example was the creation of Madinat Nasr, a huge government-sponsored expansion of the city to the east which officially began in 1959 but was primarily developed in the mid-1970s. In 1979, the government established the New Urban Communities Authority (NUCA) to initiate and direct the development of new towns on the outskirts of Cairo, generally established on desert land. These new satellite cities were intended to provide housing, investment, and employment opportunities for the region's growing population as well as to pre-empt the further growth of informal neighbourhoods. As of 2014, about 10% of the population of Greater Cairo lived in the new cities. This share remained largely unchanged through 2023, as most housing units in the new urban communities (NUCs) stayed vacant. Concurrently, Cairo established itself as a political and economic hub for North Africa and the Arab world, with many multinational businesses and organisations, including the Arab League, operating out of the city. In 1979 the historic districts of Cairo were listed as a UNESCO World Heritage Site. In 1992, Cairo was hit by an earthquake that caused 545 deaths, injured 6,512 people, and left around 50,000 homeless. Cairo's Tahrir Square was the focal point of the 2011 Egyptian revolution against former president Hosni Mubarak. More than 50,000 protesters first occupied the square on 25 January, during which the area's wireless services were reported to be impaired.
In the following days Tahrir Square continued to be the primary destination for protests in Cairo. The uprising was mainly a campaign of non-violent civil resistance, featuring a series of demonstrations, marches, acts of civil disobedience, and labour strikes. Millions of protesters from a variety of socio-economic and religious backgrounds demanded the overthrow of the regime of Egyptian President Hosni Mubarak. Despite being predominantly peaceful in nature, the revolution was not without violent clashes between security forces and protesters, with at least 846 people killed and 6,000 injured. The uprising took place in Cairo, Alexandria, and other cities in Egypt, following the Tunisian revolution that had resulted in the overthrow of the long-time Tunisian president Zine El Abidine Ben Ali. On 11 February, following weeks of determined popular protest and pressure, Hosni Mubarak resigned from office. In March 2015, under the rule of President el-Sisi, plans were announced for a new, yet-unnamed planned city to be built further east of the existing satellite city of New Cairo, intended to serve as the new capital of Egypt. Geography Cairo is located in northern Egypt, known as Lower Egypt, 165 km (100 mi) south of the Mediterranean Sea and 120 km (75 mi) west of the Gulf of Suez and Suez Canal. The city lies along the Nile River, immediately south of the point where the river leaves its desert-bound valley and branches into the low-lying Nile Delta region. Although the Cairo metropolis extends away from the Nile in all directions, the city of Cairo resides only on the east bank of the river and two islands within it, on a total area of 453 km2 (175 sq mi). Geologically, Cairo lies on alluvium and sand dunes dating from the Quaternary period. Until the mid-19th century, when the river was tamed by dams, levees, and other controls, the Nile in the vicinity of Cairo was highly susceptible to changes in course and surface level. Over the years, the Nile gradually shifted westward, providing the site between the eastern edge of the river and the Mokattam highlands on which the city now stands. The land on which Cairo was established in 969 (present-day Islamic Cairo) was located underwater just over three hundred years earlier, when Fustat was first built. Low periods of the Nile during the 11th century continued to add to the landscape of Cairo; a new island, known as Geziret al-Fil, first appeared in 1174, but eventually became connected to the mainland. Today, the site of Geziret al-Fil is occupied by the Shubra district. The low periods created another island at the turn of the 14th century that now composes Zamalek and Gezira. Land reclamation efforts by the Mamluks and Ottomans further contributed to expansion on the east bank of the river. Because of the Nile's movement, the newer parts of the city—Garden City, Downtown Cairo, and Zamalek—are located closest to the riverbank. These areas, which are home to most of Cairo's embassies, are surrounded on the north, east, and south by the older parts of the city. Old Cairo, located south of the centre, holds the remnants of Fustat and the heart of Egypt's Coptic Christian community, Coptic Cairo. The Boulaq district, which lies in the northern part of the city, was born out of a major 16th-century port and is now a major industrial centre. The Citadel is located east of the city centre, amid Islamic Cairo, which dates back to the Fatimid era and the foundation of Cairo.
While western Cairo is dominated by wide boulevards, open spaces, and modern architecture of European influence, the eastern half, having grown haphazardly over the centuries, is dominated by small lanes, crowded tenements, and Islamic architecture. Northern and extreme eastern parts of Cairo, which include satellite towns, are among the most recent additions to the city, as they developed in the late 20th and early 21st centuries to accommodate the city's rapid growth. The western bank of the Nile is commonly included within the urban area of Cairo, but it comprises the city of Giza and the Giza Governorate. Giza city has also undergone significant expansion over recent years, and today has a population of 2.7 million. The Cairo Governorate lay just north of the Helwan Governorate from 2008, when some of Cairo's southern districts, including Maadi and New Cairo, were split off and annexed into the new governorate, until 2011, when the Helwan Governorate was reincorporated into the Cairo Governorate. According to the World Health Organization, the level of air pollution in Cairo is nearly 12 times higher than the recommended safety level. In Cairo, and along the Nile River Valley, the climate is a hot desert climate (BWh according to the Köppen climate classification system). Wind storms are frequent from March to May, bringing Saharan dust into the city, and during this period the air often becomes uncomfortably dry. Winters are mild, while summers are long and hot. High temperatures in winter range from 14 to 22 °C (57 to 72 °F), while night-time lows drop to below 11 °C (52 °F), often to 5 °C (41 °F). In summer, the highs often exceed 31 °C (88 °F) but rarely surpass 40 °C (104 °F), and lows drop to about 20 °C (68 °F). Rainfall is sparse and only happens in the colder months, but sudden showers can cause severe flooding. The summer months have high humidity due to the city's proximity to the Mediterranean coast. Snowfall is extremely rare; a small amount of graupel, widely believed to be snow, fell on Cairo's easternmost suburbs on 13 December 2013, the first time Cairo's area had received this kind of precipitation in many decades. Dew points in the hottest months range from 13.9 °C (57 °F) in June to 18.3 °C (65 °F) in August. The city of Cairo forms part of Greater Cairo, the largest metropolitan area in Africa. While it has no administrative body, the Ministry of Planning considers it an economic region consisting of Cairo Governorate, Giza Governorate, and Qalyubia Governorate. As a contiguous metropolitan area, various studies have considered Greater Cairo to be composed of the administrative cities of Cairo, Giza, and Shubra al-Kheima, in addition to the satellite cities/new towns surrounding them. Cairo is a city-state in which the governor is also the head of the city. Cairo differs from other Egyptian cities in that it has an extra administrative division between the city and district levels: areas, which are headed by deputy governors. Cairo consists of four areas (manatiq, singular mantiqa) divided into 38 districts (ahya', singular
hayy) and 46 qisms (police wards, one or two per district). The Northern Area is divided into eight districts; the Eastern Area into nine districts and three new cities; the Western Area into nine districts; and the Southern Area into 12 districts. Since 1977 a number of new towns have been planned and built by the New Urban Communities Authority (NUCA) in the Eastern Desert around Cairo, ostensibly to accommodate additional population growth and development of the city and to stem the development of self-built informal areas, especially over agricultural land. As of 2022 four new towns had been built and had residential populations: 15th of May City, Badr City, Shorouk City, and New Cairo. In addition, two more are under construction: the New Administrative Capital, and Capital Gardens, where land was allocated in 2021 and which will house most of the civil servants employed in the new capital. In March 2015, plans were announced for a new city to be built east of Cairo, in an undeveloped area of the Cairo Governorate, which would serve as the New Administrative Capital of Egypt. Cairo has also introduced new metro lines to reduce traffic in central areas. Demographics According to the 2017 census, Cairo had a population of 9,539,673 people, distributed across its 46 qisms (police wards). The majority of Egypt's and Cairo's population is Sunni Muslim. A significant Christian minority exists, among whom Coptic Orthodox Christians are the majority. Precise numbers for each religious community in Egypt are not available and estimates vary. Other churches that have, or had, a presence in modern Cairo include the Catholic Church (including Armenian Catholic, Coptic Catholic, Chaldean Catholic, Syrian Catholic, and Maronite), the Greek Orthodox Church, the Evangelical Church of Egypt (Synod of the Nile), and some Protestant churches. Cairo has been the seat of the Coptic Orthodox Church since the 12th century, and the seat of the Coptic Orthodox Pope is located in Saint Mark's Coptic Orthodox Cathedral. Until the 20th century, Cairo had a sizeable Jewish community, but as of 2022 only three Jews were reported to be living in the city; a total of 12 synagogues still exist. Economy Cairo was ranked the top business city in Africa for 2025 and continues to be a major destination for foreign direct investment (FDI) due to its massive consumer market and strategic location. As of 2025, the city remains the wealthiest in North Africa, though it faces significant challenges from high inflation and currency depreciation. Cairo is home to a significant concentration of the country's high-net-worth individuals, including roughly 7,200 millionaires and 30 billionaires. Cairo accounts for 11% of Egypt's population and 22% of its economy (PPP). The majority of the nation's commerce is generated there, or passes through the city. The great majority of publishing houses and media outlets and nearly all film studios are there. Cairo is the vibrant heart of both the African and Arab film industries, often called the "Hollywood on the Nile", dominating Middle Eastern cinema with high production output. The city also hosts major events such as the Cairo International Film Festival (CIFF) and features dedicated infrastructure and a large number of production companies, while also fostering talent.
Cairo's economy has traditionally been based on governmental institutions and services, with the modern productive sector expanding in the 20th century to include developments in textiles and food processing, specifically the production of sugar cane. As of 2005, Egypt had the largest non-oil-based GDP in the Arab world. Until recently, this growth surged well ahead of city services: homes, roads, electricity, telephone and sewer services were all in short supply, and analysts trying to grasp the magnitude of the change coined terms like "hyper-urbanization". The city is a growing hub for digital innovation and outsourcing, with major firms like Amazon and Microsoft operating there. The sector currently contributes 5% to GDP. Cairo ICT is the region's largest annual technology exhibition. The 29th edition in November 2025, themed "AI Everywhere", drew over 160,000 attendees and featured tracks on cybersecurity, 5G, and fintech. Cairo's industry is diverse, centred on manufacturing (textiles, automotive, food processing, chemicals, and appliances) in key zones such as 10th of Ramadan, alongside burgeoning sectors in IT, energy (oil, gas, and renewables), and construction, and a strong startup ecosystem, driven by the city's role as Egypt's economic hub and a gateway for foreign investment. The industrial landscape ranges from heavy iron and steel production to traditional handicrafts and modern high-tech startups. The Helwan district, just south of Cairo, is the centre of Egypt's iron and steel industry. Large conglomerates such as El Araby Group and ElSewedy Electric are headquartered in Cairo, producing home appliances and electrical infrastructure for both local and international markets. The city hosts significant pharmaceutical manufacturing, including Minapharm Pharmaceuticals and Global Napi Pharma, which produce everything from basic generics to complex biologics. In food and consumer goods, global giants such as Procter & Gamble, Nestlé, Unilever, and The Coca-Cola Company maintain large manufacturing operations in Cairo; local leaders include Edita Food Industries and Cairo 3A (agri-commodities). The textile and garment industry is a cornerstone of the economy, and Cairo produces world-renowned Egyptian cotton textiles; companies like Jade Textile and Alpha Omega Egypt manufacture apparel for premium global brands. Cairo offers a wide range of financial and governmental services through a mix of established institutions and modern digital platforms. Key services include traditional banking, investment opportunities, and core government functions such as civil registration and real estate services. Cairo hosts a robust financial sector with numerous local and international banks, along with a growing FinTech industry. Major Egyptian banks such as the National Bank of Egypt (NBE), Banque Misr, Commercial International Bank (CIB), and Banque du Caire, along with international players such as HSBC Bank Egypt and Arab Bank Egypt, offer comprehensive services including current/savings accounts, personal and business loans, credit cards, and wealth management. The Central Bank of Egypt (CBE) regulates the sector. Digital payment solutions are widely available through companies like Fawry, Paymob, and AMAN, which facilitate various electronic payments including taxes, utility bills, and social insurance.
Firms such as EFG Holding and the General Authority for Investments (GAFI) provide investment banking, asset management, brokerage services, and consulting for both local and foreign investors. GAFI's Investor Service Center offers support for company incorporation and other investment-related services. The Egyptian government is actively digitising its services, with several physical and e-service centres available across the city. Model centres provide multiple government services in one location, including real estate registration, civil status documentation, and traffic-related services. The government's financial network, eFinance, provides the backbone for digital payment solutions for government collections and transactions, enabling online payment of taxes and customs. The Ministry of Finance handles the state's financial affairs. The General Authority for Governmental Services (GAGS) is involved in government procurement. GAFI is also a key point of contact for investors, offering services such as its Investor Service Center, company registration, and the "Golden License" for certain projects. Cairo's real estate market shows resilient growth due to significant ongoing infrastructure projects and high demand in new urban communities. Areas such as New Cairo are hotspots for both residential and commercial investments, supported by modern infrastructure and government initiatives. Cairo was ranked first among 30 African cities in infrastructure and transport for 2025, reflecting comprehensive development efforts. A planned Cairo Metro line will span 34 kilometres with 26 stations, connecting northern and southern Cairo districts, with an international tender process expected by mid-2025. Cairo Monorail lines connecting the New Administrative Capital and 6th of October City are in development, linking major areas such as Sheikh Zayed City and New Cairo to Giza governorate and Cairo Metro Line 3. The real estate business is booming, with strong local demand driving price increases and new developments. The residential market is dominated by local buyers, with strong demand for housing in integrated, gated communities. Sales prices in areas such as New Cairo saw notable annual increases in Q2 2025, and developers are offering flexible payment plans to manage affordability challenges. There is rising demand for high-quality Grade A office space, with occupancy rates improving and rents increasing. New office parks, particularly in New Cairo, are meeting the needs of multinational corporations and new market entrants. Infrastructure Cairo, as well as neighbouring Giza, has been established as Egypt's main centre for medical treatment, and despite some exceptions, has the most advanced level of medical care in the country. Cairo's hospitals include the JCI-accredited As-Salaam International Hospital, Ain Shams University Hospital, Dar Al Fouad, Nile Badrawi Hospital, 57357 Hospital, and International Medical Center. Cairo has numerous public hospitals, including large university hospitals and general hospitals managed by the Ministry of Health.
University and major public hospitals include: Qasr El Eyni Hospital, Ain Shams University Hospital (El Demerdash), Nasser Specialized Hospital, Manshiet el Bakry Hospital, Victoria Hospital, general and specialized Ministry of Health hospitals, Shoubra General Hospital (Kitchener), Dar el Salam General Hospital, Abbassiya Psychiatric Hospital, the National Cancer Institute, the National Heart Institute, and Rod El Farag Eye Hospital. The city was also chosen as the headquarters for the largest hospitals of the Egyptian Armed Forces Medical Services Administration, including the Armed Forces Medical Complex in Qubba, the Armed Forces Medical Complex in Maadi, Ghamra Military Hospital, Almaza Military Hospital, Helmiya Military Hospital for Orthopedics and Rehabilitation, the Armed Forces Civilian Employees Hospital, and El Galaa Hospital for the families of Armed Forces officers. Greater Cairo has long been the hub of education and educational services for Egypt and the region. Today, Greater Cairo is the centre for many government offices governing the Egyptian educational system and has the largest number of schools and higher education institutes among the cities and governorates of Egypt. Public schools in the city of Cairo include Zamalek National Language School, Zamalek National Co-educational School, Umm Ghafa Secondary School, Ali Ibn Abi Talib School, El-Nibras School for Boys, and the School for Gifted Students in Ain Shams, as well as Mustafa Kamel Experimental Secondary School, Qasim Amin Secondary School for Girls, Ramses New College, and Ramses College (Girls). Others include Asmaa Fahmy School, Al-Saidiya School, Martyr Ahmed Saeed School, Sayeda Khadija School, Al-Mu'taz Billah School, and Gamal Abdel Nasser School (in the Dar El-Salam area adjacent to Maadi). In the Heliopolis district, public schools include Sina Primary, Al-Tahrir Primary, Al-Khulafa Preparatory School for Boys, and Martyr Amr Salah El-Din Preparatory School for Girls. The city is also home to numerous international schools, and Greater Cairo hosts many universities. Transport The largest airport in Egypt, Cairo International Airport, is located in the Heliopolis district and is accessible by car, taxi and bus. The third line of the Cairo Metro, opened in 2012, was originally planned to reach the airport, but those plans were cancelled in mid-2020 in favour of a shuttle bus system that runs directly from Adly Mansour Station to the airport. However, some current maps still show the line being connected to it. The Cairo Airport Shuttle Bus also operates all over Cairo for trips to or from the airport. Cairo International is the second-busiest airport in Africa after Johannesburg International Airport in South Africa. Cairo Airport handles about 3,400 daily flights, more than 12,100 weekly flights, and about 125,000 yearly flights. Cairo is served by "white taxis", introduced in the early 2010s, which are run by individual drivers rather than by a company. These taxis have plummeted in popularity because drivers often do not turn on their meters and instead demand a fare that is usually considerably inflated, and because of other problems such as the lack of air-conditioning. A separate limousine service uses a luxury sedan or saloon car driven by a chauffeur to take passengers from the airport or other locations to their destination; there are two main types of limousine service, airport limousines and services transporting people from town to town in Egypt.
The Cairo Metro is the first rapid transit system in Greater Cairo, Egypt, the first of only two full-fledged metro systems in Africa, and one of only four in the Arab world. It opened in 1987 as Line 1 from Helwan to Ramses Square, with a length of 29 kilometres (18.0 mi). As of 2014, the Cairo Metro has 61 stations (mostly at-grade), of which three are transfer stations, with a total length of 77.9 kilometres (48.4 mi). The system consists of three operational lines numbered from 1 to 3. As of 2013, the metro carried nearly 4 million passengers per day. A metro ticket cost E£1 for any journey until 2017, when the fare rose to E£2 (still heavily subsidized). Constructed near the beginning of the 20th century, the Cairo tramway network remained in use until 2014, especially in areas such as Heliopolis and Nasr City. During the 1970s government policies favoured making space for cars, resulting in the removal of over half of the 120 km network. Trams were removed entirely from central Cairo but continued to run in Heliopolis and Helwan. In 2015, however, the tramway rails were removed and the streets and sidewalks were widened. The reason, according to the city council, is that "it has been rarely used by anyone during the past decade as it is a slow means of transportation and it has a limited geographical coverage". Cairo is extensively connected to other Egyptian cities and villages by rail operated by the Egyptian National Railways. Cairo's main railway station, Ramses Station (Mahattat Ramses), is located on Midan Ramses. In May 2022, the Egyptian National Authority for Tunnels (NAT) and Siemens Mobility signed a contract to create the world's sixth-largest high-speed rail system, 2,000 km long. The Cairo Light Rail Transit, or LRT, inaugurated in June 2022, links Cairo to the 10th of Ramadan City and the New Administrative Capital, providing a connection to several other communities east of Cairo along the way. The LRT's western terminus is at Adly Mansour station, where transfer to Cairo Metro Line 3 is possible. A ferry system, known as the Nile Bus, also crosses the Nile River. In 2015 plans to construct two monorail systems were announced, one linking 6th of October City to suburban Giza, a distance of 42 km, and the other linking Nasr City to New Cairo, a distance of 54 km. They will be Egypt's first monorail systems. In May 2019 the contract to build 70 four-car trains was awarded to Bombardier Transportation, with assembly to take place at its Derby Litchurch Lane Works in England. Delivery of the trains is expected between 2021 and 2024. The network is to be built by Orascom Construction and Arab Contractors. Two trans-African automobile routes originate in Cairo: the Cairo-Cape Town Highway and the Cairo-Dakar Highway. An extensive road network connects Cairo with other Egyptian cities and villages. A new Ring Road surrounds the outskirts of the city, with exits that reach outer Cairo districts. Flyovers and bridges, such as the 6th October Bridge, allow fast transport from one side of the city to the other when traffic is not heavy. Cairo traffic is known for being overwhelming and overcrowded, yet it generally moves at a relatively fluid pace.
Drivers tend to be aggressive, but are more courteous at junctions, taking turns to proceed, with police aiding traffic control in some congested areas. There are two types of buses in Cairo: those run by the Cairo Transport Authority, and those run by private companies, the latter using smaller minibuses. Bus lines are spread all over the Greater Cairo area and are considered the main means of transport in the city for many Cairenes. Culture President Mubarak inaugurated the new Cairo Opera House of the Egyptian National Cultural Centres on 10 October 1988, 17 years after the Royal Opera House had been destroyed by fire. The National Cultural Centre was built with the help of JICA, the Japan International Co-operation Agency, and stands as a prominent symbol of Japanese-Egyptian co-operation and the friendship between the two nations. The Egyptian Royal Opera House, or Royal Opera House, was the original opera house in Cairo. It was dedicated on 1 November 1869 and burned down on 28 October 1971. After the original opera house was destroyed, Cairo was without an opera house for nearly two decades until the opening of the new Cairo Opera House in 1988. Considered the nexus of the regional entertainment industry, including films, television, and recorded music, Cairo has been the capital of the film industry in Africa from the early 20th century to the present day; film arrived in the city in the late 19th century. Films set in Cairo range from classic Egyptian cinema like Cairo Station to Hollywood blockbusters such as Death on the Nile, covering detective stories, historical dramas, modern life, and adventure, often featuring iconic locations like the bustling city streets and exploring themes from cultural identity to international intrigue. Key examples include classics such as The Ten Commandments and newer films such as The Spy Who Loved Me, showcasing diverse perspectives on this vibrant metropolis. The Egyptian Catholic Center for Cinema Festival is an annual national film festival held in Cairo. Founded in 1952, it is the oldest film festival in Egypt, Africa, and the Middle East. The festival only awards films that are distinguished by their high artistic level and whose ideas and topics revolve around issues that concern and benefit society and have a positive impact on its audience. Cairo held its first international film festival on 16 August 1976, when the first Cairo International Film Festival was launched by the Egyptian Association of Film Writers and Critics, headed by Kamal El-Mallakh. The association ran the festival for seven years until 1983. This achievement led to the president of the festival again contacting the FIAPF with the request that a competition should be included at the 1991 festival. The request was granted. In 1998, the festival took place under the presidency of one of Egypt's leading actors, Hussein Fahmy, who was appointed by the Minister of Culture, Farouk Hosni, after the death of Saad El-Din Wahba. Four years later, the journalist and writer Cherif El-Shoubashy became president. Cairo is a global literary capital, home to Nobel Prize-winning authors and a vibrant scene of historic bookstores and festivals. Its literature reflects a deep-rooted history that spans from ancient papyrus texts to modern masterpieces of the Egyptian novel.
The city is home to a vibrant writing scene, rooted in world-renowned Egyptian literary figures such as Ibn Yunus (950–1009), Al-Maqrizi (1364–1442), Al-Sha'rani (1492–1565), Abd al-Rahman al-Jabarti (1753–1825), Aisha Taymur (1840–1902), Qasim Amin (1863–1908), and Ahmed Taymour (1871–1930). Among 20th-century writers, the Nobel laureate Naguib Mahfouz (1911–2006), who captured Cairo's essence, tops the list, alongside feminist author Nawal El Saadawi (1931–2021); they are followed today by a vast network of contemporary freelance writers, copywriters, academic writers, and social media content creators. Notable contemporary voices associated with the city include Alaa Al Aswany, Ahmed Khaled Tawfik (1962–2018), Salwa Bakr, Radwa Ashour (1946–2014), Bahaa Taher (1935–2022), Yusuf Idris, Ahdaf Soueif, and Omar Taher. The Cairo Literature Festival is an annual event, typically held in April, that brings together authors from around the world for panels, poetry readings, and discussions on themes like "Memory and the City". The Cairo International Book Fair, usually held in late January and early February, is the largest and oldest book fair in the Arab world, attracting millions of visitors and publishers from across the globe. Cairo is home to numerous libraries, ranging from large public institutions to specialised academic collections. The oldest library in the city's history, the House of Knowledge, was founded in 1004 AD. Some of the most notable libraries today include the Greater Cairo Public Library and the Misr Public Library. The city also offers on-line library services through the Egyptian Cabinet Information and Decision Support Center, the Egyptian Libraries Network, the Egyptian University Libraries Consortium, and the American University in Cairo. Founded in 1870, the Egyptian National Library and Archives houses several million volumes on a wide range of topics and is one of the largest libraries in the world, with thousands of ancient collections. It contains a vast variety of Arabic-language and other Eastern manuscripts, including some of the oldest in the world. The main library is a seven-story building in Ramlet Boulaq, a district of Cairo, and the Egyptian National Archives are contained in an annex beside the building. There are also several academic libraries and archives in the city; the Ain Shams University Library is the largest university library in Cairo. Located in the heart of the city, in downtown Cairo, the Geographical Association of Egypt was founded in 1875 to promote the exploration and study of the geography of Egypt and Africa. Cairo celebrates its national day on 6 July each year, which coincides with the day in 969 AD when the commander Jawhar al-Siqilli laid the foundation stone for the new capital of Egypt, by order of the country's ruler, the Fatimid Caliph Al-Mu'izz li-Din Allah, and named it el-Mansuriyya. When the Fatimid Caliph al-Mu'izz li-Din Allah arrived in Egypt and entered the newly built city, he renamed it "Cairo" (al-Qahira), meaning the conqueror. The city celebrates every year by organising events such as marathons and festivals.
Cairo also celebrates a mix of Islamic, Coptic Christian, and national holidays. These include the Eid holidays of Eid al-Fitr and Eid al-Adha, religious observances such as Coptic Christmas on 7 January and Mawlid al-Nabi, national days such as Revolution Day on 25 January and Armed Forces Day on 6 October, and cultural events such as Sham El-Nessim, offering diverse cultural experiences year-round. Ramadan, the holy month of fasting, is observed with special meals and a strong community focus. Cairo's music history spans almost a thousand years, evolving from courtly and religious performance, such as Sufi rituals and epic poetry, to the public cafés of the late 19th century, blooming into a Golden Age in the 1930s–60s with radio and iconic stars such as Umm Kulthum and Mohamed Abdel Wahab, and establishing institutions such as the Egyptian Royal Opera House, the Cairo Opera House, and the Arab Music Institute. Music existed in ancient Egypt, but Cairo's distinct modern scene emerged later, developing from Sufi and folk traditions, including the Al-Awalem and Ghawazi performers, through the Golden Age and modernisation. Radio and state support fuelled Cairo's rise as a music capital; iconic stars who blended eastern and western music defined this era, creating modern Arab music, and music became a tool for modernisation and cultural influence. The Cairo Symphony Orchestra, established in 1959, introduced Western classical music, enriching the scene. Cairo remains a musical centre, mixing traditional sounds with contemporary global influences. El Sawy Culture Wheel is a premier private cultural centre in Cairo. Established in 2003 by architect Mohamed El-Sawy, it is located on Gezira Island in the Zamalek district. It serves as a vibrant cultural hub and platform for indie music, alternative arts, and community engagement. The "Culture Wheel" symbol represents continuous movement and growth in thought. The venue is famous for "The White Circle" campaign, maintaining a strict non-smoking policy across all premises. The centre has organised hundreds of concerts and musical events, including those for oud, jazz, musical theatre, and children's choruses, as well as performances by several Egyptian and Arab bands, singers and entertainers. It also hosts several seminars, workshops, art exhibitions, book fairs and movie shows. The Culture Wheel is also committed to promoting open-source culture by teaching Linux and other open-source software. The centre's activities are generally funded by donations and annual membership fees from its more than ten thousand members, and it receives over 20,000 visitors monthly. Cairo is a major global hub for both ancient and contemporary art, offering a diverse landscape of monumental sculptures and intricate paintings that span over 5,000 years of history. The city's modern art scene is centred in districts like Zamalek and New Cairo, where established galleries and annual fairs thrive. The Museum of Modern Egyptian Art, on the Cairo Opera House grounds in Zamalek, houses the largest collection of paintings, sculptures, and other artworks by Egyptian artists from the 20th century onward. The Gayer-Anderson Museum is housed in a beautifully preserved 17th-century home and showcases diverse art pieces and architecture, including Islamic, Persian, and Chinese art.
The Museum of Islamic Art features an extensive collection of artifacts from the Islamic era, including metallic and porcelain utensils, textiles, and architectural elements. A gallery in Zamalek, a long-standing hub for modern Egyptian art since 1968, is known for its large collection of contemporary Egyptian art and hosts the annual Cairo Art Fair. Egypt was the first country in the Middle East and Africa to provide television broadcasting; its first signals were transmitted in 1960 from the Maspero television building, home of the Egyptian Television Network, on the banks of the Nile in Cairo. Today, the industry is a mix of state-run national channels and a wide array of private satellite networks. Cairo is the media hub of the Arab world, hosting major state and private television networks, influential newspapers, and massive production facilities. The Middle East News Agency is also based in Cairo. The National Media Authority (NMA) is the state-run body that operates domestic networks and international channels, while United Media Services (UMS) is the dominant media conglomerate, managing several major TV channels and digital platforms. Major newspapers and digital platforms are based in Cairo, such as Al-Ahram, the country's most influential national daily and the second-oldest in modern Egypt's history; others include Al-Akhbar and Al-Gomhuria. Private Arabic dailies and major private outlets include Al-Masry Al-Youm, Youm7, and Al-Shorouk. English-language newspapers based in the city include Ahram Online, Egypt Independent, and Mada Masr, a prominent independent news site. The Cairo Geniza is an accumulation of almost 200,000 Jewish manuscripts that were found in the genizah of the Ben Ezra Synagogue (built 882) of Fustat, Egypt (now Old Cairo), in the Basatin cemetery east of Old Cairo, and among a number of old documents that were bought in Cairo in the later 19th century. These documents were written from about 870 to 1880 AD and have been archived in various American and European libraries. The Taylor-Schechter collection at the University of Cambridge runs to 140,000 manuscripts; a further 40,000 manuscripts are housed at the Jewish Theological Seminary of America. Sports Football is the most popular sport in Egypt, and Cairo has sporting teams that compete in national and regional leagues, most notably Al Ahly and Zamalek SC, which were named the CAF first and second African clubs of the 20th century. The annual match between Al Ahly and Zamalek is one of the most watched sports events in Egypt, and the two teams form the major rivalry of Egyptian football. They play their home games at Cairo International Stadium, which is the second-largest stadium in Egypt, as well as the largest in Cairo. The Cairo International Stadium was built in 1960, and its multi-purpose sports complex houses the main football stadium, an indoor stadium, and satellite fields that have held regional and continental games, including the African Games, the U17 Football World Championship, and the 2006 Africa Cup of Nations. Egypt won that competition and the next edition in Ghana (2008), making the Egyptian and Ghanaian national teams the only ones to have won the African Nations Cup back to back. A third consecutive win followed in Angola in 2010, making Egypt the only country to win three consecutive titles and leaving it with a record seven continental titles in total. As of 2021, Egypt's national team was ranked 46th in the world by FIFA.
Cairo failed at the applicant stage when bidding for the 2008 Summer Olympics, which were hosted in Beijing. However, Cairo did host the 2007 Pan Arab Games. There are other sports teams in the city that participate in several sports, including Gezira Sporting Club, el Shams Club, Shooting Club, and Heliopolis Sporting Club, as well as several smaller clubs. There are also newer sports clubs in the area of New Cairo (about an hour from downtown Cairo): Al Zohour Sporting Club, Wadi Degla Sporting Club and Platinum Club. Most of the country's sports federations are located in the city's suburbs, including the Egyptian Football Association. The headquarters of the Confederation of African Football (CAF) was previously located in Cairo, before relocating to its new headquarters in 6th of October City, a small city away from Cairo's crowded districts. In 2008, the Egyptian Rugby Federation was officially formed and granted membership in the International Rugby Board. Egypt is internationally known for its squash players, who excel in both professional and junior divisions. Egypt has seven players in the top ten of the PSA men's world rankings, and three in the women's top ten. Mohamed El Shorbagy held the world number one position for more than a year. Nour El Sherbini has won the Women's World Championship twice and has been the women's world number one. On 30 April 2016, she became the youngest woman to win the Women's World Championship, and in 2017 she retained her title. Cairo is the official endpoint of the Cross Egypt Challenge, whose route ends each year at the Great Pyramids of Giza with a large trophy-giving ceremony. Cityscape and landmarks Downtown Cairo is the heart of the city and was established during the reign of Isma'il Pasha of Egypt, coinciding with the opening of the Suez Canal. Its core layout, with public squares and grand boulevards, was inspired by the ideas of Baron Haussmann, the French administrator who remodelled Paris in the same era. The area includes major landmarks such as Tahrir Square, the adjacent Egyptian Museum, and Talaat Harb Street and its public square, where a statue of the economist and entrepreneur Talaat Harb stands. The area also encompasses Abdeen Palace, the former royal residence now used as the official presidential palace, which houses several museums. Its districts feature buildings blending a variety of architectural styles, including Egyptian Islamic architecture, contemporary Beaux-Arts architecture, and other styles of international or European architecture that were fashionable during the late 19th and early 20th centuries. The Qasr El Nil Bridge is also located in the heart of downtown Cairo. The current bridge was completed in 1933 in an Art Deco style, replacing an earlier 19th-century bridge whose four colossal bronze lion statues were preserved and reinstalled on the current bridge. Groppi, an ice cream shop in Talaat Harb Square founded in 1909, is one of Cairo's oldest ice cream parlours. Opened in 1908, Café Riche has been a meeting place for intellectuals and revolutionaries, witnessing significant events throughout the 20th century. Among the cafe's frequent visitors were the Egyptian Nobel laureate novelist and nationalist writer Naguib Mahfouz and the future president Gamal Abdel Nasser. Tahrir Square was founded during the mid-19th century with the establishment of modern downtown Cairo.
It was first named Ismailia Square, after the 19th-century ruler Khedive Ismail, who commissioned the new downtown district's 'Paris on the Nile' design. After the Egyptian Revolution of 1919 the square became widely known as Tahrir (Liberation) Square, though it was not officially renamed as such until after the 1952 Revolution, which eliminated the monarchy. Several notable buildings surround the square, including the American University in Cairo's downtown campus, the Mogamma governmental administrative building, the headquarters of the Arab League, the Nile Ritz Carlton Hotel, and the Egyptian Museum. Being at the heart of Cairo, the square has witnessed several major protests over the years. In 2020, the government completed the erection of a new monument in the centre of the square featuring an ancient obelisk from the reign of Ramses II, originally unearthed at Tanis (San al-Hagar) in 2019, and four ram-headed sphinx statues moved from Karnak. The Museum of Egyptian Antiquities, known commonly as the Egyptian Museum, is home to the most extensive collection of ancient Egyptian antiquities in the world. It has 136,000 items on display, with hundreds of thousands more in its basement storerooms. Among the collections on display are the finds from the tomb of Tutankhamun. The National Museum of Egyptian Civilization is a museum located in Fustat, covering an area of 33.5 acres. It houses 50,000 artifacts that tell the story of the development of Egyptian civilisation, showcasing the achievements of the Egyptian people in various fields of life from the dawn of history to the present day. The museum also contains models, photographs, manuscripts, oil paintings, works of art, and artifacts from the Stone Age, Ancient Egyptian, Coptic, and modern periods. The museum site overlooks the Ain El Sira natural lake. The Cairo Tower is a free-standing tower with a revolving restaurant at the top. It is one of Cairo's landmarks and provides a bird's-eye view of the city to restaurant patrons. It stands in the Zamalek district on Gezira Island on the Nile River, in the city centre. At 187 m (614 ft), it is 44 m (144 ft) higher than the Great Pyramid of Giza, which stands some 15 km (9 mi) to the southwest. Old Cairo is so named as it contains the remains of the ancient Roman fortress of Babylon and also overlaps the original site of Fustat, the first Arab settlement in Egypt (7th century AD) and the predecessor of later Cairo. The area includes Coptic Cairo, which holds a high concentration of old Christian churches such as the Hanging Church, the Greek Orthodox Church of St. George, and other Christian or Coptic buildings, most of which are located in an enclave on the site of the ancient Roman fortress. It is also the location of the Coptic Museum, which showcases the history of Coptic art from Greco-Roman to Islamic times, and of the Ben Ezra Synagogue, the oldest and best-known synagogue in Cairo, where the important collection of Geniza documents was discovered in the 19th century. To the north of this Coptic enclave is the Amr ibn al-'As Mosque, the first mosque in Egypt and the most important religious centre of what was formerly Fustat, founded in 642 AD right after the Arab conquest but rebuilt many times since. A part of the former city of Fustat has also been excavated to the east of the mosque and of the Coptic enclave, although the archeological site is threatened by encroaching construction and modern development.
To the northwest of Babylon Fortress and the mosque is the Monastery of Saint Mercurius (or Dayr Abu Sayfayn), an important and historic Coptic religious complex consisting of the Church of Saint Mercurius, the Church of Saint Shenute, and the Church of the Virgin (also known as al-Damshiriya). Several other historic churches are also situated to the south of Babylon Fortress. Cairo holds one of the greatest concentrations of historical monuments of Islamic architecture in the world. The areas around the old walled city and around the Citadel are characterised by hundreds of mosques, tombs, madrasas, mansions, caravanserais, and fortifications dating from the Islamic era and are often referred to as "Islamic Cairo", especially in English travel literature. It is also the location of several important religious shrines such as the al-Hussein Mosque (whose shrine is believed to hold the head of Husayn ibn Ali), the Mausoleum of Imam al-Shafi'i (founder of the Shafi'i madhhab, one of the primary schools of thought in Sunni Islamic jurisprudence), the Tomb of Sayyida Ruqayya, the Mosque of Sayyida Nafisa, and others. The first mosque in Egypt was the Mosque of Amr ibn al-As in what was formerly Fustat, the first Arab-Muslim settlement in the area. However, the Mosque of Ibn Tulun is the oldest mosque that still retains its original form and is a rare example of Abbasid architecture from the classical period of Islamic civilisation. It was built in 876–879 AD in a style inspired by the Abbasid capital of Samarra in Iraq. It is one of the largest mosques in Cairo and is often cited as one of the most beautiful. Another Abbasid construction, the Nilometer on Roda Island, is the oldest original structure in Cairo, built in 862 AD. It was designed to measure the level of the Nile, which was important for agricultural and administrative purposes. The settlement that was formally named Cairo (Arabic: al-Qahira) was founded to the northeast of Fustat in 969 AD by the victorious Fatimid army. The Fatimids built it as a separate palatial city which contained their palaces and institutions of government. It was enclosed by a circuit of walls, which were rebuilt in stone in the late 11th century AD by the vizier Badr al-Gamali, parts of which survive today at Bab Zuwayla in the south and Bab al-Futuh and Bab al-Nasr in the north. Among the extant monuments from the Fatimid era are the large Mosque of al-Hakim, the Aqmar Mosque, the Juyushi Mosque, the Lulua Mosque, and the Mosque of Al-Salih Tala'i. One of the most important and lasting institutions founded in the Fatimid period was the Mosque of al-Azhar, founded in 970 AD, which competes with the al-Qarawiyyin in Fes for the title of oldest university in the world. Today, al-Azhar University is the foremost centre of Islamic learning in the world and one of Egypt's largest universities, with campuses across the country. The mosque itself retains significant Fatimid elements but has been added to and expanded in subsequent centuries, notably by the Mamluk sultans Qaytbay and al-Ghuri and by Abd al-Rahman Katkhuda in the 18th century. The most prominent architectural heritage of medieval Cairo, however, dates from the Mamluk period, from 1250 to 1517 AD. The Mamluk sultans and elites were eager patrons of religious and scholarly life, commonly building religious or funerary complexes whose functions could include a mosque, madrasa, khanqah (for Sufis), a sabil (water dispensary), and a mausoleum for themselves and their families.
Among the best-known examples of Mamluk monuments in Cairo are the huge Mosque-Madrasa of Sultan Hasan, the Mosque of Amir al-Maridani, the Mosque of Sultan al-Mu'ayyad (whose twin minarets were built above the gate of Bab Zuwayla), the Sultan Al-Ghuri complex, the funerary complex of Sultan Qaytbay in the Northern Cemetery, and the trio of monuments in the Bayn al-Qasrayn area comprising the complex of Sultan al-Mansur Qalawun, the Madrasa of al-Nasir Muhammad, and the Madrasa of Sultan Barquq. Some mosques include spolia (often columns or capitals) from earlier buildings built by the Romans, Byzantines, or Copts. The Mamluks, and the later Ottomans, also built wikalas, or caravanserais, to house merchants and goods, reflecting the important role of trade and commerce in Cairo's economy. Still intact today is the Wikala al-Ghuri, which now hosts regular performances by the Al-Tannoura Egyptian Heritage Dance Troupe. The Khan al-Khalili is a commercial hub which also integrated caravanserais (also known as khans). The Citadel is a fortified enclosure begun by Salah al-Din in 1176 AD on an outcrop of the Muqattam Hills as part of a large defensive system to protect both Cairo to the north and Fustat to the southwest. It was the centre of Egyptian government and the residence of its rulers until 1874, when Khedive Isma'il moved to 'Abdin Palace. It is still occupied by the military today, but is now open as a tourist attraction comprising, notably, the National Military Museum, the 14th-century Mosque of al-Nasir Muhammad, and the 19th-century Mosque of Muhammad Ali, which commands a dominant position on Cairo's skyline. Khan el-Khalili is an ancient bazaar, or marketplace, adjacent to the Al-Hussein Mosque. It dates back to 1385, when Amir Jarkas el-Khalili built a large caravanserai, or khan (a caravanserai is a hotel for traders, and usually the focal point for any surrounding area). This original caravanserai building was demolished by Sultan al-Ghuri, who rebuilt it as a new commercial complex in the early 16th century, forming the basis for the network of souqs existing today. Many medieval elements remain, including the ornate Mamluk-style gateways. Today, Khan el-Khalili is a major tourist attraction and popular stop for tour groups. Opened in 2005, Al-Azhar Park is a spacious elevated public park that offers green spaces, restaurants, and panoramic views of Cairo's historical landmarks such as the Citadel of Cairo and the Ayyubid walls. The park was built on what was once a medieval waste dump. Its western side overlooks the Fatimid city and its extension, Al-Darb al-Ahmar, adorned with long rows of minarets, while to the south lie the Mosque-Madrasa of Sultan Hasan and its surrounding area, as well as the Citadel of Cairo. Society In the present day, Cairo is a heavily urbanised city. Because of the influx of people into the city, free-standing houses are rare, and apartment buildings accommodate the limited space and abundance of people. Single detached houses are usually owned by the wealthy. Formal education is also seen as important; standard formal education lasts twelve years. Cairenes can take a standardised test similar to the SAT to be accepted to an institution of higher learning, but most children do not finish school and opt to pick up a trade to enter the workforce. Egypt still struggles with poverty, with almost half the population living on $2 or less a day. The civil rights movement for women in Cairo – and by extension, Egypt – has been a struggle for years.
Women are reported to face constant discrimination, sexual harassment, and abuse throughout Cairo. A 2013 UN study found that over 99% of Egyptian women reported experiencing sexual harassment at some point in their lives. The problem has persisted in spite of new national laws since 2014 defining and criminalising sexual harassment. The situation is so severe that in 2017, Cairo was named by one poll as the most dangerous megacity for women in the world. In 2020, the social media account "Assault Police" began to name and shame perpetrators of violence against women, in an effort to dissuade potential offenders. The account was founded by student Nadeen Ashraf, who is credited for instigating an iteration of the #MeToo movement in Egypt. Challenges Traffic in Cairo is notoriously congested. As of 2026, the city remains a global outlier for high-density vehicle traffic on limited road space. Most of Cairo operates in a near-permanent rush-hour state. Major corridors often see average speeds drop to 15–40 km/h, which is roughly half of their design speed. While traffic is heavy throughout the day, the primary peak hours are between 8:00 AM – 9:00 AM and 6:00 PM – 7:00 PM. Several corridors are particularly affected: one of the city's most critical arterial routes is frequently prone to severe bottlenecks; a major road running along the river serves as one of the city's busiest and most vital thoroughfares; and the areas around Tahrir Square, Ramses, and Ataba are congested due to dense pedestrian activity. The Cairo Metro is a faster alternative that bypasses street-level traffic, though it is often extremely crowded. The persistent congestion is estimated to cost Egypt approximately 4% of its annual GDP due to wasted time, excessive fuel consumption, and vehicle wear-and-tear. The air pollution in Cairo is a matter of serious concern. Greater Cairo's volatile aromatic hydrocarbon levels are higher than those of many similar cities. Air quality measurements in Cairo have also been recording dangerous levels of lead, carbon dioxide, sulphur dioxide, and suspended particulate matter due to decades of unregulated vehicle emissions, urban industrial operations, and chaff and trash burning. There are over 4,500,000 cars on the streets of Cairo, 60% of which are over 10 years old and therefore lack modern emission-cutting features. Cairo has a very poor dispersion factor because of its lack of rain and its layout of tall buildings and narrow streets, which create a bowl effect. In recent years, a black cloud (as Egyptians refer to it) of smog has appeared over Cairo every autumn due to temperature inversion. Smog causes serious respiratory diseases and eye irritation for the city's citizens. Tourists who are not familiar with such high levels of pollution must take extra care. Cairo also has many unregistered lead and copper smelters which heavily pollute the city. The result has been a permanent haze over the city, with particulate matter in the air reaching over three times normal levels. It is estimated that 10,000 to 25,000 people a year in Cairo die due to air pollution-related diseases. Lead has been shown to harm the central nervous system and to be particularly neurotoxic in children. In 1995, the first environmental acts were introduced, and the situation has since seen some improvement, with 36 air monitoring stations and emissions tests on cars. Twenty thousand buses have also been commissioned for the city to help ease its severe congestion. The city also suffers from a high level of land pollution.
Cairo produces 10,000 tons of waste material each day, 4,000 tons of which are not collected or managed. This is a huge health hazard, and the Egyptian Government is looking for ways to combat it. The Cairo Cleaning and Beautification Agency was founded to collect and recycle the waste; it works with the Zabbaleen community, which has been collecting and recycling Cairo's waste since the turn of the 20th century and lives in an area known locally as Manshiyat Naser. Both are working together to pick up as much waste as possible within the city limits, though it remains a pressing problem. International relations The Headquarters of the Arab League is located at Tahrir Square in downtown Cairo.
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Minecraft#cite_ref-New3DS2_195-0] | [TOKENS: 12858] |
Contents Minecraft Minecraft is a sandbox game developed and published by Mojang Studios. Following its initial public alpha release in 2009, it was formally released in 2011 for personal computers. The game has since been ported to numerous platforms, including mobile devices and various video game consoles. In Minecraft, players explore a procedurally generated world with virtually infinite terrain made up of voxels (cubes). They can discover and extract raw materials, craft tools and items, build structures, fight hostile mobs, and cooperate with or compete against other players in multiplayer. The game's large community offers a wide variety of user-generated content, such as modifications, servers, player skins, texture packs, and custom maps, which add new game mechanics and possibilities. The game was originally created by Markus "Notch" Persson using the Java programming language; Jens "Jeb" Bergensten was handed control over its development following its full release. In 2014, Mojang and the Minecraft intellectual property were purchased by Microsoft for US$2.5 billion; Xbox Game Studios hold the publishing rights for the Bedrock Edition, the unified cross-platform version which evolved from the Pocket Edition codebase and replaced the legacy console versions. Bedrock is updated concurrently with Mojang's original Java Edition, although with numerous, generally small, differences. Minecraft is the best-selling video game in history with over 350 million copies sold. It has received critical acclaim, winning several awards and being cited as one of the greatest video games of all time. Social media, parodies, adaptations, merchandise, and the annual Minecon conventions have played prominent roles in popularizing it. The wider Minecraft franchise includes several spin-off games, such as Minecraft: Story Mode, Minecraft Dungeons, and Minecraft Legends. A film adaptation, titled A Minecraft Movie, was released in 2025 and became the second highest-grossing video game film of all time. Gameplay Minecraft is a 3D sandbox video game that has no required goals to accomplish, giving players a large amount of freedom in choosing how to play the game. The game features an optional achievement system. Gameplay is in the first-person perspective by default, but players have the option of a third-person perspective. The game world is composed of rough 3D objects—mainly cubes, referred to as blocks—representing various materials, such as dirt, stone, ores, tree trunks, water, and lava. The core gameplay revolves around picking up and placing these objects. These blocks are arranged in a voxel grid, while players can move freely around the world. Players can break, or mine, blocks and then place them elsewhere, enabling them to build things. Very few blocks are affected by gravity; most maintain their voxel position even in midair. Players can also craft a wide variety of items, such as armor, which mitigates damage from attacks; weapons (such as swords or bows and arrows), which allow monsters and animals to be killed more easily; and tools (such as pickaxes or shovels), which break certain types of blocks more quickly. Some items have multiple tiers depending on the material used to craft them, with higher-tier items being more effective and durable. They may also freely craft helpful blocks—such as furnaces, which can cook food and smelt ores, and torches, which produce light—or exchange items with villagers (NPCs), trading emeralds for different goods and vice versa.
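The block model described above can be illustrated with a minimal sketch: conceptually, the world is a collection of fixed-size voxel chunks, each a dense 3D grid of block IDs. The following Java sketch is hypothetical and purely illustrative—Minecraft's actual implementation stores the world in 16×16-block columns ("chunks") with palette-compressed sections, and the names here are invented:

```java
// Toy voxel chunk: a dense 3D grid of block IDs (illustrative only,
// not Mojang's code). Breaking a block is setting its ID to air;
// placing a block is setting it to a non-air ID.
public class ToyChunk {
    public static final byte AIR = 0;
    public static final int SIZE_X = 16, SIZE_Y = 256, SIZE_Z = 16;

    private final byte[] blocks = new byte[SIZE_X * SIZE_Y * SIZE_Z];

    // Flatten 3D coordinates into an index in the backing array.
    private static int index(int x, int y, int z) {
        return (y * SIZE_Z + z) * SIZE_X + x;
    }

    public byte getBlock(int x, int y, int z) {
        return blocks[index(x, y, z)];
    }

    public void setBlock(int x, int y, int z, byte id) {
        blocks[index(x, y, z)] = id;
    }

    public void mine(int x, int y, int z) {
        setBlock(x, y, z, AIR);   // "breaking" a block leaves air behind
    }
}
```

Because almost no blocks are affected by gravity, a set cell simply keeps its ID until a player or game event changes it; no physics pass over the grid is needed.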
The game has an inventory system, allowing players to carry a limited number of items. The in-game time system follows a day and night cycle, with one full cycle lasting for 20 real-time minutes. The game also contains a material called redstone, which can be used to make primitive mechanical devices, electrical circuits, and logic gates, allowing for the construction of many complex systems. New players are given a randomly selected default character skin out of nine possibilities, including Steve or Alex, but are able to create and upload their own skins. Players encounter various mobs (short for mobile entities) including animals, villagers, and hostile creatures. Passive mobs, such as cows, pigs, and chickens, spawn during the daytime and can be hunted for food and crafting materials, while hostile mobs—including large spiders, witches, skeletons, and zombies—spawn during nighttime or in dark places such as caves. Some hostile mobs, such as zombies and skeletons, burn under the sun if they have no headgear and are not standing in water. Other creatures unique to Minecraft include the creeper (an exploding creature that sneaks up on the player) and the enderman (a creature with the ability to teleport as well as pick up and place blocks). There are also variants of mobs that spawn in different conditions; for example, zombies have husk and drowned variants that spawn in deserts and oceans, respectively. The Minecraft environment is procedurally generated as players explore it using a map seed that is randomly chosen at the time of world creation (or manually specified by the player). Divided into biomes representing different environments with unique resources and structures, worlds are designed to be effectively infinite in traditional gameplay, though technical limits on the player have existed throughout development, both intentionally and not. Implementation of horizontally infinite generation initially resulted in a glitch termed the "Far Lands" at over 12 million blocks away from the world center, where terrain generated as wall-like, fissured patterns. The Far Lands and associated glitches were considered the effective edge of the world until they were resolved, with the current horizontal limit instead being a special impassable barrier called the world border, located 30 million blocks away. Vertical space is comparatively limited, with an unbreakable bedrock layer at the bottom and a building limit several hundred blocks into the sky. Minecraft features three independent dimensions accessible through portals and providing alternate game environments. The Overworld is the starting dimension and represents the real world, with a terrestrial surface setting including plains, mountains, forests, oceans, caves, and small sources of lava. The Nether is a hell-like underworld dimension accessed via an obsidian portal and composed mainly of lava. Mobs that populate the Nether include shrieking, fireball-shooting ghasts, alongside anthropomorphic pigs called piglins and their zombified counterparts. Piglins in particular have a bartering system, where players can give them gold ingots and receive items in return. Structures known as Nether Fortresses generate in the Nether, containing mobs such as wither skeletons and blazes, which can drop blaze rods needed to access the End dimension. The player can also choose to build an optional boss mob known as the Wither, using skulls obtained from wither skeletons and soul sand. 
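The role of the map seed in the world generation described above can be sketched briefly: each column (or chunk) of the world derives its randomness deterministically from the 64-bit world seed and its own coordinates, which is why the same seed always reproduces the same terrain. This toy Java example illustrates only the principle—the real generator layers many noise functions, and the mixing constants below are arbitrary:

```java
import java.util.Random;

// Toy seed-driven terrain: identical world seeds yield identical
// heightmaps (illustrative only, not Mojang's generator).
public class ToyGenerator {
    private final long worldSeed;

    public ToyGenerator(long worldSeed) {
        this.worldSeed = worldSeed;
    }

    /** Deterministic surface height for the column at (x, z). */
    public int surfaceHeight(int x, int z) {
        // Mix the world seed with the column coordinates
        // (the constants are arbitrary for this sketch).
        long columnSeed = worldSeed ^ (x * 341873128712L + z * 132897987541L);
        Random rng = new Random(columnSeed);
        return 60 + rng.nextInt(8);   // flat-ish terrain between y=60 and y=67
    }

    public static void main(String[] args) {
        ToyGenerator a = new ToyGenerator(8675309L);
        ToyGenerator b = new ToyGenerator(8675309L);
        // The same seed gives the same world:
        System.out.println(a.surfaceHeight(10, -4) == b.surfaceHeight(10, -4)); // true
    }
}
```

Generating terrain lazily from the seed in this way is what makes an effectively infinite world practical: a column need not exist until a player approaches its coordinates.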
The End can be reached through an end portal, consisting of twelve end portal frames. End portals are found in underground structures in the Overworld known as strongholds. To find strongholds, players must craft eyes of ender using an ender pearl and blaze powder. Eyes of ender can then be thrown, traveling in the direction of the stronghold. Once the player reaches the stronghold, they can place eyes of ender into each portal frame to activate the end portal. The dimension consists of islands floating in a dark, bottomless void. A boss enemy called the Ender Dragon guards the largest, central island. Killing the dragon opens access to an exit portal, which, when entered, cues the game's ending credits and the End Poem, a roughly 1,500-word work written by Irish novelist Julian Gough that takes about nine minutes to scroll past and is the game's only narrative text, as well as the only text of significant length directed at the player. At the conclusion of the credits, the player is teleported back to their respawn point and may continue the game indefinitely. In Survival mode, players have to gather natural resources such as wood and stone found in the environment in order to craft certain blocks and items. Depending on the difficulty, monsters spawn in darker areas outside a certain radius of the character, requiring players to build a shelter in order to survive at night. The mode also has a health bar which is depleted by attacks from mobs, falls, drowning, falling into lava, suffocation, starvation, and other events. Players also have a hunger bar, which must be periodically refilled by eating food in-game unless the player is playing on peaceful difficulty. If the hunger bar is empty, the player starves. Health replenishes when players have a full hunger bar, and continuously on peaceful difficulty. Upon losing all health, players die. The items in the players' inventories are dropped unless the game is reconfigured not to do so. Players then re-spawn at their spawn point, which by default is where players first spawn in the game and can be changed by sleeping in a bed or using a respawn anchor. Dropped items can be recovered if players can reach them before they despawn after 5 minutes. Players may acquire experience points (commonly referred to as "xp" or "exp") by killing mobs and other players, mining, smelting ores, animal breeding, and cooking food. Experience can then be spent on enchanting tools, armor and weapons. Enchanted items are generally more powerful, last longer, or have other special effects. The game features two more game modes based on Survival, known as Hardcore mode and Adventure mode. Hardcore mode plays identically to Survival mode, but with the game's difficulty setting locked to "Hard" and with permadeath, forcing players to delete the world or explore it as a spectator after dying. Adventure mode was added to the game in a post-launch update, and prevents the player from directly modifying the game's world. It was designed primarily for use in custom maps, allowing map designers to let players experience their creations as intended. In Creative mode, players have access to an infinite number of all resources and items in the game through the inventory menu and can place or mine them instantly. Players can toggle the ability to fly freely around the game world at will, and their characters usually do not take any damage nor are affected by hunger. The game mode helps players focus on building and creating projects of any size without disturbance.
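The differences between the four modes described above largely reduce to a handful of rule toggles. The following Java sketch summarizes them; the type and field names are invented for illustration and are not Mojang's API:

```java
// Hypothetical summary of the game modes described above (not game code).
// Hardcore is Survival with the difficulty locked to Hard and permadeath.
public enum ToyGameMode {
    //        modifyWorld  takesDamage  canFly  permadeath
    SURVIVAL (true,        true,        false,  false),
    HARDCORE (true,        true,        false,  true),
    ADVENTURE(false,       true,        false,  false),
    CREATIVE (true,        false,       true,   false);

    public final boolean modifyWorld;   // may the player freely place/break blocks?
    public final boolean takesDamage;   // do health and hunger apply?
    public final boolean canFly;        // is free flight toggleable?
    public final boolean permadeath;    // delete world or spectate after death?

    ToyGameMode(boolean modifyWorld, boolean takesDamage,
                boolean canFly, boolean permadeath) {
        this.modifyWorld = modifyWorld;
        this.takesDamage = takesDamage;
        this.canFly = canFly;
        this.permadeath = permadeath;
    }
}
```

Adventure mode's restriction is the interesting one for map makers: with direct world modification off, players can interact only in the ways a custom map explicitly allows.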
Multiplayer in Minecraft enables multiple players to interact and communicate with each other on a single world. It is available through direct game-to-game multiplayer, local area network (LAN) play, local split screen (console-only), and servers (player-hosted and business-hosted). Players can run their own server by making a Realm, using a host provider, or hosting one themselves, or they can connect directly to another player's game via Xbox Live, PlayStation Network, or Nintendo Switch Online. Single-player worlds have LAN support, allowing players to join a world on locally interconnected computers without a server setup. Minecraft multiplayer servers are guided by server operators, who have access to server commands such as setting the time of day and teleporting players. Operators can also set up restrictions concerning which usernames or IP addresses are allowed or disallowed to enter the server. Multiplayer servers have a wide range of activities, with some servers having their own unique rules and customs. The largest and most popular server is Hypixel, which has been visited by over 14 million unique players. Player versus player combat (PvP) can be enabled to allow fighting between players. In 2013, Mojang announced Minecraft Realms, a server hosting service intended to enable players to run server multiplayer games easily and safely without having to set up their own. Unlike a standard server, only invited players can join Realms servers, and these servers do not use server addresses. Minecraft: Java Edition Realms server owners can invite up to twenty people to play on their server, with up to ten players online at a time. Minecraft Realms server owners on Bedrock Edition can invite up to 3,000 people to play on their server, with up to ten players online at one time. The Minecraft: Java Edition Realms servers do not support user-made plugins, but players can play custom Minecraft maps. Minecraft Bedrock Realms servers support user-made add-ons, resource packs, behavior packs, and custom Minecraft maps. At Electronic Entertainment Expo 2016, support for cross-platform play between Windows 10, iOS, and Android platforms through Realms was announced, starting in June 2016, with Xbox One and Nintendo Switch support to come later in 2017, alongside support for virtual reality devices. On 31 July 2017, Mojang released the beta version of the update allowing cross-platform play. Nintendo Switch support for Realms was released in July 2018. The modding community consists of fans, users and third-party programmers. Using a variety of application programming interfaces that have arisen over time, they have produced a wide variety of downloadable content for Minecraft, such as modifications, texture packs and custom maps. Modifications of the Minecraft code, called mods, add a variety of gameplay changes, ranging from new blocks, items, and mobs to entire arrays of mechanisms. The modding community is responsible for a substantial supply of mods, from ones that enhance gameplay, such as mini-maps, waypoints, and durability counters, to ones that add elements from other video games and media to the game. While a variety of mod frameworks were independently developed by reverse engineering the code, Mojang has also enhanced vanilla Minecraft with official frameworks for modification, allowing the production of community-created resource packs, which alter certain game elements including textures and sounds.
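At its simplest, a community-created resource pack of the kind just described is a folder (or zip file) containing a pack.mcmeta descriptor and namespaced asset files. The sketch below shows a typical minimal layout; the required pack_format number varies by game version, and the texture path is just one example of an asset that can be overridden:

```
my_resource_pack/
    pack.mcmeta                                  (pack descriptor, JSON)
    assets/minecraft/textures/block/stone.png    (replaces the vanilla stone texture)

contents of pack.mcmeta:
{
  "pack": {
    "pack_format": 15,
    "description": "A minimal example resource pack"
  }
}
```

Data packs, discussed below, follow the same pattern, with a data/ directory of namespaced functions, recipes, and loot tables taking the place of assets/.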
Players can also create their own "maps" (custom world save files) that often contain specific rules, challenges, puzzles and quests, and share them for others to play. Mojang added an adventure mode in August 2012 and "command blocks" in October 2012, which were created specially for custom maps in Java Edition. Data packs, introduced in version 1.13 of the Java Edition, allow further customization, including the ability to add new achievements, dimensions, functions, loot tables, predicates, recipes, structures, tags, and world generation. The Xbox 360 Edition supported downloadable content, which was available to purchase via the Xbox Games Store; these content packs usually contained additional character skins. It later received support for texture packs in its twelfth title update while introducing "mash-up packs", which combined texture packs with skin packs and changes to the game's sounds, music and user interface. The first mash-up pack (and by extension, the first texture pack) for the Xbox 360 Edition was released on 4 September 2013, and was themed after the Mass Effect franchise. Unlike Java Edition, however, the Xbox 360 Edition did not support player-made mods or custom maps. A cross-promotional resource pack based on the Super Mario franchise by Nintendo was released exclusively for the Wii U Edition worldwide on 17 May 2016, and later bundled free with the Nintendo Switch Edition at launch. Another, based on Fallout, was released on consoles that December, and for Windows and Mobile in April 2017. In April 2018, malware was discovered in several downloadable user-made Minecraft skins for use with the Java Edition of the game. Avast stated that nearly 50,000 accounts were infected, and when activated, the malware would attempt to reformat the user's hard drive. Mojang promptly patched the issue, and released a statement explaining that "the code would not be run or read by the game itself", and would run only when the image containing the skin itself was opened. In June 2017, Mojang released the "1.1 Discovery Update" to the Pocket Edition of the game, which later became the Bedrock Edition. The update introduced the "Marketplace", a catalogue of purchasable user-generated content intended to give Minecraft creators "another way to make a living from the game". Various skins, maps, texture packs and add-ons from different creators can be bought with "Minecoins", a digital currency that is purchased with real money. Additionally, users can access specific content with a subscription service titled "Marketplace Pass". Alongside content from independent creators, the Marketplace also houses items published by Mojang and Microsoft themselves, as well as official collaborations between Minecraft and other intellectual properties. By 2022, the Marketplace had over 1.7 billion content downloads, generating over $500 million in revenue. Development Before creating Minecraft, Markus "Notch" Persson was a game developer at King, where he worked until March 2009. At King, he primarily developed browser games and learned several programming languages. During his free time, he prototyped his own games, often drawing inspiration from other titles, and was an active participant on the TIGSource forums for independent developers. One such project was "RubyDung", a base-building game inspired by Dwarf Fortress, but with an isometric, three-dimensional perspective similar to RollerCoaster Tycoon.
Among the features in RubyDung that he explored was a first-person view similar to Dungeon Keeper, though he ultimately discarded this idea, feeling the graphics were too pixelated at the time. Around March 2009, Persson left King and joined jAlbum, while continuing to work on his prototypes. Infiniminer, a block-based open-ended mining game first released in April 2009, inspired Persson's vision for RubyDung's future direction. Infiniminer heavily influenced the visual style of gameplay, including bringing back the first-person mode, the "blocky" visual style and the block-building fundamentals. However, unlike Infiniminer, Persson wanted Minecraft to have RPG elements. The first public alpha build of Minecraft was released on 17 May 2009 on TIGSource. Over the years, Persson regularly released test builds that added new features, including tools, mobs, and entire new dimensions. Partly due to the game's rising popularity, Persson decided to release a full 1.0 version—the second part of the "Adventure Update"—on 18 November 2011. Shortly after, Persson stepped down from development, handing the project's lead to Jens "Jeb" Bergensten. On 15 September 2014, Microsoft, the developer of the Windows operating system and the Xbox video game console, announced a $2.5 billion acquisition of Mojang, which included the Minecraft intellectual property. Persson had suggested the deal on Twitter, asking a corporation to buy his stake in the game after receiving criticism for enforcing terms in the game's end-user license agreement (EULA), which had been in place for three years. According to Persson, Mojang CEO Carl Manneh received a call from a Microsoft executive shortly after the tweet, asking if Persson was serious about a deal. Mojang was also approached by other companies including Activision Blizzard and Electronic Arts. The deal with Microsoft was completed on 6 November 2014 and led to Persson becoming one of Forbes' "World's Billionaires". After 2014, Minecraft's primary versions received major updates, usually annually—free to players who had purchased the game—each primarily centered on a specific theme. For instance, version 1.13, the Update Aquatic, focused on ocean-related features, while version 1.16, the Nether Update, introduced significant changes to the Nether dimension. However, in late 2024, Mojang announced a shift in their update strategy; rather than releasing large updates annually, they opted for a more frequent release schedule with smaller, incremental updates, stating, "We know that you want new Minecraft content more often." The Bedrock Edition has also received regular updates, now matching the themes of the Java Edition updates. Other versions of the game, such as various console editions and the Pocket Edition, were either merged into Bedrock or discontinued and have not received further updates. On 7 May 2019, coinciding with Minecraft's 10th anniversary, a JavaScript recreation of an old 2009 Java Edition build named Minecraft Classic was made available to play online for free. On 16 April 2020, a Bedrock Edition-exclusive beta version of Minecraft, called Minecraft RTX, was released by Nvidia. It introduced physically-based rendering, real-time path tracing, and DLSS for RTX-enabled GPUs. The public release was made available on 8 December 2020.
Path tracing can only be enabled in supported worlds, which can be downloaded for free via the in-game Minecraft Marketplace, with a texture pack from Nvidia's website, or with compatible third-party texture packs. It cannot be enabled by default with any texture pack on any world. Initially, Minecraft RTX was affected by many bugs, display errors, and instability issues. On 22 March 2025, a new visual mode called Vibrant Visuals, an optional graphical overhaul similar to Minecraft RTX, was announced. It offers modern rendering features—such as dynamic shadows, screen space reflections, volumetric fog, and bloom—without the need for RTX-capable hardware. Vibrant Visuals was released as a part of the Chase the Skies update on 17 June 2025 for Bedrock Edition and is planned for release on Java Edition at a later date. Development of the original edition of Minecraft—then known as Cave Game and now known as the Java Edition—began in May 2009. On 13 May, Persson released a test video on YouTube of an early version of the game, dubbed the "Cave game tech test" or the "Cave game tech demo". The game was named Minecraft: Order of the Stone the next day, after a suggestion made by a player. "Order of the Stone" came from the webcomic The Order of the Stick, and "Minecraft" was chosen "because it's a good name". The title was later shortened to just Minecraft, omitting the subtitle. Persson completed the game's base programming over a weekend in May 2009, and private testing began on TigIRC on 16 May. The first public release followed on 17 May 2009 as a developmental version shared on the TIGSource forums. Based on feedback from forum users, Persson continued updating the game. This initial public build later became known as Classic. Further developmental phases—dubbed Survival Test, Indev, and Infdev—were released throughout 2009 and 2010. The first major update, known as Alpha, was released on 30 June 2010. At the time, Persson was still working a day job at jAlbum but later resigned to focus on Minecraft full-time as sales of the alpha version surged. Updates were distributed automatically, introducing new blocks, items, mobs, and changes to game mechanics such as water flow. With revenue generated from the game, Persson founded Mojang, a video game studio, alongside former colleagues Jakob Porser and Carl Manneh. On 11 December 2010, Persson announced that Minecraft would enter its beta phase on 20 December. He assured players that bug fixes and all pre-release updates would remain free. As development progressed, Mojang expanded, hiring additional employees to work on the project. The game officially exited beta and launched in full on 18 November 2011. On 1 December 2011, Jens "Jeb" Bergensten took full creative control over Minecraft, replacing Persson as lead designer. On 28 February 2012, Mojang announced the hiring of the developers behind Bukkit, a popular developer API for Minecraft servers, to improve Minecraft's support of server modifications. This move included Mojang taking apparent ownership of the CraftBukkit server mod, though the legitimacy of this acquisition was later questioned and became controversial due to CraftBukkit's open-source nature and licensing under the GNU General Public License and Lesser General Public License. In August 2011, Minecraft: Pocket Edition was released as an early alpha for the Xperia Play via the Android Market, later expanding to other Android devices on 8 October 2011. The iOS version followed on 17 November 2011.
A port was made available for Windows Phones shortly after Microsoft acquired Mojang. Unlike Java Edition, Pocket Edition initially focused on Minecraft's creative building and basic survival elements but lacked many features of the PC version. Bergensten confirmed on Twitter that the Pocket Edition was written in C++ rather than Java, as iOS does not support Java. On 10 December 2014, a port of Pocket Edition was released for Windows Phone 8.1. In July 2015, a port of the Pocket Edition to Windows 10 was released as the Windows 10 Edition, with full crossplay with other Pocket versions. In January 2017, Microsoft announced that it would no longer maintain the Windows Phone versions of Pocket Edition. On 20 September 2017, with the "Better Together Update", the Pocket Edition was ported to the Xbox One and was renamed to the Bedrock Edition. The console versions of Minecraft debuted with the Xbox 360 Edition, developed by 4J Studios and released on 9 May 2012. Announced as part of the Xbox Live Arcade NEXT promotion, this version introduced a redesigned crafting system, a new control interface, in-game tutorials, split-screen multiplayer, and online play via Xbox Live. Unlike the PC version, its worlds were finite, bordered by invisible walls. Initially, the Xbox 360 version resembled outdated PC versions but received updates to bring it closer to Java Edition before eventually being discontinued. The Xbox One version launched on 5 September 2014, featuring larger worlds and support for more players. Minecraft expanded to PlayStation platforms with PlayStation 3 and PlayStation 4 editions released on 17 December 2013 and 4 September 2014, respectively. Originally planned as a PS4 launch title, it was delayed before its eventual release. A PlayStation Vita version followed in October 2014. Like the Xbox versions, the PlayStation editions were developed by 4J Studios. Nintendo platforms received Minecraft: Wii U Edition on 17 December 2015, with a physical release in North America on 17 June 2016 and in Europe on 30 June. The Nintendo Switch version launched via the eShop on 11 May 2017. During a Nintendo Direct presentation on 13 September 2017, Nintendo announced that Minecraft: New Nintendo 3DS Edition, based on the Pocket Edition, would be available for download immediately after the livestream, with a physical copy available at a later date. The game is compatible only with the New Nintendo 3DS or New Nintendo 2DS XL systems and does not work with the original 3DS or 2DS systems. On 20 September 2017, the Better Together Update introduced Bedrock Edition across Xbox One, Windows 10, VR, and mobile platforms, enabling cross-play between these versions. Bedrock Edition later expanded to Nintendo Switch and PlayStation 4, with the latter receiving the update in December 2019, allowing cross-platform play for users with a free Xbox Live account. Bedrock Edition received a native version for PlayStation 5 on 22 October 2024, while the Xbox Series X/S version launched on 17 June 2025. On 18 December 2018, the PlayStation 3, PlayStation Vita, Xbox 360, and Wii U versions of Minecraft received their final update and would later become known as "Legacy Console Editions". On 15 January 2019, the New Nintendo 3DS version of Minecraft received its final update, effectively becoming discontinued as well. An educational version of Minecraft, designed for use in schools, launched on 1 November 2016. It is available on Android, ChromeOS, iPadOS, iOS, macOS, and Windows.
On 20 August 2018, Mojang announced that it would bring Education Edition to iPadOS in Autumn 2018. It was released to the App Store on 6 September 2018. On 27 March 2019, it was announced that the Education Edition would be operated by JD.com in China. On 26 June 2020, a public beta for the Education Edition was made available to Google Play Store-compatible Chromebooks. The full game was released to the Google Play Store for Chromebooks on 7 August 2020. On 20 May 2016, China Edition (also known as My World) was announced as a localized edition for China, where it was released under a licensing agreement between NetEase and Mojang. The PC edition was released for public testing on 8 August 2017. The iOS version was released on 15 September 2017, and the Android version was released on 12 October 2017. The PC edition is based on the original Java Edition, while the iOS and Android mobile versions are based on the Bedrock Edition. The edition is free-to-play and had over 700 million registered accounts by September 2023. The Windows version of Bedrock Edition is exclusive to Microsoft's Windows 10 and Windows 11 operating systems. The beta release for Windows 10 launched on the Windows Store on 29 July 2015. After nearly a year and a half in beta, Microsoft fully released the version on 19 December 2016. Called the "Ender Update", this release added new features to this version of Minecraft, such as world templates and add-on packs. On 7 June 2022, the Java and Bedrock Editions of Minecraft were merged into a single bundle for purchase on Windows; those who owned one version would automatically gain access to the other version. Both game versions would otherwise remain separate. Around 2011, prior to Minecraft's full release, Mojang collaborated with The Lego Group to create a Lego brick-based Minecraft game called Brickcraft. This would have modified the base Minecraft game to use Lego bricks, which meant adapting the basic 1×1 block to account for larger pieces typically used in Lego sets. Persson worked on an early version called "Project Rex Kwon Do", named after a character from the film Napoleon Dynamite. Although Lego approved the project and Mojang assigned two developers for six months, it was canceled due to the Lego Group's demands, according to Mojang's Daniel Kaplan. Lego considered buying Mojang to complete the game, but when Microsoft offered over $2 billion for the company, Lego stepped back, unsure of Minecraft's potential. On 26 June 2025, a build of Brickcraft dated 28 June 2012 was published on the community archive website Omniarchive. Initially, Markus Persson planned to support the Oculus Rift with a Minecraft port. However, after Facebook acquired Oculus in 2014, he abruptly canceled the plans, stating, "Facebook creeps me out." In 2016, a community-made mod, Minecraft VR, added VR support for Java Edition, followed by Vivecraft for HTC Vive. Later that year, Microsoft introduced official Oculus Rift support for Windows 10 Edition, leading to the discontinuation of the Minecraft VR mod due to trademark complaints. Vivecraft was endorsed by Minecraft VR contributors for its Rift support. Also available is a Gear VR version, titled Minecraft: Gear VR Edition. Windows Mixed Reality support was added in 2017. On 7 September 2020, Mojang Studios announced that the PlayStation 4 Bedrock version would receive PlayStation VR support later that month.
In September 2024, the Minecraft team announced they would no longer support PlayStation VR, which received its final update in March 2025. Music and sound design Minecraft's music and sound effects were produced by German musician Daniel Rosenfeld, better known as C418. To create the sound effects for the game, Rosenfeld made extensive use of Foley techniques. Of learning the process, he remarked, "Foley's an interesting thing, and I had to learn its subtleties. Early on, I wasn't that knowledgeable about it. It's a whole trial-and-error process. You just make a sound and eventually you go, 'Oh my God, that's it! Get the microphone!' There's no set way of doing anything at all." He reminisced on creating the in-game sound for grass blocks, stating "It turns out that to make grass sounds you don't actually walk on grass and record it, because grass sounds like nothing. What you want to do is get a VHS, break it apart, and just lightly touch the tape." According to Rosenfeld, his favorite sound to design for the game was the hisses of spiders. He elaborated, "I like the spiders. Recording that was a whole day of me researching what a spider sounds like. Turns out, there are spiders that make little screeching sounds, so I think I got this recording of a fire hose, put it in a sampler, and just pitched it around until it sounded like a weird spider was talking to you." Many of Rosenfeld's sound design decisions were made accidentally or spontaneously. The creeper notably lacks any specific noises apart from a loud fuse-like sound when about to explode; Rosenfeld later recalled "That was just a complete accident by Markus and me [sic]. We just put in a placeholder sound of burning a matchstick. It seemed to work hilariously well, so we kept it." On other sounds, such as those of the zombie, Rosenfeld remarked, "I actually never wanted the zombies so scary. I intentionally made them sound comical. It's nice to hear that they work so well [...]." Rosenfeld remarked that the sound engine was "terrible" to work with, remembering "If you had two song files at once, it [the game engine] would actually crash. There were so many more weird glitches like that the guys never really fixed because they were too busy with the actual game and not the sound engine." The background music in Minecraft consists of instrumental ambient music. To compose the music of Minecraft, Rosenfeld used Ableton Live, along with several additional plug-ins. Speaking on the plug-ins, Rosenfeld said "They can be pretty much everything from an effect to an entire orchestra. Additionally, I've got some synthesizers that are attached to the computer. Like a Moog Voyager, Dave Smith Prophet 08 and a Virus TI." On 4 March 2011, Rosenfeld released a soundtrack titled Minecraft – Volume Alpha; it includes most of the tracks featured in Minecraft, as well as other music not featured in the game. Kirk Hamilton of Kotaku chose the music in Minecraft as one of the best video game soundtracks of 2011. On 9 November 2013, Rosenfeld released the second official soundtrack, titled Minecraft – Volume Beta, which included the music that was added in a 2013 "Music Update" for the game. A physical release of Volume Alpha, consisting of CDs, black vinyl, and limited-edition transparent green vinyl LPs, was issued by indie electronic label Ghostly International on 21 August 2015.
On 14 August 2020, Ghostly released Volume Beta on CD and vinyl, with alternate color LPs and lenticular cover pressings released in limited quantities. The final update Rosenfeld worked on was 2018's 1.13 Update Aquatic. His music remained the only music in the game until 2020's "Nether Update", which introduced pieces by Lena Raine. Since then, other composers have made contributions, including Kumi Tanioka, Samuel Åberg, Aaron Cherof, and Amos Roddy, with Raine serving as the primary composer. Ownership of all music besides Rosenfeld's independently released albums has been retained by Microsoft, with their label publishing all of the other artists' releases. Gareth Coker also composed some of the music for the mini games in the Legacy Console editions. Rosenfeld had stated his intent to create a third album of music for the game in a 2015 interview with Fact, and confirmed its existence in a 2017 tweet, stating that his work on the record by then was longer than the previous two albums combined, which together clock in at over 3 hours and 18 minutes. However, due to licensing issues with Microsoft, the third volume has not seen release. On 8 January 2021, Rosenfeld was asked in an interview with Anthony Fantano whether there was still a third volume of his music intended for release. Rosenfeld responded, saying, "I have something—I consider it finished—but things have become complicated, especially as Minecraft is now a big property, so I don't know." Reception Minecraft has received critical acclaim, with praise for the creative freedom it grants players in-game, as well as the ease of enabling emergent gameplay. Critics have expressed enjoyment of Minecraft's complex crafting system, commenting that it is an important aspect of the game's open-ended gameplay. Most publications were impressed by the game's "blocky" graphics, with IGN describing them as "instantly memorable". Reviewers also liked the game's adventure elements, noting that the game creates a good balance between exploring and building. The game's multiplayer feature has been generally received favorably, with IGN commenting that "adventuring is always better with friends". Jaz McDougall of PC Gamer said Minecraft is "intuitively interesting and contagiously fun, with an unparalleled scope for creativity and memorable experiences". It has been regarded as having introduced millions of children to the digital world, insofar as its basic game mechanics are logically analogous to computer commands. IGN was disappointed about the troublesome steps needed to set up multiplayer servers, calling it a "hassle". Critics also said that visual glitches occur periodically. Despite its release out of beta in 2011, GameSpot said the game had an "unfinished feel", adding that some game elements seem "incomplete or thrown together in haste". A review of the alpha version, by Scott Munro of the Daily Record, called it "already something special" and urged readers to buy it. Jim Rossignol of Rock Paper Shotgun also recommended the alpha of the game, calling it "a kind of generative 8-bit Lego Stalker". On 17 September 2010, gaming webcomic Penny Arcade began a series of comics and news posts about the addictiveness of the game. The Xbox 360 version was generally received positively by critics, but did not receive as much praise as the PC version.
Although reviewers were disappointed by the lack of features such as mod support and content from the PC version, they praised the port's addition of a tutorial, in-game tips, and crafting recipes, saying that these made the game more user-friendly. The Xbox One Edition was one of the best-received ports, being praised for its relatively large worlds. The PlayStation 3 Edition also received generally favorable reviews, being compared to the Xbox 360 Edition and praised for its well-adapted controls. The PlayStation 4 edition was the best-received port to date, being praised for having worlds 36 times larger than those of the PlayStation 3 edition and described as nearly identical to the Xbox One edition. The PlayStation Vita Edition received generally positive reviews from critics but was noted for its technical limitations. The Wii U version received generally positive reviews from critics but was noted for a lack of GamePad integration. The 3DS version received mixed reviews, being criticized for its high price, technical issues, and lack of cross-platform play. The Nintendo Switch Edition received fairly positive reviews from critics, being praised, like other modern ports, for its relatively larger worlds. Minecraft: Pocket Edition initially received mixed reviews from critics. Although reviewers appreciated the game's intuitive controls, they were disappointed by the lack of content. The inability to collect resources and craft items, as well as the limited types of blocks and lack of hostile mobs, were especially criticized. After updates added more content, Pocket Edition started receiving more positive reviews. Reviewers complimented the controls and the graphics, but still noted a lack of content. Minecraft surpassed a million purchases less than a month after entering its beta phase in early 2011. At the same time, the game had no publisher backing and had never been commercially advertised except through word of mouth and various unpaid references in popular media such as the Penny Arcade webcomic. By April 2011, Persson estimated that Minecraft had made €23 million (US$33 million) in revenue, with 800,000 sales of the alpha version of the game and over 1 million sales of the beta version. In November 2011, prior to the game's full release, Minecraft beta surpassed 16 million registered users and 4 million purchases. By March 2012, Minecraft had become the 6th best-selling PC game of all time. As of 10 October 2014, the game had sold 17 million copies on PC, becoming the best-selling PC game of all time. On 25 February 2014, the game reached 100 million registered users. By May 2019, 180 million copies had been sold across all platforms, making it the single best-selling video game of all time. The free-to-play Minecraft China version had over 700 million registered accounts by September 2023. By 2023, the game had sold over 300 million copies. As of April 2025, Minecraft has sold over 350 million copies. The Xbox 360 version of Minecraft became profitable within the first day of the game's release in 2012, when the game broke the Xbox Live sales records with 400,000 players online. Within a week of being on the Xbox Live Marketplace, Minecraft sold a million copies. GameSpot announced in December 2012 that Minecraft had sold over 4.48 million copies since the game debuted on Xbox Live Arcade in May 2012. In 2012, Minecraft was the most purchased title on Xbox Live Arcade; it was also the fourth most played title on Xbox Live based on average unique users per day.
As of 4 April 2014, the Xbox 360 version had sold 12 million copies. In addition, Minecraft: Pocket Edition had sold 21 million copies. The PlayStation 3 Edition sold one million copies in five weeks. The release of the game's PlayStation Vita version boosted Minecraft sales by 79%, outselling both the PS3 and PS4 debut releases and becoming the largest Minecraft launch on a PlayStation console. The PS Vita version sold 100,000 digital copies in Japan within the first two months of release, according to an announcement by SCE Japan Asia. By January 2015, 500,000 digital copies of Minecraft had been sold in Japan across all PlayStation platforms, with a surge in primary school children purchasing the PS Vita version. As of 2022, the Vita version has sold over 1.65 million physical copies in Japan, making it the best-selling Vita game in the country. Minecraft helped improve Microsoft's total first-party revenue by $63 million for the 2015 second quarter. The game, including all of its versions, had over 112 million monthly active players by September 2019. On its 11th anniversary in May 2020, the company announced that Minecraft had reached over 200 million copies sold across platforms with over 126 million monthly active players. By April 2021, the number of active monthly users had climbed to 140 million. In July 2010, PC Gamer listed Minecraft as the fourth-best game to play at work. In December of that year, Good Game selected Minecraft as their choice for Best Downloadable Game of 2010, Gamasutra named it the eighth best game of the year as well as the eighth best indie game of the year, and Rock, Paper, Shotgun named it the "game of the year". Indie DB awarded the game the 2010 Indie of the Year award as chosen by voters, in addition to two out of five Editor's Choice awards for Most Innovative and Best Singleplayer Indie. It was also awarded Game of the Year by PC Gamer UK. The game was nominated for the Seumas McNally Grand Prize, Technical Excellence, and Excellence in Design awards at the March 2011 Independent Games Festival and won the Grand Prize and the community-voted Audience Award. At the Game Developers Choice Awards 2011, Minecraft won awards in the categories for Best Debut Game, Best Downloadable Game and Innovation Award, winning every award for which it was nominated. It also won GameCity's video game arts award. On 5 May 2011, Minecraft was selected as one of the 80 games that would be displayed at the Smithsonian American Art Museum as part of The Art of Video Games exhibit that opened on 16 March 2012. At the 2011 Spike Video Game Awards, Minecraft won the award for Best Independent Game and was nominated in the Best PC Game category. In 2012, at the British Academy Video Games Awards, Minecraft was nominated in the GAME Award of 2011 category and Persson received The Special Award. In 2012, Minecraft XBLA was awarded a Golden Joystick Award in the Best Downloadable Game category, and a TIGA Games Industry Award in the Best Arcade Game category. In 2013, it was nominated as the family game of the year at the British Academy Video Games Awards. During the 16th Annual D.I.C.E. Awards, the Academy of Interactive Arts & Sciences nominated the Xbox 360 version of Minecraft for "Strategy/Simulation Game of the Year". Minecraft Console Edition won the award for TIGA Game Of The Year in 2014. In 2015, the game placed 6th on USgamer's The 15 Best Games Since 2000 list. In 2016, Minecraft placed 6th on Time's The 50 Best Video Games of All Time list.
Minecraft was nominated for the 2013 Kids' Choice Awards for Favorite App, but lost to Temple Run. It was nominated for the 2014 Kids' Choice Awards for Favorite Video Game, but lost to Just Dance 2014. The game later won the award for the Most Addicting Game at the 2015 Kids' Choice Awards. In addition, the Java Edition was nominated for "Favorite Video Game" at the 2018 Kids' Choice Awards, while the game itself won the "Still Playing" award at the 2019 Golden Joystick Awards, as well as the "Favorite Video Game" award at the 2020 Kids' Choice Awards. Minecraft also won "Stream Game of the Year" at the inaugural Streamer Awards in 2021. The game later garnered a Nickelodeon Kids' Choice Award nomination for Favorite Video Game in 2021, and won the same category in 2022 and 2023. At the Golden Joystick Awards 2025, it won the Still Playing Award for PC and Console. Minecraft has been subject to several notable controversies. In June 2014, Mojang announced that it would begin enforcing the portion of Minecraft's end-user license agreement (EULA) which prohibits servers from giving in-game advantages to players in exchange for donations or payments. Spokesperson Owen Hill stated that servers could still require players to pay a fee to access the server and could sell in-game cosmetic items. The change was supported by Persson, citing emails he received from parents of children who had spent hundreds of dollars on servers. The Minecraft community and server owners protested, arguing that the EULA's terms were broader than Mojang claimed, that the crackdown would force smaller servers to shut down for financial reasons, and that Mojang was suppressing competition for its own Minecraft Realms subscription service. The controversy contributed to Persson's decision to sell Mojang. In 2020, Mojang announced an eventual change to the Java Edition to require a login from a Microsoft account rather than a Mojang account, the latter of which would be sunsetted. This also required Java Edition players to create Xbox network Gamertags. Mojang defended the move to Microsoft accounts by saying that improved security could be offered, including two-factor authentication, blocking cyberbullies in chat, and improved parental controls. The community responded with intense backlash, citing various technical difficulties encountered in the process and the fact that account migration would be mandatory, even for those who do not play on servers. As of 10 March 2022, Microsoft required that all players migrate in order to maintain access to the Java Edition of Minecraft. Mojang announced a deadline of 19 September 2023 for account migration, after which all legacy Mojang accounts became inaccessible and unable to be migrated. In June 2022, Mojang added a player-reporting feature in Java Edition. Players could report other players on multiplayer servers for sending messages prohibited by the Xbox Live Code of Conduct; report categories included profane language, substance abuse, hate speech, threats of violence, and nudity. If a player was found to be in violation of Xbox Community Standards, they would be banned from all servers for a specific period of time or permanently. The update containing the report feature (1.19.1) was released on 27 July 2022. Mojang received substantial backlash and protest from community members, one of the most common complaints being that banned players would be forbidden from joining any server, even private ones.
Others took issue with what they saw as Microsoft increasing control over its player base and exercising censorship, leading some to start the hashtag #saveminecraft and dub the version "1.19.84", a reference to the dystopian novel Nineteen Eighty-Four. The "Mob Vote" was an online event organized by Mojang in which the Minecraft community voted between three original mob concepts; initially, the winning mob was to be implemented in a future update while the losing mobs were scrapped, though after the first Mob Vote this was changed so that losing mobs could still come to the game in the future. The first Mob Vote was held during Minecon Earth 2017 and became an annual event starting with Minecraft Live 2020. The Mob Vote was often criticized for forcing players to choose one mob instead of implementing all three, causing divisions and flaming within the community, and potentially allowing internet bots and Minecraft content creators with large fanbases to conduct vote brigading. The Mob Vote was also blamed for a perceived lack of new content added to Minecraft since Microsoft's acquisition of Mojang in 2014. The 2023 Mob Vote featured three passive mobs—the crab, the penguin, and the armadillo—with voting scheduled to start on 13 October. In response, a Change.org petition was created on 6 October, demanding that Mojang eliminate the Mob Vote and instead implement all three mobs going forward. The petition received approximately 445,000 signatures by 13 October and was joined by calls to boycott the Mob Vote, as well as a partially tongue-in-cheek "revolutionary" propaganda campaign in which sympathizers created anti-Mojang and pro-boycott posters in the vein of real 20th-century propaganda posters. Mojang did not release an official response to the boycott, and the Mob Vote otherwise proceeded normally, with the armadillo winning the vote. In September 2024, as part of a blog post detailing their future plans for Minecraft's development, Mojang announced the Mob Vote would be retired. Cultural impact In September 2019, The Guardian classified Minecraft as the best video game of the 21st century to date, and in November 2019, Polygon called it the "most important game of the decade" in its 2010s "decade in review". In June 2020, Minecraft was inducted into the World Video Game Hall of Fame. Minecraft is recognized as one of the first successful games to use an early access model, drawing in sales prior to its full release to help fund development. As Minecraft helped to bolster indie game development in the early 2010s, it also helped to popularize the use of the early access model in indie game development. Social media sites such as YouTube, Facebook, and Reddit have played a significant role in popularizing Minecraft. Research conducted by the Annenberg School for Communication at the University of Pennsylvania showed that one-third of Minecraft players learned about the game via Internet videos. In 2010, Minecraft-related videos began to gain influence on YouTube, often made by commentators. The videos usually contain screen-capture footage of the game and voice-overs. Common coverage in the videos includes creations made by players, walkthroughs of various tasks, and parodies of works in popular culture. By May 2012, over four million Minecraft-related YouTube videos had been uploaded. The game would go on to be a prominent fixture within YouTube's gaming scene during the entire 2010s; in 2014, it was the second-most searched term on the entire platform.
By 2018, it was still YouTube's biggest game globally. Some popular commentators have received employment at Machinima, a now-defunct gaming video company that owned a highly watched entertainment channel on YouTube. The Yogscast is a British company that regularly produces Minecraft videos; their YouTube channel has attained billions of views, and their panel at Minecon 2011 had the highest attendance. Another well-known YouTube personality is Jordan Maron, known online as CaptainSparklez, who has also created many Minecraft music parodies, including "Revenge", a parody of Usher's "DJ Got Us Fallin' in Love". Minecraft's popularity on YouTube was described by Polygon as quietly dominant, although in 2019, thanks in part to PewDiePie's playthroughs of the game, Minecraft experienced a visible uptick in popularity on the platform. Longer-running series include Far Lands or Bust, dedicated to reaching the obsolete "Far Lands" glitch on foot in an older version of the game. On 14 December 2021, YouTube announced that the total number of Minecraft-related views on the website had exceeded one trillion. Minecraft has been referenced by other video games, such as Torchlight II, Team Fortress 2, Borderlands 2, Choplifter HD, Super Meat Boy, The Elder Scrolls V: Skyrim, The Binding of Isaac, The Stanley Parable, and FTL: Faster Than Light. Minecraft is officially represented in downloadable content for the crossover fighter Super Smash Bros. Ultimate, with Steve as a playable character with a moveset including references to building, crafting, and redstone, alongside an Overworld-themed stage. It was also referenced by electronic music artist Deadmau5 in his performances. The game is also referenced heavily in "Informative Murder Porn", the second episode of the seventeenth season of the animated television series South Park. In 2025, A Minecraft Movie was released. It made $313 million at the box office in its first week, a record-breaking opening for a video game adaptation. Minecraft has been noted as a cultural touchstone for Generation Z, as many of the generation's members played the game at a young age. The possible applications of Minecraft have been discussed extensively, especially in the fields of computer-aided design (CAD) and education. In a panel at Minecon 2011, a Swedish developer discussed the possibility of using the game to redesign public buildings and parks, stating that rendering using Minecraft was much more user-friendly for the community, making it easier to envision the functionality of new buildings and parks. In 2012, a member of the Human Dynamics group at the MIT Media Lab, Cody Sumter, said: "Notch hasn't just built a game. He's tricked 40 million people into learning to use a CAD program." Various software has been developed to allow virtual designs to be printed using professional 3D printers or personal printers such as MakerBot and RepRap. In September 2012, Mojang began the Block by Block project in cooperation with UN Habitat to create real-world environments in Minecraft. The project allows young people who live in those environments to participate in designing the changes they would like to see. Using Minecraft, the community has helped reconstruct the areas of concern, and citizens are invited to enter the Minecraft servers and modify their own neighborhood.
Carl Manneh, Mojang's managing director, called the game "the perfect tool to facilitate this process", adding: "The three-year partnership will support UN-Habitat's Sustainable Urban Development Network to upgrade 300 public spaces by 2016." Mojang signed the Minecraft building community FyreUK to help render the environments in Minecraft. The first pilot project began in Kibera, one of Nairobi's informal settlements, and is in the planning phase. The Block by Block project is based on an earlier initiative started in October 2011, Mina Kvarter (My Block), which gave young people in Swedish communities a tool to visualize how they wanted to change their part of town. According to Manneh, the project was a helpful way to visualize urban planning ideas without necessarily having training in architecture. The ideas presented by the citizens were a template for political decisions. In April 2014, the Danish Geodata Agency recreated all of Denmark at full scale in Minecraft, based on its own geodata. This was possible because Denmark is one of the flattest countries, its highest point lying at 171 meters (the 30th-smallest elevation span of any country), while the build limit in default Minecraft was around 192 meters above in-game sea level when the project was completed. Taking advantage of the game's accessibility where other websites are censored, the non-governmental organization Reporters Without Borders has used an open Minecraft server to create the Uncensored Library, a repository within the game of journalism by authors from countries (including Egypt, Mexico, Russia, Saudi Arabia and Vietnam) who have been censored and arrested, such as Jamal Khashoggi. The neoclassical virtual building was created over about 250 hours by an international team of 24 people. Despite its unpredictable nature, Minecraft speedrunning, in which players time themselves from spawning into a new world to reaching The End and defeating the Ender Dragon boss, is popular. Some speedrunners use a combination of mods, external programs, and debug menus, while other runners play the game in a more vanilla or more consistency-oriented way. Minecraft has been used in educational settings through initiatives such as MinecraftEdu, founded in 2011 to make the game affordable and accessible for schools in collaboration with Mojang. MinecraftEdu provided features allowing teachers to monitor student progress, including screenshot submissions as evidence of lesson completion, and by 2012 reported that approximately 250,000 students worldwide had access to the platform. Mojang also developed Minecraft: Education Edition with pre-built lesson plans for up to 30 students in a closed environment. Educators have used Minecraft to teach subjects such as history, language arts, and science through custom-built environments, including reconstructions of historical landmarks and large-scale models of biological structures such as animal cells. The introduction of redstone blocks enabled the construction of functional virtual machines such as a hard drive and an 8-bit computer, and mods have been created to use these mechanics for teaching programming (a small sketch of redstone-style logic follows below). In 2014, the British Museum announced a project to reproduce its building and exhibits in Minecraft in collaboration with the public. Microsoft and Code.org have offered Minecraft-based tutorials and activities designed to teach programming, reporting by 2018 that more than 85 million children had used their resources.
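The redstone computing mentioned above reduces to Boolean logic: a redstone torch inverts the signal of the block it sits on, and gates are built by composing inversions. Below is a minimal illustrative sketch in Python of how such gates combine into the half adder at the heart of in-game 8-bit computers; the function names are a hypothetical model of the game's behavior, not any Minecraft API or mod interface.

```python
# Minimal sketch of redstone-style logic (illustrative model only,
# not a Minecraft API): a torch outputs the inverse of its input.

def torch(powered: bool) -> bool:
    """A redstone torch turns off when its supporting block is powered."""
    return not powered

def or_gate(a: bool, b: bool) -> bool:
    # Two dust lines merging into one behave like OR.
    return a or b

def and_gate(a: bool, b: bool) -> bool:
    # Three torches implement AND via De Morgan: NOT(NOT a OR NOT b).
    return torch(or_gate(torch(a), torch(b)))

def xor_gate(a: bool, b: bool) -> bool:
    # XOR = (a OR b) AND NOT (a AND b).
    return and_gate(or_gate(a, b), torch(and_gate(a, b)))

def half_adder(a: bool, b: bool) -> tuple[bool, bool]:
    """One bit of the in-game 8-bit adders: returns (sum, carry)."""
    return xor_gate(a, b), and_gate(a, b)

if __name__ == "__main__":
    for a in (False, True):
        for b in (False, True):
            print(int(a), int(b), "->", half_adder(a, b))
```

Chaining such adders with carry propagation gives a ripple-carry adder, which is how the in-game 8-bit computers are typically built.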
In 2025, the Musée de Minéralogie in Paris held a temporary exhibition titled "Minerals in Minecraft." Following Minecraft's initial surge in popularity in 2010, other video games were criticised for their similarities to Minecraft, and some were described as "clones", whether because of direct inspiration from Minecraft or a superficial resemblance. Examples include Ace of Spades, CastleMiner, CraftWorld, FortressCraft, Terraria, BlockWorld 3D, Total Miner, and Luanti (formerly Minetest). David Frampton, designer of The Blockheads, reported that one failure of his 2D game was the "low resolution pixel art" that too closely resembled the art in Minecraft, which resulted in "some resistance" from fans. A homebrew adaptation of the alpha version of Minecraft for the Nintendo DS, titled DScraft, has been released; it has been noted for its similarity to the original game considering the technical limitations of the system. In response to Microsoft's acquisition of Mojang and the Minecraft IP, various developers announced further clone titles developed specifically for Nintendo's consoles, which were at the time the only major platforms without an official Minecraft release. These clone titles include UCraft (Nexis Games), Cube Life: Island Survival (Cypronia), Discovery (Noowanda), Battleminer (Wobbly Tooth Games), Cube Creator 3D (Big John Games), and Stone Shire (Finger Gun Games). The fears of fans proved unfounded, however, as official Minecraft releases on Nintendo consoles eventually resumed. Markus Persson made another similar game, Minicraft, for a Ludum Dare competition in 2011. In 2025, Persson announced through a poll on his X account that he was considering developing a spiritual successor to Minecraft. He later clarified that he was "100% serious" and that he had "basically announced Minecraft 2". Within days, however, Persson cancelled the plans after speaking to his team. In November 2024, the artificial intelligence companies Decart and Etched released Oasis, an artificially generated version of Minecraft, as a proof of concept. Every in-game element is AI-generated in real time, and the model does not store world data, leading to "hallucinations" such as items and blocks appearing that were not there before. In January 2026, the indie game developer Unomelon announced that their voxel sandbox game Allumeria would be playable in Steam Next Fest that year. On 10 February, Mojang issued a DMCA takedown of Allumeria on Steam through Valve, alleging that the game infringed Minecraft's copyright. Some reports suggested that the takedown may have used an automated AI copyright-claiming service. The DMCA notice was later withdrawn. Minecon was an annual official fan convention dedicated to Minecraft. The first full Minecon was held in November 2011 at the Mandalay Bay Hotel and Casino in Las Vegas. The event included the official launch of Minecraft; keynote speeches, including one by Persson; building and costume contests; Minecraft-themed breakout classes; exhibits by leading gaming and Minecraft-related companies; commemorative merchandise; and autograph and picture times with Mojang employees and well-known contributors from the Minecraft community. In 2016, Minecon was held in person for the last time, with the following years instead featuring annual "Minecon Earth" livestreams on minecraft.net and YouTube. These livestreams, later rebranded as "Minecraft Live", included the mob and biome votes and announcements of new game updates.
In 2025, "Minecraft Live" became a biannual event as part of Minecraft's changing update schedule.[citation needed] Notes References External links |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Wool] | [TOKENS: 5183] |
Contents Wool Wool is the textile fiber obtained from sheep and other mammals, especially goats, rabbits, and camelids. The term may also refer to inorganic materials, such as mineral wool and glass wool, that have some properties similar to animal wool. Wool is an animal fiber and consists of protein together with a small percentage of lipids. This makes it chemically quite distinct from cotton and other plant fibers, which are mainly cellulose, and akin to silk, another animal fiber composed primarily of protein. Like silk, the proteinaceous nature of wool requires special detergents when being cleaned, because laundry detergents made to clean cellulose-based fabrics often contain stain-removing enzymes that digest protein. Characteristics Wool is produced by follicles, which are small cells located in the skin. These follicles are located in the upper layer of the skin, called the epidermis, and push down into the second skin layer, called the dermis, as the wool fibers grow. Follicles can be classed as either primary or secondary follicles. Primary follicles produce three types of fiber: kemp, medullated fibers, and true wool fibers. Secondary follicles only produce true wool fibers. Medullated fibers share nearly identical characteristics with hair and are long but lack crimp and elasticity. Kemp fibers are very coarse and shed out. Wool's crimp refers to the strong natural wave present in each wool fiber as it grows on the animal. Wool's crimp, and to a lesser degree its scales, make it easier to spin the fleece by helping the individual fibers attach to each other, so they stay together. Because of the crimp, wool fabrics have greater bulk than other textiles, and they hold air, which causes the fabric to retain heat. Wool has a high specific thermal resistance, so it impedes heat transfer in general. This effect has benefited desert peoples, as Bedouins and Tuaregs use wool clothes for insulation. The felting of wool occurs upon hammering or other mechanical agitation, as the microscopic barbs on the surface of wool fibers hook together. Felting generally falls under two main areas, dry felting and wet felting. Wet felting occurs when water and a lubricant (especially an alkali such as soap) are applied to the wool, which is then agitated until the fibers mix and bond together. Temperature shock while damp or wet accentuates the felting process. Some natural felting can occur on the animal's back. Wool has several qualities that distinguish it from hair or fur: it is crimped and elastic. The amount of crimp corresponds to the fineness of the wool fibers. A fine wool like Merino may have up to 40 crimps per centimetre (100 crimps per inch), while a coarser wool like karakul may have less than one crimp per centimetre (one or two crimps per inch). In contrast, hair has little if any scale and no crimp, and little ability to bind into yarn. On sheep, the hair part of the fleece is called kemp. The relative amounts of kemp to wool vary from breed to breed and make some fleeces more desirable for spinning, felting, or carding into batts for quilts or other insulating products, including the famous tweed cloth of Scotland. Wool fibers readily absorb moisture, but are not hollow. Wool can absorb almost one-third of its own weight in water. Wool absorbs sound, like many other fabrics. It is generally a creamy white color, although some breeds of sheep produce natural colors, such as black, brown, silver, and random mixes.
Wool ignites at a higher temperature than cotton and some synthetic fibers. It has a lower rate of flame spread, a lower rate of heat release, a lower heat of combustion, and does not melt or drip; it forms a char that is insulating and self-extinguishing, and it contributes less to toxic gases and smoke than other flooring products when used in carpets. Wool carpets are specified for high-safety environments, such as trains and aircraft. Wool is usually specified for garments for firefighters, soldiers, and others in occupations where they are exposed to the likelihood of fire. Wool causes an allergic reaction in some people. Processing Sheep shearing is the process in which a worker (a shearer) cuts off the woollen fleece of a sheep. After shearing, wool-classers separate the wool into four main categories. The quality of fleeces is determined by a technique known as wool classing, whereby a qualified person, called a wool classer, groups wools of similar grading together to maximize the return for the farmer or sheep owner. In Australia, before being auctioned, all Merino fleece wool is objectively measured for average diameter (micron), yield (including the amount of vegetable matter), staple length, staple strength, and sometimes color and comfort factor. Wool straight off a sheep is known as "raw wool", "greasy wool" or "wool in the grease". This wool contains a high level of valuable lanolin, as well as the sheep's dead skin and sweat residue, and generally also contains pesticides and vegetable matter from the animal's environment. Before the wool can be used for commercial purposes, it must be scoured, a process of cleaning the greasy wool. Scouring may be as simple as a bath in warm water or as complicated as an industrial process using detergent and alkali in specialized equipment. In north-west England, special potash pits were constructed to produce potash used in the manufacture of a soft soap for scouring locally produced white wool. Vegetable matter in commercial wool is often removed by chemical carbonization. In less-processed wools, vegetable matter may be removed by hand and some of the lanolin left intact through the use of gentler detergents. This semigrease wool can be worked into yarn and knitted into particularly water-resistant mittens or sweaters, such as those of the Aran Island fishermen. Lanolin removed from wool is widely used in cosmetic products such as hand creams. Fineness and yield Raw wool has many impurities: vegetable matter, sand, dirt, and yolk, which is a mixture of suint (sweat), grease, urine stains, and dung locks. The sheep's body yields many types of wool with differing strengths, thicknesses, staple lengths, and impurities. The raw (greasy) wool is processed into 'top'. 'Worsted top' requires strong, straight, and parallel fibres. The quality of wool is determined by its fiber diameter, crimp, yield, color, and staple strength. Fiber diameter is the single most important wool characteristic determining quality and price. Merino wool is typically 90–115 mm (3.5–4.5 in) in length and is very fine (between 12 and 24 microns). The finest and most valuable wool comes from Merino hoggets. Wool taken from sheep produced for meat is typically coarser and has fibers 40–150 mm (1.5–6 in) in length. Damage or breaks in the wool can occur if the sheep is stressed while it is growing its fleece, resulting in a thin spot where the fleece is likely to break.
Wool is also separated into grades based on the measurement of the wool's diameter in microns and also its style. These grades may vary depending on the breed or purpose of the wool. For example, any wool finer than 25 microns can be used for garments, while coarser grades are used for outerwear or rugs; a rough grading rule of this kind is sketched in the code below. The finer the wool, the softer it is, while coarser grades are more durable and less prone to pilling. The finest Australian and New Zealand Merino wools are known as 1PP, the industry benchmark of excellence for Merino wool 16.9 microns and finer. This style represents the top level of fineness, character, color, and style, as determined by a series of parameters in accordance with the original dictates of British wool as applied by the Australian Wool Exchange (AWEX) Council. Only a few dozen of the millions of bales auctioned every year can be classified and marked 1PP. In the United States, three classifications of wool are named in the Wool Products Labeling Act of 1939. Wool is "the fiber from the fleece of the sheep or lamb or hair of the Angora or Cashmere goat (and may include the so-called specialty fibers from the hair of the camel, alpaca, llama, and vicuna) which has never been reclaimed from any woven or felted wool product". "Virgin wool" and "new wool" are also used to refer to such never-used wool. There are two categories of recycled wool (also called reclaimed or shoddy wool). "Reprocessed wool" identifies "wool which has been woven or felted into a wool product and subsequently reduced to a fibrous state without having been used by the ultimate consumer". "Reused wool" refers to such wool that has been used by the ultimate consumer. History Wild sheep were more hairy than woolly. Although sheep were domesticated some 9,000 to 11,000 years ago, archaeological evidence from statuary found at sites in Iran suggests selection for woolly sheep may have begun around 6000 BC, with the earliest known woven wool garments dated only two to three thousand years later. Woolly sheep were introduced into Europe from the Near East in the early part of the 4th millennium BC. The oldest known European wool textile, c. 1500 BC, was preserved in a Danish bog. Prior to the invention of shears, probably in the Iron Age, wool was plucked out by hand or with bronze combs. In Roman times, wool, linen, and leather clothed the European population; cotton from India was a curiosity of which only naturalists had heard, and silks, imported along the Silk Road from China, were extravagant luxury goods. Pliny the Elder records in his Natural History that the reputation for producing the finest wool was enjoyed by Tarentum, where selective breeding had produced sheep with superior fleeces, but which required special care. In medieval times, as trade connections expanded, the Champagne fairs revolved around the production of wool cloth in small centers such as Provins. The network developed by the annual fairs meant that the woolens of Provins might find their way to Naples, Sicily, Cyprus, Mallorca, Spain, and even Constantinople. The wool trade developed into a serious undertaking, a generator of capital. In the 13th century, the wool trade became the economic engine of the Low Countries and central Italy. By the end of the 14th century, Italy predominated. The Florentine wool guild, the Arte della Lana, sent imported English wool to the San Martino convent for processing. Italian wool from Abruzzo and Spanish merino wools were processed at Garbo workshops.
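As a toy illustration of the micron-based grading just described, the Python sketch below maps a fleece's mean fiber diameter to a rough grade. The grade names and cut-offs are assumptions drawn only from the thresholds in the text (garment wool finer than 25 microns; 1PP candidacy at 16.9 microns and finer), not an official AWEX grading table.

```python
# Hypothetical wool-grading sketch; thresholds follow the prose above,
# not any official standard.

def classify_wool(mean_micron: float) -> str:
    """Map a fleece's mean fiber diameter (microns) to a rough grade."""
    if mean_micron <= 16.9:
        # 1PP also requires top style, character, and color, so fineness
        # alone only makes a fleece a candidate.
        return "superfine Merino (potential 1PP candidate)"
    if mean_micron < 25.0:
        return "fine: suitable for garments"
    return "coarse: outerwear, rugs, carpets"

for sample in (11.8, 16.5, 21.0, 33.0):
    print(f"{sample:5.1f} micron -> {classify_wool(sample)}")
```

In practice, classers also weigh yield, staple length and strength, and vegetable-matter content, as described above; diameter is only the single most important factor.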
Abruzzo wool had once been the most accessible for the Florentine guild, until improved relations with merchants in Iberia made merino wool more available. In the 15th century, Pisa established a factory "which would export its cloths to the Crimea in exchange for Russian furs". By the 16th century, Italian wool exports to the Levant had declined, eventually replaced by silk production. The value of English raw wool exports was rivaled only by the 15th-century sheepwalks of Castile, and it was a significant source of income to the English crown, which in 1275 had imposed an export tax on wool called the "Great Custom". The importance of wool to the English economy can be seen in the fact that since the 14th century, the presiding officer of the House of Lords has sat on the "Woolsack", a chair stuffed with wool. Economies of scale were instituted in the Cistercian houses, which had accumulated great tracts of land during the 12th and early 13th centuries, when land prices were low and labor still scarce. Raw wool was baled and shipped from North Sea ports to the textile cities of Flanders, notably Ypres and Ghent, where it was dyed and worked up as cloth. At the time of the Black Death (1346–1353), English textile industries consumed about 10% of English wool production. The English textile trade grew during the 15th century, to the point where the export of wool was discouraged. Over the centuries, various British laws controlled the wool trade or required the use of wool even in burials. The smuggling of wool out of the country, known as owling, was at one time punishable by the cutting off of a hand. After the Restoration of 1660, fine English woolens began to compete with silks in the international market, partly aided by the Navigation Acts; in 1699, the English Crown forbade its American colonies to trade wool with anyone but England herself. A great deal of the value of woollen textiles was in the dyeing and finishing of the woven product. In each of the centers of the textile trade, the manufacturing process came to be subdivided into a collection of trades, overseen by an entrepreneur in a system called by the English the "putting-out" system, or "cottage industry", and the Verlagssystem by the Germans. In this system of producing wool cloth, once perpetuated in the production of Harris tweeds, the entrepreneur provides the raw materials and an advance, the remainder being paid upon delivery of the product. Written contracts bound the artisans to specified terms. Fernand Braudel traces the appearance of the system to the 13th-century economic boom, quoting a document of 1275. The system effectively bypassed the guilds' restrictions. Before the flowering of the Renaissance, the Medici and other great banking houses of Florence had built their wealth and banking system on their textile industry based on wool, overseen by the Arte della Lana, the wool guild: wool-textile interests guided Florentine policies. Francesco Datini, the "merchant of Prato", established in 1383 an Arte della Lana for that small Tuscan city. The sheepwalks of Castile were controlled by the Mesta, a union of sheep-owners. They shaped the landscape and the fortunes of the meseta that lies in the heart of the Iberian peninsula; in the 16th century, a unified Spain allowed export of merino lambs only with royal permission. The German wool market, based on sheep of Spanish origin, did not overtake British wool until comparatively late.
Later, the Industrial Revolution introduced mass-production technology into wool and wool-cloth manufacturing. Australia's colonial economy came to depend on sheep-raising, and the Australian wool trade eventually overtook Germany's by 1845, furnishing wool for Bradford, which developed as the heart of industrialized woolens production. Due to decreasing demand for wool amid increased use of synthetic fibers, wool production is much lower than it was in the past. The collapse in the price of wool began in late 1966 with a 40% drop; with occasional interruptions, the price has tended downward since. The result has been sharply reduced production and the movement of resources into the production of other commodities; in the case of sheep growers, into the production of meat. Superwash wool (or washable wool) technology first appeared in the early 1970s, producing wool that has been specially treated so it is machine washable and may be tumble-dried. This wool is produced using an acid bath that removes the "scales" from the fiber, or by coating the fiber with a polymer that prevents the scales from attaching to each other and causing shrinkage. This process results in a fiber that keeps its longevity and durability better than synthetic materials, while retaining garment shape. In December 2004, a bale of the then world's finest wool, averaging 11.8 microns, sold for AU$3,000 per kilogram at auction in Melbourne. This fleece wool tested with an average yield of 74.5%, a staple length of 68 mm (2.7 in), and a strength of 40 newtons per kilotex. The result was AU$279,000 for the bale. The finest bale of wool ever auctioned sold for a seasonal record of AU$2,690 per kilogram in June 2008. This bale was produced by the Hillcreston Pinehill Partnership and measured 11.6 microns, with a 72.1% yield and a strength of 43 newtons per kilotex. The bale realized $247,480 and was exported to India. In 2007, a new wool suit was developed and sold in Japan that can be washed in the shower and dries ready to wear within hours, with no ironing required. The suit, developed using Australian Merino wool, enables woven products made from wool, such as suits, trousers, and skirts, to be cleaned using a domestic shower. In December 2006, the General Assembly of the United Nations proclaimed 2009 the International Year of Natural Fibres, so as to raise the profile of wool and other natural fibers. Production Global wool production is about 2 million tonnes (2.2 million short tons) per year, of which 60% goes into apparel. Wool comprises around 3% of the global textile market, but its value is higher owing to dyeing and other modifications of the material. Australia is a leading producer of wool, mostly from Merino sheep, though it has been eclipsed by China in terms of total weight. As of 2016, New Zealand is the third-largest producer of wool and the largest producer of crossbred wool. Breeds such as Lincoln, Romney, Drysdale, and Elliotdale produce coarser fibers, and wool from these sheep is usually used for making carpets. In the United States, Texas, New Mexico, and Colorado have large commercial sheep flocks whose mainstay is the Rambouillet (or French Merino). There is also a thriving home-flock contingent of small-scale farmers who raise small hobby flocks of specialty sheep for the hand-spinning market. These small-scale farmers offer a wide selection of fleece. Organic wool has gained in popularity.
This wool is limited in supply, and much of it comes from New Zealand and Australia. Organic wool has become easier to find in clothing and other products, but these products often carry a higher price. Wool is also environmentally preferable to petroleum-based nylon or polypropylene as a material for carpets, particularly when combined with a natural binding and formaldehyde-free glues. Animal rights groups have noted issues with the production of wool, such as mulesing. Marketing About 85% of wool sold in Australia is sold by open-cry auction. The British Wool Marketing Board operates a central marketing system for UK fleece wool with the aim of achieving the best possible net returns for farmers. Less than half of New Zealand's wool is sold at auction, while around 45% of farmers sell wool directly to private buyers and end-users. United States sheep producers market wool through private or cooperative wool warehouses, but wool pools are common in many states. In some cases, wool is pooled in a local market area but sold through a wool warehouse. Wool offered with objective measurement test results is preferred. Imported apparel wool and carpet wool go directly to central markets, where they are handled by the large merchants and manufacturers. Yarn Shoddy, or recycled wool, is made by cutting or tearing apart existing wool fabric and respinning the resulting fibers. As this process makes the wool fibers shorter, the remanufactured fabric is inferior to the original. The recycled wool may be mixed with raw wool, wool noil, or another fiber such as cotton to increase the average fiber length. Such yarns are typically used as weft yarns with a cotton warp. This process was invented in the Heavy Woollen District of West Yorkshire and created a microeconomy in this area for many years. Worsted is a strong, long-staple, combed wool yarn with a hard surface. Woolen is a soft, short-staple, carded wool yarn typically used for knitting. In traditional weaving, woolen weft yarn (for softness and warmth) is frequently combined with a worsted warp yarn for strength on the loom. Uses In addition to clothing, wool has been used for blankets, suits, horse rugs, saddle cloths, carpeting, insulation, and upholstery. Dyed wool can be used to create other forms of art, such as wet and needle felting. Wool felt covers piano hammers, and it is used to absorb odors and noise in heavy machinery and stereo speakers. Ancient Greeks lined their helmets with felt, and Roman legionnaires used breastplates made of wool felt. Wool, like cotton, has also traditionally been used for cloth diapers. Wool fiber exteriors are hydrophobic (repel water) while the interior of the wool fiber is hygroscopic (attracts water); this makes a wool garment a suitable cover for a wet diaper by inhibiting wicking, so outer garments remain dry. Wool felted and treated with lanolin is water resistant, air permeable, and slightly antibacterial, so it resists the buildup of odor. Some modern cloth diapers use felted wool fabric for covers, and there are several modern commercial knitting patterns for wool diaper covers. Initial studies of woollen underwear have found that it prevented heat and sweat rashes because it absorbs moisture more readily than other fibers. As an animal protein, wool can be used as a soil fertilizer, being a slow-release source of nitrogen.
Researchers at the Royal Melbourne Institute of Technology school of fashion and textiles have found that a blend of wool and Kevlar, the synthetic fiber widely used in body armor, was lighter and cheaper than Kevlar alone and worked better in damp conditions. Kevlar used alone loses about 20% of its effectiveness when wet, so it requires an expensive waterproofing process. Wool increased friction, allowing a vest with 28–30 layers of fabric to provide the same level of bullet resistance as 36 layers of Kevlar alone. Events A buyer of Merino wool, Ermenegildo Zegna, has offered awards for Australian wool producers. In 1963, the first Ermenegildo Zegna Perpetual Trophy was presented in Tasmania for growers of "Superfine skirted Merino fleece". In 1980, a national award, the Ermenegildo Zegna Trophy for Extrafine Wool Production, was launched. In 2004, this award became known as the Ermenegildo Zegna Unprotected Wool Trophy. In 1998, an Ermenegildo Zegna Protected Wool Trophy was launched for fleece from sheep coated for around nine months of the year. In 2002, the Ermenegildo Zegna Vellus Aureum Trophy was launched for wool that is 13.9 microns or finer. Wool from Australia, New Zealand, Argentina, and South Africa may enter, and a winner is named from each country. In April 2008, New Zealand won the Ermenegildo Zegna Vellus Aureum Trophy for the first time with a fleece that measured 10.8 microns. This contest awards the winning grower the fleece's weight in gold as a prize, hence the name. In 2010, an ultrafine 10-micron fleece from Windradeen, near Pyramul, New South Wales, won the Ermenegildo Zegna Vellus Aureum International Trophy. Since 2000, Loro Piana has awarded a cup for the world's finest bale of wool, one that produces just enough fabric for 50 tailor-made suits. The prize is awarded to the Australian or New Zealand wool grower who produces the year's finest bale. The New England Merino Field Days, which display local studs, wool, and sheep, are held in January of even-numbered years around the Walcha, New South Wales district. The Annual Wool Fashion Awards, which showcase the use of Merino wool by fashion designers, are hosted by the city of Armidale, New South Wales, in March each year. This event encourages young and established fashion designers to display their talents. During each May, Armidale hosts the annual New England Wool Expo to display wool fashions, handicrafts, demonstrations, shearing competitions, yard dog trials, and more. In July, the annual Australian Sheep and Wool Show is held in Bendigo, Victoria. This is the largest sheep and wool show in the world, with goats and alpacas as well as woolcraft competitions and displays, fleece competitions, sheepdog trials, shearing, and wool handling. The largest competition in the world for objectively measured fleeces is the Australian Fleece Competition, held annually at Bendigo. In 2008, 475 entries came from all states of Australia, with first and second prizes going to Northern Tablelands fleeces.
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Eastern_Christianity] | [TOKENS: 4776] |
Contents Eastern Christianity Eastern Christianity comprises Christian traditions and church families that originally developed during classical and late antiquity in the Eastern Mediterranean region or locations further east, south or north. The term does not describe a single communion or religious denomination. Eastern Christianity is a category distinguished from Western Christianity, which is composed of those Christian traditions and churches that originally developed further west. Major Eastern Christian bodies include the Eastern Orthodox Church and the Oriental Orthodox Churches, along with those groups descended from the historic Church of the East (also called the Assyrian Church), as well as the Eastern Catholic Churches (which are in communion with Rome while maintaining Eastern liturgies), and the Eastern Protestant churches. Most Eastern churches do not normally refer to themselves as "Eastern", with the exception of the Assyrian Church of the East and its offshoot, the Ancient Church of the East. The Eastern Orthodox are the largest body within Eastern Christianity with a worldwide population of 220 million, followed by the Oriental Orthodox at 60 million. The Eastern Catholic Churches consist of about 16–18 million and are a small minority within the Catholic Church. Eastern Protestant Christian churches do not form a single communion; churches like the Ukrainian Lutheran Church and the Malankara Mar Thoma Syrian Church have under a million members. The Assyrian Church of the East and the Ancient Church of the East, descendant churches of the Assyria-based Church of the East, have a combined membership of approximately 400,000. Historically, Eastern Christianity was centered in the Middle East and surrounding areas, where Christianity originated. However, after the Muslim conquest of the Levant in the 7th century, the term Eastern Church increasingly came to be used for the Greek Church centered in Constantinople, in contrast with the (Western) Latin Church, centered on Rome, which uses the Latin liturgical rites. The terms "Eastern" and "Western" in this regard originated with geographical divisions in Christianity mirroring the cultural divide between the Hellenistic East and the Latin West, and the political divide of 395 AD between the Western and Eastern Roman Empires. Since the Protestant Reformation of the 16th century, the term "Eastern Christianity" may be used in contrast with "Western Christianity", which contains not only the Latin Church but also forms of Protestantism and Independent Catholicism. Some Eastern churches have more in common historically and theologically with Western Christianity than with one another. Because the largest church in the East is the body currently known as the Eastern Orthodox Church, the term "Orthodox" is often used in a similar fashion to "Eastern", to refer to specific historical Christian communions. 
However, strictly speaking, most Christian denominations, whether Eastern or Western, regard themselves as "orthodox" (meaning "following correct beliefs") as well as "catholic" (meaning "universal"), and as sharing in the Four Marks of the Church listed in the Nicene-Constantinopolitan Creed (381 AD): "One, Holy, Catholic and Apostolic" (Greek: μία, ἁγία, καθολικὴ καὶ ἀποστολικὴ ἐκκλησία).[note 1] Eastern churches (excepting the non-liturgical dissenting bodies) utilize several liturgical rites: the Alexandrian Rite, the Armenian Rite, the Byzantine Rite, the East Syriac Rite (also known as the Persian or Assyrian Rite), and the West Syriac Rite (also called the Antiochian Rite). Families of churches Eastern Christians do not all share the same religious traditions, but many do share cultural traditions. Christianity in the East divided during its early centuries, both within and outside the Roman Empire, in disputes about Christology and fundamental theology, as well as along national divisions (Roman, Persian, etc.). It would be many centuries later that Western Christianity fully split from these traditions as its own communion. Major branches or families of Eastern Christianity, each holding a distinct theology and dogma, include the Eastern Orthodox Church, the Oriental Orthodox communion, the Eastern Catholic Churches, and the Assyrian Church of the East. In most Eastern churches, parish priests administer the sacrament of chrismation to infants after baptism, and priests are allowed to marry before ordination. The Eastern Catholic Churches recognize the authority of the Pope of Rome, but some of them, having originally been part of the Orthodox Church or the Oriental Orthodox churches, closely follow the traditions of Orthodoxy or Oriental Orthodoxy, including the tradition of allowing married men to become priests. The Eastern churches' differences from Western Christianity have to do with theology as well as liturgy, culture, language, and politics. For the non-Catholic Eastern churches, a definitive date for the commencement of schism cannot usually be given (see East–West Schism). The Church of the East declared independence from the churches of the Roman Empire at its general council in 424, before the Council of Ephesus in 431, and so had nothing to do with the theology declared at that council. Oriental Orthodoxy separated after the Council of Chalcedon in 451, but separate patriarchates did not form until 518 (in the case of the Syriac Patriarchate of Antioch) and 536 (in the case of the Coptic Patriarchate of Alexandria). Since the time of the historian Edward Gibbon, the split between the Church of Rome and the Orthodox Church has been conveniently dated to 1054, though the reality is more complex. This split is sometimes referred to as the Great Schism, but is now more usually called the East–West Schism. This final schism reflected a larger cultural and political division that had developed in Europe and Southwest Asia during the Middle Ages and coincided with Western Europe's re-emergence from the collapse of the Western Roman Empire. The Ukrainian Lutheran Church developed within Galicia around 1926, with its rites based on the Liturgy of Saint John Chrysostom rather than on the Western Formula Missae.
The Eastern Orthodox Church is a Christian body whose adherents are largely based in Western Asia (particularly Syria, Lebanon, Jordan, Israel, and Palestine) and Turkey, Eastern Europe, the Balkans and the Caucasus (Georgia), with a growing presence in the Western world. Eastern Orthodox Christians accept the decisions of the first seven ecumenical councils. Eastern Orthodox Christianity identifies itself as the original Christian church (see early centers of Christianity) founded by Christ and the Apostles, and traces its lineage back to the early Church through the process of apostolic succession and unchanged theology and practice. Characteristics of the Eastern Orthodox Church include the Byzantine Rite (shared with some Eastern Catholic Churches) and an emphasis on the continuation of Holy Tradition, which it holds to be apostolic in nature. The Eastern Orthodox Church is organized into self-governing jurisdictions along geographical, national, ethnic or linguistic lines. Eastern Orthodoxy is thus made up of fourteen or sixteen autocephalous bodies. Smaller churches are autonomous, and each has an autocephalous mother church. All Eastern Orthodox are united in doctrinal agreement with each other, though a few are not in communion at present, for non-doctrinal reasons. This is in contrast to the Catholic Church and its various churches, whose members are all in communion with each other as parts of a top-down hierarchy (see primus inter pares). The Eastern Orthodox reject the Filioque clause, in contrast to Catholics. The Catholic Church was once in communion with the Eastern Orthodox Church, but the two split after the East–West Schism and are no longer in communion. It is estimated that there are approximately 240 million Eastern Orthodox Christians in the world.[note 2] Today, many adherents shun the term "Eastern" as denying the church's universal character. They refer to Eastern Orthodoxy simply as the Orthodox Church. Oriental Orthodoxy refers to the churches of Eastern Christian tradition that keep the faith of the first three ecumenical councils of the undivided Christian Church: the First Council of Nicaea (AD 325), the First Council of Constantinople (381) and the Council of Ephesus (431), while rejecting the dogmatic definitions of the Council of Chalcedon (451). Hence, these churches are also called the Old Oriental churches. They comprise the Coptic Orthodox Church, the Malankara Orthodox Church (India), the Eritrean Orthodox Tewahedo Church, the Ethiopian Orthodox Tewahedo Church, the Syriac Orthodox Church and the Armenian Apostolic Church. Oriental Orthodoxy developed in reaction to Chalcedon on the eastern limit of the Byzantine Empire and in Egypt, Syria and Mesopotamia. In those locations, there are also Eastern Orthodox patriarchs, but the rivalry between the two has largely vanished in the centuries since the schism. Historically, the Church of the East was the widest-reaching branch of Eastern Christianity, at its height spreading from its heartland in Persian-ruled Assyria to the Mediterranean, India, and China.
Originally the only Christian church recognized by Zoroastrian-led Sassanid Persia (through its alliance with the Lakhmids, the regional rivals of the Byzantines and their Ghassanid vassals), the Church of the East declared itself independent of other churches in 424 and over the next century became affiliated with Nestorianism, a Christological doctrine advanced by Nestorius, Patriarch of Constantinople from 428 to 431, which had been declared heretical in the Roman Empire. Thereafter it was often known in the West, possibly inaccurately, as the Nestorian Church. Surviving a period of persecution within Persia, the Church of the East flourished under the Abbasid Caliphate and branched out, establishing dioceses throughout Asia. After another period of expansion under the Mongol Empire, the church went into decline starting in the 14th century and was eventually largely confined to the heartland of its founding Assyrian adherents in the Assyrian homeland, although another remnant survived on the Malabar Coast of India. In the 16th century, dynastic struggles sent the church into schism, resulting in the formation of two rival churches: the Chaldean Catholic Church, which entered into communion with Rome as an Eastern Catholic Church, and the Assyrian Church of the East. The followers of these two churches are almost exclusively ethnic Assyrians. In India, the local Church of the East community, known as the Saint Thomas Christians, experienced its own rifts as a result of Portuguese influence. The Assyrian Church of the East emerged from the historical Church of the East, which was centered in Mesopotamia/Assyria, then part of the Persian Empire, and spread widely throughout Asia. The modern Assyrian Church of the East emerged in the 16th century following a split with the Chaldean Church, which later entered into communion with Rome as an Eastern Catholic Church. The Church of the East was associated with the doctrine of Nestorianism, advanced by Nestorius, Patriarch of Constantinople from 428 to 431, which emphasized the disunion between the human and divine natures of Jesus. Nestorius and his doctrine were condemned at the Council of Ephesus in 431, leading to the Nestorian Schism, in which churches supporting Nestorius split from the rest of Christianity. Many followers relocated to Persia and became affiliated with the local Christian community there. This community adopted an increasingly Nestorian theology and was thereafter often known as the Nestorian Church. As such, the Church of the East accepts only the first two ecumenical councils of the undivided Church (the First Council of Nicaea and the First Council of Constantinople) as defining its faith tradition, and it rapidly took a different course from other Eastern Christians. The Church of the East spread widely through Persia and into Asia, being introduced to India by the 6th century and to the Mongols and China in the 7th century. It experienced periodic expansion until the 14th century, when the church was nearly destroyed by the collapse of the Mongol Empire and the conquests of Timur. By the 16th century it was largely confined to Iraq, northeast Syria, southeast Turkey, northwest Iran and the Malabar Coast of India (Kerala). The split of the 16th century, which saw the emergence of separate Assyrian and Chaldean Churches, left only the former as an independent sect. Additional splits into the 20th century further affected the history of the Assyrian Church of the East.
The Saint Thomas Syrian Christians are an ancient body of Syrian Christians in Kerala, on the Malabar coast of India, who trace their origins to the evangelical activity of Thomas the Apostle in the 1st century. Many Assyrian and Jewish communities, like the Knanaya and the Cochin Jews, assimilated into the Saint Thomas Syrian Christian community. By the 5th century, the Saint Thomas Syrian Christians were part of the Church of the East (Nestorian Church). Until the middle of the 17th century and the arrival of the Portuguese, the Thomas Christians were all one in faith and rite. Thereafter, divisions arose among them, and consequently they are today of several different rites. The East Syriac Chaldean Rite (Edessan Rite) Churches among the Saint Thomas Syrian Christians are the Syro Malabar Church and the Chaldean Syrian Church. The West Syriac Antiochian Rite Churches among the Saint Thomas Syrian Christians are the Malankara Jacobite Syrian Church, the Malankara Orthodox Syrian Church, the Mar Thoma Syrian Church, the Syro Malankara Church and the Thozhiyur Church. The twenty-three Eastern Catholic Churches are in communion with the Holy See at the Vatican while being rooted in the theological and liturgical traditions of Eastern Christianity. Most of these churches were originally part of the Orthodox East but have since been reconciled to the Latin Church. Many of these churches were originally part of one of the above families and so are closely related to them by way of ethos and liturgical practice. As in the other Eastern churches, married men may become priests, and parish priests administer the mystery of confirmation to newborn infants immediately after baptism, via the rite of chrismation; the infants are then administered Holy Communion. The Syro-Malabar Church, which is part of the Saint Thomas Christian community in India, follows East Syriac traditions and liturgy. Other Saint Thomas Christians of India, who were originally of the same East Syriac tradition, passed instead to the West Syriac tradition and now form part of Oriental Orthodoxy (some from the Oriental Orthodox in India united with the Catholic Church in 1930 and became the Syro-Malankara Catholic Church). The Maronite Church claims never to have been separated from Rome and has no counterpart Orthodox church out of communion with the Pope; it is therefore inaccurate to refer to it as a "Uniate" church. The Italo-Albanian Catholic Church has also never been out of communion with Rome, but, unlike the Maronite Church, its liturgical rite resembles that of the Eastern Orthodox Church. In addition to these four mainstream branches, there are a number of much smaller groups which originated from disputes with the dominant tradition of their original areas. Most of these are either part of the more traditional Old Believer movement, which arose from a schism within Russian Orthodoxy, or the more radical Spiritual Christianity movement. The latter includes a number of diverse "low-church" groups, from the Bible-centered Molokans to the anarchic Doukhobors to the self-mutilating Skoptsy. None of these groups are in communion with the mainstream churches listed above. There are also national dissidents, where ethnic groups want their own nation-church, such as the Montenegrin Orthodox Church, which operates within the canonical territory of the Serbian Orthodox Church. There are also some Reformed churches which share characteristics of Eastern Christianity, to varying extents.
Starting in the 1920s, parallel hierarchies formed in opposition to local Orthodox churches over ecumenism and other matters. These jurisdictions sometimes refer to themselves as being "True Orthodox". In Russia, underground churches formed and maintained solidarity with the Russian Orthodox Church Outside Russia until the late 1970s. There are now traditionalist Orthodox in every area, though in Asia and Egypt their presence is negligible. Eastern Protestant Christianity comprises a collection of heterogeneous Protestant denominations that are mostly the result of Protestant churches adopting Reformation variants of Orthodox Christian liturgy and worship. Some others are the result of reformations of Orthodox Christian beliefs and practices, inspired by the teachings of Western Protestant missionaries. Denominations of this category include the Malankara Mar Thoma Syrian Church, the Ukrainian Lutheran Church, the St. Thomas Evangelical Church of India, the Evangelical Orthodox Church, etc. Byzantine Rite Lutheranism arose in the Ukrainian Lutheran Church around 1926. It sprang up in the region of Galicia, and its rites are based on the Liturgy of Saint John Chrysostom. The church suffered persecution under the Communist régime, which implemented a policy of state atheism. Catholic–Orthodox ecumenism Ecumenical dialogue since the 1964 meeting between Pope Paul VI and Orthodox Patriarch Athenagoras I has reawakened nearly 1,000-year-old hopes for Christian unity. Since the lifting of excommunications during the Paul VI and Athenagoras I meeting in Jerusalem, there have been other significant meetings between Popes and Ecumenical Patriarchs of Constantinople. One of the most recent meetings was between Benedict XVI and Bartholomew I, who jointly signed the Common Declaration. It states that "We give thanks to the Author of all that is good, who allows us once again, in prayer and in dialogue, to express the joy we feel as brothers and to renew our commitment to move towards full communion". In 2013, Patriarch Bartholomew I attended the installation ceremony of the new Catholic Pope, Francis, the first time any Ecumenical Patriarch of Constantinople had attended such an event. In 2019, the Primate of the OCU, Metropolitan of Kyiv and All Ukraine Epiphanius, stated that "theoretically" the Orthodox Church of Ukraine and the Ukrainian Greek Catholic Church could in the future unite into a united church around the Kyiv throne. In 2019, the primate of the UGCC, Major Archbishop of Kyiv-Galicia Sviatoslav, stated that every effort should be made to restore the original unity of the Kyivan Church in its Orthodox and Catholic branches, saying that the restoration of Eucharistic communion between Rome and Constantinople is not a utopia. At a meeting in Balamand, Lebanon, in June 1993, the Joint International Commission for the Theological Dialogue between the Catholic Church and the Orthodox Church declared that these initiatives that "led to the union of certain communities with the See of Rome and brought with them, as a consequence, the breaking of communion with their Mother Churches of the East … took place not without the interference of extra-ecclesial interests"; and that what has been called "uniatism" "can no longer be accepted either as a method to be followed nor as a model of the unity our Churches are seeking" (section 12). At the same time, the Commission stated: Migration trends There has been a significant Christian migration in the 20th century from the Near East.
Fifteen hundred years ago, Christians were the majority population in today's Turkey, Iraq, Syria, Lebanon, Jordan, Palestine and Egypt. In 1914, Christians constituted 25% of the population of the Ottoman Empire. At the beginning of the 21st century, Christians constituted 6–7% of the region's population: less than 1% in Turkey, 3% in Iraq, 12% in Syria, 39% in Lebanon, 6% in Jordan, 2.5% in Israel/Palestine and 15–20% in Egypt. As of 2011, Eastern Orthodox Christians are among the wealthiest Christians in the United States. They also tend to be better educated than most other religious groups in America, having a high number of graduate (68%) and post-graduate (28%) degrees per capita. Role of Christians in Arabic culture Scholars and intellectuals agree that Christians have made significant contributions to Arab and Islamic civilization since the introduction of Islam, and they have had a significant impact on the culture of the Middle East and North Africa and other areas. Byzantine science played an important and crucial role in the transmission of classical knowledge to the Islamic world. Christians, especially Nestorians, contributed to the Arab Islamic civilization under the Umayyads and the Abbasids by translating works of Greek philosophers into Syriac and afterwards into Arabic. They also excelled in philosophy, science (such as Hunayn ibn Ishaq, Qusta ibn Luqa, Masawaiyh, Patriarch Eutychius, Jabril ibn Bukhtishu, etc.) and theology (such as Tatian, Bar Daisan, Babai the Great, Nestorius, Toma bar Yacoub, etc.), and the personal physicians of the Abbasid Caliphs were often Assyrian Christians, such as the long-serving Bukhtishus. Many scholars of the House of Wisdom were of Christian background. A hospital and medical training center existed at Gundeshapur. The city of Gundeshapur was founded in AD 271 by the Sassanid king Shapur I. It was one of the major cities in the Khuzestan province of the Persian empire, in what is today Iran. A large percentage of the population were Syriacs, most of whom were Christians. Under the rule of Khusraw I, refuge was granted to Greek Nestorian Christian philosophers, including the scholars of the Persian School of Edessa (Urfa), a Christian theological and medical university. These scholars made their way to Gundeshapur in 529 following the closing of the academy of Athens by Emperor Justinian. They were engaged in the medical sciences and initiated the first translation projects of medical texts. The arrival of these medical practitioners from Edessa marks the beginning of the hospital and medical center at Gundeshapur. It included a medical school and hospital (bimaristan), a pharmacology laboratory, a translation house, a library and an observatory. Indian doctors also contributed to the school at Gundeshapur, most notably the medical researcher Mankah. Later, after the Islamic invasion, the writings of Mankah and of the Indian doctor Sustura were translated into Arabic at Baghdad. Daud al-Antaki was one of the last generation of influential Arab Christian writers. Arab Christians and Arabic-speaking Christians played important roles in the Nahda, and because Arab Christians formed the educated upper and bourgeois classes, they have had a significant impact in politics, business and culture; most of the important figures of the Nahda movement were Christian Arabs. Today Arab Christians still play important roles in the Arab world, and Christians are relatively wealthy, well educated, and politically moderate.
======================================== |
[SOURCE: https://techcrunch.com/2026/02/20/prepare-for-pitch-battle-startup-battlefield-200-nominations-are-open/] | [TOKENS: 690] |
Prepare for pitch battle: Startup Battlefield 200 nominations are open Pre-Series A founders, this is your moment! If your startup is ready to be tested under real pressure, the battlefield is open. We’re still accepting nominations for TechCrunch Startup Battlefield 200. This is your shot to step into the arena at TechCrunch Disrupt 2026 and go head-to-head in front of world-class VCs and the full TechCrunch audience. Where breakout companies are forged You’ll compete for $100,000 in equity-free funding, earn global exposure on the main stage at Disrupt in San Francisco, and get raw, unfiltered feedback from the investors who shape the future of tech. In addition, you’ll join an elite cohort of founders preparing to take on the biggest startup stage in the world. Startup Battlefield alumni include Trello, Mint, Dropbox, Discord, and Fitbit — and more than 1,500 startups that once stood exactly where you are now: early, hungry, and ready to prove it. Make your mark or get left behind. Nominate now. Competition is fierce — enter early Thousands of startups applied last year, and this year’s field will be even more competitive. TechCrunch reviews every nomination, but founders who enter early give themselves a strategic edge. If you’re serious about winning, don’t wait to join the fight. Startup Battlefield 200 perks Explore the full list of perks. What it takes to enter the arena We’re looking for bold, early-stage founders with an MVP and a vision that challenges the status quo. If you’re bootstrapped, pre-seed, or seed-funded, this battlefield was built for you. Series A startups in capital-intensive industries may also qualify. Start the battle today Nominations close on June 8, but champions don’t wait for the last call. Prepare early. Nominate early. Enter Startup Battlefield 200 and fight for your place on tech’s biggest stage.
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Specials_(Unicode_block)#Replacement_character] | [TOKENS: 745] |
Specials (Unicode block)

Specials is a short Unicode block of characters allocated at the very end of the Basic Multilingual Plane, at U+FFF0–FFFF. Its assigned code points are U+FFF9 INTERLINEAR ANNOTATION ANCHOR, U+FFFA INTERLINEAR ANNOTATION SEPARATOR, U+FFFB INTERLINEAR ANNOTATION TERMINATOR, U+FFFC OBJECT REPLACEMENT CHARACTER, and U+FFFD REPLACEMENT CHARACTER; the rest of the block is unassigned or reserved as noncharacters.

U+FFFE <noncharacter-FFFE> and U+FFFF <noncharacter-FFFF> are noncharacters, meaning they are reserved but do not cause ill-formed Unicode text. Versions of the Unicode standard from 3.1.0 to 6.3.0 claimed that these characters should never be interchanged, leading some applications to use them to guess text encoding by interpreting the presence of either as a sign that the text is not Unicode. However, Corrigendum #9 later specified that noncharacters are not illegal, and so this method of checking text encoding is incorrect. An example of an internal usage of U+FFFE is the CLDR algorithm; this extended Unicode algorithm maps the noncharacter to a minimal, unique primary weight.

Unicode's U+FEFF ZERO WIDTH NO-BREAK SPACE character can be inserted at the beginning of a Unicode text as a byte order mark to signal its endianness: a program reading a text encoded in, for example, UTF-16 and encountering U+FFFE <noncharacter-FFFE> would then know that it should switch the byte order for all the following characters. The block's name in Unicode 1.0 was Special.

Replacement character

The replacement character � (often displayed as a black rhombus with a white question mark) is a symbol found in the Unicode standard at code point U+FFFD in the Specials table. It is used to indicate problems when a system is unable to render a stream of data to correct symbols. As an example, a text file encoded in ISO 8859-1 containing the German word für contains the bytes 0x66 0xFC 0x72. If this file is opened with a text editor that assumes the input is UTF-8, the first and third bytes are valid UTF-8 encodings of ASCII, but the second byte (0xFC) is not valid in UTF-8. The text editor could replace this byte with the replacement character to produce a valid string of Unicode code points for display, so the user sees "f�r". A poorly implemented text editor might write out the replacement character (encoded in UTF-8 as 0xEF 0xBF 0xBD) when the user saves the file; the data in the file will then become 0x66 0xEF 0xBF 0xBD 0x72. If the file is then re-opened using ISO 8859-1, it will display "fï¿½r" (this is called mojibake). Because the editor likely turns different errors into the same replacement character, it is also impossible to recover the original text.

At one time the replacement character was often used when there was no glyph available in a font for a character, as in font substitution. However, most modern text rendering systems instead use a font's .notdef character, which in most cases is an empty box, or "?" or "X" in a box, sometimes called 'tofu'. There is no Unicode code point for this symbol. Thus the replacement character is now only seen for encoding errors. Some software programs translate invalid UTF-8 bytes to the matching characters in Windows-1252 (since that is the most common source of these errors), so that the replacement character is never seen.[citation needed]
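The decoding failure described above is straightforward to reproduce. The following sketch uses Python purely as an illustration (the article itself prescribes no language; only the standard library is used) to decode the ISO 8859-1 bytes for "für" as UTF-8 with lenient error handling, save the result the way a poorly implemented editor would, and show the resulting mojibake, plus a minimal byte-order-mark round trip:

```python
# Minimal sketch of the "für" example from the text above.
# Assumes only the Python standard library.

original = b"\x66\xFC\x72"  # "für" encoded as ISO 8859-1

# 0xFC cannot begin a valid UTF-8 sequence, so a lenient decoder
# substitutes U+FFFD REPLACEMENT CHARACTER for it.
decoded = original.decode("utf-8", errors="replace")
print(decoded)                     # f�r

# A poorly implemented editor might now save the replacement character
# itself, which UTF-8 encodes as the three bytes EF BF BD.
saved = decoded.encode("utf-8")
print(saved.hex(" "))              # 66 ef bf bd 72

# Re-opening that file as ISO 8859-1 produces mojibake; the original
# 0xFC byte is gone, so the text can no longer be recovered.
print(saved.decode("iso-8859-1"))  # fï¿½r

# Byte order mark: U+FEFF at the start of UTF-16 text signals its
# endianness, and the generic "utf-16" codec consumes it on decode.
bom_text = b"\xff\xfe" + "für".encode("utf-16-le")  # little-endian BOM
print(bom_text.decode("utf-16"))   # für
```

Strict decoding (errors="strict", the default) would instead raise UnicodeDecodeError at the 0xFC byte, which is how most tools detect the problem rather than silently substituting U+FFFD.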
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Pet] | [TOKENS: 4628] |
Pet

A pet, or companion animal, is an animal kept primarily for a person's company or entertainment rather than as a working animal, livestock, or a laboratory animal. Popular pets are often considered to have attractive/cute appearances, intelligence, and relatable personalities, but some pets may be taken in on an altruistic basis (such as a stray animal) and accepted by the owner regardless of these characteristics. Two of the most popular pets are dogs and cats. Other animals commonly kept include rabbits; ferrets; pigs; rodents such as gerbils, hamsters, chinchillas, rats, mice, and guinea pigs; birds such as parrots, passerines, and fowls; reptiles such as turtles, lizards, snakes, and iguanas; aquatic pets such as fish, freshwater snails, and saltwater snails; amphibians such as frogs and salamanders; and arthropod pets such as tarantulas and hermit crabs. Smaller pets include rodents, while the equine and bovine groups include the largest companion animals.

Pets provide their owners, or guardians, with both physical and emotional benefits. Walking a dog can provide both the human and the dog with exercise, fresh air, and social interaction. Pets can give companionship to people who are living alone or to elderly adults who do not have adequate social interaction with other people. There is a medically approved class of therapy animals that are brought to visit confined humans, such as children in hospitals or elders in nursing homes. Pet therapy utilizes trained animals and handlers to achieve specific physical, social, cognitive, or emotional goals with patients. People most commonly get pets for companionship, to protect a home or property, or because of the perceived beauty or attractiveness of the animals. A 1994 Canadian study found that the most common reasons for not owning a pet were lack of ability to care for the pet when traveling (34.6%), lack of time (28.6%), and lack of suitable housing (28.3%), with dislike of pets being less common (19.6%). Some scholars, ethicists, and animal rights organizations have raised concerns over keeping pets because of the lack of autonomy and the objectification of non-human animals.

Pet popularity

In China, spending on domestic animals has grown from an estimated $3.12 billion in 2010 to $25 billion in 2018. The Chinese people own 51 million dogs and 41 million cats, with pet owners often preferring to source pet food internationally. There are a total of 755 million pets, up from 389 million in 2013. A survey in 2002 found approximately 45 million pets in Italy, including 7 million dogs, 7.5 million cats, 16 million fish, 12 million birds and 10 thousand snakes. A 2007 survey by the University of Bristol found that 26% of UK households owned cats and 31% owned dogs, estimating total domestic populations of approximately 10.3 million cats and 10.5 million dogs in 2006. The survey also found that 47.2% of households with a cat had at least one person educated to degree level, compared with 38.4% of homes with dogs. There are approximately 86.4 million pet cats and approximately 78.2 million pet dogs in the United States, and a United States 2007–2008 survey showed that dog-owning households outnumbered those owning cats, but that the total number of pet cats was higher than that of dogs. The same was true for 2011. In 2013, pets outnumbered children four to one in the United States.

Effects on pets' health

Keeping animals as pets may be detrimental to their health if certain requirements are not met.
An important issue is inappropriate feeding, which may produce clinical effects. The consumption of chocolate or grapes can be fatal to dogs. Similarly, onions and garlic can lead to hemolytic anemia in dogs. Certain species of houseplants can also prove toxic if consumed by pets. Examples include philodendrons and Easter lilies, which can cause severe kidney damage to cats, and poinsettias, begonia, and aloe vera, which are mildly toxic to dogs. For birds, chocolate can be deadly, and foods intended for human consumption, such as bread, crackers, and dairy items, can potentially cause health problems.

House pets, particularly dogs and cats in industrialized societies, are highly susceptible to obesity. Overweight pets have been shown to be at a higher risk of developing diabetes, liver problems, joint pain, kidney failure, and cancer. Lack of exercise and high-caloric diets are considered to be the primary contributors to pet obesity.

Effects of pets on their caregivers' health

It is widely believed among the public, and among many scientists, that pets probably bring mental and physical health benefits to their owners; a 1987 NIH statement cautiously argued that existing data were "suggestive" of a significant benefit. A more recent dissent comes from a 2017 RAND study, which found that, at least in the case of children, having a pet per se failed to improve physical or mental health by a statistically significant amount; instead, the study found that children who were already prone to being healthy were more likely to get pets in the first place. Conducting long-term randomized trials to settle the issue would be costly or infeasible.

Pets might have the ability to stimulate their caregivers, in particular the elderly, giving people someone to take care of, someone to exercise with, and someone to help them heal from a physically or psychologically troubled past. Animal company can also help people to preserve acceptable levels of happiness despite the presence of mood symptoms like anxiety or depression. Having a pet may also help people achieve health goals, such as lowered blood pressure, or mental goals, such as decreased stress. There is evidence that having a pet can help a person lead a longer, healthier life. In a 1986 study of 92 people hospitalized for coronary ailments, within a year 11 of the 29 patients without pets had died, compared to only 3 of the 52 patients who had pets. Having a pet was shown to significantly reduce triglycerides, and thus heart disease risk, in the elderly. A study by the National Institutes of Health found that people who owned dogs were less likely to die as a result of a heart attack than those who did not own one. There is some evidence that pets may have a therapeutic effect in dementia cases. Other studies have shown that for the elderly, good health may be a requirement for having a pet, and not a result.

Dogs trained to be guide dogs can help people with vision impairment. Dogs trained in the field of animal-assisted therapy (AAT) can also benefit people with other disabilities.

People residing in a long-term care facility, such as a hospice or nursing home, may experience health benefits from pets. Pets help them to cope with the emotional issues related to their illness. They also offer physical contact with another living creature, something that is often missing in an elder's life. Pets for nursing homes are chosen based on the size of the pet, the amount of care that the breed needs, and the population and size of the care institution.
Appropriate pets go through a screening process and, in the case of dogs, additional training programs to become therapy dogs. There are three types of therapy dogs: facility therapy dogs, animal-assisted therapy dogs, and therapeutic visitation dogs. The most common therapy dogs are therapeutic visitation dogs. These dogs are household pets whose handlers take time to visit hospitals, nursing homes, detention facilities, and rehabilitation facilities. Different pets require varying amounts of attention and care; for example, cats may have lower maintenance requirements than dogs.

In addition to offering health benefits to their owners, pets also influence their owners' social lives and connections within their communities. Research suggests that pets may facilitate social interactions, fostering communication and engagement among individuals. Leslie Irvine, Assistant Professor of Sociology at the University of Colorado at Boulder, has focused her attention on pets of the homeless population. Her studies of pet ownership among the homeless found that many modify their life activities for fear of losing their pets. Pet ownership prompts them to act responsibly, with many making a deliberate choice not to drink or use drugs, and to avoid contact with substance abusers or those involved in any criminal activity, for fear of being separated from their pet. Additionally, many refuse to be housed in shelters if their pet is not allowed to stay with them.

Health risks associated with pets include allergies, injuries from falls or bites, and zoonotic infections passed from animals to humans.

Legislation

The European Convention for the Protection of Pet Animals is a 1987 treaty of the Council of Europe – though accession to the treaty is open to all states in the world – to promote the welfare of pet animals and ensure minimum standards for their treatment and protection. It went into effect on 1 May 1992, and as of June 2020 it has been ratified by 24 states.

Pets have commonly been considered private property, owned by individual persons. Many legal protections have existed (historically and today) with the intention of safeguarding pets' and other animals' well-being. Since the year 2000, a small but increasing number of jurisdictions in North America have enacted laws redefining pets' owners as guardians. Intentions have been characterized as ranging from simply changing attitudes and perceptions (but not legal consequences) to working toward legal personhood for pets themselves. Some veterinarians and breeders have opposed these moves. The question of pets' legal status can arise with concern to purchase or adoption, custody, divorce, estate and inheritance, injury, damage, and veterinary malpractice. In the United Kingdom, the minimum age to own a pet is 16.

States, cities, and towns in Western countries commonly enact local ordinances to limit the number or kind of pets a person may keep personally or for business purposes. Prohibited pets may be specific to certain breeds such as pit bulls or Rottweilers, they may apply to general categories of animals (such as livestock, exotic animals, wild animals, and canid or felid hybrids), or they may simply be based on the animal's size. Additional or different maintenance rules and regulations may also apply. Condominium associations and owners of rental properties also commonly limit or forbid tenants' keeping of pets.

In Belgium and the Netherlands, the government publishes white lists and black lists (called 'positive' and 'negative' lists) of animal species that are designated as appropriate to keep as pets (positive) or not (negative).
The Dutch Ministry of Economic Affairs and Climate Policy originally established its first positive list (positieflijst) on 1 February 2015 for a set of 100 mammals (including cats, dogs and production animals) deemed appropriate as pets, on the recommendations of Wageningen University. Parliamentary debates about such a pet list date back to the 1980s, with continuous disagreements about which species should be included and how the law should be enforced. In January 2017, the white list was expanded to 123 species, while the black list that had been set up was expanded (with animals like the brown bear and two great kangaroo species) to contain 153 species deemed unfit to keep as pets, such as the armadillo, the sloth, the European hare, and the wild boar. In January 2011, the Belgian Federal Agency for the Safety of the Food Chain stated that people are not allowed to kill miscellaneous or unknown cats walking in their garden, but "nowhere in the law does it say that you can't eat your cat, dog, rabbit, fish or whatever. You just have to kill them in an animal-friendly way." Since 1 July 2014, it has been illegal in the Netherlands for owners to kill their own cats and dogs kept as pets. Parakeets, guinea pigs, hamsters and other animals may still be killed by their owners, but when owners mistreat their companion animals (for example, in the process of killing them), the owners can still be prosecuted under Dutch law.

Environmental impact

Pets have a considerable environmental impact, especially in countries where they are common or held in high densities. For instance, the 163 million dogs and cats kept in the United States consume about 20% of the amount of dietary energy that humans do and an estimated 33% of the animal-derived energy. They produce about 30% ± 13%, by mass, as much feces as Americans, and, through their diet, account for about 25–30% of the environmental impacts from animal production in terms of the use of land, water, fossil fuel, phosphate, and biocides. Dog and cat animal product consumption is responsible for the release of up to 64 ± 16 million tons of CO2-equivalent methane and nitrous oxide, two powerful greenhouse gases. Americans are the largest pet owners in the world, and pet ownership in the US thus carries considerable environmental costs.

Types

While many people have kept many different species of animals in captivity over the course of human history, only a relative few have been kept long enough to be considered domesticated. Other types of animal, notably monkeys, have never been domesticated but are still sold and kept as pets. Some wild animals are kept as pets, such as tigers, even though this is illegal; there is a market for illegal pets.

Domesticated pets are the most common. A domesticated animal is a species that has been made fit for a human environment by being consistently kept in captivity and selectively bred over a long enough period of time that it exhibits marked differences in behavior and appearance from its wild relatives. Domestication contrasts with taming, which is simply when an un-domesticated, wild animal has become tolerant of human presence, and perhaps even enjoys it.

Large mammals that might be kept as pets include the alpaca, camel, cattle, donkey, goat, horse, llama, pig, reindeer, sheep and yak. Small mammals that might be kept as pets include the ferret, hedgehog, rabbit, sugar glider, and rodents, including the rat, mouse, hamster, guinea pig, gerbil, and chinchilla. Other mammals kept include the cat, dog, monkey, and domesticated silver fox.
Birds kept as pets include companion parrots like the budgie (parakeet) and cockatiel; fowl such as chickens, turkeys, ducks, geese, and quail; columbines; and passerines, namely finches and canaries. Fish kept as pets include goldfish, koi, Siamese fighting fish (betta), barbs, guppies, mollies, Japanese rice fish (medaka), and oscars. Arthropods kept as pets include bees, such as honey bees and stingless bees, silk moths, and ant farms. Reptiles and amphibians kept as pets include snakes, turtles, axolotls, frogs and salamanders.

Wild animals are also kept as pets. The term wild in this context specifically applies to any species of animal which has not undergone a fundamental change in behavior to facilitate a close co-existence with humans. Some species may have been bred in captivity for a considerable length of time, but are still not recognized as domesticated. Generally, wild animals are recognized as not suitable to keep as pets, and this practice is completely banned in many places. In other areas, certain species are allowed to be kept, and the owner is usually required to obtain a permit. It is considered animal cruelty by some, as most often wild animals require precise and constant care that is very difficult to meet in captive conditions. Many large and instinctively aggressive animals are extremely dangerous, and they have killed their handlers on numerous occasions.

History

Archaeology suggests that human ownership of dogs as pets may date back to at least 12,000 years ago. Ancient Greeks and Romans would openly grieve for the loss of a dog, as evidenced by inscriptions left on tombstones commemorating their loss. The surviving epitaphs dedicated to horses are more likely to reference gratitude for the companionship that had come from war horses rather than race horses; the latter may have chiefly been commemorated as a way to further the owner's fame and glory. In Ancient Egypt, dogs and baboons were kept as pets and buried with their owners. Dogs were given names, which is significant because Egyptians considered names to have magical properties.

In the Old Testament passage in 2 Samuel 12, the prophet Nathan, in order to indicate to King David the seriousness of his adulterous and murderous affair with Bathsheba, uses the parable of a poor man's pet lamb being slaughtered by a rich neighbor to feed a guest. David, who had spent his youth as a shepherd and had compassion and affection for such a creature, becomes enraged at the rich man in the parable, only to be told by Nathan, "You are the man!" David, having been thus exposed as a hypocrite, confesses, "I have sinned." This is one of the only instances in Scripture of an animal being kept for companionship rather than for utilitarian purposes, apart from the acquisition of exotic animals by David's son King Solomon for a menagerie (2 Chronicles 9).

Throughout the 17th and 18th centuries, pet keeping in the modern sense gradually became accepted throughout Britain. Initially, aristocrats kept dogs for both companionship and hunting; pet keeping was thus a sign of elitism within society. By the 19th century, the rise of the middle class stimulated the development of pet keeping, and it became inscribed within bourgeois culture. As the popularity of pet-keeping in the modern sense rose during the Victorian era, animals became a fixture within urban culture as commodities and decorative objects. Pet keeping generated a commercial opportunity for entrepreneurs.
By the mid-19th century, nearly twenty thousand street vendors in London dealt in live animals. The popularity of animals also developed a demand for animal goods such as accessories and guides for pet keeping. Pet care developed into a big business by the end of the nineteenth century. Profiteers also sought out pet stealing as a means of economic gain: exploiting the affection that owners had for their pets, professional dog stealers would capture animals and hold them for ransom. The development of dog stealing reflects the increased value of pets. Pets gradually became defined as the property of their owners, and laws were created that punished offenders for such thefts.

Pets and animals also had social and cultural implications throughout the nineteenth century. The categorization of dogs by their breeds reflected the hierarchical social order of the Victorian era. The pedigree of a dog represented the high status and lineage of its owners and reinforced social stratification. Middle-class owners valued the ability to associate with the upper class through ownership of their pets. The ability to care for a pet signified respectability and the capability to be self-sufficient. According to Harriet Ritvo, the identification of "elite animal and elite owner was not a confirmation of the owner's status but a way of redefining it."

The popularity of dog and pet keeping generated animal fancy. Dog fanciers showed enthusiasm for owning pets, breeding dogs, and showing dogs in various shows. The first dog show took place on 28 June 1859 in Newcastle and focused mostly on sporting and hunting dogs. However, pet owners were also eager to demonstrate their pets and to have an outlet to compete, so pet animals gradually were included within dog shows. The first large show, which hosted one thousand entries, took place in Chelsea in 1863. The Kennel Club was created in 1873 to ensure fairness and organization within dog shows. The development of the Stud Book by the Kennel Club defined policies, presented a national registry system of purebred dogs, and essentially institutionalized dog shows.

Pet ownership by non-humans

Pet ownership by animals in the wild, as an analogue to the human phenomenon, has not been observed and is likely non-existent in nature. One group of capuchin monkeys was observed appearing to care for a marmoset, a fellow New World monkey species; however, observations of chimpanzees apparently playing with small animals like hyraxes have ended with the chimpanzees killing the animals and tossing the corpses around.

A 2010 study states that human relationships with animals have an exclusively human cognitive component and that pet-keeping is a fundamental and ancient attribute of the human species. Anthropomorphism, or the projection of human feelings, thoughts and attributes onto animals, is a defining feature of human pet-keeping. The study identifies it as the same trait in evolution responsible for domestication and concern for animal welfare. It is estimated to have arisen at least 100,000 years before present (ybp) in Homo sapiens. It is debated whether this redirection of human nurturing behaviour towards non-human animals, in the form of pet-keeping, was maladaptive, due to being biologically costly, or whether it was positively selected for.
Two studies suggest that the human ability to domesticate and keep pets came from the same fundamental evolutionary trait, and that this trait provided a material benefit in the form of domestication that was sufficiently adaptive to be positively selected for. A 2011 study suggests that the practical functions that some pets provide, such as assisting hunting or removing pests, could have resulted in enough evolutionary advantage to allow for the persistence of this behaviour in humans and to outweigh the economic burden of pets kept as playthings for immediate emotional rewards. Two other studies suggest that the behaviour constitutes an error, side effect or misapplication of the evolved mechanisms responsible for human empathy and theory of mind to cover non-human animals, which has not sufficiently impacted its evolutionary advantage in the long run.

Animals in captivity, with the help of caretakers, have been considered to have owned pets. Examples of this include Koko the gorilla, who had several pet cats; Tonda the orangutan, who had a pet cat; and Tarra the elephant, whose companion was a dog named Bella.

Ethics

Some scholars, ethicists, and animal rights organizations have raised concerns over keeping pets because of the lack of autonomy and the objectification of non-human animals. By contrast, Ikechukwu Monday Osebor, writing in the Aquino Journal from the University of Nigeria, argues from a consequentialist perspective that pet ownership can be ethical. Gary Francione and Anna Charlton argue that pet breeding and ownership are unethical because they view it as treating animals as property and commodifying them. Further concerns arise from vegan and vegetarian perspectives over the fact that some pets require the consumption of meat, and thus keeping pets may be unethical.[citation needed]
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Orion_(constellation)#cite_note-12] | [TOKENS: 4993] |
Orion (constellation)

Orion is a prominent set of stars visible during winter in the northern celestial hemisphere. It is one of the 88 modern constellations; it was among the 48 constellations listed by the 2nd-century astronomer Ptolemy. It is named after a hunter in Greek mythology. Orion is most prominent during winter evenings in the Northern Hemisphere, as are five other constellations that have stars in the Winter Hexagon asterism. Orion's two brightest stars, Rigel (β) and Betelgeuse (α), are both among the brightest stars in the night sky; both are supergiants and slightly variable. There are a further six stars brighter than magnitude 3.0, including three making up the short straight line of the Orion's Belt asterism. Orion also hosts the radiant of the annual Orionids, the strongest meteor shower associated with Halley's Comet, and the Orion Nebula, one of the brightest nebulae in the sky.

Characteristics

Orion is bordered by Taurus to the northwest, Eridanus to the southwest, Lepus to the south, Monoceros to the east, and Gemini to the northeast. Covering 594 square degrees, Orion ranks 26th of the 88 constellations in size. The constellation boundaries, as set by Belgian astronomer Eugène Delporte in 1930, are defined by a polygon of 26 sides. In the equatorial coordinate system, the right ascension coordinates of these borders lie between 04h 43.3m and 06h 25.5m, while the declination coordinates are between +22.87° and −10.97°. The constellation's three-letter abbreviation, as adopted by the International Astronomical Union in 1922, is "Ori".

Orion is most visible in the evening sky from January to April, winter in the Northern Hemisphere and summer in the Southern Hemisphere. In the tropics (less than about 8° from the equator), the constellation transits at the zenith. From May to July (summer in the Northern Hemisphere, winter in the Southern Hemisphere), Orion is in the daytime sky and thus invisible at most latitudes. However, for much of Antarctica in the Southern Hemisphere's winter months, the Sun is below the horizon even at midday. Stars (and thus Orion, though only its brightest stars) are then visible at twilight for a few hours around local noon, in the brightest section of the sky low in the north, where the Sun sits just below the horizon. At the same time of day at the South Pole itself (Amundsen–Scott South Pole Station), Rigel is only 8° above the horizon, and the Belt sweeps just along it. In the Southern Hemisphere's summer months, when Orion is normally visible in the night sky, the constellation is actually not visible in Antarctica because the Sun does not set at that time of year south of the Antarctic Circle. In countries close to the equator (e.g. Kenya, Indonesia, Colombia, Ecuador), Orion appears overhead in December around midnight and in the February evening sky.

Navigational aid

Orion is very useful as an aid to locating other stars. By extending the line of the Belt southeastward, Sirius (α CMa) can be found; northwestward, Aldebaran (α Tau). A line eastward across the two shoulders indicates the direction of Procyon (α CMi). A line from Rigel through Betelgeuse points to Castor and Pollux (α Gem and β Gem). Additionally, Rigel is part of the Winter Circle asterism. Sirius and Procyon, which may be located from Orion by following imaginary lines (see map), are also points in both the Winter Triangle and the Circle.

Features

Orion's seven brightest stars form a distinctive hourglass-shaped asterism, or pattern, in the night sky.
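The boundary figures quoted above lend themselves to a quick unit-conversion check. The sketch below (Python, used purely for illustration; the helper name ra_to_degrees is my own, not from any source) converts the right-ascension limits from hours and minutes into degrees, using the standard relation 24 h = 360°, and notes the latitude band in which the constellation can transit the zenith:

```python
# Illustrative conversion of Orion's IAU boundary coordinates (from the
# text above) into degrees. Right ascension is measured in hours, where
# 24 h = 360 deg, so 1 h = 15 deg and 1 min of RA = 0.25 deg.

def ra_to_degrees(hours: int, minutes: float) -> float:
    """Convert right ascension from hours and minutes to degrees."""
    return (hours + minutes / 60.0) * 15.0

west_edge = ra_to_degrees(4, 43.3)   # ~70.8 deg
east_edge = ra_to_degrees(6, 25.5)   # ~96.4 deg
print(f"RA span: {west_edge:.1f} to {east_edge:.1f} deg "
      f"({east_edge - west_edge:.1f} deg wide)")

# A star culminates at the zenith when the observer's latitude equals
# its declination, so some part of Orion's declination band passes
# directly overhead for observers between these latitudes.
dec_south, dec_north = -10.97, 22.87
print(f"Overhead somewhere between latitudes {dec_south} and {dec_north}")
```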
Four stars—Rigel, Betelgeuse, Bellatrix, and Saiph—form a large roughly rectangular shape, at the center of which lie the three stars of Orion's Belt—Alnitak, Alnilam, and Mintaka. The hunter's head is marked by an additional eighth star, Meissa, which is fairly bright to the observer. Descending from the Belt is a smaller line of three stars, Orion's Sword (the middle of which is in fact not a star but the Orion Nebula), also known as the hunter's sword. Many of the stars are luminous hot blue supergiants, with the stars of the Belt and Sword forming the Orion OB1 association. Standing out by its red hue, Betelgeuse may nevertheless be a runaway member of the same group.

Orion's Belt, or the Belt of Orion, is an asterism within the constellation. It consists of three bright stars: Alnitak (Zeta Orionis), Alnilam (Epsilon Orionis), and Mintaka (Delta Orionis). Alnitak is around 800 light-years away from Earth, is 100,000 times more luminous than the Sun, and shines with a magnitude of 1.8; much of its radiation is in the ultraviolet range, which the human eye cannot see. Alnilam is approximately 2,000 light-years from Earth, shines with a magnitude of 1.70, and including its ultraviolet light is 375,000 times more luminous than the Sun. Mintaka is 915 light-years away and shines with a magnitude of 2.21. It is 90,000 times more luminous than the Sun and is a double star: the two components orbit each other every 5.73 days. In the Northern Hemisphere, Orion's Belt is best visible in the night sky during the month of January at around 9:00 pm, when it lies approximately at the local meridian.

Just southwest of Alnitak lies Sigma Orionis, a multiple star system composed of five stars that have a combined apparent magnitude of 3.7 and lie at a distance of 1,150 light-years. Southwest of Mintaka lies the quadruple star Eta Orionis. Orion's Sword contains the Orion Nebula, the Messier 43 nebula, Sh 2-279 (also known as the Running Man Nebula), and the stars Theta Orionis, Iota Orionis, and 42 Orionis.

Three stars make up the small triangle that marks the head. The apex is marked by Meissa (Lambda Orionis), a hot blue giant of spectral type O8 III and apparent magnitude 3.54, which lies some 1,100 light-years distant. Phi-1 and Phi-2 Orionis make up the base. Also nearby is the young star FU Orionis. Stretching north from Betelgeuse are the stars that make up Orion's club. Mu Orionis marks the elbow, Nu and Xi mark the handle of the club, and Chi1 and Chi2 mark the end of the club. Just east of Chi1 is the Mira-type variable red giant star U Orionis. West of Bellatrix lie six stars all designated Pi Orionis (π1 Ori, π2 Ori, π3 Ori, π4 Ori, π5 Ori, and π6 Ori), which make up Orion's shield.

Around 20 October each year, the Orionid meteor shower (Orionids) reaches its peak. Coming from the border with the constellation Gemini, as many as 20 meteors per hour can be seen. The shower's parent body is Halley's Comet.

Hanging from Orion's Belt is his sword, consisting of the multiple stars θ1 and θ2 Orionis, called the Trapezium, and the Orion Nebula (M42). This is a spectacular object that can be clearly identified with the naked eye as something other than a star. Using binoculars, its clouds of nascent stars, luminous gas, and dust can be observed. The Trapezium cluster has many newborn stars, including several brown dwarfs, all of which are at an approximate distance of 1,500 light-years.
Named for the four bright stars that form a trapezoid, it is largely illuminated by its brightest stars, which are only a few hundred thousand years old. Observations by the Chandra X-ray Observatory show both the extreme temperatures of the main stars—up to 60,000 kelvins—and the star-forming regions still extant in the surrounding nebula.

M78 (NGC 2068) is a nebula in Orion. With an overall magnitude of 8.0, it is significantly dimmer than the Great Orion Nebula that lies to its south; however, it is at approximately the same distance, at 1,600 light-years from Earth. It can easily be mistaken for a comet in the eyepiece of a telescope. M78 is associated with the variable star V351 Orionis, whose magnitude changes are visible in very short periods of time. Another fairly bright nebula in Orion is NGC 1999, also close to the Great Orion Nebula. It has an integrated magnitude of 10.5 and is 1,500 light-years from Earth. The variable star V380 Orionis is embedded in NGC 1999. Another famous nebula is IC 434, the Horsehead Nebula, near Alnitak (Zeta Orionis). It contains a dark dust cloud whose shape gives the nebula its name. NGC 2174 is an emission nebula located 6,400 light-years from Earth.

Besides these nebulae, surveying Orion with a small telescope will reveal a wealth of interesting deep-sky objects, including M43, M78, and multiple stars including Iota Orionis and Sigma Orionis. A larger telescope may reveal objects such as the Flame Nebula (NGC 2024), as well as fainter and tighter multiple stars and nebulae. Barnard's Loop can be seen on very dark nights or using long-exposure photography. All of these nebulae are part of the larger Orion molecular cloud complex, which is located approximately 1,500 light-years away and is hundreds of light-years across. Due to its proximity, it is one of the most intense regions of stellar formation visible from Earth. The Orion molecular cloud complex forms the eastern part of an even larger structure, the Orion–Eridanus Superbubble, which is visible in X-rays and in hydrogen emissions.

History and mythology

The distinctive pattern of Orion is recognized in numerous cultures around the world, and many myths are associated with it. Orion is also used as a symbol in the modern world. In Siberia, the Chukchi people see Orion as a hunter; an arrow he has shot is represented by Aldebaran (Alpha Tauri), with the same figure as in other Western depictions.

In Greek mythology, Orion was a gigantic, supernaturally strong hunter, born to Euryale, a Gorgon, and Poseidon (Neptune), god of the sea. One myth recounts Gaia's rage at Orion, who dared to say that he would kill every animal on Earth. The angry goddess tried to dispatch Orion with a scorpion. This is given as the reason that the constellations of Scorpius and Orion are never in the sky at the same time. However, Ophiuchus, the Serpent Bearer, revived Orion with an antidote. This is said to be the reason that the constellation of Ophiuchus stands midway between the Scorpion and the Hunter in the sky. The constellation is mentioned in Horace's Odes (Ode 3.27.18), Homer's Odyssey (Book 5, line 283) and Iliad, and Virgil's Aeneid (Book 1, line 535).

In old Hungarian tradition, Orion is known as "Archer" (Íjász) or "Reaper" (Kaszás). In recently rediscovered myths, he is called Nimrod (Hungarian: Nimród), the greatest hunter, father of the twins Hunor and Magor. The π and o stars (on the upper right) together form the reflex bow or the lifted scythe.
In other Hungarian traditions, Orion's Belt is known as "Judge's stick" (Bírópálca). In Ireland and Scotland, Orion was called An Bodach, a figure from Irish folklore whose name literally means "the one with a penis [bod]" and who was the husband of the Cailleach (hag). In Scandinavian tradition, Orion's Belt was known as "Frigg's Distaff" (friggerock) or "Freyja's distaff". The Finns call Orion's Belt and the stars below it "Väinämöinen's scythe" (Väinämöisen viikate). Another name for the asterism of Alnilam, Alnitak, and Mintaka is "Väinämöinen's Belt" (Väinämöisen vyö), with the stars "hanging" from the Belt known as "Kaleva's sword" (Kalevanmiekka).

There are claims in popular media that the Adorant from the Geißenklösterle cave, an ivory carving estimated to be 35,000 to 40,000 years old, is the first known depiction of the constellation. Scholars dismiss such interpretations, saying that perceived details such as a belt and sword derive from preexisting features in the grain structure of the ivory.

The Babylonian star catalogues of the Late Bronze Age name Orion MULSIPA.ZI.AN.NA,[note 1] "The Heavenly Shepherd" or "True Shepherd of Anu" – Anu being the chief god of the heavenly realms. The Babylonian constellation is sacred to Papshukal and Ninshubur, both minor gods fulfilling the role of "messenger to the gods". Papshukal is closely associated with the figure of a walking bird on Babylonian boundary stones, and on the star map the figure of the Rooster is located below and behind the figure of the True Shepherd—both constellations represent the herald of the gods, in his bird and human forms respectively.

In ancient Egypt, the stars of Orion were regarded as a god, called Sah. Because Orion rises before Sirius, the star whose heliacal rising was the basis for the solar Egyptian calendar, Sah was closely linked with Sopdet, the goddess who personified Sirius. The god Sopdu is said to be the son of Sah and Sopdet. Sah is syncretized with Osiris, while Sopdet is syncretized with Osiris' mythological wife, Isis. In the Pyramid Texts, from the 24th and 23rd centuries BC, Sah is one of many gods whose form the dead pharaoh is said to take in the afterlife.

The Armenians identified their legendary patriarch and founder Hayk with Orion. Hayk is also the name of the Orion constellation in the Armenian translation of the Bible. The Bible mentions Orion three times, naming it "Kesil" (כסיל, literally – fool): Job 9:9 ("He is the maker of the Bear and Orion"), Job 38:31 ("Can you loosen Orion's belt?"), and Amos 5:8 ("He who made the Pleiades and Orion"). This name is perhaps etymologically connected with "Kislev", the name for the ninth month of the Hebrew calendar (i.e. November–December), which, in turn, may derive from the Hebrew root K-S-L as in the words "kesel, kisla" (כֵּסֶל, כִּסְלָה, hope, positiveness), i.e. hope for winter rains. In ancient Aram, the constellation was known as Nephîlā′, and the Nephilim are said to be Orion's descendants.

In medieval Muslim astronomy, Orion was known as al-jabbar, "the giant". Orion's sixth-brightest star, Saiph, is named from the Arabic saif al-jabbar, meaning "sword of the giant". In China, Orion was one of the 28 lunar mansions, Sieu (Xiù, 宿). It is known as Shen (參), literally meaning "three", for the stars of Orion's Belt.
The Chinese character 參 (pinyin shēn) originally meant the constellation Orion (Chinese: 參宿; pinyin: shēnxiù); its Shang dynasty version, over three millennia old, contains at the top a representation of the three stars of Orion's Belt atop a man's head (the bottom portion, representing the sound of the word, was added later).

The Rigveda refers to the constellation as Mriga (the Deer). Nataraja, "the cosmic dancer", is often interpreted as a representation of Orion. Rudra, the Rigvedic form of Shiva, is the presiding deity of the Ardra nakshatra (Betelgeuse) of Hindu astrology. The Jain symbol carved in the Udayagiri and Khandagiri Caves in India in the 1st century BCE bears a striking resemblance to Orion.

Bugis sailors identified the three stars in Orion's Belt as tanra tellué, meaning "sign of three". The Seri people of northwestern Mexico call the three stars in Orion's Belt Hapj (a name denoting a hunter), which consists of three stars: Hap (mule deer), Haamoja (pronghorn), and Mojet (bighorn sheep). Hap is in the middle and has been shot by the hunter; its blood has dripped onto Tiburón Island. The same three stars are known in Spain and most of Latin America as "Las tres Marías" (Spanish for "The Three Marys"). In Puerto Rico, the three stars are known as "Los Tres Reyes Magos" (Spanish for "The Three Wise Men").

The Ojibwa/Chippewa Native Americans call this constellation Mesabi, meaning Big Man. To the Lakota Native Americans, Tayamnicankhu (Orion's Belt) is the spine of a bison. The great rectangle of Orion is the bison's ribs; the Pleiades star cluster in nearby Taurus is the bison's head; and Sirius in Canis Major, known as Tayamnisinte, is its tail. Another Lakota myth mentions that the bottom half of Orion, the Constellation of the Hand, represented the arm of a chief that was ripped off by the Thunder People as a punishment from the gods for his selfishness. His daughter offered to marry the person who could retrieve his arm from the sky, so the young warrior Fallen Star (whose father was a star and whose mother was human) returned the arm and married the daughter, symbolizing harmony between the gods and humanity with the help of the younger generation. The index finger is represented by Rigel; the Orion Nebula is the thumb; the Belt of Orion is the wrist; and the star Beta Eridani is the pinky finger.

The seven primary stars of Orion make up the Polynesian constellation Heiheionakeiki, which represents a child's string figure similar to a cat's cradle. Several precolonial Filipino groups referred to the belt region in particular as "balatik" (ballista), as it resembles a trap of the same name, which fires arrows by itself and is usually used for catching pigs in the bush. Spanish colonization later led to some ethnic groups referring to Orion's Belt as "Tres Marias" or "Tatlong Maria."

In Māori tradition, the star Rigel (known as Puanga or Puaka) is closely connected with the celebration of Matariki. The rising of Matariki (the Pleiades) and Rigel before sunrise in midwinter marks the start of the Māori year. In Javanese culture, the constellation is often called Lintang Waluku or Bintang Bajak, referring to the shape of a paddy-field plow.

The imagery of the Belt and Sword has found its way into popular Western culture, for example in the form of the shoulder insignia of the 27th Infantry Division of the United States Army during both World Wars, probably owing to a pun on the name of the division's first commander, Major General John F. O'Ryan.
The film distribution company Orion Pictures used the constellation as its logo. In artistic renderings, the surrounding constellations are sometimes related to Orion: he is depicted standing next to the river Eridanus with his two hunting dogs Canis Major and Canis Minor, fighting Taurus. He is sometimes depicted hunting Lepus the hare, and sometimes holding a lion's hide in his hand.

There are alternative ways to visualise Orion. From the Southern Hemisphere, Orion is oriented south-upward, and the Belt and Sword are sometimes called the saucepan or pot in Australia and New Zealand. Orion's Belt is called Drie Konings (Three Kings) or Drie Susters (Three Sisters) by Afrikaans speakers in South Africa, and the stars are referred to as les Trois Rois (the Three Kings) in Daudet's Lettres de mon moulin (1866). The appellation Driekoningen (the Three Kings) is also often found in 17th- and 18th-century Dutch star charts and seaman's guides. The same three stars are known in Spain, Latin America, and the Philippines as "Las Tres Marías" (The Three Marys), and as "Los Tres Reyes Magos" (The Three Wise Men) in Puerto Rico.

Even traditional depictions of Orion have varied greatly. Cicero drew Orion in a similar fashion to the modern depiction: the Hunter held an unidentified animal skin aloft in his right hand; his hand was represented by Omicron2 Orionis and the skin by the five stars designated Pi Orionis. Saiph and Rigel represented his left and right knees, while Eta Orionis and Lambda Leporis were his left and right feet, respectively. As in the modern depiction, Mintaka, Alnilam, and Alnitak represented his Belt. His left shoulder was represented by Betelgeuse, and Mu Orionis made up his left arm. Meissa was his head, and Bellatrix his right shoulder. The depiction of Hyginus was similar to that of Cicero, though the two differed in a few important areas: Cicero's animal skin became Hyginus's shield (Omicron and Pi Orionis), and instead of an arm marked out by Mu Orionis, the hunter holds a club (Chi Orionis). His right leg is represented by Theta Orionis and his left leg by Lambda, Mu, and Epsilon Leporis. Further Western European and Arabic depictions have followed these two models.

Future

Orion is located on the celestial equator, but it will not always be so located, owing to the effects of precession of the Earth's axis. Orion lies well south of the ecliptic, and it only happens to lie on the celestial equator because the point on the ecliptic that corresponds to the June solstice is close to the border of Gemini and Taurus, to the north of Orion. Precession will eventually carry Orion further south, and by AD 14000 Orion will be far enough south that it will no longer be visible from the latitude of Great Britain.

Further in the future, Orion's stars will gradually move away from the constellation due to proper motion. However, Orion's brightest stars all lie at a large distance from Earth on an astronomical scale—much farther away than Sirius, for example. Orion will still be recognizable long after most of the other constellations—composed of relatively nearby stars—have distorted into new configurations, with the exception of a few of its stars eventually exploding as supernovae, for example Betelgeuse, which is predicted to explode sometime in the next million years.
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Urban_sociology] | [TOKENS: 3128] |
Urban sociology

Urban sociology is the sociological study of cities and urban life. One of the field's oldest sub-disciplines, urban sociology studies and examines the social, historical, political, cultural, economic, and environmental forces that have shaped urban environments. Like most areas of sociology, urban sociologists use statistical analysis, observation, archival research, census data, social theory, interviews, and other methods to study a range of topics, including poverty, racial residential segregation, economic development, migration and demographic trends, gentrification, homelessness, blight and crime, urban decline, and neighborhood changes and revitalization. Urban sociological analysis provides critical insights that shape and guide urban planning and policy-making.

The philosophical foundations of modern urban sociology originate from the work of sociologists such as Karl Marx, Ferdinand Tönnies, Émile Durkheim, Max Weber and Georg Simmel, who studied and theorized the economic, social and cultural processes of urbanization and its effects on social alienation, class formation, and the production or destruction of collective and individual identities. These theoretical foundations were further expanded upon and analyzed by a group of sociologists and researchers who worked at the University of Chicago in the early twentieth century. In what became known as the Chicago School of sociology, the work of Robert Park, Louis Wirth and Ernest Burgess on the inner city of Chicago revolutionized not only the purpose of urban research in sociology but also the development of human geography through its use of quantitative and ethnographic research methods. The importance of the theories developed by the Chicago School within urban sociology has been both sustained and critiqued, but it remains one of the most significant historical advancements in understanding urbanization and the city within the social sciences. The discipline may draw from several fields, including cultural sociology, economic sociology, and political sociology.

Development and rise

Urban sociology rose to prominence within North American academia through a group of sociologists and theorists at the University of Chicago from the 1910s to the 1940s, in what became known as the Chicago School of sociology. The Chicago School combined sociological and anthropological theory with ethnographic fieldwork in order to understand how individuals, groups, and communities interact within urban social systems. Unlike the primarily macro-based sociology that had marked earlier subfields, members of the Chicago School placed greater emphasis on micro-scale social interactions, seeking to provide subjective meaning to how humans interact under structural, cultural and social conditions. The theory of symbolic interaction, the basis through which many methodologically groundbreaking ethnographies were framed in this period, took early shape alongside urban sociology and shaped its early methodological leanings. Symbolic interaction was forged out of the writings of early micro-sociologists George Mead and Max Weber, and sought to frame how individuals interpret symbols in everyday interactions.
With early urban sociologists framing the city as a 'superorganism', the concept of symbolic interaction aided in parsing out how individual communities contribute to the seamless functioning of the city itself. Scholars of the Chicago School originally sought to answer a single question: how did an increase in urbanism during the time of the Industrial Revolution contribute to the magnification of contemporary social problems? Sociologists centred on Chicago because of its 'tabula rasa' state, the city having expanded from a small town of 10,000 in 1860 to an urban metropolis of over two million in the next half-century. Along with this expansion came many of the era's emerging social problems, ranging from concentrated homelessness and harsh living conditions to the low wages and long hours that characterized the work of the many newly arrived European immigrants. Furthermore, unlike many other metropolitan areas, Chicago did not expand outward at the edges as predicted by early expansionist theorists, but instead 'reformatted' the space available in a concentric ring pattern. As in many modern cities, the business district occupied the city centre and was surrounded by slum and blighted neighbourhoods, which were in turn surrounded by workingmen's homes and the early forms of the modern suburbs. Urban theorists suggested that these spatially distinct regions helped to solidify and isolate class relations within the modern city, moving the middle class away from the urban core and into the privatized environment of the outer suburbs.

Due to the high concentration of first-generation immigrant families in the inner city of Chicago during the early 20th century, many prominent early studies in urban sociology focused on the transmission of immigrants' native culture roles and norms into new and developing environments. Political participation and the rise in inter-community organizations were also frequently covered in this period, with many metropolitan areas adopting census techniques that allowed information to be stored and easily accessed by participating institutions such as the University of Chicago. Park, Burgess and McKenzie, professors at the University of Chicago and three of the earliest proponents of urban sociology, developed subculture theories, which helped to explain the often-positive role of local institutions in the formation of community acceptance and social ties. When race relations break down and expansion renders one's community members anonymous, as was proposed to be occurring in this period, the inner city becomes marked by high levels of social disorganization that prevent local ties from being established and maintained in local political arenas.

The rise of urban sociology coincided with the expansion of statistical inference in the behavioural sciences, which helped ease its transition and acceptance in educational institutions along with other burgeoning social sciences. Micro-sociology courses at the University of Chicago were among the earliest and most prominent courses on urban sociological research in the United States.
Evolution of the discipline

The evolution and transition of sociological theory away from the Chicago School began to emerge in the 1970s with the publication of Claude Fischer's (1975) "Toward a Subcultural Theory of Urbanism", which incorporated Bourdieu's theories on social capital and symbolic capital within the invasion and succession framework of the Chicago School to explain how cultural groups form, expand and solidify a neighbourhood. The theme of transition by subcultures and groups within the city was further expanded by Barry Wellman's (1979) "The Community Question: The Intimate Networks of East Yorkers", which determined the function and position of the individual, institution and community in the urban landscape in relation to their community. Wellman's categorization and incorporation of community-focused theories ("Community Lost", "Community Saved", and "Community Liberated"), which centre on the structure of the urban community in shaping interactions between individuals and facilitating active participation in the local community, are explained in detail below.

Community lost: The earliest of the three theories, this concept was developed in the late 19th century to account for the rapid development of industrial patterns that seemingly caused rifts between the individual and their local community. Urbanites were claimed to hold networks that were "impersonal, transitory and segmental", maintaining ties in multiple social networks while at the same time lacking the strong ties that bound them to any specific group. This disorganization in turn caused members of urban communities to subsist almost solely on secondary affiliations with others, and rarely allowed them to rely on other members of the community for assistance with their needs.

Community saved: A critical response to the community lost theory that developed during the 1960s, the community saved argument suggests that multistranded ties often emerge in sparsely-knit communities as time goes on, and that urban communities often possess these strong ties, albeit in different forms. Especially among low-income communities, individuals have a tendency to adapt to their environment and pool resources in order to protect themselves collectively against structural changes. Over time, urban communities tend to become "urban villages", where individuals possess strong ties with only a few individuals that connect them to an intricate web of other urbanites within the same local environment.

Community liberated: A cross-section of the community lost and community saved arguments, the community liberated theory suggests that the separation of workplace, residence and familial kinship groups has caused urbanites to maintain weak ties in multiple community groups, which are further weakened by high rates of residential mobility. However, the concentrated number of environments present in the city for interaction increases the likelihood of individuals developing secondary ties, even if they simultaneously maintain distance from tightly knit communities. Primary ties that offer the individual assistance in everyday life form out of sparsely-knit and spatially dispersed interactions, with the individual's access to resources dependent on the quality of the ties they maintain within their community.

Along with the development of these theories, urban sociologists have increasingly begun to study the differences between the urban, rural and suburban environments within the last half-century.
Consistent with the community-liberated argument, researchers have in large part found that urban residents tend to maintain more spatially dispersed networks of ties than rural or suburban residents. Among lower-income urban residents, the lack of mobility and communal space within the city often disrupts the formation of social ties and lends itself to creating an unintegrated and distant community space. While the high density of networks within the city weakens relations between individuals, it increases the likelihood that at least one individual within a network can provide the primary support found among smaller and more tightly knit networks. Since the 1970s, research into social networks has focused primarily on the types of ties developed within residential environments. Bonding ties, common in tightly knit neighbourhoods, consist of connections that provide an individual with primary support, such as access to income or upward mobility through a neighbourhood organization. Bridging ties, in contrast, are the ties that weakly connect strong networks of individuals together. A group of communities concerned about the placement of a nearby highway may only be connected through a few individuals who represent their views at a community board meeting, for instance. However, as the theory surrounding social networks has developed, sociologists such as Alejandro Portes, along with researchers working within the Wisconsin model of sociological research, began placing increased emphasis on the importance of these weak ties. While strong ties are necessary for providing residents with primary services and a sense of community, weak ties bring together elements of different cultural and economic landscapes in solving problems affecting a great number of individuals. As theorist Eric Oliver notes, neighbourhoods with vast social networks are also those that most commonly rely on heterogeneous support in problem-solving, and are also the most politically active. As the suburban landscape developed during the 20th century and the outer city became a refuge for the wealthy and, later, the burgeoning middle class, sociologists and urban geographers such as Harvey Molotch, David Harvey and Neil Smith began to study the structure and revitalization of the most impoverished areas of the inner city. In their research, impoverished neighbourhoods, which often rely on tightly knit local ties for economic and social support, were found to be targeted by developers for gentrification, which displaced residents living within these communities. Political experimentation in providing these residents with semi-permanent housing and structural support – ranging from Section 8 housing to Community Development Block Grant programs – has in many cases eased the transition of low-income residents into stable housing and employment. Yet research covering the social impact of forced movement among these residents has noted the difficulties individuals often have with maintaining a level of economic comfort, difficulties compounded by rising land values and inter-urban competition between cities as a means to attract capital investment. The interaction between inner-city dwellers and middle-class passersby in such settings has also been a topic of study for urban sociologists. An article in a September 2015 issue of City & Community (C&C) discusses future plans for the discipline and the research needed in the years ahead. The article proposes certain steps in order to react to urban trends, create a safer environment, and prepare for future urbanization. 
The steps include: publishing more C&C articles, more research towards segregation in metropolitan areas, focusing on trends and patterns in segregation and poverty, decreasing micro-level segregation, and research towards international urbanization changes. However, in a June 2018 issue of C&C, Mike Owen Benediktsson argues that spatial inequality – an uneven distribution of resources across specific spaces – would be problematic for the future of urban sociology. Problems in neighbourhoods arise from political forms and issues. He argues that attention should be directed more at the relationship between spaces than at the expansion of more urban cities. On the opposite side of the Atlantic Ocean, in Europe, urban sociology is growing, with large debates coordinated by ESA RN 37. In Paris, the so-called Urban School of Sciences Po has fundamentally advanced the understanding of how cities are governed through incomplete, conflictual, and negotiated arrangements, as shown in the work of Patrick Le Galès on fragmented, discontinuous and non-linear urban and metropolitan governance. Tommaso Vitale has theorized mechanisms such as contentious embeddedness and decommodification to explain how marginalized groups interact with institutions, policies, and urban spaces. Criticism Many theories in urban sociology have been criticized, with criticism most prominently directed toward the ethnocentric approaches taken by many early theorists that laid the groundwork for urban studies throughout the 20th century. Early theories that sought to frame the city as an adaptable “superorganism” often disregarded the intricate roles of social ties within local communities, suggesting that the urban environment itself rather than the individuals living within it controlled the spread and shape of the city. For impoverished inner-city residents, the role of highway planning policies and other government-spurred initiatives instituted by the planner Robert Moses and others has been criticized as unsightly and unresponsive to residential needs. The slow development of empirically based urban research reflects the failure of local urban governments to adapt and ease the transition of local residents to the short-lived industrialization of the city. Some modern social theorists have also been critical of the apparent shortsightedness that urban sociologists have shown toward the role of culture in the inner city. William Julius Wilson has criticized theory developed throughout the middle of the twentieth century as relying primarily on the structural roles of institutions, and not on how culture itself affects common aspects of inner-city life such as poverty. The distance shown toward this topic, he argues, presents an incomplete picture of inner-city life. Urban sociological theory is viewed as one important aspect of sociology. The concept of urban sociology as a whole has often been challenged and criticized by sociologists over time. Several different aspects – race, land, resources and the like – have broadened the idea. Manuel Castells questioned whether urban sociology even exists and devoted 40 years' worth of research to redefining and reorganizing the concept. With a growing population and the majority of Americans living in suburbs, Castells believes that most researchers focus their work in urban sociology on cities, neglecting the other major communities of suburbs, towns, and rural areas. 
He also believes that urban sociologists have overcomplicated the term urban sociology and should create a clearer and more organized explanation for their studies, arguing that a "Sociology of Settlements" would cover most issues around the term. Urban sociologists focus on a range of concepts such as peri-urban settlements, human overpopulation, and field studies of urban social interaction. Perry Burnett, who studied at the University of Southern Indiana, researched the idea of urban sprawl and city optimization for the human population. Some sociologists study relationships between urban patterns/policy and social issues such as racial discrimination or high income taxes. See also References |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Joke#cite_note-FOOTNOTECarrell2008308-28] | [TOKENS: 8460] |
Contents Joke A joke is a display of humour in which words are used within a specific and well-defined narrative structure to make people laugh and is usually not meant to be interpreted literally. It usually takes the form of a story, often with dialogue, and ends in a punchline, whereby the humorous element of the story is revealed; this can be done using a pun or other type of word play, irony or sarcasm, logical incompatibility, hyperbole, or other means. Linguist Robert Hetzron offers the definition: A joke is a short humorous piece of oral literature in which the funniness culminates in the final sentence, called the punchline… In fact, the main condition is that the tension should reach its highest level at the very end. No continuation relieving the tension should be added. As for its being "oral," it is true that jokes may appear printed, but when further transferred, there is no obligation to reproduce the text verbatim, as in the case of poetry. It is generally held that jokes benefit from brevity, containing no more detail than is needed to set the scene for the punchline at the end. In the case of riddle jokes or one-liners, the setting is implicitly understood, leaving only the dialogue and punchline to be verbalised. However, subverting these and other common guidelines can also be a source of humour—the shaggy dog story is an example of an anti-joke; although presented as a joke, it contains a long drawn-out narrative of time, place and character, rambles through many pointless inclusions and finally fails to deliver a punchline. Jokes are a form of humour, but not all humour is in the form of a joke. Some humorous forms which are not verbal jokes are: involuntary humour, situational humour, practical jokes, slapstick and anecdotes. Identified as one of the simple forms of oral literature by the Dutch linguist André Jolles, jokes are passed along anonymously. They are told in both private and public settings; a single person tells a joke to his friend in the natural flow of conversation, or a set of jokes is told to a group as part of scripted entertainment. Jokes are also passed along in written form or, more recently, through the internet. Stand-up comics, comedians and slapstick performers work with comic timing and rhythm in their performance, and may rely on actions as well as on the verbal punchline to evoke laughter. This distinction has been formulated in the popular saying "A comic says funny things; a comedian says things funny".[note 1] History in print Jokes do not belong to refined culture, but rather to the entertainment and leisure of all classes. As such, any printed versions were considered ephemera, i.e., temporary documents created for a specific purpose and intended to be thrown away. Many of these early jokes deal with scatological and sexual topics, entertaining to all social classes but not to be valued and saved.[citation needed] Various kinds of jokes have been identified in ancient pre-classical texts.[note 2] The oldest identified joke is an ancient Sumerian proverb from 1900 BC containing toilet humour: "Something which has never occurred since time immemorial; a young woman did not fart in her husband's lap." It was recorded in the Old Babylonian period, and the joke may go as far back as 2300 BC. The second oldest joke found, discovered on the Westcar Papyrus and believed to be about Sneferu, was from Ancient Egypt c. 1600 BC: "How do you entertain a bored pharaoh? 
You sail a boatload of young women dressed only in fishing nets down the Nile and urge the pharaoh to go catch a fish." The tale of the three ox drivers from Adab completes the three known oldest jokes in the world. This is a comic triple from Adab, dating back to 1200 BC. It concerns three men seeking justice from a king on the matter of ownership over a newborn calf, for whose birth they all consider themselves to be partially responsible. The king seeks advice from a priestess on how to rule the case, and she suggests a series of events involving the men's households and wives. The final portion of the story (which included the punchline) has not survived intact, though legible fragments suggest it was bawdy in nature. Jokes can be notoriously difficult to translate from language to language, particularly puns, which depend on specific words and not just on their meanings. For instance, Julius Caesar once sold land at a surprisingly cheap price to his lover Servilia, who was rumoured to be prostituting her daughter Tertia to Caesar in order to keep his favour. Cicero remarked that "conparavit Servilia hunc fundum tertia deducta." The punny phrase, "tertia deducta", can be translated as "with one-third off (in price)", or "with Tertia putting out." The earliest extant joke book is the Philogelos (Greek for The Laughter-Lover), a collection of 265 jokes written in crude ancient Greek dating to the fourth or fifth century AD. The author of the collection is obscure and a number of different authors are attributed to it, including "Hierokles and Philagros the grammatikos", just "Hierokles", or, in the Suda, "Philistion". British classicist Mary Beard states that the Philogelos may have been intended as a jokester's handbook of quips to say on the fly, rather than a book meant to be read straight through. Many of the jokes in this collection are surprisingly familiar, even though the typical protagonists are less recognisable to contemporary readers: the absent-minded professor, the eunuch, and people with hernias or bad breath. The Philogelos even contains a joke similar to Monty Python's "Dead Parrot Sketch". During the 15th century, the printing revolution spread across Europe following the development of the movable type printing press. This was coupled with the growth of literacy in all social classes. Printers turned out jestbooks along with Bibles to meet both lowbrow and highbrow interests of the populace. One early anthology of jokes was the Facetiae by the Italian Poggio Bracciolini, first published in 1470. The popularity of this jest book can be measured by the twenty editions documented for the 15th century alone. Another popular form was a collection of jests, jokes and funny situations attributed to a single character in a more connected, narrative form of the picaresque novel. Examples of this are the characters of Rabelais in France, Till Eulenspiegel in Germany, Lazarillo de Tormes in Spain and Master Skelton in England. There is also a jest book ascribed to William Shakespeare, the contents of which appear to both inform and borrow from his plays. All of these early jestbooks corroborate both the rise in the literacy of the European populations and the general quest for leisure activities during the Renaissance in Europe. The practice of printers using jokes and cartoons as page fillers was also widely used in the broadsides and chapbooks of the 19th century and earlier. 
With the increase in literacy in the general population and the growth of the printing industry, these publications were the most common forms of printed material between the 16th and 19th centuries throughout Europe and North America. Along with reports of events, executions, ballads and verse, they also contained jokes. One of the many broadsides archived in the Harvard library is described as "1706. Grinning made easy; or, Funny Dick's unrivalled collection of curious, comical, odd, droll, humorous, witty, whimsical, laughable, and eccentric jests, jokes, bulls, epigrams, &c. With many other descriptions of wit and humour." These cheap publications, ephemera intended for mass distribution, were read alone, read aloud, posted and discarded. There are many types of joke books in print today; a search on the internet provides a plethora of titles available for purchase. They can be read alone for solitary entertainment, or used to stock up on new jokes to entertain friends. Some people try to find a deeper meaning in jokes, as in "Plato and a Platypus Walk into a Bar... Understanding Philosophy Through Jokes".[note 3] However, a deeper meaning is not necessary to appreciate their inherent entertainment value. Magazines frequently use jokes and cartoons as filler for the printed page. Reader's Digest closes out many articles with an (unrelated) joke at the bottom of the article. The New Yorker was first published in 1925 with the stated goal of being a "sophisticated humour magazine" and is still known for its cartoons. Telling jokes Telling a joke is a cooperative effort; it requires that the teller and the audience mutually agree in one form or another to understand the narrative which follows as a joke. In a study of conversation analysis, the sociologist Harvey Sacks describes in detail the sequential organisation in the telling of a single joke. "This telling is composed, as for stories, of three serially ordered and adjacently placed types of sequences … the preface [framing], the telling, and the response sequences." Folklorists expand this to include the context of the joking. Who is telling what jokes to whom? And why is he telling them when? The context of the joke-telling in turn leads into a study of joking relationships, a term coined by anthropologists to refer to social groups within a culture who engage in institutionalised banter and joking. Framing is done with a (frequently formulaic) expression which keys the audience in to expect a joke. "Have you heard the one…", "Reminds me of a joke I heard…", "So, a lawyer and a doctor…"; these conversational markers are just a few examples of linguistic frames used to start a joke. Regardless of the frame used, it creates a social space and clear boundaries around the narrative which follows. Audience response to this initial frame can be acknowledgement and anticipation of the joke to follow. It can also be a dismissal, as in "this is no joking matter" or "this is no time for jokes". The performance frame serves to label joke-telling as a culturally marked form of communication. Both the performer and audience understand it to be set apart from the "real" world. 
"An elephant walks into a bar…"; a person sufficiently familiar with both the English language and the way jokes are told automatically understands that such a compressed and formulaic story, being told with no substantiating details, and placing an unlikely combination of characters into an unlikely setting and involving them in an unrealistic plot, is the start of a joke, and the story that follows is not meant to be taken at face value (i.e. it is non-bona-fide communication). The framing itself invokes a play mode; if the audience is unable or unwilling to move into play, then nothing will seem funny. Following its linguistic framing the joke, in the form of a story, can be told. It is not required to be verbatim text like other forms of oral literature such as riddles and proverbs. The teller can and does modify the text of the joke, depending both on memory and the present audience. The important characteristic is that the narrative is succinct, containing only those details which lead directly to an understanding and decoding of the punchline. This requires that it support the same (or similar) divergent scripts which are to be embodied in the punchline. The punchline is intended to make the audience laugh. A linguistic interpretation of this punchline/response is elucidated by Victor Raskin in his Script-based Semantic Theory of Humour. Humour is evoked when a trigger contained in the punchline causes the audience to abruptly shift its understanding of the story from the primary (or more obvious) interpretation to a secondary, opposing interpretation. "The punchline is the pivot on which the joke text turns as it signals the shift between the [semantic] scripts necessary to interpret [re-interpret] the joke text." To produce the humour in the verbal joke, the two interpretations (i.e. scripts) need to both be compatible with the joke text and opposite or incompatible with each other. Thomas R. Shultz, a psychologist, independently expands Raskin's linguistic theory to include "two stages of incongruity: perception and resolution." He explains that "… incongruity alone is insufficient to account for the structure of humour. […] Within this framework, humour appreciation is conceptualized as a biphasic sequence involving first the discovery of incongruity followed by a resolution of the incongruity." In the case of a joke, that resolution generates laughter. This is the point at which the field of neurolinguistics offers some insight into the cognitive processing involved in this abrupt laughter at the punchline. Studies by the cognitive science researchers Coulson and Kutas directly address the theory of script switching articulated by Raskin in their work. The article "Getting it: Human event-related brain response to jokes in good and poor comprehenders" measures brain activity in response to reading jokes. Additional studies by others in the field support more generally the theory of two-stage processing of humour, as evidenced in the longer processing time they require. In the related field of neuroscience, it has been shown that the expression of laughter is caused by two partially independent neuronal pathways: an "involuntary" or "emotionally driven" system and a "voluntary" system. 
This study adds credence to the common experience when exposed to an off-colour joke; a laugh is followed in the next breath by a disclaimer: "Oh, that's bad…" Here the multiple steps in cognition are clearly evident in the stepped response, the perception being processed just a breath faster than the resolution of the moral/ethical content in the joke. The expected response to a joke is laughter. The joke teller hopes the audience "gets it" and is entertained. This leads to the premise that a joke is actually an "understanding test" between individuals and groups. If the listeners do not get the joke, they are not understanding the two scripts which are contained in the narrative as they were intended. Or they do "get it" and do not laugh; it might be too obscene, too gross or too dumb for the current audience. A woman might respond differently to a joke told by a male colleague around the water cooler than she would to the same joke overheard in a women's lavatory. A joke involving toilet humour may be funnier told on the playground at elementary school than on a college campus. The same joke will elicit different responses in different settings. The punchline in the joke remains the same; however, it is more or less appropriate depending on the current context. The context explores the specific social situation in which joking occurs. The narrator automatically modifies the text of the joke to be acceptable to different audiences, while at the same time supporting the same divergent scripts in the punchline. The vocabulary used in telling the same joke at a university fraternity party and to one's grandmother might well vary. In each situation, it is important to identify both the narrator and the audience as well as their relationship with each other. This varies to reflect the complexities of a matrix of different social factors: age, sex, race, ethnicity, kinship, political views, religion, power relationships, etc. When all the potential combinations of such factors between the narrator and the audience are considered, then a single joke can take on infinite shades of meaning for each unique social setting. The context, however, should not be confused with the function of the joking. "Function is essentially an abstraction made on the basis of a number of contexts". In one long-term observation of men coming off the late shift at a local café, joking with the waitresses was used to ascertain sexual availability for the evening. Different types of jokes, going from general to topical into explicitly sexual humour, signalled openness on the part of the waitress for a connection. This study describes how jokes and joking are used to communicate much more than just good humour. That is a single example of the function of joking in a social setting, but there are others. Sometimes jokes are used simply to get to know someone better. What makes them laugh, what do they find funny? Jokes concerning politics, religion or sexual topics can be used effectively to gauge the attitude of the audience to any one of these topics. They can also be used as a marker of group identity, signalling either inclusion or exclusion for the group. Among pre-adolescents, "dirty" jokes allow them to share information about their changing bodies. And sometimes joking is just simple entertainment for a group of friends. 
Relationships The context of joking in turn leads to a study of joking relationships, a term coined by anthropologists to refer to social groups within a culture who take part in institutionalised banter and joking. These relationships can be either one-way or a mutual back and forth between partners. The joking relationship is defined as a peculiar combination of friendliness and antagonism. The behaviour is such that in any other social context it would express and arouse hostility; but it is not meant seriously and must not be taken seriously. There is a pretence of hostility along with a real friendliness. To put it in another way, the relationship is one of permitted disrespect. Joking relationships were first described by anthropologists within kinship groups in Africa. But they have since been identified in cultures around the world, where jokes and joking are used to mark and reinforce appropriate boundaries of a relationship. Electronic The advent of electronic communications at the end of the 20th century introduced new traditions into jokes. A verbal joke or cartoon is emailed to a friend or posted on a bulletin board; reactions include a reply email with a :-) or LOL, or a forward on to further recipients. Interaction is limited to the computer screen and for the most part solitary. While preserving the text of a joke, both context and variants are lost in internet joking; for the most part, emailed jokes are passed along verbatim. The framing of the joke frequently occurs in the subject line: "RE: laugh for the day" or something similar. The forward of an email joke can increase the number of recipients exponentially. Internet joking forces a re-evaluation of social spaces and social groups. They are no longer only defined by physical presence and locality; they also exist in the connectivity in cyberspace. "The computer networks appear to make possible communities that, although physically dispersed, display attributes of the direct, unconstrained, unofficial exchanges folklorists typically concern themselves with". This is particularly evident in the spread of topical jokes, "that genre of lore in which whole crops of jokes spring up seemingly overnight around some sensational event … flourish briefly and then disappear, as the mass media move on to fresh maimings and new collective tragedies". This correlates with the new understanding of the internet as an "active folkloric space" with evolving social and cultural forces and clearly identifiable performers and audiences. A study by the folklorist Bill Ellis documented how an evolving cycle was circulated over the internet. By accessing message boards that specialised in humour immediately following the 9/11 disaster, Ellis was able to observe in real-time both the topical jokes being posted electronically and responses to the jokes. Previous folklore research has been limited to collecting and documenting successful jokes, and only after they had emerged and come to folklorists' attention. Now, an Internet-enhanced collection creates a time machine, as it were, where we can observe what happens in the period before the risible moment, when attempts at humour are unsuccessful. Access to archived message boards also enables us to track the development of a single joke thread in the context of a more complicated virtual conversation. Joke cycles A joke cycle is a collection of jokes about a single target or situation which displays consistent narrative structure and type of humour. 
Some well-known cycles are elephant jokes using nonsense humour, dead baby jokes incorporating black humour, and light bulb jokes, which describe all kinds of operational stupidity. Joke cycles can centre on ethnic groups, professions (viola jokes), catastrophes, settings (…walks into a bar), absurd characters (wind-up dolls), or logical mechanisms which generate the humour (knock-knock jokes). A joke can be reused in different joke cycles; an example of this is the same Head & Shoulders joke refitted to the tragedies of Vic Morrow, Admiral Mountbatten and the crew of the Challenger space shuttle.[note 4] These cycles seem to appear spontaneously, spread rapidly across countries and borders only to dissipate after some time. Folklorists and others have studied individual joke cycles in an attempt to understand their function and significance within the culture. Numerous joke cycles have circulated in the recent past. As with the 9/11 disaster discussed above, cycles attach themselves to celebrities or national catastrophes such as the death of Diana, Princess of Wales, the death of Michael Jackson, and the Space Shuttle Challenger disaster. These cycles arise regularly as a response to terrible unexpected events which command the national news. An in-depth analysis of the Challenger joke cycle documents a change in the type of humour circulated following the disaster, from February to March 1986. "It shows that the jokes appeared in distinct 'waves', the first responding to the disaster with clever wordplay and the second playing with grim and troubling images associated with the event…The primary social function of disaster jokes appears to be to provide closure to an event that provoked communal grieving, by signalling that it was time to move on and pay attention to more immediate concerns". The sociologist Christie Davies has written extensively on ethnic jokes told in countries around the world. In ethnic jokes he finds that the "stupid" ethnic target in the joke is no stranger to the culture, but rather a peripheral social group (geographic, economic, cultural, linguistic) well known to the joke tellers. So Americans tell jokes about Polacks and Italians, Germans tell jokes about Ostfriesens, and the English tell jokes about the Irish. In a review of Davies' theories it is said that "For Davies, [ethnic] jokes are more about how joke tellers imagine themselves than about how they imagine those others who serve as their putative targets…The jokes thus serve to center one in the world – to remind people of their place and to reassure them that they are in it." A third category of joke cycles identifies absurd characters as the butt: for example the grape, the dead baby or the elephant. Beginning in the 1960s, social and cultural interpretations of these joke cycles, spearheaded by the folklorist Alan Dundes, began to appear in academic journals. Dead baby jokes are posited to reflect societal changes and guilt caused by widespread use of contraception and abortion beginning in the 1960s.[note 5] Elephant jokes have been interpreted variously as stand-ins for American blacks during the Civil Rights Era or as an "image of something large and wild abroad in the land captur[ing] the sense of counterculture" of the sixties. These interpretations strive for a cultural understanding of the themes of these jokes which go beyond the simple collection and documentation undertaken previously by folklorists and ethnologists. 
Classification systems As folktales and other types of oral literature became collectables throughout Europe in the 19th century (Brothers Grimm et al.), folklorists and anthropologists of the time needed a system to organise these items. The Aarne–Thompson classification system was first published in 1910 by Antti Aarne, and later expanded by Stith Thompson to become the most renowned classification system for European folktales and other types of oral literature. Its final section addresses anecdotes and jokes, listing traditional humorous tales ordered by their protagonist; "This section of the Index is essentially a classification of the older European jests, or merry tales – humorous stories characterized by short, fairly simple plots. …" Due to its focus on older tale types and obsolete actors (e.g., numbskull), the Aarne–Thompson Index does not provide much help in identifying and classifying the modern joke. A more granular classification system used widely by folklorists and cultural anthropologists is the Thompson Motif Index, which separates tales into their individual story elements. This system enables jokes to be classified according to individual motifs included in the narrative: actors, items and incidents. It does not provide a way to classify the text by more than one element at a time, even as it makes it theoretically possible to classify the same text under multiple motifs. The Thompson Motif Index has spawned further specialised motif indices, each of which focuses on a single aspect of one subset of jokes. A sampling of just a few of these specialised indices has been listed under other motif indices. Here one can select an index for medieval Spanish folk narratives, another index for linguistic verbal jokes, and a third one for sexual humour. To assist the researcher with this increasingly confusing situation, there are also multiple bibliographies of indices as well as a how-to guide on creating your own index. Several difficulties have been identified with these systems of identifying oral narratives according to either tale types or story elements. A first major problem is their hierarchical organisation; one element of the narrative is selected as the major element, while all other parts are arrayed subordinate to this. A second problem with these systems is that the listed motifs are not qualitatively equal; actors, items and incidents are all considered side-by-side. And because incidents will always have at least one actor and usually have an item, most narratives can be ordered under multiple headings. This leads to confusion about both where to order an item and where to find it. A third significant problem is that the "excessive prudery" common in the middle of the 20th century meant that obscene, sexual and scatological elements were regularly ignored in many of the indices. The folklorist Robert Georges has summed up the concerns with these existing classification systems: …Yet what the multiplicity and variety of sets and subsets reveal is that folklore [jokes] not only takes many forms, but that it is also multifaceted, with purpose, use, structure, content, style, and function all being relevant and important. Any one or combination of these multiple and varied aspects of a folklore example [such as jokes] might emerge as dominant in a specific situation or for a particular inquiry. 
It has proven difficult to organise all different elements of a joke into a multi-dimensional classification system which could be of real value in the study and evaluation of this (primarily oral) complex narrative form. The General Theory of Verbal Humour or GTVH, developed by the linguists Victor Raskin and Salvatore Attardo, attempts to do exactly this. This classification system was developed specifically for jokes and later expanded to include longer types of humorous narratives. Six different aspects of the narrative, labelled Knowledge Resources or KRs, can be evaluated largely independently of each other, and then combined into a concatenated classification label. These six KRs of the joke structure are Script Opposition (SO), Logical Mechanism (LM), Situation (SI), Target (TA), Narrative Strategy (NS) and Language (LA). As development of the GTVH progressed, a hierarchy of the KRs was established to partially restrict the options for lower-level KRs depending on the KRs defined above them. For example, a lightbulb joke (SI) will always be in the form of a riddle (NS). Outside of these restrictions, the KRs can create a multitude of combinations, enabling a researcher to select jokes for analysis which contain only one or two defined KRs. It also allows for an evaluation of the similarity or dissimilarity of jokes depending on the similarity of their labels. "The GTVH presents itself as a mechanism … of generating [or describing] an infinite number of jokes by combining the various values that each parameter can take. … Descriptively, to analyze a joke in the GTVH consists of listing the values of the 6 KRs (with the caveat that TA and LM may be empty)." This classification system provides a functional multi-dimensional label for any joke, and indeed any verbal humour. Joke and humour research Many academic disciplines lay claim to the study of jokes (and other forms of humour) as within their purview. Fortunately, there are enough jokes, good, bad and worse, to go around. The studies of jokes from each of the interested disciplines bring to mind the tale of the blind men and an elephant where the observations, although accurate reflections of their own competent methodological inquiry, frequently fail to grasp the beast in its entirety. This attests to the joke as a traditional narrative form which is indeed complex, concise and complete in and of itself. It requires a "multidisciplinary, interdisciplinary, and cross-disciplinary field of inquiry" to truly appreciate these nuggets of cultural insight.[note 6] Sigmund Freud was one of the first modern scholars to recognise jokes as an important object of investigation. In his 1905 study Jokes and their Relation to the Unconscious, Freud describes the social nature of humour and illustrates his text with many examples of contemporary Viennese jokes. His work is particularly noteworthy in this context because Freud distinguishes in his writings between jokes, humour and the comic. These are distinctions which become easily blurred in many subsequent studies where everything funny tends to be gathered under the umbrella term of "humour", making for a much more diffuse discussion. Since the publication of Freud's study, psychologists have continued to explore humour and jokes in their quest to explain, predict and control an individual's "sense of humour". Why do people laugh? Why do people find something funny? Can jokes predict character, or vice versa, can character predict the jokes an individual laughs at? What is a "sense of humour"? 
A current review of the popular magazine Psychology Today lists over 200 articles discussing various aspects of humour; in psychological jargon, the subject area has become both an emotion to measure and a tool to use in diagnostics and treatment. A new psychological assessment tool, the Values in Action Inventory developed by the American psychologists Christopher Peterson and Martin Seligman, includes humour (and playfulness) as one of the core character strengths of an individual. As such, it could be a good predictor of life satisfaction. For psychologists, it would be useful to measure both how much of this strength an individual has and how it can be measurably increased. A 2007 survey of existing tools to measure humour identified more than 60 psychological measurement instruments. These measurement tools use many different approaches to quantify humour along with its related states and traits. There are tools to measure an individual's physical response by their smile; the Facial Action Coding System (FACS) is one of several tools used to identify any one of multiple types of smiles. Or the laugh can be measured to calculate the funniness response of an individual; multiple types of laughter have been identified. It must be stressed here that smiles and laughter are not always a response to something funny. In trying to develop a measurement tool, most systems use "jokes and cartoons" as their test materials. However, no two tools use the same jokes, and using the same jokes across languages would not be feasible, so how does one determine that the assessment objects are comparable? Moving on, whom does one ask to rate the sense of humour of an individual? Does one ask the person themselves, an impartial observer, or their family, friends and colleagues? Furthermore, has the current mood of the test subjects been considered? Someone with a recent death in the family might not be much prone to laughter. Given the plethora of variants revealed by even a superficial glance at the problem, it becomes evident that these paths of scientific inquiry are mined with problematic pitfalls and questionable solutions. The psychologist Willibald Ruch has been very active in the research of humour. He has collaborated with the linguists Raskin and Attardo on their General Theory of Verbal Humour (GTVH) classification system. Their goal is to empirically test both the six autonomous classification types (KRs) and the hierarchical ordering of these KRs. Advancement in this direction would be a win-win for both fields of study; linguistics would have empirical verification of this multi-dimensional classification system for jokes, and psychology would have a standardised joke classification with which they could develop verifiably comparable measurement tools. "The linguistics of humor has made gigantic strides forward in the last decade and a half and replaced the psychology of humor as the most advanced theoretical approach to the study of this important and universal human faculty." This recent statement by one noted linguist and humour researcher describes, from his perspective, contemporary linguistic humour research. Linguists study words, how words are strung together to build sentences, how sentences create meaning which can be communicated from one individual to another, and how our interaction with each other using words creates discourse. Jokes have been defined above as oral narratives in which words and sentences are engineered to build toward a punchline. 
The linguist's question is: what exactly makes the punchline funny? This question focuses on how the words used in the punchline create humour, in contrast to the psychologist's concern (see above) with the audience's response to the punchline. The assessment of humour by psychologists "is made from the individual's perspective; e.g. the phenomenon associated with responding to or creating humor and not a description of humor itself." Linguistics, on the other hand, endeavours to provide a precise description of what makes a text funny. Two major new linguistic theories have been developed and tested in recent decades. The first was advanced by Victor Raskin in "Semantic Mechanisms of Humor", published in 1985. While being a variant on the more general concepts of the incongruity theory of humour, it is the first theory to identify its approach as exclusively linguistic. The Script-based Semantic Theory of Humour (SSTH) begins by identifying two linguistic conditions which make a text funny. It then goes on to identify the mechanisms involved in creating the punchline. This theory established the semantic/pragmatic foundation of humour as well as the humour competence of speakers.[note 7] Several years later the SSTH was incorporated into a more expansive theory of jokes put forth by Raskin and his colleague Salvatore Attardo. In the General Theory of Verbal Humour, the SSTH was relabelled as a Logical Mechanism (LM) (referring to the mechanism which connects the different linguistic scripts in the joke) and added to five other independent Knowledge Resources (KR). Together these six KRs could now function as a multi-dimensional descriptive label for any piece of humorous text. Linguistics has developed further methodological tools which can be applied to jokes: discourse analysis and conversation analysis of joking. Both of these subspecialties within the field focus on "naturally occurring" language use, i.e. the analysis of real (usually recorded) conversations. One of these studies has already been discussed above, where Harvey Sacks describes in detail the sequential organisation in telling a single joke. Discourse analysis emphasises the entire context of social joking, the social interaction which cradles the words. Folklore and cultural anthropology have perhaps the strongest claims on jokes as belonging to their bailiwick. Jokes remain one of the few remaining forms of traditional folk literature transmitted orally in western cultures. Identified as one of the "simple forms" of oral literature by André Jolles in 1930, they have been collected and studied since there were folklorists and anthropologists abroad in the lands. As a genre they were important enough at the beginning of the 20th century to be included under their own heading in the Aarne–Thompson index first published in 1910: Anecdotes and jokes. Beginning in the 1960s, cultural researchers began to expand their role from collectors and archivists of "folk ideas" to a more active role of interpreters of cultural artefacts. One of the foremost scholars active during this transitional time was the folklorist Alan Dundes. He started asking questions of tradition and transmission with the key observation that "No piece of folklore continues to be transmitted unless it means something, even if neither the speaker nor the audience can articulate what that meaning might be." In the context of jokes, this then becomes the basis for further research. Why is the joke told right now? 
Only in this expanded perspective is an understanding of its meaning to the participants possible. This questioning resulted in a blossoming of monographs to explore the significance of many joke cycles. What is so funny about absurd nonsense elephant jokes? Why make light of dead babies? In an article on contemporary German jokes about Auschwitz and the Holocaust, Dundes justifies this research: Whether one finds Auschwitz jokes funny or not is not an issue. This material exists and should be recorded. Jokes are always an important barometer of the attitudes of a group. The jokes exist and they obviously must fill some psychic need for those individuals who tell them and those who listen to them. A stimulating generation of new humour theories flourishes like mushrooms in the undergrowth: Elliott Oring's theoretical discussions on "appropriate ambiguity" and Amy Carrell's hypothesis of an "audience-based theory of verbal humor" (1993), to name just a few. In his book Humor and Laughter: An Anthropological Approach, the anthropologist Mahadev Apte presents a solid case for his own academic perspective. "Two axioms underlie my discussion, namely, that humor is by and large culture based and that humor can be a major conceptual and methodological tool for gaining insights into cultural systems." Apte goes on to call for legitimising the field of humour research as "humorology"; this would be a field of study reflecting the interdisciplinary character of humour studies. While the label "humorology" has yet to become a household word, great strides are being made in the international recognition of this interdisciplinary field of research. The International Society for Humor Studies was founded in 1989 with the stated purpose to "promote, stimulate and encourage the interdisciplinary study of humour; to support and cooperate with local, national, and international organizations having similar purposes; to organize and arrange meetings; and to issue and encourage publications concerning the purpose of the society". It also publishes Humor: International Journal of Humor Research and holds yearly conferences to promote and inform its speciality. In 1872, Charles Darwin published one of the first "comprehensive and in many ways remarkably accurate description[s] of laughter in terms of respiration, vocalization, facial action and gesture and posture" in The Expression of the Emotions in Man and Animals. In this early study Darwin raises further questions about who laughs and why they laugh; the myriad responses since then illustrate the complexities of this behaviour. To understand laughter in humans and other primates, the science of gelotology (from the Greek gelos, meaning laughter) has been established; it is the study of laughter and its effects on the body from both a psychological and physiological perspective. While jokes can provoke laughter, laughter cannot be used as a one-to-one marker of jokes because there are multiple stimuli to laughter, humour being just one of them. The other six causes of laughter listed are social context, ignorance, anxiety, derision, acting apology, and tickling. As such, the study of laughter is a secondary albeit entertaining perspective in an understanding of jokes. Computational humour is a new field of study which uses computers to model humour; it bridges the disciplines of computational linguistics and artificial intelligence. 
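Part of the GTVH's appeal to this field is that its six-KR label (described above) is directly machine-representable. The following is a minimal illustrative sketch in C; the struct layout, string encoding, example values and similarity measure are assumptions made purely for illustration, not taken from any published system:

#include <stdio.h>
#include <string.h>

/* One field per Knowledge Resource of the GTVH (Raskin and Attardo).
   Representing each KR as a plain string is a simplifying assumption. */
struct gtvh_label {
    const char *so; /* Script Opposition, e.g. "dumb/smart"       */
    const char *lm; /* Logical Mechanism; may be empty            */
    const char *si; /* Situation, e.g. "changing a lightbulb"     */
    const char *ta; /* Target; may be empty                       */
    const char *ns; /* Narrative Strategy, e.g. "riddle"          */
    const char *la; /* Language: the concrete wording chosen      */
};

/* Count how many KRs two jokes share: a crude similarity measure
   in the spirit of comparing jokes by their concatenated labels. */
static int shared_krs(const struct gtvh_label *a, const struct gtvh_label *b) {
    int n = 0;
    n += strcmp(a->so, b->so) == 0;
    n += strcmp(a->lm, b->lm) == 0;
    n += strcmp(a->si, b->si) == 0;
    n += strcmp(a->ta, b->ta) == 0;
    n += strcmp(a->ns, b->ns) == 0;
    n += strcmp(a->la, b->la) == 0;
    return n;
}

int main(void) {
    /* Per the KR hierarchy noted above, a lightbulb situation (SI)
       implies the riddle narrative strategy (NS). */
    struct gtvh_label a = { "dumb/smart", "", "changing a lightbulb",
                            "", "riddle", "How many X does it take..." };
    struct gtvh_label b = { "dumb/smart", "", "changing a lightbulb",
                            "", "riddle", "A different wording..." };
    printf("shared KRs: %d\n", shared_krs(&a, &b)); /* prints 5 */
    return 0;
}

On this account, two jokes with identical labels are variants of the same joke; the more KRs that differ, and the higher in the hierarchy the differences sit, the less similar the jokes.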
A primary ambition of this field is to develop computer programs which can both generate a joke and recognise a text snippet as a joke. Early programming attempts have dealt almost exclusively with punning because this lends itself to simple straightforward rules. These primitive programs display no intelligence; instead, they work off a template with a finite set of pre-defined punning options upon which to build. More sophisticated computer joke programs have yet to be developed. Based on our understanding of the SSTH / GTVH humour theories, it is easy to see why. The linguistic scripts (a.k.a. frames) referenced in these theories include, for any given word, a "large chunk of semantic information surrounding the word and evoked by it [...] a cognitive structure internalized by the native speaker". These scripts extend much further than the lexical definition of a word; they contain the speaker's complete knowledge of the concept as it exists in his world. As insentient machines, computers lack the encyclopaedic scripts which humans gain through life experience. They also lack the ability to gather the experiences needed to build wide-ranging semantic scripts and understand language in a broader context, a context that any child picks up in daily interaction with his environment. Further development in this field must wait until computational linguists have succeeded in programming a computer with an ontological semantic natural language processing system. It is only "the most complex linguistic structures [which] can serve any formal and/or computational treatment of humor well". Toy systems (i.e. dummy punning programs) are completely inadequate to the task. Despite the fact that the field of computational humour is small and underdeveloped, it is encouraging to note the many interdisciplinary efforts which are currently underway. See also Notes References Further reading |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Trope_(literature)] | [TOKENS: 1792] |
Contents Literary device In writing and speaking, a literary device, literary technique, rhetorical device, stylistic device, or trope is any deliberate strategy of using language that an author or orator employs to be more effective at achieving some purpose. This purpose may be to focus or guide the audience's attention, to make the language or its content memorable, or to evoke a particular emotional, rational, aesthetic, or other response. Literary devices are classifiable into sub-categories, such as narrative devices, poetic devices, argumentative devices, linguistic schemes or templates, or other techniques distinct to certain forms of language. They can be difficult to cleanly classify, however, as many are common across multiple such forms and can intersect under various categories, such as figures of speech. Terminology In literature, a device is a common term for any intentional strategy of language use. The word trope originally meant an artistic effect realized with figurative language, or "a substitution of a word or phrase by a less literal word or phrase". Semantic change has expanded the definition of trope to also describe a writer's usage of commonly recurring or overused devices (including types of characters and situations), motifs, and clichés in a work of creative literature. Trope entered English from Latin tropus, 'figure of speech', itself derived from the Koine Greek τρόπος (tropos), 'a turn, a change'. The term figure of speech, or even just figure, has two related literary meanings: a broader technical meaning that includes both tropes (as defined narrowly above) and schemes, and a colloquial meaning that carries just the specific sense of trope: any instance of figurative or non-literal language. Likewise, rhetorical device is used as a simple synonym for any literary device, though more narrowly it may refer to a technique specifically of persuasive or argumentative language usage (rhetoric). Rhetorical devices, in this sense, aim to make a position or argument more compelling, emotionally or otherwise, or to prompt the audience to take action.[page needed] Narrative devices Various literary devices are specifically applied to enhance narratives and storytelling. Some examples include: Poetic and sound-based devices Sonic language, the communication of content more complexly, quickly, or artistically through a reliance on sound or through evoking sounds in the imagination, is often a defining feature of poetry. It delivers messages to the audience by prompting specific reactions through auditory perception.[page needed] Here are some examples: Rhetorical and argumentative devices Rhetoric is the art of persuasion, so, strictly speaking, rhetorical devices or rhetorical figures are techniques of language used to persuade people. Traditionally, rhetorical devices have been classified into three broad categories by what they appeal to: emotions, logic, or the writer's or speaker's credibility (i.e. pathos, logos, and ethos, respectively). In the following list, rhetorical device is used narrowly to mean any such device at the phrase- or sentence-level that departs from ordinary or literal language "mainly by the arrangement of their words to achieve special effects, and not [or not merely], like metaphors and other tropes, by a radical change in the meaning of the words themselves". Often they relate to how new arguments are introduced into the text or how arguments are emphasized. 
Figurative language An instance of figurative language (sometimes also called a figure of speech or trope in their narrower meanings) is any way of wording something other than the ordinary literal way, often to provide some heightened effect, more complex meaning, or deeper connection. American literary theorist Kenneth Burke has called metaphor, metonymy, synecdoche and irony the "four master tropes", due to their frequency in everyday discourse. Examples of figurative language include: Irony is the figure of speech in which a speaker uses words intended to express a meaning that is the direct opposite of those words. This is the simplest form of irony, in which a speaker says the opposite of what he or she intends. There are several forms, including euphemism, understatement, sarcasm, and some forms of humor. Situational irony is when the author creates a surprising event or situation that is the exact opposite of what the reader would expect, often creating humor or an eerie feeling. For example, in Steinbeck's novel The Pearl, the reader may think that Kino and Juana would become happy and successful after discovering the "Pearl of the World", with all its value. However, their lives change dramatically for the worse after discovering it. Similarly, in Shakespeare's Hamlet, the title character almost kills King Claudius at one point but resists because Claudius is praying and therefore may go to heaven. As Hamlet wants Claudius to go to hell, he waits. A few moments later, after Hamlet leaves the stage, Claudius reveals to the audience that he doesn't mean his prayers ("words without thoughts never to heaven go"), so Hamlet could have killed him after all. Dramatic irony is when the audience knows something important about the story that one or more characters in the story do not know. For example, in William Shakespeare's Romeo and Juliet, the drama of Act V comes from the fact that the audience knows Juliet is alive, but Romeo thinks she's dead. If the audience had thought, like Romeo, that she was dead, the scene would not have had anywhere near the same power. Likewise, in Edgar Allan Poe's "The Tell-Tale Heart", the energy at the end of the story comes from the fact that we know the narrator killed the old man, while the guests are oblivious. If we were as oblivious as the guests, there would be virtually no point in the story. Schemes A linguistic scheme is a discourse-level literary device relying on intentional relations, or the exact ordering, of words inside phrases, clauses, and sentences. Word repetition rhetorical devices operate via repeating words or phrases in various ways, usually for emphasis. Some types include: Word relation rhetorical devices operate via deliberate connections between words within a sentence. He caught the train and a bad cold. I held my breath and the door for you. Dumbledore was striding serenely across the room wearing long midnight-blue robes and a perfectly calm expression. — J. K. Rowling, Harry Potter and the Order of the Phoenix Fred excelled at sports; Harvey at eating; Tom with girls. Friends, Romans, countrymen, lend me your ears. — William Shakespeare, Julius Caesar 3.2 General linguistic choices Diction is the choice of specific words to communicate not only meaning, but emotion as well. Authors writing their texts consider not only a word's denotation but also its connotation. 
For example, a person may be described as stubborn or tenacious, both of which have the same basic meaning but are opposite in terms of their emotional background (the first is an insult, while the second is a compliment). Similarly, a bargain-seeker may be described as either thrifty (compliment) or stingy (insult). An author's diction is extremely important in establishing the narrator's tone, or attitude. Sentences can be long or short; constructed in the active voice or passive voice; and composed as simple, compound, complex, or compound-complex. They may also include such techniques as inversion or such structures as appositive phrases, verbal phrases (gerund, participle, and infinitive), and subordinate clauses (noun, adjective, and adverb). These tools can be highly effective in achieving an author's purpose. An example is "The ghetto was ruled by neither German nor Jew; it was ruled by delusion" from Night by Elie Wiesel. In this sentence, Wiesel uses two parallel independent clauses written in the passive voice. The first clause establishes suspense about who rules the ghetto, and then the first few words of the second clause set up the reader with the expectation of an answer, which is metaphorically revealed only in the final word of the sentence. Verbs, which provide actions (and states of being) in a sentence, can be modified in a variety of ways in languages like English, including grammatical tense, grammatical aspect, and grammatical mood. There are three basic tenses: past, present, and future. There are three main aspects: simple, perfect, and progressive. The perfect and progressive aspects convey information not strictly about the time period of a sentence but about how its action unfolds or changes across time. There are many moods (also called modes), with some important ones being: the indicative/declarative mood (ordinary statements that provide information or description), imperative mood (commands), and interrogative mood (questions). Other moods include the affirmative, negative, emphatic, conditional, and subjunctive. See also References External links |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/C*] | [TOKENS: 254] |
Contents C* C* (or C-star) is a data-parallel superset of ANSI C with synchronous semantics. History It was developed in 1987 as an alternative language to *Lisp and CM-Fortran for the Connection Machine CM-2 and above. The language C* adds to C a "domain" data type and a selection statement for parallel execution in domains. For the CM-2 models, the C* compiler translated the code into serial C, calling PARIS (Parallel Instruction Set) functions, and passed the resulting code to the front-end computer's native compiler. The resulting executables were executed on the front-end computer, with PARIS calls being executed on the Connection Machine. On the CM-5 and CM-5E, parallel C* code was executed in SIMD fashion on processing elements, whereas serial code was executed on the PM (Partition Manager) node, with the PM acting as a "front end" when compared directly to a CM-2. The latest version of C* as of 27 August 1993 is 6.x. An unimplemented language dubbed "Parallel C" (not to be confused with Unified Parallel C) influenced the design of C*. Dataparallel-C was based on C*. References |
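To make the flavor of the language concrete, here is a minimal sketch of a data-parallel addition. It is written in the style of the later 6.x revision, in which the original "domain" construct had evolved into shapes; the exact syntax shown should be treated as an approximation for illustration, not a verified listing.

    /* Illustrative C* (6.x-style) fragment: elementwise addition
       performed synchronously at every position of a parallel shape. */
    shape [8192]cells;        /* declare 8192 parallel positions          */
    int:cells a, b, c;        /* one instance of each int per position    */

    int main(void)
    {
        with (cells)          /* select the shape: the body executes once */
            c = a + b;        /* per position, in SIMD lockstep           */
        return 0;
    }

On a CM-2, the compiler would lower such a fragment to serial C plus PARIS calls, with the parallel work carried out on the Connection Machine as described above.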
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/XAI_(company)#cite_note-26] | [TOKENS: 1856] |
Contents xAI (company) X.AI Corp., doing business as xAI, is an American company working in the areas of artificial intelligence (AI), social media, and technology that is a wholly owned subsidiary of the American aerospace company SpaceX. Founded by Elon Musk in 2023, the company's flagship products are the generative AI chatbot named Grok and the social media platform X (formerly Twitter), the latter of which it acquired in March 2025. History xAI was founded on March 9, 2023, by Musk. As chief engineer, he recruited Igor Babuschkin, formerly associated with Google's DeepMind unit. Musk officially announced the formation of xAI on July 12, 2023. As of July 2023, xAI was headquartered in the San Francisco Bay Area. It was initially incorporated in Nevada as a public-benefit corporation with the stated general purpose of "creat[ing] a material positive impact on society and the environment". By May 2024, it had dropped the public-benefit status. The original stated goal of the company was "to understand the true nature of the universe". In November 2023, Musk stated that "X Corp investors will own 25% of xAI". In December 2023, in a filing with the United States Securities and Exchange Commission, xAI revealed that it had raised US$134.7 million in outside funding out of a total of up to $1 billion. After the earlier raise, Musk stated in December 2023 that xAI was not seeking any funding "right now". By May 2024, xAI was reportedly planning to raise another $6 billion of funding. Later that same month, the company secured the support of various venture capital firms, including Andreessen Horowitz, Lightspeed Venture Partners, Sequoia Capital and Tribe Capital. As of August 2024, Musk was diverting a large number of Nvidia chips that had been ordered by Tesla, Inc. to X and xAI. On December 23, 2024, xAI raised an additional $6 billion in a private funding round supported by Fidelity, BlackRock, and Sequoia Capital, among others, making its total funding to date over $12 billion. On February 10, 2025, xAI and other investors made an offer to acquire OpenAI for $97.4 billion. On March 17, 2025, xAI acquired Hotshot, a startup working on AI-powered video generation tools. On March 28, 2025, Musk announced that xAI had acquired sister company X Corp., the developer of social media platform X (formerly known as Twitter), which Musk had previously acquired in October 2022. The deal, an all-stock transaction, valued X at $33 billion, with a full valuation of $45 billion when factoring in $12 billion in debt. Meanwhile, xAI itself was valued at $80 billion. Both companies were combined into a single entity called X.AI Holdings Corp. On July 1, 2025, Morgan Stanley announced that it had raised $5 billion in debt for xAI and that xAI had separately raised $5 billion in equity. The debt consists of secured notes and term loans. Morgan Stanley took no stake in the debt. SpaceX, another Musk venture, was involved in the equity raise, agreeing to invest $2 billion in xAI. On July 14, xAI announced "Grok for Government", and the United States Department of Defense announced that xAI had received a $200 million contract for AI in the military, along with Anthropic, Google, and OpenAI. On September 12, xAI laid off 500 data annotation workers. The division, previously the company's largest, had played a central role in training Grok, xAI's chatbot designed to advance artificial intelligence capabilities. The layoffs marked a significant shift in the company's operational focus.
In June 2024, the Greater Memphis Chamber announced that xAI was planning to build Colossus, the world's largest supercomputer, in Memphis, Tennessee. After 122 days of construction, the supercomputer went fully operational in December 2024. Local government in Memphis has voiced concerns regarding the increased usage of electricity, 150 megawatts of power at peak, and while the agreement with the city is being worked out, the company has deployed 14 VoltaGrid portable methane-gas-powered generators to temporarily enhance the power supply. Environmental advocates said that the gas-burning turbines emit large quantities of gases causing air pollution, and that xAI has been operating the turbines illegally without the necessary permits; the Southern Environmental Law Center has stated the turbines produce about 2,000 tons of nitrogen oxide emissions annually. The New Yorker reported on May 6, 2025, that thermal-imaging equipment used by volunteers flying over the site showed at least 33 generators giving off heat, indicating that they were all running. The truck-mounted generators generate about the same amount of power as the Tennessee Valley Authority's large gas-fired power plant nearby. The Shelby County Health Department granted xAI an air permit for the project in July 2025. On November 26, 2025, Elon Musk announced plans to build a solar farm near Colossus with an estimated output of 30 megawatts of electricity, about 10% of the data center's estimated power use. xAI has continually expanded its infrastructure, purchasing a third building on December 30, 2025, to boost its training capacity to nearly 2 gigawatts of compute power; the expansion is driven by xAI's commitment to compete with OpenAI's ChatGPT and Anthropic's Claude models. Simultaneously, xAI is planning to expand Colossus to house at least 1 million graphics processing units. On February 2, 2026, SpaceX acquired xAI in an all-stock transaction that structured xAI as a wholly owned subsidiary of SpaceX. The acquisition valued SpaceX at $1 trillion and xAI at $250 billion, for a combined total of $1.25 trillion. On February 11, 2026, xAI was restructured following the SpaceX acquisition, leading to some layoffs. The restructuring organized xAI into four primary development teams, one for the Grok app and others for features such as Grok Imagine, with Grokipedia, X, and API features falling under smaller teams. Products In July 2023, Musk said that a politically correct AI would be "incredibly dangerous" and misleading, citing as an example the fictional HAL 9000 from the 1968 film 2001: A Space Odyssey; xAI, he said, would instead be "maximally truth-seeking". Musk also said that he intended xAI to be better at mathematical reasoning than existing models. On November 4, 2023, xAI unveiled Grok, an AI chatbot integrated with X. xAI stated that when the bot was out of beta, it would only be available to X's Premium+ subscribers. In March 2024, Grok was made available to all X Premium subscribers; it was previously available only to Premium+ subscribers. On March 17, 2024, xAI released Grok-1 as open source. On March 29, 2024, Grok-1.5 was announced, with "improved reasoning capabilities" and a context length of 128,000 tokens. On April 12, 2024, Grok-1.5 Vision (Grok-1.5V) was announced. On August 14, 2024, Grok-2 was made available to X Premium subscribers. It is the first Grok model with image generation capabilities.
On October 21, 2024, xAI released an application programming interface (API). On December 9, 2024, xAI released a text-to-image model named Aurora. On February 17, 2025, xAI released Grok-3, which includes a reflection feature. xAI also introduced a web-search function called DeepSearch. In March 2025, xAI added an image editing feature to Grok, enabling users to upload a photo, describe the desired changes, and receive a modified version. Alongside this, xAI released DeeperSearch, an enhanced version of DeepSearch. On July 9, 2025, xAI unveiled Grok-4. A high-performance version of the model called Grok Heavy was also unveiled, with access at the time costing $300 per month. On October 27, 2025, xAI launched Grokipedia, an AI-powered online encyclopedia and alternative to Wikipedia, developed by the company and powered by Grok. Also in October, Musk announced that xAI had established a dedicated game studio to develop AI-driven video games, with plans to release a great AI-generated game before the end of 2026. Valuation See also Notes References External links |
======================================== |
[SOURCE: https://he.wikipedia.org/wiki/Grand_Theft_Auto:_Chinatown_Wars] | [TOKENS: 966] |
Contents Grand Theft Auto: Chinatown Wars Grand Theft Auto: Chinatown Wars is an open-world action game developed by Rockstar Leeds in collaboration with Rockstar North and published by Rockstar Games. The game was released for the Nintendo DS in March 2009, for the PSP in October 2009, for iOS in January 2010, and for Android and Fire OS devices in December 2014. It is the 13th game in the Grand Theft Auto series. Plot summary The story, set in 2009 in the fictional Liberty City (based on New York), follows a young Triad member named Huang Lee, from whom unknown assailants steal a family heirloom: a sword presented by Huang's late father. After surviving the attack, Huang sets out to recover the sword and take revenge on those responsible for the theft, which draws him into a power struggle in Liberty City and ultimately leads him to uncover the truth about his father's murder. Gameplay The game was designed fundamentally around players interacting with objects on the DS and smartphone systems through their touchscreens, and it offers gameplay elements that had not appeared in the Grand Theft Auto series before. Its most notable element is the ability to buy drugs from suppliers and sell them to dealers to earn money, which proved controversial after the game's release. Despite this, the game received largely positive reviews from critics. External links |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Metal_Gear_Solid_(1998_video_game)] | [TOKENS: 8161] |
Contents Metal Gear Solid (1998 video game) Metal Gear Solid is a 1998 action-adventure stealth game developed and published by Konami for the PlayStation. It was directed, produced, and written by Hideo Kojima, and follows the MSX2 games Metal Gear and Metal Gear 2: Solid Snake. Players control Solid Snake, a soldier who infiltrates a nuclear weapons facility to neutralize the terrorist threat from FOXHOUND, a renegade special forces unit. Snake must liberate hostages and stop the terrorists from launching a nuclear strike. Cinematic cutscenes were rendered using the in-game engine and graphics, and voice acting is used throughout. Metal Gear Solid was unveiled at the 1996 Tokyo Game Show and demonstrated at trade shows including the 1997 Electronic Entertainment Expo. Metal Gear Solid received unanimous acclaim. Regarded as one of the greatest and most important video games ever made, it helped popularize the stealth genre and in-engine cinematic cutscenes. It sold more than seven million copies worldwide and shipped 12 million demos. It was followed by an expanded version for PlayStation and Windows, Metal Gear Solid: Integral (1999); media adaptations including a radio drama, comics and novels; and numerous sequels, starting with Metal Gear Solid 2: Sons of Liberty (2001). It has been rereleased on multiple formats, and a remake, Metal Gear Solid: The Twin Snakes, was released for GameCube in 2004. Gameplay The player must navigate the protagonist, Solid Snake, through a nuclear weapons facility without being detected by enemies. An on-screen radar provides the player with the location of nearby enemies and their fields of vision. When Snake moves into an enemy's field of vision, he triggers an "alert mode" in which enemies converge on him. The player must then hide until "evasion mode" begins; when its countdown reaches zero, the game returns to "infiltration mode", where enemies are no longer suspicious. The radar cannot be used in alert or evasion mode. In addition to the stealth gameplay, set-piece sequences entail firefights between the player and enemies. To remain undetected, the player can perform techniques that make use of Snake's abilities and the environment, such as crawling under objects, using boxes as cover, ducking or hiding around walls, and making noise to distract enemies. Snake can also make use of many items and gadgets, such as infra-red goggles and a cardboard box disguise. The emphasis on stealth promotes a less violent form of gameplay, as fights against large groups of enemies will often result in severe damage to Snake. Despite the switch to 3D, the game is still played primarily from an overhead perspective similar to the original 2D Metal Gear games. However, the camera angle will change during certain situations, such as a corner view when Snake flattens himself to a wall next to an open space, or into first-person when crawling under tight spaces or when equipping certain items such as the binoculars or a sniper rifle. The player can also use the first-person view while remaining idle to look around Snake's surroundings and see what is ahead of him. Progress is punctuated by cutscenes and radio conversations, as well as encounters with bosses. To progress, players must discover the weaknesses of each boss and defeat them.
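The alert/evasion/infiltration cycle described above behaves like a small finite-state machine. The following C sketch is purely illustrative: the state names, countdown length, and update function are assumptions made for exposition, not the game's actual logic.

    #include <stdbool.h>

    /* Illustrative model of the detection cycle: being spotted forces
       alert mode; escaping sight starts an evasion countdown; when the
       countdown ends, infiltration mode resumes. */
    enum mode { INFILTRATION, ALERT, EVASION };

    struct detection {
        enum mode state;
        int countdown;                      /* ticks left in evasion mode */
    };

    void tick(struct detection *d, bool snake_seen)
    {
        if (snake_seen) {
            d->state = ALERT;               /* spotted: enemies converge  */
        } else if (d->state == ALERT) {
            d->state = EVASION;             /* hidden: countdown begins   */
            d->countdown = 30;              /* assumed countdown length   */
        } else if (d->state == EVASION && --d->countdown <= 0) {
            d->state = INFILTRATION;        /* enemies calm down again    */
        }
    }

    bool radar_available(const struct detection *d)
    {
        return d->state == INFILTRATION;    /* radar disabled otherwise   */
    }

Calling tick once per game update with a flag indicating whether any enemy currently sees Snake reproduces the cycle as described: the radar query returns true only once the facility has settled back into infiltration mode.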
Play controls and strategies can also be accessed via the Codec, a radio communication device where advice is delivered from Snake's support team; for example, the support team may chastise Snake for not saving his progress often enough, or explain his combat moves in terms of which buttons to press on the gamepad. The Codec is also used to provide exposition on the game's backstory. In addition to the main story, there is also a VR training mode in which the player can test their sneaking skills in a series of artificially constructed environments. This mode is divided into three main categories (practice, time attack, and gun shooting), each consisting of ten stages. After completing all 30 stages, a survival mission is unlocked in which the player must sneak their way through ten consecutive stages under a seven-minute limit. Synopsis Metal Gear Solid takes place in an alternate history in which the Cold War continued into the 1990s, ending at some point near the end of the 20th century. The game's events take place years after those of Metal Gear 2: Solid Snake, and form the third chapter in an overarching plot concerning the character of Solid Snake. The protagonist is Solid Snake, a legendary infiltrator and saboteur. During the mission, Snake receives support and advice via codec radio. Colonel Roy Campbell, Solid Snake's former commanding officer, supports Snake with information and tactics. While he initially keeps some secrets from Snake, he gradually reveals them. He is joined by Naomi Hunter, who gives medical advice; Nastasha Romanenko, who provides item and weapon tips; Master Miller, a former drill instructor and survival coach; and Mei Ling, who invented the Soliton radar system used in the mission and is also in charge of mission data; the player can call her to save the game. The main antagonist of the game is Liquid Snake, leader of a now-terrorist splinter cell of the organization FOXHOUND, and genetic counterpart to Solid Snake. An elite special forces unit, FOXHOUND contains experts specializing in different tasks. Members are Revolver Ocelot, a Western-style gunslinger and expert interrogator whose weapon of choice is the Colt Single Action Army; Sniper Wolf, a preternatural sniper; Vulcan Raven, a hulking Alaskan shaman armed with an M61 Vulcan torn from a downed F-16; Psycho Mantis, a psychic profiler and psychokinesis expert; and Decoy Octopus, a master of disguise. Other characters include Meryl Silverburgh, Colonel Campbell's niece and a rookie soldier stationed in Shadow Moses who did not join the revolt; Dr. Hal Emmerich, the lead developer of Metal Gear REX; and Gray Fox, also known as the "Ninja", a mysterious cybernetically enhanced agent who is neither an ally nor an enemy of Snake but does oppose FOXHOUND. In 2005, the renegade genetically enhanced special forces unit FOXHOUND seizes control of a remote island in Alaska's Fox Archipelago codenamed "Shadow Moses", which houses a nuclear weapons disposal facility. FOXHOUND threatens to use the nuclear-capable mecha Metal Gear REX, being secretly tested at the facility, against the US government if they do not receive the remains of Big Boss and a ransom of $1 billion within 24 hours. Solid Snake is forced out of retirement by Colonel Roy Campbell to infiltrate the island and neutralize the threat. Snake enters the facility via an air vent and locates the first hostage, DARPA Chief Donald Anderson.
Anderson reveals that Metal Gear REX can be deactivated with a secret detonation override code, but dies of a heart attack. Colonel Campbell's niece Meryl Silverburgh, held hostage in a neighboring cell, helps Snake escape. Snake locates another hostage, ArmsTech president Kenneth Baker, but is confronted by FOXHOUND member Revolver Ocelot. Their gunfight is interrupted by a mysterious cyborg ninja who cuts off Ocelot's right hand. Baker briefs Snake on the Metal Gear project and advises him to contact Meryl, whom he gave a PAL card that might prevent the launch, but he too dies of a sudden heart attack. Over Codec, Meryl agrees to meet in the warhead disposal area on the condition that Snake contacts Metal Gear's designer, Dr. Hal "Otacon" Emmerich. En route, Snake receives an anonymous codec call from someone calling themselves "Deepthroat", warning him of a tank ambush. Snake fends off the attack from Vulcan Raven and proceeds to the rendezvous, where he locates Otacon. The ninja reappears, and Snake realizes it is his former ally Gray Fox, believed dead. Devastated to learn REX's true purpose, Otacon agrees to aid Snake remotely, using special camouflage to procure information and supplies. Snake meets Meryl and receives the PAL card. As they head for the underground base, Meryl is possessed by the psychic Psycho Mantis and pulls her gun on Snake. He disarms her and defeats Mantis, who informs Snake that he holds "a large place" in Meryl's heart. After they reach the underground passageway, Sniper Wolf ambushes them, wounds Meryl, and captures Snake. Liquid confirms Snake's suspicion that they are twin brothers. After being tortured by Ocelot, Snake is confused to discover Anderson's body in his cell, seemingly dead for days. He escapes with the help of Otacon, makes his way up the communications tower, and fends off a Hind D helicopter attack from Liquid. As he emerges onto a snowfield, he is confronted again by Sniper Wolf. He kills her, devastating Otacon, who was infatuated with her. Snake continues to REX's hangar and is ambushed again by Raven. After Snake defeats him, Raven tells Snake that the "Anderson" he conversed with was, in fact, FOXHOUND disguise artist Decoy Octopus. Infiltrating Metal Gear's hangar, Snake overhears Liquid and Ocelot preparing the REX launch sequence and uses the PAL card, but this unexpectedly activates REX. Liquid reveals that he has been impersonating Snake's advisor Master Miller and that FOXHOUND has used Snake to facilitate REX's launch. He and Snake are the product of the Les Enfants Terribles project, a 1970s government program to clone Big Boss. He also reveals to Snake the government's true reason for sending him: Snake is unknowingly carrying a weaponized "FOXDIE" virus that causes cardiac arrest in FOXHOUND members on contact, allowing the government to retrieve REX undamaged. As Liquid, in REX, battles Snake, Gray Fox appears. He reveals to Snake that he was Deepthroat, destroys REX's radome, and is crushed to death by REX. Snake destroys REX and defeats Liquid, then escapes with Meryl or Otacon via a tunnel, pursued by Liquid in a jeep. After their vehicles crash, Liquid pulls a gun on Snake but dies from FOXDIE. Colonel Campbell, briefly ousted from command, calls off a nuclear strike meant to destroy evidence of the operation and has Snake registered as killed in action to stop the US government from searching for him. Naomi Hunter, who injected Snake with the FOXDIE virus, tells him that he has an indeterminate amount of time before it kills him.
In a final call to the US President, Ocelot is revealed to have been a double agent whose mission was to steal Baker's disk of Metal Gear specifications; the call also identifies the President as the secret third clone of Big Boss. Development Director Hideo Kojima originally planned his third Metal Gear game in 1994 for the 3DO Interactive Multiplayer. Kojima was initially planning Metal Gear Solid while Policenauts (1994), an adventure game, was still in development. Conceptual artwork by Yoji Shinkawa of the characters Solid Snake, Meryl Silverburgh, who was also a character in Policenauts, and the FOXHOUND team was included in the Policenauts: Pilot Disk preceding the release of the full 3DO version of Policenauts in 1995. The game was titled Metal Gear Solid, rather than Metal Gear 3, as Kojima felt that the previous MSX2 games that he worked on were not widely known, because they were not released in North America and only the first one was released in Europe (an NES version of the first Metal Gear was released in North America, but Kojima had no involvement with it or its sequel Snake's Revenge). The word "Solid," derived from the codename of the series' protagonist Solid Snake (as well as the title of the second MSX2 game), was chosen to represent not only the fact that it was the third entry in the series but also the transition from 2D to 3D computer graphics. Considering first-person games difficult to control, the team opted to give the gameplay a 2D style by having it predominantly played from an overhead angle, while using 3D graphics and the ability to switch to first person on the fly to make it feel as though the game were taking place in a real 3D world. Development of Metal Gear Solid began in 1995 but was briefly halted due to the Great Hanshin earthquake, which caused major damage to the development studio. When development of Metal Gear Solid resumed, it was moved over to the PlayStation platform. The developers aimed for accuracy and realism while making the game enjoyable and tense. In the early stages of development, the Huntington Beach SWAT team educated the creators with a demonstration of vehicles, weapons, and explosives. Weapons expert Motosada Mori was also tapped as a technical adviser in the research, which included visits to Fort Irwin and firing sessions at Stembridge Gun Rentals. Kojima stated that "if the player isn't tricked into believing that the world is real, then there's no point in making the game." To fulfill this, adjustments were made to every detail, such as individually designed desks. The character and mecha designs were made by artist Yoji Shinkawa based on Kojima's concepts. When designing props and hardware, he first built plastic models at home, and then drew the final designs from the models. According to Shinkawa, Solid Snake's physique in this particular installment was based on Jean-Claude Van Damme, while his facial appearance was based on Christopher Walken. Konami had decided that the middle-aged appearance of Snake in the previous games did not have good commercial appeal, and opted to redesign him so that he would look younger. The characters were completed by polygonal artists using brush drawings and clay models by Shinkawa. According to Kojima, "the ninja's cloaking effect is the result of a bug. Of course, it wasn't totally coincidence since we wanted that effect anyway, but we did get a somewhat unexpected result."
Kojima wanted greater interaction with objects and the environment, such as allowing the player to hide bodies in a storage compartment. Additionally, he wanted "a full orchestra right next to the player": a system that made modifications, such as changes in tempo and texture, to the currently playing track, instead of switching to another pre-recorded track. Although these features could not be achieved, they were implemented in Metal Gear Solid 2: Sons of Liberty. Kojima used Lego building blocks and toy figurines to model 3D areas and see what the planned camera views would look like. The game was developed by a staff of twenty people, a small team for such a major title. Kojima preferred to have a smaller team so that he got to know everyone on the team and what they were working on, and could tell if anyone was sick or unhappy. The team did not expand to full strength until September 1996; initially, there was only a single programmer working on the game's code. Because the developers wanted the game's action to be stylized and movie-like rather than realistic, they opted not to use motion capture, instead having an artist with experience in anime design the animations by hand. A gameplay demo of Metal Gear Solid was first revealed to the public at the 1996 Tokyo Game Show and was later shown at E3 1997 as a short video. The 1997 version had several differences, including a more controllable camera and blue-colored vision cones. The demo generated significant buzz and positive reviews at the event for its game design emphasizing stealth and strategy (like earlier Metal Gear games), its presentation, and the unprecedented level of real-time 3D graphical detail for the PlayStation. The enthusiastic response to the game at E3 took Kojima by surprise and increased his expectations for the game's performance in the American market. The game's Japanese release was originally planned for late 1997 but was delayed to 1998. It was playable for the first time at the Tokyo Game Show in 1998 and released the same year in Japan with an extensive promotional campaign. Television and magazine advertisements, in-store samples, and demo giveaways contributed to a total of $8 million in promotional costs. Except for David Hayter (Solid Snake) and Doug Stone (Psycho Mantis), the English voice cast was credited under pseudonyms. Reportedly, this was done because the Screen Actors Guild's rules at the time were unclear regarding performances for video games. When the actors returned for the 2004 remake Metal Gear Solid: The Twin Snakes, they were credited with their real names. The musical score of Metal Gear Solid was composed by Konami's in-house musicians, including Kazuki Muraoka, Hiroyuki Togo, Takanari Ishiyama, Lee Jeon Myung, and Maki Kirioka. Composer and lyricist Rika Muranaka provided a song called "The Best is Yet To Come" for the game's ending credits sequence. The song is performed in Irish by Aoife Ní Fhearraigh. The main theme was composed by Tappi Iwase of the Konami Kukeiha Club. Music played in-game has a synthetic feel, with an increased pace and the introduction of strings during tense moments, and a looping style endemic to video games. Overtly cinematic music, with stronger orchestral and choral elements, appears in cutscenes. The soundtrack was released on September 23, 1998, under the King Records label. Release Metal Gear Solid was first released for the PlayStation in Japan on September 3, 1998.
The game was available in a standard edition, as well as a limited "Premium Package" edition sold in a large box that also contained a t-shirt, a pair of FOXHOUND-themed dog tags, memory card stickers, an audio CD featuring the soundtracks from the MSX2 Metal Gear games (including a few bonus arranged tracks), and a 40-page booklet, Metal Gear Solid Classified, featuring production notes, interviews with the developers, and a glossary of terminology in the game. The North American version was released a month later on October 20. Changes and additions were made to this version, such as a choice of three difficulty settings when starting a new game (with a fourth setting that is unlocked after completing the game once), an alternate tuxedo outfit for Snake (which the character wears on every third playthrough on the same save file), and a "demo theater" mode where the player can view every cutscene and radio conversation relevant to the main story. Jeremy Blaustein, who previously worked on the English localization of Snatcher for the Sega CD, wrote the English version of the script. One change in the English script was the addition of Western sources and authors to Mei Ling's pool of motivational quotes; originally the character only cited Chinese proverbs natively, providing an explanation afterward in Japanese, but this proved challenging to adapt during the translation. The games detected by Psycho Mantis when he reads the player's memory card were also changed, due to certain games (such as the Tokimeki Memorial series) not being released outside Japan. This resulted in Kojima's cameo (in which he thanks the player for supporting his work via a voiceover) being cut from the Western versions, as save data from two PlayStation games not available outside Japan, Snatcher and Policenauts, needed to be present on the player's memory card for this Easter egg to appear. The game was launched in Europe on February 22, 1999, with versions voiced in French, Italian, and German available in addition to English. A Spanish-dubbed version was later released on May 1. As in Japan, a limited edition of the game was released, although the contents of the European limited edition differ from those of the Japanese counterpart. The European Premium Package comes with the game software itself and its soundtrack album CD, along with a t-shirt, dog tags, memory card stickers, a double-sided movie-style poster and a set of postcards. The Japanese PlayStation version of Metal Gear Solid was reissued twice: once under "The Best" series and later under "PS one Books." Likewise, the American and European versions of Metal Gear Solid were reissued under the "Greatest Hits" and "Platinum" series respectively. For the 20th anniversary of the original Metal Gear in 2007, the original Metal Gear Solid was re-released in Japan as a stand-alone 20th-anniversary-themed edition, as well as being included in the 20th anniversary Metal Gear Solid Collection set, while in North America it was bundled in the Metal Gear Solid: Essential Collection set. The original Metal Gear Solid was released on the PlayStation Store for download on the PlayStation 3 and PlayStation Portable on March 21, 2008, in Japan, on June 18, 2009, in North America, and on November 19 of the same year in Europe. Metal Gear Solid is one of the twenty PlayStation games included in the PlayStation Classic released in 2018. The game is included in both the Japanese and western models of the unit in their respective versions.
Released on June 25, 1999, for the PlayStation in Japan, Metal Gear Solid: Integral is an expanded edition of the game that features the added content from the American and European versions. It replaces the Japanese voices from the original version with the English dub, offering players a choice between Japanese and English subtitles during cutscenes and Codec conversations (item descriptions, mission logs, and other text are still in Japanese). Additional content in the main game includes an alternate "sneaking suit" outfit for Meryl (which she wears when Snake is dressed in the tuxedo), a "Very Easy" difficulty setting where the player starts the mission armed with a suppressor-equipped MP5 submachine gun with infinite ammo (substituting for the FAMAS rifle in Snake's inventory), an eighth Codec frequency featuring commentary from the development team (unvoiced and in Japanese text only) on every area and boss encounter, hidden music tracks, an alternate game mode where the player controls Snake from a first-person perspective (on Normal difficulty only), an option for alternative patrol routes for enemies, and a downloadable PocketStation minigame. The Torture Event was also made easier, reducing the number of rounds to three per session on all five difficulty settings. The VR training mode is now stored on a separate third disc, known as the "VR Disc", and has been expanded into 300 missions. These new missions are divided into four main categories: Sneaking, Weapons, Advanced, and Special. The first three categories feature standard training exercises that test the player's sneaking, shooting, and combat skills, while the fourth category contains less conventional tests involving murder mysteries, giant genome soldiers, and flying saucers. One particular set of missions has the player controlling the Cyborg Ninja, unlocked by either completing a minigame on the PocketStation and uploading the data to the VR Disc or by achieving the Fox rank in the main game. Completing all 300 missions will unlock concept artwork of Metal Gear RAY, a mech that would later appear in Metal Gear Solid 2: Sons of Liberty. Additional content includes preview trailers of Metal Gear Solid from trade events and a photoshoot mode where the player can take photographs of fully expressive polygonal models of Mei Ling and Dr. Naomi after completing the main game. Famitsu magazine rated Metal Gear Solid: Integral a 34 out of 40. The VR Disc from Integral was released by itself during the same year in other regions: as Metal Gear Solid: VR Missions in North America on September 23 and as Metal Gear Solid: Special Missions in the PAL region on October 29. While the content of both VR Missions and Special Missions is virtually identical to the VR Disc, the unlocking requirements for the Ninja missions and the photoshoot mode were changed accordingly, so that save data from the main game was no longer required. The Special Missions version also adds a requirement that the user own a copy of the original Metal Gear Solid in PAL format in order to start the game: after booting Special Missions on the console, the player is asked to swap in the first disc of Metal Gear Solid to load data, then to switch back to the Special Missions disc to proceed through the rest of the game.
The Windows version of Metal Gear Solid was released in North America on September 22, 2000, in the United Kingdom on October 20, 2000, and in other European and Asian territories (excluding Japan) in late 2000. This version was published by Microsoft Games and developed by Digital Dialect. It supports the use of a keyboard or a USB game controller with at least six buttons (with the manual recommending the SideWinder Game Pad Pro). It also supports Direct3D-capable video cards, allowing for resolutions of up to 1024×768. The Windows version is labeled Metal Gear Solid on the packaging, but the actual game uses the Metal Gear Solid: Integral logo, although it differs in some respects from the PlayStation version of Integral and lacks some of its content. The most significant change was reducing the number of discs from three to two, which was done by giving each disc two separate executable files, one for the main game (mgsi.exe) and the other for the VR training portion (mgsvr.exe), thus eliminating the need for a stand-alone third disc. One notable omission was the removal of the cutscene before the Psycho Mantis battle in which he reads the player's memory card and activates the vibration function of the player's controller if a DualShock is being used, as this scene involved the use of PlayStation-specific peripherals. The method for defeating Mantis was also changed from using the second controller to simply using the keyboard (regardless of whether the player was using a game controller up to that point). Other omissions include the removal of the eighth Codec frequency (140.07), which featured written commentaries by the developers; Meryl's alternate sneaking suit outfit; and the mission logs when loading a save file. However, the Windows version adds the option to toggle moving and shooting in first-person view mode at any time regardless of difficulty setting, and players can now save their progress at any point without contacting Mei Ling through the use of quicksaves. In the VR training portion, all 300 missions, as well as the photoshoot mode, are available from the start, although the opening video and the three unlockable preview trailers from the PlayStation version have been removed. Scoring 83 on Metacritic's aggregate, the port was criticized for "graphic glitches," its aged nature, and being virtually identical to the PlayStation version. A remake, Metal Gear Solid: The Twin Snakes, was developed by Silicon Knights under the supervision of KCE Japan and released for the GameCube in North America, Japan, and Europe in March 2004. Although Twin Snakes was primarily developed at Silicon Knights, its cutscenes were developed in-house at Konami and directed by Japanese film director Ryuhei Kitamura, reflecting his dynamic signature style and utilizing bullet time photography and choreographed gunplay extensively. While the storyline and settings of the game were unchanged (although a select few lines of dialogue were re-written to more closely resemble the original Japanese version), a variety of gameplay features from Sons of Liberty were added, such as first-person aiming and hanging from bars on walls. Another change in the English voice acting was the reduction of Mei Ling's, Naomi's and Nastasha's accents, as well as the recasting of Gray Fox from Greg Eagles, who still reprises the role of the DARPA chief, to Rob Paulsen. The graphics and play mechanics were also updated to match those of Metal Gear Solid 2.
The original version of Metal Gear Solid was re-released in October 2023 for PlayStation 4, PlayStation 5, Xbox Series X/S, Nintendo Switch and Windows (via Steam) as part of a series of re-releases titled the Metal Gear Solid: Master Collection. The title is available as a stand-alone download with the original Metal Gear and Metal Gear 2: Solid Snake included as bonuses, with all three titles also included as part of a compilation titled Metal Gear Solid: Master Collection Vol. 1 along with re-releases of Metal Gear Solid 2 and Metal Gear Solid 3 (both available as separate downloads as well). In the Master Collection, the game runs on an emulator developed by M2, with the option to play any of the seven regional versions (Japanese, North American, European English, German, French, Italian and Spanish) of the game via additional downloads, as well as the Integral/VR Missions/Special Missions expansions. The Master Collection edition handles Psycho Mantis's mind-reading event by giving the player the option to create a virtual memory card with save files from supported games in order to trigger specific lines of dialogue. There is also a new animated sequence when the player reaches the blast furnace, which visually depicts the disc-swapping process. Related media A Japanese radio drama version of Metal Gear Solid, directed by Shuyo Murata and written by Motosada Mori, was produced shortly after the release of the original PlayStation game. Twelve episodes aired from 1998 to 1999 on Konami's CLUB db program. The series was later released on CD as a two-volume series titled Drama CD Metal Gear Solid. Set after the events of the PlayStation game, Snake, Meryl, Campbell and Mei Ling (all portrayed by their original Japanese voice actors) pursue missions in hostile third-world nations as FOXHOUND. The new characters introduced include Sgt. Allen Iishiba (voiced by Toshio Furukawa), a Delta Force operative who assists Snake and Meryl; Col. Mark Cortez (voiced by Osamu Saka), an old friend of Campbell who commands the fictional Esteria Army Special Forces; and Capt. Sergei Ivanovich (voiced by Kazuhiro Nakata), a former war buddy of Revolver Ocelot from his SVR days. In September 2004, IDW Publications began publishing a series of Metal Gear Solid comics, written by Kris Oprisko and illustrated by Ashley Wood. The comic was published bimonthly until 2006, lasting 12 issues that fully covered the Metal Gear Solid storyline. The comic was adapted into a PlayStation Portable game, Metal Gear Solid: Digital Graphic Novel (Metal Gear Solid: Bande Dessinée in Japan). It features visual enhancements and two interactive modes designed to give further insight into the publication. Upon viewing the pages, the player can open a "scanning" interface to search for characters and items in a three-dimensional view. Discoveries are added to a database which can be traded with other players via Wi-Fi. The "mission mode" allows the player to add collected information into a library. This information must be properly connected to complete a mission. Metal Gear Solid: Digital Graphic Novel was released in North America on June 13, 2006, in Japan on September 21, and in the PAL region on September 22. In 2006, the game received IGN's award for Best Use of Sound on the PSP. A DVD-Video version is included with its sequel (Metal Gear Solid 2: Bande Dessinée), which was released in Japan on June 12, 2008. The DVD version features full voice acting.
A novelization based on the original Metal Gear Solid was written by Raymond Benson and published by Del Rey. The American paperback edition was published on May 27, 2008, and the British edition on June 5, 2008. A second novelization by Kenji Yano (written under the pen name Hitori Nojima), Metal Gear Solid Substance I, was published by Kadokawa Shoten in Japan on August 25, 2015. This novelization is narrated through a text file written by a young man living in Manhattan in 2009 (the present year of the Plant chapter in Metal Gear Solid 2: Sons of Liberty). The story also acknowledges certain plot elements from Metal Gear Solid V: The Phantom Pain regarding characters such as Liquid Snake and Psycho Mantis. Reception Prior to release, the game's demonstrations at several trade shows between 1996 and 1998 had received a positive response, generating significant worldwide interest ahead of its 1998 release. Metal Gear Solid received "universal acclaim", according to review aggregator website Metacritic. PlayStation Official Magazine – UK's review called Metal Gear Solid "the best game ever made. Unputdownable and unforgettable". IGN's review opined that Metal Gear Solid came "closer to perfection than any other game in PlayStation's action genre" and called it "beautiful, engrossing, and innovative...in every conceivable category." Computer and Video Games compared it to "playing a big budget action blockbuster, only better." Arcade magazine praised it for "introducing a brand new genre: the sneak-'em-up" and said it would "herald a tidal wave" of "sneak-'em-ups." They called it a "brilliant, technically stunning, well thought through release that's sure to influence action adventure games for many years." GMR called it a "cinematic classic." GamePro called it "this season's top offering [game] and one game no self-respecting gamer should be without," but criticized the frame rate that "occasionally stalls the eye-catching graphics." GameSpot was critical of how easy it is for the player to avoid being seen, as well as the game's short length, calling it "more of a work of art than ... an actual game." Next Generation reviewed the PlayStation version of the game, rating it five stars out of five, and stated that "rest assured that this is a game no player should miss and the best reason yet to own a PlayStation." Metal Gear Solid received an Excellence Award for Interactive Art at the 1998 Japan Media Arts Festival. At the 2nd Annual Interactive Achievement Awards, the Academy of Interactive Arts & Sciences nominated Metal Gear Solid for "Game of the Year", "Console Game of the Year", and "Console Action Game of the Year", and for outstanding achievement in "Interactive Design", "Software Engineering", and "Character or Story Development". In 1999, Next Generation listed Metal Gear Solid as number 27 on their "Top 50 Games of All Time", commenting that "MGS is one of the most vibrant efforts in gaming history to bring serious ideas to games." Prior to its North American release, an estimated 12 million demos for the game were distributed in 1998. Upon release, the game was a commercial success. It became one of the most rented games in the United States, and topped sales charts in the United Kingdom and Japan. PC Data, which tracked sales in the United States, reported that Metal Gear Solid sold 1.06 million copies and earned $51,834,077 (equivalent to $102,388,000 in 2025) in revenue during 1998 alone.
This made it the country's fifth-best-selling PlayStation release of 1998, and the third highest-grossing PlayStation title that year. In the United Kingdom, it was the third best-selling video game of 1999. In Germany, it received a Platinum award from the Verband der Unterhaltungssoftware Deutschland (VUD) in June 1999 for sales above 200,000 copies within several months, and it became the year's second best-selling PlayStation game. In Europe, the game grossed €40,034,122 or $42,668,367 (equivalent to $82,464,333 in 2025) in 1999, adding up to more than $94,502,444 (equivalent to $186,670,303 in 2025) grossed in the United States and Europe by 1999. By early 2001, it had sold 6 million units worldwide, including 1 million units in Japan and approximately 5 million units in the United States and Europe. It went on to sell more than 6.6 million units worldwide by 2002. By 2004, the original release had sold 5.51 million units and Integral had sold 1.27 million, for a combined 6.78 million units worldwide. As of July 2009, the game had sold over seven million units worldwide. In the US, 2.81 million units had been sold as of 2007. Despite these strong sales, Kojima revealed in a 2014 interview with Geoff Keighley that expectations for Metal Gear Solid had been low: "Neither I nor anyone else expected Metal Gear Solid to sell at all. [...] I didn't think at all of how to make this game sell well, because I didn't expect it to sell." Legacy Metal Gear Solid is credited with popularizing the stealth game genre. The idea of the player being unarmed and having to avoid being seen by enemies rather than fight them has been used in many games since. It is also sometimes acclaimed as being as much a film as a game, due to its lengthy cutscenes and complicated storyline. IGN called it "the founder of the stealth genre." The game is often considered one of the best games for the PlayStation and was featured in best video games lists by Computer and Video Games in 2000, by Electronic Gaming Monthly and Game Informer in 2001, by Retro Gamer in 2004, by GameFAQs and GamePro in 2005, and by Famitsu. Hyper magazine in 2001 called it "Probably the single best game on the PlayStation." In 2002, IGN ranked it as the best PlayStation game ever, stating that just the demo for the game had "more gameplay [in it] than in most finished titles." IGN also gave it the "Best Ending" and "Best Villain" awards. In 2005, in placing it 19th on their list of "Top 100 Games", they said that it was "a game that truly felt like a movie." Guinness World Records awarded Metal Gear Solid a record for the "Most Innovative Use of a Video Game Controller" for the boss fight with Psycho Mantis in the Guinness World Records Gamer's Edition 2008. In 2010, PC Magazine ranked it seventh in its list of the most influential video games of all time, citing its influence on "such stealthy titles as Assassin's Creed and Splinter Cell." In 2012, Time named it one of the 100 greatest video games of all time and G4tv ranked it as the 45th top video game of all time. According to 1UP.com, Metal Gear Solid's cinematic style continues to influence modern action games such as Call of Duty. Metal Gear Solid, along with its sequel, Metal Gear Solid 2, was featured in the Smithsonian American Art Museum's exhibition The Art of Video Games in 2012.
In August 2015, Eurogamer reanalyzed the game's technical and overall impact and claimed that Metal Gear Solid had been nothing less than "the first modern video game." In September 2015, Metal Gear Solid was voted the best original PlayStation game of all time by PlayStation.Blog's users. In May 2023, GQ listed Metal Gear Solid as the seventh best video game of all time according to a team of video game journalists from across the industry. Notes References Further reading External links |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Computer#cite_note-FOOTNOTEEvans201823-1] | [TOKENS: 10628] |
Contents Computer A computer is a machine that can be programmed to automatically carry out sequences of arithmetic or logical operations (computation). Modern digital electronic computers can perform generic sets of operations known as programs, which enable computers to perform a wide range of tasks. The term computer system may refer to a nominally complete computer that includes the hardware, operating system, software, and peripheral equipment needed and used for full operation, or to a group of computers that are linked and function together, such as a computer network or computer cluster. A broad range of industrial and consumer products use computers as control systems, including simple special-purpose devices like microwave ovens and remote controls, and factory devices like industrial robots. Computers are at the core of general-purpose devices such as personal computers and mobile devices such as smartphones. Computers power the Internet, which links billions of computers and users. Early computers were meant to be used only for calculations. Simple manual instruments like the abacus have aided people in doing calculations since ancient times. Early in the Industrial Revolution, some mechanical devices were built to automate long, tedious tasks, such as guiding patterns for looms. More sophisticated electrical machines did specialized analog calculations in the early 20th century. The first digital electronic calculating machines were developed during World War II, both electromechanical and using thermionic valves. The first semiconductor transistors in the late 1940s were followed by the silicon-based MOSFET (MOS transistor) and monolithic integrated circuit chip technologies in the late 1950s, leading to the microprocessor and the microcomputer revolution in the 1970s. The speed, power, and versatility of computers have been increasing dramatically ever since then, with transistor counts increasing at a rapid pace (Moore's law noted that counts doubled every two years), leading to the Digital Revolution during the late 20th and early 21st centuries. Conventionally, a modern computer consists of at least one processing element, typically a central processing unit (CPU) in the form of a microprocessor, together with some type of computer memory, typically semiconductor memory chips. The processing element carries out arithmetic and logical operations, and a sequencing and control unit can change the order of operations in response to stored information. Peripheral devices include input devices (keyboards, mice, joysticks, etc.), output devices (monitors, printers, etc.), and input/output devices that perform both functions (e.g. touchscreens). Peripheral devices allow information to be retrieved from an external source, and they enable the results of operations to be saved and retrieved. Etymology It was not until the mid-20th century that the word acquired its modern definition; according to the Oxford English Dictionary, the first known use of the word computer was in a different sense, in a 1613 book called The Yong Mans Gleanings by the English writer Richard Brathwait: "I haue [sic] read the truest computer of Times, and the best Arithmetician that euer [sic] breathed, and he reduceth thy dayes into a short number." This usage of the term referred to a human computer, a person who carried out calculations or computations. The word continued to have the same meaning until the middle of the 20th century. 
During the latter part of this period, women were often hired as computers because they could be paid less than their male counterparts. By 1943, most human computers were women. The Online Etymology Dictionary gives the first attested use of computer in the 1640s, meaning 'one who calculates'; this is an "agent noun from compute (v.)". The Online Etymology Dictionary states that the use of the term to mean "'calculating machine' (of any type) is from 1897." The Online Etymology Dictionary indicates that the "modern use" of the term, to mean 'programmable digital electronic computer', dates from "1945 under this name; [in a] theoretical [sense] from 1937, as Turing machine". The name has remained, although modern computers are capable of many higher-level functions. History Devices have been used to aid computation for thousands of years, mostly using one-to-one correspondence with fingers. The earliest counting device was most likely a form of tally stick. Later record keeping aids throughout the Fertile Crescent included calculi (clay spheres, cones, etc.) which represented counts of items, likely livestock or grains, sealed in hollow unbaked clay containers. The use of counting rods is one example. The abacus was initially used for arithmetic tasks. The Roman abacus was developed from devices used in Babylonia as early as 2400 BCE. Since then, many other forms of reckoning boards or tables have been invented. In a medieval European counting house, a checkered cloth would be placed on a table, and markers moved around on it according to certain rules, as an aid to calculating sums of money. The Antikythera mechanism is believed to be the earliest known mechanical analog computer, according to Derek J. de Solla Price. It was designed to calculate astronomical positions. It was discovered in 1901 in the Antikythera wreck off the Greek island of Antikythera, between Kythera and Crete, and has been dated to c. 100 BCE. Devices of comparable complexity to the Antikythera mechanism would not reappear until the fourteenth century. Many mechanical aids to calculation and measurement were constructed for astronomical and navigation use. The planisphere was a star chart invented by Abū Rayhān al-Bīrūnī in the early 11th century. The astrolabe was invented in the Hellenistic world in either the 1st or 2nd century BCE and is often attributed to Hipparchus. A combination of the planisphere and dioptra, the astrolabe was effectively an analog computer capable of working out several different kinds of problems in spherical astronomy. An astrolabe incorporating a mechanical calendar computer and gear-wheels was invented by Abi Bakr of Isfahan, Persia in 1235. Abū Rayhān al-Bīrūnī had invented the first mechanical geared lunisolar calendar astrolabe, an early fixed-wired knowledge processing machine with a gear train and gear-wheels, c. 1000 AD. The sector, a calculating instrument used for solving problems in proportion, trigonometry, multiplication and division, and for various functions, such as squares and cube roots, was developed in the late 16th century and found application in gunnery, surveying and navigation. The planimeter was a manual instrument to calculate the area of a closed figure by tracing over it with a mechanical linkage. The slide rule was invented around 1620–1630 by the English clergyman William Oughtred, shortly after the publication of the concept of the logarithm. It is a hand-operated analog computer for doing multiplication and division.
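The principle the slide rule mechanizes is that multiplication becomes addition on a logarithmic scale, since log(ab) = log a + log b: sliding one log-ruled scale along another adds lengths, and reading the result off the scale undoes the logarithm. A quick numerical check of that identity in C (an illustrative aside, not part of the source article):

    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        /* Adding log-scaled lengths and exponentiating the sum
           recovers the product, which is all a slide rule does. */
        double a = 3.0, b = 7.0;
        double product = exp(log(a) + log(b));
        printf("%f\n", product);   /* prints 21.000000, up to rounding */
        return 0;
    }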
As slide rule development progressed, added scales provided reciprocals, squares and square roots, cubes and cube roots, as well as transcendental functions such as logarithms and exponentials, circular and hyperbolic trigonometry and other functions. Slide rules with special scales are still used for quick performance of routine calculations, such as the E6B circular slide rule used for time and distance calculations on light aircraft. In the 1770s, Pierre Jaquet-Droz, a Swiss watchmaker, built a mechanical doll (automaton) that could write holding a quill pen. By switching the number and order of its internal wheels, different letters, and hence different messages, could be produced. In effect, it could be mechanically "programmed" to read instructions. Along with two other complex machines, the doll is at the Musée d'Art et d'Histoire of Neuchâtel, Switzerland, and still operates. In 1831–1835, the mathematician and engineer Giovanni Plana devised a Perpetual Calendar machine, which through a system of pulleys and cylinders could predict the perpetual calendar for every year from 0 CE (that is, 1 BCE) to 4000 CE, keeping track of leap years and varying day length. The tide-predicting machine invented by the Scottish scientist Sir William Thomson in 1872 was of great utility to navigation in shallow waters. It used a system of pulleys and wires to automatically calculate predicted tide levels for a set period at a particular location. The differential analyser, a mechanical analog computer designed to solve differential equations by integration, used wheel-and-disc mechanisms to perform the integration. In 1876, James Thomson had already discussed the possible construction of such calculators, but he had been stymied by the limited output torque of the ball-and-disk integrators. In a differential analyzer, the output of one integrator drove the input of the next integrator, or a graphing output. The torque amplifier was the advance that allowed these machines to work. Starting in the 1920s, Vannevar Bush and others developed mechanical differential analyzers. In the 1890s, the Spanish engineer Leonardo Torres Quevedo began to develop a series of advanced analog machines that could find the real and complex roots of polynomials; his designs were published in 1901 by the Paris Academy of Sciences. Charles Babbage, an English mechanical engineer and polymath, originated the concept of a programmable computer. Considered the "father of the computer", he conceptualized and invented the first mechanical computer in the early 19th century. After working on his difference engine, he announced his invention in 1822 in a paper to the Royal Astronomical Society, titled "Note on the application of machinery to the computation of astronomical and mathematical tables". The difference engine was also designed to aid in navigational calculations; in 1833 Babbage realized that a much more general design, an analytical engine, was possible. The input of programs and data was to be provided to the machine via punched cards, a method being used at the time to direct mechanical looms such as the Jacquard loom. For output, the machine would have a printer, a curve plotter and a bell. The machine would also be able to punch numbers onto cards to be read in later. The engine would incorporate an arithmetic logic unit, control flow in the form of conditional branching and loops, and integrated memory, making it the first design for a general-purpose computer that could be described in modern terms as Turing-complete.
The machine was about a century ahead of its time. All the parts for his machine had to be made by hand – this was a major problem for a device with thousands of parts. Eventually, the project was dissolved with the decision of the British Government to cease funding. Babbage's failure to complete the analytical engine can be chiefly attributed to political and financial difficulties as well as his desire to develop an increasingly sophisticated computer and to move ahead faster than anyone else could follow. Nevertheless, his son, Henry Babbage, completed a simplified version of the analytical engine's computing unit (the mill) in 1888. He gave a successful demonstration of its use in computing tables in 1906. In his work Essays on Automatics, published in 1914, Leonardo Torres Quevedo wrote a brief history of Babbage's efforts at constructing a mechanical Difference Engine and Analytical Engine. The paper contains a design of a machine capable of calculating formulas like a^x(y − z)^2 for a sequence of sets of values. The whole machine was to be controlled by a read-only program, which was complete with provisions for conditional branching. He also introduced the idea of floating-point arithmetic. In 1920, to celebrate the 100th anniversary of the invention of the arithmometer, Torres presented in Paris the Electromechanical Arithmometer, which allowed a user to input arithmetic problems through a keyboard and computed and printed the results, demonstrating the feasibility of an electromechanical analytical engine. During the first half of the 20th century, many scientific computing needs were met by increasingly sophisticated analog computers, which used a direct mechanical or electrical model of the problem as a basis for computation. However, these were not programmable and generally lacked the versatility and accuracy of modern digital computers. The first modern analog computer was a tide-predicting machine, invented by Sir William Thomson (later to become Lord Kelvin) in 1872. The differential analyser, a mechanical analog computer designed to solve differential equations by integration using wheel-and-disc mechanisms, was conceptualized in 1876 by James Thomson, the elder brother of the more famous Sir William Thomson. The art of mechanical analog computing reached its zenith with the differential analyzer, completed in 1931 by Vannevar Bush at MIT. By the 1950s, the success of digital electronic computers had spelled the end for most analog computing machines, but analog computers remained in use during the 1950s in some specialized applications such as education (slide rule) and aircraft (control systems).[citation needed] Claude Shannon's 1937 master's thesis laid the foundations of digital computing, with his insight of applying Boolean algebra to the analysis and synthesis of switching circuits being the basic concept which underlies all electronic digital computers. By 1938, the United States Navy had developed the Torpedo Data Computer, an electromechanical analog computer for submarines that used trigonometry to solve the problem of firing a torpedo at a moving target. During World War II, similar devices were developed in other countries. Early digital computers were electromechanical; electric switches drove mechanical relays to perform the calculation. These devices had a low operating speed and were eventually superseded by much faster all-electric computers, originally using vacuum tubes.
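Shannon's insight can be made concrete in a few lines: switches (or relay contacts) wired in series behave as AND, switches wired in parallel behave as OR, and a normally closed contact behaves as NOT. A minimal sketch of that correspondence in Python (the function names are illustrative):

```python
def series(a: bool, b: bool) -> bool:
    # Two switches in series conduct only if both are closed: AND.
    return a and b

def parallel(a: bool, b: bool) -> bool:
    # Two switches in parallel conduct if either is closed: OR.
    return a or b

def inverter(a: bool) -> bool:
    # A normally closed relay contact opens when energized: NOT.
    return not a

# Any Boolean function can be composed from these primitives, e.g. XOR:
def xor(a: bool, b: bool) -> bool:
    return parallel(series(a, inverter(b)), series(inverter(a), b))

assert xor(True, False) and not xor(True, True)
```

The same algebra applies whether the switch is a relay, a vacuum tube, or a transistor, which is why the insight underlies all of the machines described next.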
The Z2, created by German engineer Konrad Zuse in 1939 in Berlin, was one of the earliest examples of an electromechanical relay computer. In 1941, Zuse followed up his earlier machine with the Z3, the world's first working electromechanical programmable, fully automatic digital computer. The Z3 was built with 2000 relays, implementing a 22-bit word length that operated at a clock frequency of about 5–10 Hz. Program code was supplied on punched film while data could be stored in 64 words of memory or supplied from the keyboard. It was quite similar to modern machines in some respects, pioneering numerous advances such as floating-point numbers. Using a binary system, rather than the harder-to-implement decimal system (used in Charles Babbage's earlier design), meant that Zuse's machines were easier to build and potentially more reliable, given the technologies available at that time. The Z3 was not itself a universal computer but could be extended to be Turing complete. Zuse's next computer, the Z4, became the world's first commercial computer; after an initial delay due to the Second World War, it was completed in 1950 and delivered to the ETH Zurich. The computer was manufactured by Zuse's own company, Zuse KG, which was founded in 1941 in Berlin as the first company with the sole purpose of developing computers. The Z4 served as the inspiration for the construction of the ERMETH, the first Swiss computer and one of the first in Europe. Purely electronic circuit elements soon replaced their mechanical and electromechanical equivalents, at the same time that digital calculation replaced analog. The engineer Tommy Flowers, working at the Post Office Research Station in London in the 1930s, began to explore the possible use of electronics for the telephone exchange. Experimental equipment that he built in 1934 went into operation five years later, converting a portion of the telephone exchange network into an electronic data processing system, using thousands of vacuum tubes. In the US, John Vincent Atanasoff and Clifford E. Berry of Iowa State University developed and tested the Atanasoff–Berry Computer (ABC) in 1942, the first "automatic electronic digital computer". This design was also all-electronic and used about 300 vacuum tubes, with capacitors fixed in a mechanically rotating drum for memory. During World War II, the British code-breakers at Bletchley Park achieved a number of successes at breaking encrypted German military communications. The German encryption machine, Enigma, was first attacked with the help of the electro-mechanical bombes, which were often run by women. To crack the more sophisticated German Lorenz SZ 40/42 machine, used for high-level Army communications, Max Newman and his colleagues commissioned Flowers to build the Colossus. He spent eleven months from early February 1943 designing and building the first Colossus. After a functional test in December 1943, Colossus was shipped to Bletchley Park, where it was delivered on 18 January 1944 and attacked its first message on 5 February. Colossus was the world's first electronic digital programmable computer. It used a large number of valves (vacuum tubes). It had paper-tape input and was capable of being configured to perform a variety of boolean logical operations on its data, but it was not Turing-complete. Nine Mk II Colossi were built (the Mk I was converted to a Mk II, making ten machines in total).
Colossus Mark I contained 1,500 thermionic valves (tubes), but Mark II, with 2,400 valves, was both five times faster and simpler to operate than Mark I, greatly speeding the decoding process. The ENIAC (Electronic Numerical Integrator and Computer) was the first electronic programmable computer built in the U.S. Although the ENIAC was similar to the Colossus, it was much faster, more flexible, and Turing-complete. As on the Colossus, a "program" on the ENIAC was defined by the states of its patch cables and switches, a far cry from the stored program electronic machines that came later. Once a program was written, it had to be mechanically set into the machine with manual resetting of plugs and switches. The programmers of the ENIAC were six women, often known collectively as the "ENIAC girls". The machine combined the high speed of electronics with the ability to be programmed for many complex problems. It could add or subtract 5000 times a second, a thousand times faster than any other machine. It also had modules to multiply, divide, and take square roots. High-speed memory was limited to 20 words (about 80 bytes). Built under the direction of John Mauchly and J. Presper Eckert at the University of Pennsylvania, ENIAC's development and construction lasted from 1943 to full operation at the end of 1945. The machine was huge, weighing 30 tons, using 200 kilowatts of electric power, and contained over 18,000 vacuum tubes, 1,500 relays, and hundreds of thousands of resistors, capacitors, and inductors. The principle of the modern computer was proposed by Alan Turing in his seminal 1936 paper, On Computable Numbers. Turing proposed a simple device that he called the "Universal Computing machine" and that is now known as a universal Turing machine. He proved that such a machine is capable of computing anything that is computable by executing instructions (a program) stored on tape, allowing the machine to be programmable. The fundamental concept of Turing's design is the stored program, where all the instructions for computing are stored in memory. Von Neumann acknowledged that the central concept of the modern computer was due to this paper. Turing machines are to this day a central object of study in theory of computation. Except for the limitations imposed by their finite memory stores, modern computers are said to be Turing-complete, which is to say, they have algorithm execution capability equivalent to a universal Turing machine. Early computing machines had fixed programs. Changing a machine's function required re-wiring and re-structuring it. With the proposal of the stored-program computer, this changed. A stored-program computer includes by design an instruction set and can store in memory a set of instructions (a program) that details the computation. The theoretical basis for the stored-program computer was laid out by Alan Turing in his 1936 paper. In 1945, Turing joined the National Physical Laboratory and began work on developing an electronic stored-program digital computer. His 1945 report "Proposed Electronic Calculator" was the first specification for such a device. John von Neumann at the University of Pennsylvania also circulated his First Draft of a Report on the EDVAC in 1945. The Manchester Baby was the world's first stored-program computer. It was built at the University of Manchester in England by Frederic C. Williams, Tom Kilburn and Geoff Tootill, and ran its first program on 21 June 1948.
It was designed as a testbed for the Williams tube, the first random-access digital storage device. Although the computer was described as "small and primitive" by a 1998 retrospective, it was the first working machine to contain all of the elements essential to a modern electronic computer. As soon as the Baby had demonstrated the feasibility of its design, a project began at the university to develop it into a practically useful computer, the Manchester Mark 1. The Mark 1 in turn quickly became the prototype for the Ferranti Mark 1, the world's first commercially available general-purpose computer. Built by Ferranti, it was delivered to the University of Manchester in February 1951. At least seven of these later machines were delivered between 1953 and 1957, one of them to Shell labs in Amsterdam. In October 1947, the directors of the British catering company J. Lyons & Company decided to take an active role in promoting the commercial development of computers. Lyons's LEO I computer, modelled closely on the Cambridge EDSAC of 1949, became operational in April 1951 and ran the world's first routine office computer job. The concept of a field-effect transistor was proposed by Julius Edgar Lilienfeld in 1925. John Bardeen and Walter Brattain, while working under William Shockley at Bell Labs, built the first working transistor, the point-contact transistor, in 1947; it was followed by Shockley's bipolar junction transistor in 1948. From 1955 onwards, transistors replaced vacuum tubes in computer designs, giving rise to the "second generation" of computers. Compared to vacuum tubes, transistors have many advantages: they are smaller and require less power than vacuum tubes, and so give off less heat. Junction transistors were much more reliable than vacuum tubes and had a longer, indefinite service life. Transistorized computers could contain tens of thousands of binary logic circuits in a relatively compact space. However, early junction transistors were relatively bulky devices that were difficult to manufacture on a mass-production basis, which limited them to a number of specialized applications. At the University of Manchester, a team under the leadership of Tom Kilburn designed and built a machine using the newly developed transistors instead of valves. Their first transistorized computer, and the first in the world, was operational by 1953, and a second version was completed there in April 1955. However, the machine did make use of valves to generate its 125 kHz clock waveforms and in the circuitry to read and write on its magnetic drum memory, so it was not the first completely transistorized computer. That distinction goes to the Harwell CADET of 1955, built by the electronics division of the Atomic Energy Research Establishment at Harwell. The metal–oxide–semiconductor field-effect transistor (MOSFET), also known as the MOS transistor, was invented at Bell Labs between 1955 and 1960 and was the first truly compact transistor that could be miniaturized and mass-produced for a wide range of uses. With its high scalability, much lower power consumption, and higher density than bipolar junction transistors, the MOSFET made it possible to build high-density integrated circuits. In addition to data processing, it also enabled the practical use of MOS transistors as memory cell storage elements, leading to the development of MOS semiconductor memory, which replaced earlier magnetic-core memory in computers.
The MOSFET led to the microcomputer revolution, and became the driving force behind the computer revolution. The MOSFET is the most widely used transistor in computers, and is the fundamental building block of digital electronics. The next great advance in computing power came with the advent of the integrated circuit (IC). The idea of the integrated circuit was first conceived by Geoffrey W. A. Dummer, a radar scientist working for the Royal Radar Establishment of the Ministry of Defence. Dummer presented the first public description of an integrated circuit at the Symposium on Progress in Quality Electronic Components in Washington, D.C., on 7 May 1952. The first working ICs were invented by Jack Kilby at Texas Instruments and Robert Noyce at Fairchild Semiconductor. Kilby recorded his initial ideas concerning the integrated circuit in July 1958, successfully demonstrating the first working integrated example on 12 September 1958. In his patent application of 6 February 1959, Kilby described his new device as "a body of semiconductor material ... wherein all the components of the electronic circuit are completely integrated". However, Kilby's invention was a hybrid integrated circuit (hybrid IC), rather than a monolithic integrated circuit (IC) chip. Kilby's IC had external wire connections, which made it difficult to mass-produce. Noyce came up with his own idea of an integrated circuit half a year later than Kilby. Noyce's invention was the first true monolithic IC chip. His chip solved many practical problems that Kilby's had not. Produced at Fairchild Semiconductor, it was made of silicon, whereas Kilby's chip was made of germanium. Noyce's monolithic IC was fabricated using the planar process, developed by his colleague Jean Hoerni in early 1959. In turn, the planar process was based on Carl Frosch and Lincoln Derick's work on semiconductor surface passivation by silicon dioxide. Modern monolithic ICs are predominantly MOS (metal–oxide–semiconductor) integrated circuits, built from MOSFETs (MOS transistors). The earliest experimental MOS IC to be fabricated was a 16-transistor chip built by Fred Heiman and Steven Hofstein at RCA in 1962. General Microelectronics later introduced the first commercial MOS IC in 1964, developed by Robert Norman. Following the development of the self-aligned gate (silicon-gate) MOS transistor by Robert Kerwin, Donald Klein and John Sarace at Bell Labs in 1967, the first silicon-gate MOS IC with self-aligned gates was developed by Federico Faggin at Fairchild Semiconductor in 1968. The MOSFET has since become the most critical device component in modern ICs. The development of the MOS integrated circuit led to the invention of the microprocessor, and heralded an explosion in the commercial and personal use of computers. While the subject of exactly which device was the first microprocessor is contentious, partly due to lack of agreement on the exact definition of the term "microprocessor", it is largely undisputed that the first single-chip microprocessor was the Intel 4004, designed and realized by Federico Faggin with his silicon-gate MOS IC technology, along with Ted Hoff, Masatoshi Shima and Stanley Mazor at Intel.[b] In the early 1970s, MOS IC technology enabled the integration of more than 10,000 transistors on a single chip. Systems on a chip (SoCs) are complete computers on a microchip the size of a coin. They may or may not have integrated RAM and flash memory.
If not integrated, the RAM is usually placed directly above (known as package on package) or below (on the opposite side of the circuit board) the SoC, and the flash memory is usually placed right next to the SoC. This is done to improve data transfer speeds, as the data signals do not have to travel long distances. Since ENIAC in 1945, computers have advanced enormously, with modern SoCs (such as the Snapdragon 865) being the size of a coin while also being hundreds of thousands of times more powerful than ENIAC, integrating billions of transistors, and consuming only a few watts of power. The first mobile computers were heavy and ran from mains power. The 50 lb (23 kg) IBM 5100 was an early example. Later portables such as the Osborne 1 and Compaq Portable were considerably lighter but still needed to be plugged in. The first laptops, such as the Grid Compass, removed this requirement by incorporating batteries – and with the continued miniaturization of computing resources and advancements in portable battery life, portable computers grew in popularity in the 2000s. The same developments allowed manufacturers to integrate computing resources into cellular mobile phones by the early 2000s. These smartphones and tablets run on a variety of operating systems and recently became the dominant computing devices on the market. They are powered by systems on a chip (SoCs). Types Computers can be classified in a number of different ways, such as by architecture, by size, or by intended use. A computer does not need to be electronic, nor even have a processor, RAM, or a hard disk. While popular usage of the word "computer" is synonymous with a personal electronic computer,[c] a typical modern definition of a computer is: "A device that computes, especially a programmable [usually] electronic machine that performs high-speed mathematical or logical operations or that assembles, stores, correlates, or otherwise processes information." According to this definition, any device that processes information qualifies as a computer. Hardware The term hardware covers all of those parts of a computer that are tangible physical objects. Circuits, computer chips, graphics cards, sound cards, memory (RAM), motherboards, displays, power supplies, cables, keyboards, printers and "mice" input devices are all hardware. A general-purpose computer has four main components: the arithmetic logic unit (ALU), the control unit, the memory, and the input and output devices (collectively termed I/O). These parts are interconnected by buses, often made of groups of wires. Inside each of these parts are thousands to trillions of small electrical circuits which can be turned off or on by means of an electronic switch. Each circuit represents a bit (binary digit) of information, so that when the circuit is on it represents a "1", and when off it represents a "0" (in positive logic representation). The circuits are arranged in logic gates so that one or more of the circuits may control the state of one or more of the other circuits. Input devices are the means by which a computer is controlled and provided with data; examples include keyboards, mice, and joysticks. Output devices are the means by which a computer provides the results of its calculations in a human-accessible form; examples include monitors and printers.
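The logic gates just described can be arranged into circuits that operate on bits. As a minimal sketch of the idea (in Python rather than hardware; the gate functions are illustrative), a half adder combines an XOR gate and an AND gate to add two one-bit numbers:

```python
def AND(a: int, b: int) -> int:
    return a & b          # output is 1 only when both inputs are 1

def XOR(a: int, b: int) -> int:
    return a ^ b          # output is 1 when exactly one input is 1

def half_adder(a: int, b: int):
    """Add two one-bit numbers, returning (sum bit, carry bit)."""
    return XOR(a, b), AND(a, b)

for a in (0, 1):
    for b in (0, 1):
        s, carry = half_adder(a, b)
        print(f"{a} + {b} -> carry {carry}, sum {s}")
```

Chaining such adders bit by bit yields circuits that add whole binary numbers, which is essentially what the arithmetic logic unit described below does in hardware.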
The control unit (often called a control system or central controller) manages the computer's various components; it reads and interprets (decodes) the program instructions, transforming them into control signals that activate other parts of the computer.[e] Control systems in advanced computers may change the order of execution of some instructions to improve performance. A key component common to all CPUs is the program counter, a special memory cell (a register) that keeps track of which location in memory the next instruction is to be read from.[f] The control system's function is as follows (this is a simplified description, and some of these steps may be performed concurrently or in a different order depending on the type of CPU): it reads the instruction from the memory location indicated by the program counter, decodes the instruction into control signals, increments the program counter, fetches from memory any data the instruction requires, directs the ALU or other hardware to carry out the operation, writes the result back to a register or to memory, and then repeats the cycle with the next instruction. Since the program counter is (conceptually) just another set of memory cells, it can be changed by calculations done in the ALU. Adding 100 to the program counter would cause the next instruction to be read from a place 100 locations further down the program. Instructions that modify the program counter are often known as "jumps" and allow for loops (instructions that are repeated by the computer) and often conditional instruction execution (both examples of control flow). The sequence of operations that the control unit goes through to process an instruction is in itself like a short computer program, and indeed, in some more complex CPU designs, there is another yet smaller computer called a microsequencer, which runs a microcode program that causes all of these events to happen. The control unit, ALU, and registers are collectively known as a central processing unit (CPU). Early CPUs were composed of many separate components. Since the 1970s, CPUs have typically been constructed on a single MOS integrated circuit chip called a microprocessor. The ALU is capable of performing two classes of operations: arithmetic and logic. The set of arithmetic operations that a particular ALU supports may be limited to addition and subtraction, or might include multiplication, division, trigonometric functions such as sine and cosine, and square roots. Some can operate only on whole numbers (integers) while others use floating point to represent real numbers, albeit with limited precision. However, any computer that is capable of performing just the simplest operations can be programmed to break down the more complex operations into simple steps that it can perform. Therefore, any computer can be programmed to perform any arithmetic operation, although it will take more time to do so if its ALU does not directly support the operation. An ALU may also compare numbers and return Boolean truth values (true or false) depending on whether one is equal to, greater than or less than the other ("is 64 greater than 65?"). Logic operations involve Boolean logic: AND, OR, XOR, and NOT. These can be useful for creating complicated conditional statements and processing Boolean logic. Superscalar computers may contain multiple ALUs, allowing them to process several instructions simultaneously. Graphics processors and computers with SIMD and MIMD features often contain ALUs that can perform arithmetic on vectors and matrices. A computer's memory can be viewed as a list of cells into which numbers can be placed or read. Each cell has a numbered "address" and can store a single number.
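As a toy illustration of this model, a memory can be mimicked in code as a list of numbered cells; the put and add helpers below are hypothetical, written only to mirror the two instructions described next:

```python
# A toy memory: a list of numbered cells, each holding one number.
memory = [0] * 4096

def put(address: int, value: int) -> None:
    """Store a value in the cell at the given address."""
    memory[address] = value

def add(addr_a: int, addr_b: int, addr_result: int) -> None:
    """Add the contents of two cells and store the result in a third."""
    memory[addr_result] = memory[addr_a] + memory[addr_b]

put(1357, 123)         # "put the number 123 into cell 1357"
put(2468, 500)
add(1357, 2468, 1595)  # "add cell 1357 to cell 2468, answer into cell 1595"
print(memory[1595])    # 623
```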
The computer can be instructed to "put the number 123 into the cell numbered 1357" or to "add the number that is in cell 1357 to the number that is in cell 2468 and put the answer into cell 1595." The information stored in memory may represent practically anything. Letters, numbers, even computer instructions can be placed into memory with equal ease. Since the CPU does not differentiate between different types of information, it is the software's responsibility to give significance to what the memory sees as nothing but a series of numbers. In almost all modern computers, each memory cell is set up to store binary numbers in groups of eight bits (called a byte). Each byte is able to represent 256 different numbers (2^8 = 256), either from 0 to 255 or from −128 to +127. To store larger numbers, several consecutive bytes may be used (typically, two, four or eight). When negative numbers are required, they are usually stored in two's complement notation. Other arrangements are possible, but are usually not seen outside of specialized applications or historical contexts. A computer can store any kind of information in memory if it can be represented numerically. Modern computers have billions or even trillions of bytes of memory. The CPU contains a special set of memory cells called registers that can be read and written to much more rapidly than the main memory area. There are typically between two and one hundred registers depending on the type of CPU. Registers are used for the most frequently needed data items to avoid having to access main memory every time data is needed. As data is constantly being worked on, reducing the need to access main memory (which is often slow compared to the ALU and control units) greatly increases the computer's speed. Computer main memory comes in two principal varieties: random-access memory (RAM) and read-only memory (ROM). RAM can be read and written to anytime the CPU commands it, but ROM is preloaded with data and software that never changes, so the CPU can only read from it. ROM is typically used to store the computer's initial start-up instructions. In general, the contents of RAM are erased when the power to the computer is turned off, but ROM retains its data indefinitely. In a PC, the ROM contains a specialized program called the BIOS that orchestrates loading the computer's operating system from the hard disk drive into RAM whenever the computer is turned on or reset. In embedded computers, which frequently do not have disk drives, all of the required software may be stored in ROM. Software stored in ROM is often called firmware, because it is notionally more like hardware than software. Flash memory blurs the distinction between ROM and RAM, as it retains its data when turned off but is also rewritable. However, it is typically much slower than conventional ROM and RAM, so its use is restricted to applications where high speed is unnecessary.[g] In more sophisticated computers there may be one or more RAM cache memories, which are slower than registers but faster than main memory. Generally computers with this sort of cache are designed to move frequently needed data into the cache automatically, often without the need for any intervention on the programmer's part. I/O is the means by which a computer exchanges information with the outside world. Devices that provide input or output to the computer are called peripherals. On a typical personal computer, peripherals include input devices like the keyboard and mouse, and output devices such as the display and printer.
Hard disk drives, floppy disk drives and optical disc drives serve as both input and output devices. Computer networking is another form of I/O. I/O devices are often complex computers in their own right, with their own CPU and memory. A graphics processing unit might contain fifty or more tiny computers that perform the calculations necessary to display 3D graphics.[citation needed] Modern desktop computers contain many smaller computers that assist the main CPU in performing I/O. A 2016-era flat screen display contains its own computer circuitry. While a computer may be viewed as running one gigantic program stored in its main memory, in some systems it is necessary to give the appearance of running several programs simultaneously. This is achieved by multitasking, i.e. having the computer switch rapidly between running each program in turn. One means by which this is done is with a special signal called an interrupt, which can periodically cause the computer to stop executing instructions where it was and do something else instead. By remembering where it was executing prior to the interrupt, the computer can return to that task later. If several programs are running "at the same time", then the interrupt generator might be causing several hundred interrupts per second, causing a program switch each time. Since modern computers typically execute instructions several orders of magnitude faster than human perception, it may appear that many programs are running at the same time, even though only one is ever executing in any given instant. This method of multitasking is sometimes termed "time-sharing" since each program is allocated a "slice" of time in turn. Before the era of inexpensive computers, the principal use for multitasking was to allow many people to share the same computer. Seemingly, multitasking would cause a computer that is switching between several programs to run more slowly, in direct proportion to the number of programs it is running, but most programs spend much of their time waiting for slow input/output devices to complete their tasks. If a program is waiting for the user to click on the mouse or press a key on the keyboard, then it will not take a "time slice" until the event it is waiting for has occurred. This frees up time for other programs to execute so that many programs may be run simultaneously without unacceptable speed loss. Some computers are designed to distribute their work across several CPUs in a multiprocessing configuration, a technique once employed in only large and powerful machines such as supercomputers, mainframe computers and servers. Multiprocessor and multi-core (multiple CPUs on a single integrated circuit) personal and laptop computers are now widely available, and are being increasingly used in lower-end markets as a result. Supercomputers in particular often have highly distinctive architectures that differ significantly from the basic stored-program architecture and from general-purpose computers.[h] They often feature thousands of CPUs, customized high-speed interconnects, and specialized computing hardware. Such designs tend to be useful for only specialized tasks due to the large scale of program organization required to use most of the available resources at once. Supercomputers usually see usage in large-scale simulation, graphics rendering, and cryptography applications, as well as with other so-called "embarrassingly parallel" tasks.
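A workload is "embarrassingly parallel" when its pieces need no coordination with one another. The sketch below (in Python; the simulate function and its constants are invented for illustration) splits sixteen independent work items across four worker processes:

```python
from multiprocessing import Pool

def simulate(seed: int) -> float:
    """Stand-in for one independent work item (e.g. a simulation run)."""
    x = seed
    for _ in range(100_000):
        x = (x * 1103515245 + 12345) % 2**31  # simple linear congruential step
    return x / 2**31

if __name__ == "__main__":
    with Pool(processes=4) as pool:          # four workers, no shared state
        results = pool.map(simulate, range(16))
    print(sum(results) / len(results))
```

Because no item depends on any other, doubling the number of workers roughly halves the wall-clock time, which is exactly the property supercomputers exploit at the scale of thousands of CPUs.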
Software Software is the part of a computer system that consists of the encoded information that determines the computer's operation, such as data or instructions on how to process the data. In contrast to the physical hardware from which the system is built, software is immaterial. Software includes computer programs, libraries and related non-executable data, such as online documentation or digital media. It is often divided into system software and application software. Computer hardware and software require each other and neither is useful on its own. When software is stored in hardware that cannot easily be modified, such as with BIOS ROM in an IBM PC compatible computer, it is sometimes called "firmware". The defining feature of modern computers which distinguishes them from all other machines is that they can be programmed. That is to say that some type of instructions (the program) can be given to the computer, and it will process them. Modern computers based on the von Neumann architecture often have machine code in the form of an imperative programming language. In practical terms, a computer program may be just a few instructions or extend to many millions of instructions, as do the programs for word processors and web browsers, for example. A typical modern computer can execute billions of instructions per second (gigaflops) and rarely makes a mistake over many years of operation. Large computer programs consisting of several million instructions may take teams of programmers years to write, and due to the complexity of the task almost certainly contain errors. This section applies to most common RAM machine–based computers. In most cases, computer instructions are simple: add one number to another, move some data from one location to another, send a message to some external device, etc. These instructions are read from the computer's memory and are generally carried out (executed) in the order they were given. However, there are usually specialized instructions to tell the computer to jump ahead or backwards to some other place in the program and to carry on executing from there. These are called "jump" instructions (or branches). Furthermore, jump instructions may be made to happen conditionally so that different sequences of instructions may be used depending on the result of some previous calculation or some external event. Many computers directly support subroutines by providing a type of jump that "remembers" the location it jumped from and another instruction to return to the instruction following that jump instruction. Program execution might be likened to reading a book. While a person will normally read each word and line in sequence, they may at times jump back to an earlier place in the text or skip sections that are not of interest. Similarly, a computer may sometimes go back and repeat the instructions in some section of the program over and over again until some internal condition is met. This is called the flow of control within the program and it is what allows the computer to perform tasks repeatedly without human intervention. Comparatively, a person using a pocket calculator can perform a basic arithmetic operation such as adding two numbers with just a few button presses. But to add together all of the numbers from 1 to 1,000 would take thousands of button presses and a lot of time, with a near certainty of making a mistake. On the other hand, a computer may be programmed to do this with just a few simple instructions.
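The following example, which adds the numbers from 1 to 1,000, is written in the MIPS assembly language (the register choices, labels, and comments here are a representative sketch rather than a canonical listing):

```mips
        addi $8, $0, 0        # initialize the running sum to 0
        addi $9, $0, 1        # set the first number to add to 1
loop:   slti $10, $9, 1001    # is the current number still <= 1000?
        beq  $10, $0, finish  # if not, leave the loop
        add  $8, $8, $9       # add the current number to the running sum
        addi $9, $9, 1        # advance to the next number
        j    loop             # repeat the summing process
finish: add  $2, $8, $0       # place the final sum in the output register
```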
Once told to run this program, the computer will perform the repetitive addition task without further human intervention. It will almost never make a mistake, and a modern PC can complete the task in a fraction of a second. In most computers, individual instructions are stored as machine code with each instruction being given a unique number (its operation code or opcode for short). The command to add two numbers together would have one opcode; the command to multiply them would have a different opcode, and so on. The simplest computers are able to perform any of a handful of different instructions; the more complex computers have several hundred to choose from, each with a unique numerical code. Since the computer's memory is able to store numbers, it can also store the instruction codes. This leads to the important fact that entire programs (which are just lists of these instructions) can be represented as lists of numbers and can themselves be manipulated inside the computer in the same way as numeric data. The fundamental concept of storing programs in the computer's memory alongside the data they operate on is the crux of the von Neumann, or stored program, architecture. In some cases, a computer might store some or all of its program in memory that is kept separate from the data it operates on. This is called the Harvard architecture after the Harvard Mark I computer. Modern von Neumann computers display some traits of the Harvard architecture in their designs, such as in CPU caches. While it is possible to write computer programs as long lists of numbers (machine language) and while this technique was used with many early computers,[i] it is extremely tedious and potentially error-prone to do so in practice, especially for complicated programs. Instead, each basic instruction can be given a short name that is indicative of its function and easy to remember – a mnemonic such as ADD, SUB, MULT or JUMP. These mnemonics are collectively known as a computer's assembly language. Converting programs written in assembly language into something the computer can actually understand (machine language) is usually done by a computer program called an assembler. A programming language is a notation system for writing the source code from which a computer program is produced. Programming languages provide various ways of specifying programs for computers to run. Unlike natural languages, programming languages are designed to permit no ambiguity and to be concise. They are purely written languages and are often difficult to read aloud. They are generally either translated into machine code by a compiler or an assembler before being run, or translated directly at run time by an interpreter. Sometimes programs are executed by a hybrid method of the two techniques. There are thousands of programming languages: some intended for general purpose programming, others useful for only highly specialized applications. Machine languages and the assembly languages that represent them (collectively termed low-level programming languages) are generally unique to the particular architecture of a computer's central processing unit (CPU).
For instance, an ARM architecture CPU (such as may be found in a smartphone or a hand-held video game console) cannot understand the machine language of an x86 CPU that might be in a PC.[j] Historically, a significant number of other CPU architectures were created and saw extensive use, notably including the MOS Technology 6502 and 6510 in addition to the Zilog Z80. Although considerably easier than in machine language, writing long programs in assembly language is often difficult and is also error prone. Therefore, most practical programs are written in more abstract high-level programming languages that are able to express the needs of the programmer more conveniently (and thereby help reduce programmer error). High level languages are usually "compiled" into machine language (or sometimes into assembly language and then into machine language) using another computer program called a compiler.[k] High level languages are less related to the workings of the target computer than assembly language, and more related to the language and structure of the problem(s) to be solved by the final program. It is therefore often possible to use different compilers to translate the same high level language program into the machine language of many different types of computer. This is part of the means by which software like video games may be made available for different computer architectures such as personal computers and various video game consoles. Program design of small programs is relatively simple and involves the analysis of the problem, collection of inputs, using the programming constructs within languages, devising or using established procedures and algorithms, providing data for output devices and solutions to the problem as applicable. As problems become larger and more complex, features such as subprograms, modules, formal documentation, and new paradigms such as object-oriented programming are encountered. Large programs involving thousands of lines of code and more require formal software methodologies. The task of developing large software systems presents a significant intellectual challenge. Producing software with an acceptably high reliability within a predictable schedule and budget has historically been difficult; the academic and professional discipline of software engineering concentrates specifically on this challenge. Errors in computer programs are called "bugs". They may be benign and not affect the usefulness of the program, or have only subtle effects. However, in some cases they may cause the program or the entire system to "hang", becoming unresponsive to input such as mouse clicks or keystrokes, to completely fail, or to crash. Otherwise benign bugs may sometimes be harnessed for malicious intent by an unscrupulous user writing an exploit, code designed to take advantage of a bug and disrupt a computer's proper execution. Bugs are usually not the fault of the computer. Since computers merely execute the instructions they are given, bugs are nearly always the result of programmer error or an oversight made in the program's design.[l] Admiral Grace Hopper, an American computer scientist and developer of the first compiler, is credited with having first used the term "bugs" in computing after a dead moth was found shorting a relay in the Harvard Mark II computer in September 1947. Networking and the Internet Computers have been used to coordinate information between multiple physical locations since the 1950s.
The U.S. military's SAGE system was the first large-scale example of such a system, which led to a number of special-purpose commercial systems such as Sabre. In the 1970s, computer engineers at research institutions throughout the United States began to link their computers together using telecommunications technology. The effort was funded by ARPA (now DARPA), and the computer network that resulted was called the ARPANET. Logic gates are a common abstraction that can apply to most of the digital and analog paradigms described above. The ability to store and execute lists of instructions called programs makes computers extremely versatile, distinguishing them from calculators. The Church–Turing thesis is a mathematical statement of this versatility: any computer with a minimum capability (being Turing-complete) is, in principle, capable of performing the same tasks that any other computer can perform. Therefore, any type of computer (netbook, supercomputer, cellular automaton, etc.) is able to perform the same computational tasks, given enough time and storage capacity. In the 20th century, artificial intelligence systems were predominantly symbolic: they executed code that was explicitly programmed by software developers. Machine learning models, however, have a set of parameters that are adjusted throughout training, so that the model learns to accomplish a task based on the provided data. The efficiency of machine learning (and in particular of neural networks) has rapidly improved with progress in hardware for parallel computing, mainly graphics processing units (GPUs). Some large language models are able to control computers or robots. AI progress may lead to the creation of artificial general intelligence (AGI), a type of AI that could accomplish virtually any intellectual task at least as well as humans. Professions and organizations As the use of computers has spread throughout society, there are an increasing number of careers involving computers. The need for computers to work well together and to be able to exchange information has spawned the need for many standards organizations, clubs and societies of both a formal and informal nature. See also Notes References Sources External links |
======================================== |