[SOURCE: https://en.wikipedia.org/wiki/%C5%81%C3%B3d%C5%BA]
Łódź is a city in central Poland and a former industrial centre. It is the capital of Łódź Voivodeship, and is located 120 km (75 mi) south-west of Warsaw. As of 2024, Łódź has a population of 645,693, making it the country's fourth largest city. Łódź first appears in records in the 14th century. It was granted town rights in 1423 by the Polish King Władysław II Jagiełło and it remained a private town of the Kuyavian bishops and clergy until the late 18th century. In the Second Partition of Poland in 1793, Łódź was annexed to Prussia before becoming part of the Napoleonic Duchy of Warsaw; the city joined Congress Poland, a Russian client state, at the 1815 Congress of Vienna. The Second Industrial Revolution (from 1850) brought rapid growth in textile manufacturing and in population owing to the inflow of migrants, many of whom were Jews and Germans. From the onset of industrialisation, the city was multinational and struggled with social inequalities, as documented in the novel The Promised Land by Nobel Prize–winning author Władysław Reymont. These contrasts were strongly reflected in the city's architecture, where luxurious mansions coexisted with red-brick factories and dilapidated tenement houses. The industrial development and demographic surge made Łódź one of the largest cities in Poland. During the interwar period, Łódź became an important centre for the Polish artistic avant-garde. Founded in 1931, Muzeum Sztuki became the first museum in Europe dedicated to collecting and showcasing modern art. Under the German occupation during World War II, the city's population was persecuted and its large Jewish minority was forced into a walled zone known as the Litzmannstadt Ghetto, after the Nazi German renaming of the city, from which they were sent to German concentration and extermination camps. The city became Poland's temporary seat of power in 1945. Łódź experienced a sharp demographic and economic decline after 1989. It was only in the 2010s that the city began to experience revitalization of its neglected downtown area. Łódź is ranked by the Globalization and World Cities Research Network on the "Sufficiency" level of global influence. The city is internationally known for its National Film School, a cradle of the most renowned Polish actors and directors, including Andrzej Wajda and Roman Polański. In 2017, the city was inducted into the UNESCO Creative Cities Network and named UNESCO City of Film. Name and toponymy There is no consensus on the origin of the city's name. The Polish word łódź means 'boat', but popular theories link it with the medieval village of Łodzia and the now-canalised River Łódka on which the modern city was founded. It may also be related to łoza 'willow tree' or the Old Polish personal name Włodzisław. History Łódź first appears in a 1332 written record issued by Władysław the Hunchback, Duke of Łęczyca, which transferred the village of Łodzia to the Bishopric of Włocławek. The document enumerated the privileges of its inhabitants, notably the right to graze land, establish pastures and engage in logging. In 1423, King of Poland Władysław II Jagiełło officially granted town rights to the village under Magdeburg Law. For centuries, it remained a small remote settlement situated among woodlands and marshes, which was privately held by the Kuyavian bishops. It was administratively located in the Brzeziny County in the Łęczyca Voivodeship in the Greater Poland Province of the Kingdom of Poland.
The economy was predominantly driven by agriculture and farming until the 19th century. The earliest two versions of the coat of arms appeared on seal emblems in 1535 and 1577, with the latter illustrating a boat-like vessel and a turned oar. With the Second Partition of Poland in 1793, Łódź was annexed by Prussia. In 1798, the Kuyavian bishops' ownership over the region was formally revoked during the secularisation of church property. The town, governed by a burgomaster (burmistrz), at the time had only 190 residents, 44 occupied dwellings, a church and a prison. In 1806, Łódź was incorporated into the Napoleonic Duchy of Warsaw. In the aftermath of the 1815 Congress of Vienna, the duchy was dissolved and the town became part of the Congress Kingdom of Poland, a client state of the Russian Empire. In 1820, the government of the Congress Kingdom designated Łódź and its rural surroundings for centrally planned industrial development. Rajmund Rembieliński, head of the Administrative Council and prefect of Masovia, became the president of a commission that divided the works into two major phases; the first (1821–23) comprised the creation of a new city centre with an octagonal square (contemporary plac Wolności; Liberty Square) and arranged housing allotments on greenfield land situated south of the old marketplace; the second stage (1824–28) involved the establishment of cotton mill colonies and a linear street system along with an arterial north–south thoroughfare, Piotrkowska. Many of the early dwellings were timber cottages built for housing weavers (domy tkaczy). During this time, a sizeable number of German craftsmen settled in the city, encouraged by exemptions from tax obligations. Their settlement in Poland was encouraged by renowned philosopher and statesman Stanisław Staszic, who acted as the director of the Department of Trade, Crafts and Industry. Poland's first steam-powered loom commenced operations at Ludwik Geyer's White Factory in 1839. In 1851, the Russian imperial authorities abolished a customs barrier which had been imposed on Congress Poland following the failed November Uprising (1830–1831). The removal of tariffs allowed the city to freely export its goods to Russia, where the demand for textiles was high. During the first weeks of the January Uprising (1863–1864), a unit of 300 Polish insurgents entered the city without resistance and seized weapons, and later on, there were also clashes between Polish insurgents and Russian troops in the city. In 1864, the inhabitants of adjacent villages were permitted to settle in Łódź without restrictions. The development of railways in the region was also instrumental in expanding the textile industry; in 1865 the Łódź–Koluszki line, a branch of the Warsaw–Vienna railway, was opened, thus providing a train connection to larger markets. In 1867, the city was incorporated into the Piotrków Governorate, a local province. The infrastructure and edifices of Łódź were built at the expense of industrialists and business magnates, chiefly Karl Wilhelm Scheibler and Izrael Poznański, who sponsored schools, hospitals, orphanages, and places of worship. From 1872 to 1892, Poznański established a major textile manufactory composed of twelve factories, power plants, worker tenements, a private fire station, and a large eclectic palace. By the end of the century, Scheibler's Księży Młyn had become one of Europe's largest industrial complexes, employing 5,000 workers within a single facility.
The years 1870–1890 saw the most intense industrialisation, which was marked by social inequalities and dire working conditions. Łódź soon became a notable centre of the socialist movement, and the so-called Łódź rebellion in May 1892 was quelled by a military intervention. The turn of the 20th century coincided with cultural and technological progress; in 1899, the first stationary cinema in Poland (Gabinet Iluzji) was opened in Łódź. In the same year, Józef Piłsudski, the future Marshal of Poland, settled in the city and began printing the Robotnik (The Worker; published 1894–1939), an underground newspaper published by the Polish Socialist Party. During the June Days (1905), approximately 100,000 unemployed labourers went on a mass strike, barricaded the streets and clashed with troops. Officially, 151 demonstrators were killed and thousands were wounded. In 1912, the Archcathedral of St. Stanislaus Kostka was completed and its tower at 104 metres (341 ft) is one of the tallest in Poland. Despite the impending crisis preceding World War I, Łódź grew exponentially and was one of the world's most densely populated industrial cities, with a population density of 13,200 inhabitants per square kilometre (34,000/sq mi) by 1914. In the aftermath of the Battle of Łódź (1914), the city came under Imperial German occupation on 6 December. With Polish independence restored in November 1918, the local population disarmed the German army. Subsequently, the textile industry of Łódź stalled and its population briefly decreased as ethnic Germans left the city. Despite its large population and economic output, Łódź did not serve as the seat of its province until the 20th century. Following the establishment of the Second Polish Republic, it became the capital of the Łódź Voivodeship in 1919. The early interwar period was characterised by considerable economic hardship and industrial stagnation. The Great Depression and the German–Polish customs war closed western markets to Polish textiles, while the Bolshevik Revolution and the Civil War in Russia put an end to the most profitable trade with the East. Because of rapid and, consequently, chaotic development in the previous century, Łódź lacked adequate infrastructure and living standards for its inhabitants. Pollution was acute, sanitary conditions were poor and the authorities did not invest in a sewage treatment system until the 1920s. From 1918 to 1939, many cultural, educational and scientific institutions were created, including elementary schools, museums, art galleries and public libraries, which had not existed prior to the First World War. Łódź also began developing an entertainment scene, with 34 movie theatres opened by 1939. On 13 September 1925, the city's first airport, Lublinek, commenced operations. In 1930, the first radio transmission from a newly founded broadcasting station took place. The ideological orientation of Łódź was strongly left-wing and the city was a notable centre of socialist, communist and Bundist activity in Polish politics during the interbellum. During the invasion of Poland in September 1939, the Polish forces of General Juliusz Rómmel's Army Łódź defended the city against the German assault by forming a line of resistance between Sieradz and Piotrków Trybunalski. The attack was conducted by the 8th Army of Johannes Blaskowitz, who encircled the city with the X Army Corps.
After fierce resistance, a Polish delegation surrendered to the Germans on 8 September, and the first Wehrmacht troops entered in the early hours of 9 September. The German Einsatzgruppe III paramilitary death squad entered the city on 12 September. Arthur Greiser incorporated Łódź into a new administrative subdivision of Nazi Germany called Reichsgau Wartheland on 9 November 1939, and on 11 April 1940 the city was renamed Litzmannstadt after German general and NSDAP member Karl Litzmann. The city was subjected to immediate Germanisation, with Polish and Jewish establishments closed, and Polish-language press banned. Low-wage forced labour was imposed on the city's inhabitants aged 16 to 60; many were subsequently deported to Germany. As part of the Intelligenzaktion, Polish intellectuals from the city and region were imprisoned at Radogoszcz and then either sent to concentration camps or murdered in the forests of Łagiewniki and the village of Lućmierz-Las. Polish children were forcibly taken from their parents, and from 1942 to 1945 the German Sicherheitspolizei operated a camp in Łódź for kidnapped Polish children from various regions. The German authorities established the Łódź Ghetto (Ghetto Litzmannstadt) in the city and populated it with more than 200,000 Jews from the region, who were systematically sent to German extermination camps. It was the second-largest ghetto in occupied Europe, and the last major ghetto to be liquidated, in August 1944. The Polish resistance movement (Żegota) operated in the city and aided the Jewish people throughout the ghetto's existence. However, only 877 Jews were still alive in the city by 1945. Of the 223,000 Jews in Łódź before the invasion, 10,000 survived the Holocaust in other places. The Germans also created camps for non-Jews, including the Romani people deported from abroad, who were ultimately murdered at Chełmno, as well as a penal forced labour camp, four transit camps for Poles expelled from the city and region, and a racial research camp. Following liberation by Soviet forces on 19 January 1945, and the end of World War II, Łódź informally and temporarily took over the functions of Poland's capital, and most of the government and state administration resided in the city prior to the reconstruction of Warsaw. Łódź also experienced an influx of refugees from Kresy. Many migrated into the suburbs and occupied the empty properties. Under the Polish People's Republic, the city's industry and private companies were subject to nationalisation. On 24 May 1945, the University of Łódź was inaugurated. On 8 March 1948, the National Film School was opened, later becoming Poland's primary academy of drama and cinema. The spatial and urban planning after World War II was conducted in accordance with the Athens Charter, whereby the population from the old core was relocated into new residential areas. As a result, however, the inner-city and historical areas declined in significance and degenerated into slums. A number of extensive panel block housing estates were constructed, including Retkinia, Teofilów, Widzew, Radogoszcz, and Chojny. These estates were built between 1960 and 1990, covering an area of almost 30 square kilometres (12 sq mi) and accommodating a large part of the populace. In mid-1981, Łódź became famous for a massive hunger demonstration of local mothers and their children. After 1989 the textile industry in Łódź collapsed and the city suffered from social and economic decline.
The city's industrial heritage and examples of Polish Art Nouveau became an early tourist attraction. In the 2000s the city's main street, the Piotrkowska Street, was revitalized, providing space for shops and restaurants. By 2011 the city hosted around 60 festivals per year. The local government's efforts to transform the former industrial city into a thriving urban environment and tourist destination formed the basis for the city's failed bid to organise the 2022 International EXPO exhibition on the subject of urban renewal. Geography Łódź covers an area of approximately 293 square kilometres (113 sq mi) and is located in the centre of Poland. The city lies in the lowlands of the Central European Plain, not exceeding 300 metres in elevation. Topographically, the Łódź region is generally characterised by a flat landscape, with only several highlands which do not exceed 50 metres above the terrain level. The soil is predominantly sandy (62%) followed by clay (24%), silt (8%), and organogenic formations (6%) from regional wetlands. The forest cover (equivalent to 4.2% of the whole country) is considerably low compared to other cities, regions, and provinces of Poland. Łódź has a humid continental climate (Dfb in the Köppen climate classification). The lowest temperature was recorded in January 1987. Administration The city's governance is executed by Urząd Miasta Łodzi, a local council or town hall, currently based at Juliusz Heinzl Palace. The power is divided between the President of Łódź (Prezydent Łodzi), a title held by the mayor, and the Rada Miejska assembly comprising 37 elected deputies. The term in office for deputies is 5 years. Łódź also acts as a city with powiat rights, exercising the powers and duties of a local powiat county. Łódź is the capital of Łódź Voivodeship, one of Poland's 16 provinces, and hosts the voivodeship sejmik – a regional assembly.[page needed] The city is also the seat of the voivode, the province's governor who is the representative of the Polish Council of Ministers in the voivodeship, is the head of the combined government administration, acts as supervisory authority over local government units and as a higher-level authority within the meaning of the provisions on administrative proceedings.[page needed] In medieval times, the town was governed by the burgomaster, who began his term as early as 1470. The first individual who held the title of "president" was Karol Tangermann, a close aide of Rajmund Rembieliński, when it was still a part of Congress Poland. The first president of Łódź under the independent Second Polish Republic was Leopold Skulski (1917–1919), who subsequently became the prime minister of Poland.[page needed] The incumbent president since 2010 is Hanna Zdanowska from the Civic Coalition party. Łódź was previously divided into 5 major boroughs (dzielnica) – Bałuty, Górna, Polesie, Śródmieście, and Widzew. In January 1993, the system of boroughs was abolished and the city became a single entity with no real subdivisions. In April 2000, a system of 36[b] neighbourhoods or dependent units (osiedle) was imposed by the City Council for administrative purposes only; these units have no local governing or regulatory authority. 
Demographics According to Statistics Poland (GUS), Łódź was inhabited by 672,185 people and had a population density of 2,292 persons per square kilometre (5,940/sq mi), as of December 2020.[update] Approximately 55.7 per cent of inhabitants are of working age (18–64 years), which is a considerable decrease from 64.1 per cent in 2010. An estimated 29.1 per cent is of post-working age compared to 21.8 per cent ten years earlier. In 2020, 54.39 per cent (365,500) of all residents were women. Łódź has one of the highest feminisation rates among Poland's major cities, a legacy of the city's industrial past, when the textile factories attracted large numbers of female employees. At its peak in 1988 the population was around 854,000; however, this has since declined due to low fertility rates, outward migration and a lower life expectancy than in other parts of Poland. Łódź was the country's second largest city until 2007, when it lost its position to Kraków. A major contributing factor was the abrupt transition from socialist to market-based economy after 1989 and the resulting economic crisis, but the economic growth which followed has not reversed the trend. Depopulation and ageing are major impediments for the future development of the city, putting strain on social infrastructure and medical services. As a result of the continuing demographic crisis and rapid population loss, Łódź was overtaken by Wrocław and dropped to become the country's fourth-largest city in 2022. Historically, Łódź was multi-ethnic and its diverse population comprised migrants from other regions of Europe. In 1839, approximately 78 per cent (6,648) of the total population was German. In 1913, Łódź had a population of 506,100 people, of whom 251,700 (49.7%) were Poles, 171,900 (34%) were Jews, 75,000 (14.8%) were Germans, and 6,300 (1.3%) were Russians. According to the 1931 Polish census, the total population of 604,000 included 375,000 (59%) Poles, 192,000 (32%) Jews and 54,000 (9%) Germans. By 1939, the Jewish minority had grown to well over 200,000. The majority of believers in Łódź adhere to Roman Catholicism, the largest religious denomination in Poland. The first Catholic bishopric was established in December 1920 and has been elevated to the Roman Catholic Archdiocese of Łódź in 1992 by Pope John Paul II. The primary church for Catholic worship is the Basilica of St. Stanislaus Kostka, which is often reserved for special occasions or during religious holidays. Constructed in 1912 in the Gothic Revival style, it is the tallest building in the city and one of Poland's tallest churches since the completion of the tower in 1927. The Feast of Corpus Christi is widely celebrated and annual marches take place on Piotrkowska Street, in front of the cathedral. Despite this, church attendance in Łódź is one of the lowest in Poland; mass attendance was estimated at 26% in 2013 and fell to 17% by 2023. Statistics also show that the city and its environs have one of the highest concentration of atheists in Poland. Historically, Łódź had a strong and influential Protestant population (11% in 1921, 9.2% in 1931) that had its origins with the migration of German-speaking weavers and textile workers throughout the 19th century. The Evangelical Church of the Augsburg Confession representing Lutherans is the largest of the Protestant denominations. The city falls under the Lutheran Diocese of Warsaw, though the congregation is headquartered at the Church of St. Peter and St. Paul in Pabianice. 
The only active Lutheran church in Łódź is the historic St. Matthew's Church, which seasonally serves as a concert hall. There is also a parish of the Polish Reformed Church (Calvinist), dating back to 1888, as well as Methodist and Evangelical temples. Łódź is considered to be one of the centres of Jehovah's Witnesses' activity in Poland. Judaism was once the city's second largest denomination (33.4% in 1931), with up to 250 synagogues and shtiebels in existence prior to 1939 and a strong cultural output. The Stara Synagogue, commonly known as Alte Szil, and Ezras Israel Synagogue were the primary places of worship for Orthodox Jews. The Great Synagogue, the largest of its kind, served the Reformed Jewish community. All were destroyed during the Second World War, except for the defunct 19th-century Synagoga Reicherów. The Union of Jewish Religious Communities in Poland (ZGWŻ) manages the Łódź municipality; the local base is situated at a newer synagogue on Pomorska Street where the Community maintains kosher facilities and a mikveh. Łódź is the seat of a Mariavite Church diocese, initially created in 1910. The Marivites are followers of Old Catholicism and a considerable minority; there are only three Mariavite dioceses across the country. Economy and infrastructure Before 1990, the economy of Łódź was heavily reliant on the textile industry, which had developed in the city in the nineteenth century owing to the abundance of rivers used to power the industry's fulling mills, bleaching plants and other machinery. Because of the growth in this industry, the city has sometimes been called the "Polish Manchester" and the "lingerie capital of Poland". As a result, Łódź grew from a population of 13,000 in 1840 to over 500,000 in 1913. By the time right before World War I Łódź had become one of the most densely populated industrial cities in the world, with 13,280 inhabitants per km2, and also one of the most polluted. The textile industry declined dramatically in 1990 and 1991, and no major textile company survives in Łódź. However, countless small companies still provide a significant output of textiles, mostly for export. Łódź is no longer a significant industrial centre, but it has become a major hub for the business services sector in Poland owing to the availability of highly skilled workers and active cooperation between local universities and the business sector. The city benefits from its central location in Poland. A number of firms have located their logistics centres in the vicinity. Two motorways, A1 spanning from the north to the south of Poland, and A2 going from the east to the west, intersect northeast of the city. As of 2012,[update] the A2 is complete to Warsaw and the northern section of A1 is largely completed. With these connections, the advantages of the city's central location should increase even further. Work has also begun on upgrading the railway connection with Warsaw, which reduced the 2-hour travel time to make the 137 km (85 mi) journey 1.5 hours in 2009. As of 2018,[update] travel time from Łódź to Warsaw is around 1.2 hours with the modern Pesa Dart trains. Recent years have seen many foreign companies opening and establishing their offices in Łódź. The Indian IT company Infosys has one of its centres in the city. In January 2009 Dell announced that it will shift production from its plant in Limerick, Ireland to its plant in Łódź. The city's investor friendly policies have attracted 980 foreign investors by January 2009. 
Foreign investment was one of the factors which decreased the unemployment rate in Łódź to 6.5 per cent in December 2008, from 20 per cent four years earlier. Łódź is situated near the geographical centre of Poland, only a short distance away from the motorway junction in Stryków where the two main north–south (A1) and east–west (A2) Polish transport corridors meet, which positions the city on two of the ten major trans-European routes: from Gdańsk to Žilina and Brno and from Berlin to Moscow via Warsaw. It is also part of the New Silk Road, a regular cargo rail connection with the Chinese city of Chengdu operating since 2013. Łódź is served by the national motorway network, an international airport, and long-distance and regional railways. It is at the centre of a regional and commuter rail network operating from the city's various train stations. Bus and tram services are operated by a municipal public transport company. There are 193 km (120 mi) of bicycle routes throughout the city (as in January 2019). Major roads include: The city has an international airport: Łódź Władysław Reymont Airport located 6 kilometres (4 miles) from the city centre. Flights connect the city with destinations in Europe including Turkey. In 2014 the airport handled 253,772 passengers. It is the 8th largest airport in Poland.[circular reference] The Municipal Transport Company – Łódź (Miejskie Przedsiębiorstwo Komunikacyjne – Łódź), owned by the Łódź City Government, is responsible for operating 58 bus routes and 19 tram lines. The tram network is one of the longest in the country and was the first electrified cable tramway in Congress Poland, beginning its operation on 23 December 1898. The regional tramway network also connects Łódź with the adjacent cities of Pabianice (since 2023) and Konstantynów Łódzki (since 2024), which are within the Łódź Agglomeration. The rolling stock largely comprises older but modernised wagons by Konstal and newer Polish-manufactured types such as Pesa Swing and Moderus Gamma. Among the popular models for buses are Mercedes Conecto LF and Solaris Urbino 18. Łódź has a number of long distance and local railway stations. There are two main stations in the city, but with no direct rail connection between them—a legacy of 19th-century railway network planning. Originally constructed in 1866, the centrally located Łódź Fabryczna was a terminus station for a branch line of the Warsaw–Vienna railway, whereas Łódź Kaliska was built more than thirty years later on the central section of the Warsaw-Kalisz railway. For this reason most intercity train traffic goes to this day through Łódź Kaliska station, despite its relative distance from the city centre, and Łódź Fabryczna serves mainly as a terminal station for trains to Warsaw. The situation will be remedied in 2026 after the construction of a tunnel connecting the two, which is likely to make Łódź Poland's main railway hub. The tunnel will additionally serve Łódź Commuter Railway, providing a rapid transit system for the city, dubbed the Łódź Metro by the media and local authorities. Three new stations are being constructed on the underground line, one serving the needs of the Manufaktura complex, another one serving Koziny neighbourhood and the third one located in the area of Piotrkowska Street. In December 2016, a few years after the demolition of the old building of Łódź Fabryczna station, a new underground station was opened. 
It is considered to be the largest and most modern of all train stations in Poland and is designed to handle increased traffic after the construction of the tunnel. It also serves as a multimodal transport hub, featuring an underground intercity bus station, and is integrated with a new transport interchange serving taxis and local trams and buses. The construction of the new Łódź Fabryczna station was part of a broader project of urban renewal known as Nowe Centrum Łodzi (New Centre of Łódź). The third-largest train station in Łódź is Łódź Widzew. There are also many other stations and train stops in the city, many of which were upgraded as part of the Łódzka Kolej Aglomeracyjna commuter rail project. The rail service, founded as part of a major regional rail upgrade and owned by Łódź Voivodeship, operates on routes to Kutno, Sieradz, Skierniewice, Łowicz, and on selected days to Warsaw, with plans for further expansion after the construction of the tunnel. Education Łódź is a thriving centre of academic life. Łódź hosts three major state-owned universities, six higher education establishments operating for more than half a century, and a number of smaller schools of higher education. The tertiary institutions with the most students in Łódź include: In the 2018 general ranking of state-owned tertiary education institutions in Poland, the University of Łódź came 20th (6th place among universities) and Lodz University of Technology 12th (6th place among technical universities). The Medical University of Łódź was ranked 5th among Polish medical universities. Leading courses taught in Łódź include administration (3rd place), law (4th) and biology (4th). There are also a number of privately owned institutions of higher learning in Łódź. The largest of these are the University of Social Sciences (Społeczna Akademia Nauk) and the University of Humanities and Economics in Łódź (Akademia Humanistyczno-Ekonomiczna w Łodzi). In the 2018 ranking of private universities in Poland the former was ranked 9th, and the latter 23rd. The Leon Schiller National Higher School of Film, Television and Theatre in Łódź (Państwowa Wyższa Szkoła Filmowa, Telewizyjna i Teatralna im. Leona Schillera w Łodzi) is the most notable academy for future actors, directors, photographers, camera operators and TV staff in Poland. It was founded on 8 March 1948 and was initially planned to be moved to Warsaw as soon as the city was rebuilt following the Warsaw Uprising. However, in the end the school remained in Łódź and became one of the best-known institutions of higher education in the city. At the end of the Second World War, Łódź was the only large Polish city besides Kraków that the war had not destroyed. The creation of the National Film School gave Łódź a cultural importance that before the war had belonged exclusively to Warsaw and Kraków. Early students of the School include the directors Andrzej Munk, Roman Polanski, Andrzej Wajda, Kazimierz Karabasz (one of the founders of the so-called Black Series of Polish Documentary) and Janusz Morgenstern, who at the end of the 1950s became famous as one of the founders of the Polish Film School of Cinematography. Culture The most notable and recognizable landmark of the city is Piotrkowska Street, which remains the high street and main tourist attraction in the city and runs north to south for a little over five kilometres (3.1 miles), making it one of the longest commercial streets in the world.
Most of the building façades, many of which date back to the 19th century, have been renovated. It is the site of most restaurants, bars and cafes in Łódź's city centre. Important monuments of architecture along Piotrkowska Street are the Old Town Hall, the Descent Of The Holy Spirit Church, the Łódź Catholic Cathedral and the St. Matthew's Lutheran Church. Other important churches in the city center include the Alexander Nevsky Orthodox Cathedral and the Karol Scheibler's Chapel, Lutheran part of Ogrodowa Street Cemetery. Many neglected tenement houses and factories throughout the entire city centre have been renovated in recent years as part of the ongoing revitalization project run by the local authorities. The best example of urban regeneration in Łódź is the Manufaktura complex, occupying a large area of a former cotton factory dating back to the nineteenth century. The site, which was the heart of Izrael Poznański's industrial empire, hosts a shopping mall, numerous restaurants, 4-star hotel, multiplex cinema, factory museum, bowling and fitness facilities and a science exhibition centre. Opened in 2006, it quickly became a centre of cultural entertainment and shopping, as well as a recognizable city landmark attracting both domestic and foreign tourists. Another example is the former factory of Karl Scheibler on Księży Młyn, which was turned into a mixed-use complex of offices and housing. Łódź also provides plenty of green spaces for recreation. Woodland areas cover 9.61% of the city, with parks taking up an additional 2.37% of the area of Łódź (as of 2014).[update] Las Łagiewnicki ('Łagiewnicki Forest') is recognized as the largest forested area within the administrative borders of any city in Europe. It has an area of 1,245 ha and is cut across by a number of hiking trails that traverse the hilly landscape on the western edge of Łódź Hills Landscape Park. A "natural complex which has remained nearly intact as oak-hornbeam and oak woodland," the forest is also rich in history, and its attractions include a Franciscan friary dating back to the early 18th century and two 17th-century wooden chapels. Out of a total of 44 parks in Łódź (as of 2014),[update] 11 have historical status, the oldest of them dating back to the middle of the 19th century. The largest of these, Józef Piłsudski Park (188.21 hectares (0.7267 sq mi)), is located near the Łódź Zoo and the city's botanical garden, and together with them it comprises an extensive green complex known as Zdrowie serving the recreational needs of the city. Another notable park located in Łódź is the Józef Poniatowski Park. The Jewish Cemetery at Bracka Street, one of the largest of its kind in Europe, was established in 1892. After the invasion of Poland by Nazi Germany in 1939, this cemetery became a part of Łódź's eastern territory known as the enclosed Łódź ghetto (Ghetto Field). Between 1940 and 1944, approximately 43,000 burials took place within the grounds of this rounded-up cemetery. In 1956, a monument by Muszko in memory of the victims of the Łódź Ghetto was erected at the cemetery. It features a smooth obelisk, a menorah, and a broken oak tree with leaves stemming from the tree (symbolizing death, especially death at a young age). As of 2014,[update] the cemetery has an area of 39.6 hectares (98 acres). It contains approximately 180,000 graves, approximately 65,000 labelled tombstones, ohels and mausoleums. 
Many of these monuments have significant architectural value; 100 of these have been declared historical monuments and have been in various stages of restoration. The mausoleum of Izrael and Eleanora Poznański is perhaps the largest Jewish tombstone in the world and the only one decorated with mosaics. Łódź has one of the best museums of modern art in Poland. Muzeum Sztuki has three branches, two of which (ms1 and ms2) display collections of 20th and 21st-century art. The newest addition to the museum, ms2 was opened in 2008 in the Manufaktura complex. The unique collection of the Museum is presented in an unconventional way. Instead of a chronological lecture on the development of art, works of art representing various periods and movements are arranged into a story touching themes and motifs important for the contemporary public. The third branch of Muzeum Sztuki, located in one of the city's many industrial palaces, also has more traditional art on display, presenting works by European and Polish masters such as Stanisław Wyspiański and Henryk Rodakowski. Among the 14 registered museums to be found in Łódź, there is the independent Book Art Museum, awarded the American Printing History Association's Institutional Award for 2015 for its outstanding contribution to the study, recording, preservation, and dissemination of printing history in Poland over the last 35 years. Other notable museums include the Central Museum of Textiles with its open-air display of wooden architecture, the Cinematography Museum, located in Scheibler Palace, and the Museum of Independence Traditions, occupying the building of a historical Tsarist prison from the late 19th century. A more unusual establishment, the Dętka museum offers tourists a chance to visit the municipal sewer designed in the early years of the 20th century by the British engineer William Heerlein Lindley. Three major novels depict the development of industrial Łódź: Władysław Reymont's The Promised Land (1898), Joseph Roth's Hotel Savoy (1924) and Israel Joshua Singer's The Brothers Ashkenazi (1937). Roth's novel depicts the city on the eve of a workers' riot in 1919. Reymont's novel was made into a film by Andrzej Wajda in 1975. In the 1990 film Europa Europa, Solomon Perel's family flees pre-World War II Berlin and settles in Łódź. Paweł Pawlikowski's film Ida was partially shot in Łódź. Much of David Lynch's film Inland Empire was shot in Łódź. Chava Rosenfarb's Yiddish trilogy "The Tree of Life" (1972; English translation 1985) portrays life within the Łódź Ghetto. Among the traditional dishes of Łódź and the Łódź Voivodeship are zalewajka – a sour cereal and potato soup, often served with mushrooms, kielbasa sausage and bread – and cabbage soup (kapuśniak) served with potato dumplings and pork cracklings. These were once the staples of the working-class population employed in textile factories. Popular breads and baked goods include the angielka baguette roll and żulik bun with raisins. Aspic in various forms (galareta, zimne nóżki or drygle) was once a well-established comfort and party food in the city. Łódź and the surrounding region is also known for having a strong preference for mushroom soup over barszcz (borscht) for the Polish Wigilia Christmas Eve supper. Major food venues are primarily located at Piotrkowska Street, for example the OFF Piotrkowska, a mixed-use development complex situated in a heritage-listed red brick factory. Food trucks are a common sight around the city centre and several neighbourhoods. 
The city has hosted international sporting events such as the 2009 EuroBasket, the 2011 EuroBasket Women, the 2014 FIVB Volleyball Men's World Championship and the 2019 FIFA U-20 World Cup, with the opening and final of the latter taking place at Widzew Stadium. Łódź also hosted the sixth edition of the European Universities Games in 2022. Under communism it was common for clubs to participate in many different sports for all ages and sexes. Many of these traditional clubs still survive. Originally they were owned directly by a public body, but have become independently operated by clubs or private companies. However, they receive public support through cheap land rents and other subsidies from the city. Some of their sections have gone professional and separated from the clubs as private companies. For example, Budowlani S.A. is a private company that owns the only professional rugby team in Łódź, while Klub Sportowy Budowlani remains a community amateur club. Łódź has three professional clubs in the Ekstraklasa of Polish beach soccer: Grembach, KP and BSCC. Łódź bid for the Specialized Expo 2022/2023 but lost out to Buenos Aires, Argentina. Łódź was planned to host the Horticultural Expo in 2024. However, multiple Expo events were delayed due to the COVID-19 pandemic, among them the Horticultural Expo in Doha, Qatar, which moved from 2021/22 to 2023/24. As a result, the Horticultural Expo in Łódź has been rescheduled to 2029 to maintain the required interval between events. International relations Łódź is home to fourteen foreign consulates, namely honorary consulates general of Hungary and Turkey, and honorary consulates of Albania, Armenia, Austria, the Democratic Republic of the Congo, the Czech Republic, Denmark, France, Lithuania, Luxembourg, Malta, Moldova and Ukraine. Łódź is twinned with: Łódź also belongs to the Eurocities network. On 2 March 2022, after the Russian invasion of Ukraine, Łódź terminated its partnerships with the Russian cities of Ivanovo and Kaliningrad and with Minsk, the capital of Belarus.
========================================
[SOURCE: https://en.wikipedia.org/wiki/Computer#Networking_and_the_Internet]
A computer is a machine that can be programmed to automatically carry out sequences of arithmetic or logical operations (computation). Modern digital electronic computers can perform generic sets of operations known as programs, which enable computers to perform a wide range of tasks. The term computer system may refer to a nominally complete computer that includes the hardware, operating system, software, and peripheral equipment needed and used for full operation, or to a group of computers that are linked and function together, such as a computer network or computer cluster. A broad range of industrial and consumer products use computers as control systems, including simple special-purpose devices like microwave ovens and remote controls, and factory devices like industrial robots. Computers are at the core of general-purpose devices such as personal computers and mobile devices such as smartphones. Computers power the Internet, which links billions of computers and users. Early computers were meant to be used only for calculations. Simple manual instruments like the abacus have aided people in doing calculations since ancient times. Early in the Industrial Revolution, some mechanical devices were built to automate long, tedious tasks, such as guiding patterns for looms. More sophisticated electrical machines did specialized analog calculations in the early 20th century. The first digital electronic calculating machines were developed during World War II, both electromechanical and using thermionic valves. The first semiconductor transistors in the late 1940s were followed by the silicon-based MOSFET (MOS transistor) and monolithic integrated circuit chip technologies in the late 1950s, leading to the microprocessor and the microcomputer revolution in the 1970s. The speed, power, and versatility of computers have been increasing dramatically ever since then, with transistor counts increasing at a rapid pace (Moore's law noted that counts doubled every two years), leading to the Digital Revolution during the late 20th and early 21st centuries. Conventionally, a modern computer consists of at least one processing element, typically a central processing unit (CPU) in the form of a microprocessor, together with some type of computer memory, typically semiconductor memory chips. The processing element carries out arithmetic and logical operations, and a sequencing and control unit can change the order of operations in response to stored information. Peripheral devices include input devices (keyboards, mice, joysticks, etc.), output devices (monitors, printers, etc.), and input/output devices that perform both functions (e.g. touchscreens). Peripheral devices allow information to be retrieved from an external source, and they enable the results of operations to be saved and retrieved. Etymology It was not until the mid-20th century that the word acquired its modern definition; according to the Oxford English Dictionary, the first known use of the word computer was in a different sense, in a 1613 book called The Yong Mans Gleanings by the English writer Richard Brathwait: "I haue [sic] read the truest computer of Times, and the best Arithmetician that euer [sic] breathed, and he reduceth thy dayes into a short number." This usage of the term referred to a human computer, a person who carried out calculations or computations. The word continued to have the same meaning until the middle of the 20th century.
During the latter part of this period, women were often hired as computers because they could be paid less than their male counterparts. By 1943, most human computers were women. The Online Etymology Dictionary gives the first attested use of computer in the 1640s, meaning 'one who calculates'; this is an "agent noun from compute (v.)". The Online Etymology Dictionary states that the use of the term to mean "'calculating machine' (of any type) is from 1897." The Online Etymology Dictionary indicates that the "modern use" of the term, to mean 'programmable digital electronic computer' dates from "1945 under this name; [in a] theoretical [sense] from 1937, as Turing machine". The name has remained, although modern computers are capable of many higher-level functions. History Devices have been used to aid computation for thousands of years, mostly using one-to-one correspondence with fingers. The earliest counting device was most likely a form of tally stick. Later record keeping aids throughout the Fertile Crescent included calculi (clay spheres, cones, etc.) which represented counts of items, likely livestock or grains, sealed in hollow unbaked clay containers.[a] The use of counting rods is one example. The abacus was initially used for arithmetic tasks. The Roman abacus was developed from devices used in Babylonia as early as 2400 BCE. Since then, many other forms of reckoning boards or tables have been invented. In a medieval European counting house, a checkered cloth would be placed on a table, and markers moved around on it according to certain rules, as an aid to calculating sums of money. The Antikythera mechanism is believed to be the earliest known mechanical analog computer, according to Derek J. de Solla Price. It was designed to calculate astronomical positions. It was discovered in 1901 in the Antikythera wreck off the Greek island of Antikythera, between Kythera and Crete, and has been dated to approximately c. 100 BCE. Devices of comparable complexity to the Antikythera mechanism would not reappear until the fourteenth century. Many mechanical aids to calculation and measurement were constructed for astronomical and navigation use. The planisphere was a star chart invented by Abū Rayhān al-Bīrūnī in the early 11th century. The astrolabe was invented in the Hellenistic world in either the 1st or 2nd centuries BCE and is often attributed to Hipparchus. A combination of the planisphere and dioptra, the astrolabe was effectively an analog computer capable of working out several different kinds of problems in spherical astronomy. An astrolabe incorporating a mechanical calendar computer and gear-wheels was invented by Abi Bakr of Isfahan, Persia in 1235. Abū Rayhān al-Bīrūnī invented the first mechanical geared lunisolar calendar astrolabe, an early fixed-wired knowledge processing machine with a gear train and gear-wheels, c. 1000 AD. The sector, a calculating instrument used for solving problems in proportion, trigonometry, multiplication and division, and for various functions, such as squares and cube roots, was developed in the late 16th century and found application in gunnery, surveying and navigation. The planimeter was a manual instrument to calculate the area of a closed figure by tracing over it with a mechanical linkage. The slide rule was invented around 1620–1630, by the English clergyman William Oughtred, shortly after the publication of the concept of the logarithm. It is a hand-operated analog computer for doing multiplication and division. 
As slide rule development progressed, added scales provided reciprocals, squares and square roots, cubes and cube roots, as well as transcendental functions such as logarithms and exponentials, circular and hyperbolic trigonometry and other functions. Slide rules with special scales are still used for quick performance of routine calculations, such as the E6B circular slide rule used for time and distance calculations on light aircraft. In the 1770s, Pierre Jaquet-Droz, a Swiss watchmaker, built a mechanical doll (automaton) that could write holding a quill pen. By switching the number and order of its internal wheels, different letters, and hence different messages, could be produced. In effect, it could be mechanically "programmed" to read instructions. Along with two other complex machines, the doll is at the Musée d'Art et d'Histoire of Neuchâtel, Switzerland, and still operates. In 1831–1835, mathematician and engineer Giovanni Plana devised a Perpetual Calendar machine, which through a system of pulleys and cylinders could predict the perpetual calendar for every year from 0 CE (that is, 1 BCE) to 4000 CE, keeping track of leap years and varying day length. The tide-predicting machine invented by the Scottish scientist Sir William Thomson in 1872 was of great utility to navigation in shallow waters. It used a system of pulleys and wires to automatically calculate predicted tide levels for a set period at a particular location. The differential analyser, a mechanical analog computer designed to solve differential equations by integration, used wheel-and-disc mechanisms to perform the integration. As early as 1876, Sir William Thomson had discussed the possible construction of such calculators, but he had been stymied by the limited output torque of the ball-and-disk integrators. In a differential analyzer, the output of one integrator drove the input of the next integrator, or a graphing output. The torque amplifier was the advance that allowed these machines to work. Starting in the 1920s, Vannevar Bush and others developed mechanical differential analyzers. In the 1890s, the Spanish engineer Leonardo Torres Quevedo began to develop a series of advanced analog machines that could solve real and complex roots of polynomials, descriptions of which were published in 1901 by the Paris Academy of Sciences. Charles Babbage, an English mechanical engineer and polymath, originated the concept of a programmable computer. Considered the "father of the computer", he conceptualized and invented the first mechanical computer in the early 19th century. After working on his difference engine, he announced his invention in 1822 in a paper to the Royal Astronomical Society titled "Note on the application of machinery to the computation of astronomical and mathematical tables". The difference engine was intended to aid in navigational calculations; in 1833 Babbage realized that a much more general design, an analytical engine, was possible. The input of programs and data was to be provided to the machine via punched cards, a method being used at the time to direct mechanical looms such as the Jacquard loom. For output, the machine would have a printer, a curve plotter and a bell. The machine would also be able to punch numbers onto cards to be read in later. The engine would incorporate an arithmetic logic unit, control flow in the form of conditional branching and loops, and integrated memory, making it the first design for a general-purpose computer that could be described in modern terms as Turing-complete.
The machine was about a century ahead of its time. All the parts for his machine had to be made by hand – this was a major problem for a device with thousands of parts. Eventually, the project was dissolved when the British Government decided to cease funding. Babbage's failure to complete the analytical engine can be chiefly attributed to political and financial difficulties as well as his desire to develop an increasingly sophisticated computer and to move ahead faster than anyone else could follow. Nevertheless, his son, Henry Babbage, completed a simplified version of the analytical engine's computing unit (the mill) in 1888. He gave a successful demonstration of its use in computing tables in 1906. In his work Essays on Automatics, published in 1914, Leonardo Torres Quevedo wrote a brief history of Babbage's efforts at constructing a mechanical Difference Engine and Analytical Engine. The paper contains a design of a machine capable of calculating formulas like a^x(y − z)^2 for a sequence of sets of values. The whole machine was to be controlled by a read-only program, which was complete with provisions for conditional branching. He also introduced the idea of floating-point arithmetic. In 1920, to celebrate the 100th anniversary of the invention of the arithmometer, Torres presented in Paris the Electromechanical Arithmometer, which allowed a user to input arithmetic problems through a keyboard, and computed and printed the results, demonstrating the feasibility of an electromechanical analytical engine. During the first half of the 20th century, many scientific computing needs were met by increasingly sophisticated analog computers, which used a direct mechanical or electrical model of the problem as a basis for computation. However, these were not programmable and generally lacked the versatility and accuracy of modern digital computers. The first modern analog computer was a tide-predicting machine, invented by Sir William Thomson (later to become Lord Kelvin) in 1872. The differential analyser, a mechanical analog computer designed to solve differential equations by integration using wheel-and-disc mechanisms, was conceptualized in 1876 by James Thomson, the elder brother of the more famous Sir William Thomson. The art of mechanical analog computing reached its zenith with the differential analyzer, completed in 1931 by Vannevar Bush at MIT. By the 1950s, the success of digital electronic computers had spelled the end for most analog computing machines, but analog computers remained in use during the 1950s in some specialized applications such as education (slide rule) and aircraft (control systems). Claude Shannon's 1937 master's thesis laid the foundations of digital computing, with his insight of applying Boolean algebra to the analysis and synthesis of switching circuits being the basic concept which underlies all electronic digital computers. By 1938, the United States Navy had developed the Torpedo Data Computer, an electromechanical analog computer for submarines that used trigonometry to solve the problem of firing a torpedo at a moving target. During World War II, similar devices were developed in other countries. Early digital computers were electromechanical; electric switches drove mechanical relays to perform the calculation. These devices had a low operating speed and were eventually superseded by much faster all-electric computers, originally using vacuum tubes.
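Shannon's correspondence between switching circuits and Boolean algebra, mentioned above, can be sketched in a few lines of code: contacts wired in series behave like logical AND, contacts wired in parallel like OR, and a normally closed contact like NOT. The Python snippet below is purely illustrative; the small relay network it models is invented for the example and does not correspond to any circuit discussed in this article.

```python
# Minimal sketch of the circuit/Boolean-algebra correspondence (illustrative only).
from itertools import product

def series(a: bool, b: bool) -> bool:
    # Two contacts in series conduct only if both are closed: logical AND.
    return a and b

def parallel(a: bool, b: bool) -> bool:
    # Two contacts in parallel conduct if either is closed: logical OR.
    return a or b

def normally_closed(a: bool) -> bool:
    # A normally closed contact conducts when its coil is not energised: logical NOT.
    return not a

def circuit(x: bool, y: bool, z: bool) -> bool:
    # A hypothetical relay network: (x in series with y) in parallel with a normally closed z.
    return parallel(series(x, y), normally_closed(z))

# Enumerating every switch setting gives the circuit's truth table, i.e. the Boolean
# function it realises: here (x AND y) OR (NOT z).
for x, y, z in product([False, True], repeat=3):
    assert circuit(x, y, z) == ((x and y) or (not z))
    print(x, y, z, "->", circuit(x, y, z))
```

Analysing a network this way, or running the derivation in reverse to synthesise a network from a desired Boolean function, is the essence of the technique Shannon formalised.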
The Z2, created by German engineer Konrad Zuse in 1939 in Berlin, was one of the earliest examples of an electromechanical relay computer. In 1941, Zuse followed his earlier machine up with the Z3, the world's first working electromechanical programmable, fully automatic digital computer. The Z3 was built with 2000 relays, implementing a 22-bit word length that operated at a clock frequency of about 5–10 Hz. Program code was supplied on punched film while data could be stored in 64 words of memory or supplied from the keyboard. It was quite similar to modern machines in some respects, pioneering numerous advances such as floating-point numbers. Rather than the harder-to-implement decimal system (used in Charles Babbage's earlier design), using a binary system meant that Zuse's machines were easier to build and potentially more reliable, given the technologies available at that time. The Z3 was not itself a universal computer but could be extended to be Turing complete. Zuse's next computer, the Z4, became the world's first commercial computer; after initial delay due to the Second World War, it was completed in 1950 and delivered to the ETH Zurich. The computer was manufactured by Zuse's own company, Zuse KG, which was founded in 1941 as the first company with the sole purpose of developing computers in Berlin. The Z4 served as the inspiration for the construction of the ERMETH, the first Swiss computer and one of the first in Europe. Purely electronic circuit elements soon replaced their mechanical and electromechanical equivalents, at the same time that digital calculation replaced analog. The engineer Tommy Flowers, working at the Post Office Research Station in London in the 1930s, began to explore the possible use of electronics for the telephone exchange. Experimental equipment that he built in 1934 went into operation five years later, converting a portion of the telephone exchange network into an electronic data processing system, using thousands of vacuum tubes. In the US, John Vincent Atanasoff and Clifford E. Berry of Iowa State University developed and tested the Atanasoff–Berry Computer (ABC) in 1942, the first "automatic electronic digital computer". This design was also all-electronic and used about 300 vacuum tubes, with capacitors fixed in a mechanically rotating drum for memory. During World War II, the British code-breakers at Bletchley Park achieved a number of successes at breaking encrypted German military communications. The German encryption machine, Enigma, was first attacked with the help of the electro-mechanical bombes which were often run by women. To crack the more sophisticated German Lorenz SZ 40/42 machine, used for high-level Army communications, Max Newman and his colleagues commissioned Flowers to build the Colossus. He spent eleven months from early February 1943 designing and building the first Colossus. After a functional test in December 1943, Colossus was shipped to Bletchley Park, where it was delivered on 18 January 1944 and attacked its first message on 5 February. Colossus was the world's first electronic digital programmable computer. It used a large number of valves (vacuum tubes). It had paper-tape input and was capable of being configured to perform a variety of boolean logical operations on its data, but it was not Turing-complete. Nine Mk II Colossi were built (The Mk I was converted to a Mk II making ten machines in total). 
Colossus Mark I contained 1,500 thermionic valves (tubes), but Mark II, with 2,400 valves, was both five times faster and simpler to operate than Mark I, greatly speeding the decoding process. The ENIAC (Electronic Numerical Integrator and Computer) was the first electronic programmable computer built in the U.S. Although the ENIAC was similar to the Colossus, it was much faster, more flexible, and Turing-complete. Like the Colossus, a "program" on the ENIAC was defined by the states of its patch cables and switches, a far cry from the stored program electronic machines that came later. Once a program was written, it had to be mechanically set into the machine with manual resetting of plugs and switches. The programmers of the ENIAC were six women, often known collectively as the "ENIAC girls". It combined the high speed of electronics with the ability to be programmed for many complex problems. It could add or subtract 5000 times a second, a thousand times faster than any other machine. It also had modules to multiply, divide, and take square roots. High speed memory was limited to 20 words (about 80 bytes). Built under the direction of John Mauchly and J. Presper Eckert at the University of Pennsylvania, ENIAC's development and construction lasted from 1943 to full operation at the end of 1945. The machine was huge, weighing 30 tons, using 200 kilowatts of electric power and contained over 18,000 vacuum tubes, 1,500 relays, and hundreds of thousands of resistors, capacitors, and inductors. The principle of the modern computer was proposed by Alan Turing in his seminal 1936 paper, On Computable Numbers. Turing proposed a simple device that he called the "Universal Computing machine" and that is now known as a universal Turing machine. He proved that such a machine is capable of computing anything that is computable by executing instructions (program) stored on tape, allowing the machine to be programmable. The fundamental concept of Turing's design is the stored program, where all the instructions for computing are stored in memory. Von Neumann acknowledged that the central concept of the modern computer was due to this paper. Turing machines are to this day a central object of study in theory of computation. Except for the limitations imposed by their finite memory stores, modern computers are said to be Turing-complete, which is to say, they have algorithm execution capability equivalent to a universal Turing machine. Early computing machines had fixed programs. Changing a machine's function required re-wiring and re-structuring it. With the proposal of the stored-program computer, this changed. A stored-program computer includes by design an instruction set and can store in memory a set of instructions (a program) that details the computation. The theoretical basis for the stored-program computer was laid out by Alan Turing in his 1936 paper. In 1945, Turing joined the National Physical Laboratory and began work on developing an electronic stored-program digital computer. His 1945 report "Proposed Electronic Calculator" was the first specification for such a device. John von Neumann at the University of Pennsylvania also circulated his First Draft of a Report on the EDVAC in 1945. The Manchester Baby was the world's first stored-program computer. It was built at the University of Manchester in England by Frederic C. Williams, Tom Kilburn and Geoff Tootill, and ran its first program on 21 June 1948.
It was designed as a testbed for the Williams tube, the first random-access digital storage device. Although the computer was described as "small and primitive" by a 1998 retrospective, it was the first working machine to contain all of the elements essential to a modern electronic computer. As soon as the Baby had demonstrated the feasibility of its design, a project began at the university to develop it into a practically useful computer, the Manchester Mark 1. The Mark 1 in turn quickly became the prototype for the Ferranti Mark 1, the world's first commercially available general-purpose computer. Built by Ferranti, it was delivered to the University of Manchester in February 1951. At least seven of these later machines were delivered between 1953 and 1957, one of them to Shell labs in Amsterdam. In October 1947, the directors of British catering company J. Lyons & Company decided to take an active role in promoting the commercial development of computers. Lyons's LEO I computer, modelled closely on the Cambridge EDSAC of 1949, became operational in April 1951 and ran the world's first routine office computer job. The concept of a field-effect transistor was proposed by Julius Edgar Lilienfeld in 1925. John Bardeen and Walter Brattain, while working under William Shockley at Bell Labs, built the first working transistor, the point-contact transistor, in 1947, which was followed by Shockley's bipolar junction transistor in 1948. From 1955 onwards, transistors replaced vacuum tubes in computer designs, giving rise to the "second generation" of computers. Compared to vacuum tubes, transistors have many advantages: they are smaller and require less power, so they give off less heat. Junction transistors were much more reliable than vacuum tubes and had a longer, effectively indefinite, service life. Transistorized computers could contain tens of thousands of binary logic circuits in a relatively compact space. However, early junction transistors were relatively bulky devices that were difficult to manufacture on a mass-production basis, which limited them to a number of specialized applications. At the University of Manchester, a team under the leadership of Tom Kilburn designed and built a machine using the newly developed transistors instead of valves. Their first transistorized computer, and the first in the world, was operational by 1953, and a second version was completed there in April 1955. However, the machine did make use of valves to generate its 125 kHz clock waveforms and in the circuitry to read and write on its magnetic drum memory, so it was not the first completely transistorized computer. That distinction goes to the Harwell CADET of 1955, built by the electronics division of the Atomic Energy Research Establishment at Harwell. The metal–oxide–silicon field-effect transistor (MOSFET), also known as the MOS transistor, was invented at Bell Labs between 1955 and 1960 and was the first truly compact transistor that could be miniaturized and mass-produced for a wide range of uses. With its high scalability, much lower power consumption, and higher density than bipolar junction transistors, the MOSFET made it possible to build high-density integrated circuits. In addition to data processing, it also enabled the practical use of MOS transistors as memory cell storage elements, leading to the development of MOS semiconductor memory, which replaced earlier magnetic-core memory in computers.
The MOSFET led to the microcomputer revolution, and became the driving force behind the computer revolution. The MOSFET is the most widely used transistor in computers, and is the fundamental building block of digital electronics. The next great advance in computing power came with the advent of the integrated circuit (IC). The idea of the integrated circuit was first conceived by a radar scientist working for the Royal Radar Establishment of the Ministry of Defence, Geoffrey W.A. Dummer. Dummer presented the first public description of an integrated circuit at the Symposium on Progress in Quality Electronic Components in Washington, D.C., on 7 May 1952. The first working ICs were invented by Jack Kilby at Texas Instruments and Robert Noyce at Fairchild Semiconductor. Kilby recorded his initial ideas concerning the integrated circuit in July 1958, successfully demonstrating the first working integrated example on 12 September 1958. In his patent application of 6 February 1959, Kilby described his new device as "a body of semiconductor material ... wherein all the components of the electronic circuit are completely integrated". However, Kilby's invention was a hybrid integrated circuit (hybrid IC), rather than a monolithic integrated circuit (IC) chip. Kilby's IC had external wire connections, which made it difficult to mass-produce. Noyce also came up with his own idea of an integrated circuit half a year later than Kilby. Noyce's invention was the first true monolithic IC chip. His chip solved many practical problems that Kilby's had not. Produced at Fairchild Semiconductor, it was made of silicon, whereas Kilby's chip was made of germanium. Noyce's monolithic IC was fabricated using the planar process, developed by his colleague Jean Hoerni in early 1959. In turn, the planar process was based on Carl Frosch and Lincoln Derick's work on semiconductor surface passivation by silicon dioxide. Modern monolithic ICs are predominantly MOS (metal–oxide–semiconductor) integrated circuits, built from MOSFETs (MOS transistors). The earliest experimental MOS IC to be fabricated was a 16-transistor chip built by Fred Heiman and Steven Hofstein at RCA in 1962. General Microelectronics later introduced the first commercial MOS IC in 1964, developed by Robert Norman. Following the development of the self-aligned gate (silicon-gate) MOS transistor by Robert Kerwin, Donald Klein and John Sarace at Bell Labs in 1967, the first silicon-gate MOS IC with self-aligned gates was developed by Federico Faggin at Fairchild Semiconductor in 1968. The MOSFET has since become the most critical device component in modern ICs. The development of the MOS integrated circuit led to the invention of the microprocessor, and heralded an explosion in the commercial and personal use of computers. While the subject of exactly which device was the first microprocessor is contentious, partly due to lack of agreement on the exact definition of the term "microprocessor", it is largely undisputed that the first single-chip microprocessor was the Intel 4004, designed and realized by Federico Faggin with his silicon-gate MOS IC technology, along with Ted Hoff, Masatoshi Shima and Stanley Mazor at Intel.[b] In the early 1970s, MOS IC technology enabled the integration of more than 10,000 transistors on a single chip. Systems on a chip (SoCs) are complete computers on a microchip (or chip) the size of a coin. They may or may not have integrated RAM and flash memory.
If not integrated, the RAM is usually placed directly above (known as Package on package) or below (on the opposite side of the circuit board) the SoC, and the flash memory is usually placed right next to the SoC. This is done to improve data transfer speeds, as the data signals do not have to travel long distances. Since ENIAC in 1945, computers have advanced enormously, with modern SoCs (such as the Snapdragon 865) being the size of a coin while also being hundreds of thousands of times more powerful than ENIAC, integrating billions of transistors, and consuming only a few watts of power. The first mobile computers were heavy and ran from mains power. The 50 lb (23 kg) IBM 5100 was an early example. Later portables such as the Osborne 1 and Compaq Portable were considerably lighter but still needed to be plugged in. The first laptops, such as the Grid Compass, removed this requirement by incorporating batteries – and with the continued miniaturization of computing resources and advancements in portable battery life, portable computers grew in popularity in the 2000s. The same developments allowed manufacturers to integrate computing resources into cellular mobile phones by the early 2000s. These smartphones and tablets run on a variety of operating systems and recently became the dominant computing device on the market. These are powered by System on a Chip (SoCs), which are complete computers on a microchip the size of a coin. Types Computers can be classified in a number of different ways, including: A computer does not need to be electronic, nor even have a processor, nor RAM, nor even a hard disk. While popular usage of the word "computer" is synonymous with a personal electronic computer,[c] a typical modern definition of a computer is: "A device that computes, especially a programmable [usually] electronic machine that performs high-speed mathematical or logical operations or that assembles, stores, correlates, or otherwise processes information." According to this definition, any device that processes information qualifies as a computer. Hardware The term hardware covers all of those parts of a computer that are tangible physical objects. Circuits, computer chips, graphic cards, sound cards, memory (RAM), motherboard, displays, power supplies, cables, keyboards, printers and "mice" input devices are all hardware. A general-purpose computer has four main components: the arithmetic logic unit (ALU), the control unit, the memory, and the input and output devices (collectively termed I/O). These parts are interconnected by buses, often made of groups of wires. Inside each of these parts are thousands to trillions of small electrical circuits which can be turned off or on by means of an electronic switch. Each circuit represents a bit (binary digit) of information so that when the circuit is on it represents a "1", and when off it represents a "0" (in positive logic representation). The circuits are arranged in logic gates so that one or more of the circuits may control the state of one or more of the other circuits. Input devices are the means by which the operations of a computer are controlled and it is provided with data. Examples include: Output devices are the means by which a computer provides the results of its calculations in a human-accessible form. 
Examples include: The control unit (often called a control system or central controller) manages the computer's various components; it reads and interprets (decodes) the program instructions, transforming them into control signals that activate other parts of the computer.[e] Control systems in advanced computers may change the order of execution of some instructions to improve performance. A key component common to all CPUs is the program counter, a special memory cell (a register) that keeps track of which location in memory the next instruction is to be read from.[f] The control system's function is as follows— this is a simplified description, and some of these steps may be performed concurrently or in a different order depending on the type of CPU: Since the program counter is (conceptually) just another set of memory cells, it can be changed by calculations done in the ALU. Adding 100 to the program counter would cause the next instruction to be read from a place 100 locations further down the program. Instructions that modify the program counter are often known as "jumps" and allow for loops (instructions that are repeated by the computer) and often conditional instruction execution (both examples of control flow). The sequence of operations that the control unit goes through to process an instruction is in itself like a short computer program, and indeed, in some more complex CPU designs, there is another yet smaller computer called a microsequencer, which runs a microcode program that causes all of these events to happen. The control unit, ALU, and registers are collectively known as a central processing unit (CPU). Early CPUs were composed of many separate components. Since the 1970s, CPUs have typically been constructed on a single MOS integrated circuit chip called a microprocessor. The ALU is capable of performing two classes of operations: arithmetic and logic. The set of arithmetic operations that a particular ALU supports may be limited to addition and subtraction, or might include multiplication, division, trigonometry functions such as sine, cosine, etc., and square roots. Some can operate only on whole numbers (integers) while others use floating point to represent real numbers, albeit with limited precision. However, any computer that is capable of performing just the simplest operations can be programmed to break down the more complex operations into simple steps that it can perform. Therefore, any computer can be programmed to perform any arithmetic operation—although it will take more time to do so if its ALU does not directly support the operation. An ALU may also compare numbers and return Boolean truth values (true or false) depending on whether one is equal to, greater than or less than the other ("is 64 greater than 65?"). Logic operations involve Boolean logic: AND, OR, XOR, and NOT. These can be useful for creating complicated conditional statements and processing Boolean logic. Superscalar computers may contain multiple ALUs, allowing them to process several instructions simultaneously. Graphics processors and computers with SIMD and MIMD features often contain ALUs that can perform arithmetic on vectors and matrices. A computer's memory can be viewed as a list of cells into which numbers can be placed or read. Each cell has a numbered "address" and can store a single number. 
The computer can be instructed to "put the number 123 into the cell numbered 1357" or to "add the number that is in cell 1357 to the number that is in cell 2468 and put the answer into cell 1595." The information stored in memory may represent practically anything. Letters, numbers, even computer instructions can be placed into memory with equal ease. Since the CPU does not differentiate between different types of information, it is the software's responsibility to give significance to what the memory sees as nothing but a series of numbers. In almost all modern computers, each memory cell is set up to store binary numbers in groups of eight bits (called a byte). Each byte is able to represent 256 different numbers (2⁸ = 256); either from 0 to 255 or −128 to +127. To store larger numbers, several consecutive bytes may be used (typically, two, four or eight). When negative numbers are required, they are usually stored in two's complement notation. Other arrangements are possible, but are usually not seen outside of specialized applications or historical contexts. A computer can store any kind of information in memory if it can be represented numerically. Modern computers have billions or even trillions of bytes of memory. The CPU contains a special set of memory cells called registers that can be read and written to much more rapidly than the main memory area. There are typically between two and one hundred registers depending on the type of CPU. Registers are used for the most frequently needed data items to avoid having to access main memory every time data is needed. As data is constantly being worked on, reducing the need to access main memory (which is often slow compared to the ALU and control units) greatly increases the computer's speed. Computer main memory comes in two principal varieties: random-access memory (RAM) and read-only memory (ROM). RAM can be read and written to anytime the CPU commands it, but ROM is preloaded with data and software that never changes, therefore the CPU can only read from it. ROM is typically used to store the computer's initial start-up instructions. In general, the contents of RAM are erased when the power to the computer is turned off, but ROM retains its data indefinitely. In a PC, the ROM contains a specialized program called the BIOS that orchestrates loading the computer's operating system from the hard disk drive into RAM whenever the computer is turned on or reset. In embedded computers, which frequently do not have disk drives, all of the required software may be stored in ROM. Software stored in ROM is often called firmware, because it is notionally more like hardware than software. Flash memory blurs the distinction between ROM and RAM, as it retains its data when turned off but is also rewritable. It is typically much slower than conventional ROM and RAM, however, so its use is restricted to applications where high speed is unnecessary.[g] In more sophisticated computers there may be one or more RAM cache memories, which are slower than registers but faster than main memory. Generally computers with this sort of cache are designed to move frequently needed data into the cache automatically, often without the need for any intervention on the programmer's part. I/O is the means by which a computer exchanges information with the outside world. Devices that provide input or output to the computer are called peripherals. On a typical personal computer, peripherals include input devices like the keyboard and mouse, and output devices such as the display and printer.
Hard disk drives, floppy disk drives and optical disc drives serve as both input and output devices. Computer networking is another form of I/O. I/O devices are often complex computers in their own right, with their own CPU and memory. A graphics processing unit might contain fifty or more tiny computers that perform the calculations necessary to display 3D graphics.[citation needed] Modern desktop computers contain many smaller computers that assist the main CPU in performing I/O. A 2016-era flat screen display contains its own computer circuitry. While a computer may be viewed as running one gigantic program stored in its main memory, in some systems it is necessary to give the appearance of running several programs simultaneously. This is achieved by multitasking, i.e. having the computer switch rapidly between running each program in turn. One means by which this is done is with a special signal called an interrupt, which can periodically cause the computer to stop executing instructions where it was and do something else instead. By remembering where it was executing prior to the interrupt, the computer can return to that task later. If several programs are running "at the same time", then the interrupt generator might be causing several hundred interrupts per second, causing a program switch each time. Since modern computers typically execute instructions several orders of magnitude faster than human perception, it may appear that many programs are running at the same time, even though only one is ever executing in any given instant. This method of multitasking is sometimes termed "time-sharing" since each program is allocated a "slice" of time in turn. Before the era of inexpensive computers, the principal use for multitasking was to allow many people to share the same computer. Seemingly, multitasking would cause a computer that is switching between several programs to run more slowly, in direct proportion to the number of programs it is running, but most programs spend much of their time waiting for slow input/output devices to complete their tasks. If a program is waiting for the user to click on the mouse or press a key on the keyboard, then it will not take a "time slice" until the event it is waiting for has occurred. This frees up time for other programs to execute so that many programs may be run simultaneously without unacceptable speed loss. Some computers are designed to distribute their work across several CPUs in a multiprocessing configuration, a technique once employed in only large and powerful machines such as supercomputers, mainframe computers and servers. Multiprocessor and multi-core (multiple CPUs on a single integrated circuit) personal and laptop computers are now widely available, and are being increasingly used in lower-end markets as a result. Supercomputers in particular often have unique architectures that differ significantly from the basic stored-program architecture and from general-purpose computers.[h] They often feature thousands of CPUs, customized high-speed interconnects, and specialized computing hardware. Such designs tend to be useful for only specialized tasks due to the large scale of program organization required to use most of the available resources at once. Supercomputers usually see usage in large-scale simulation, graphics rendering, and cryptography applications, as well as with other so-called "embarrassingly parallel" tasks.
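As a concrete illustration of the "embarrassingly parallel" workloads mentioned above, the sketch below spreads independent tasks across several CPUs using Python's standard multiprocessing module. The work function and its inputs are hypothetical stand-ins for any computation whose tasks do not depend on one another.

```python
# Minimal sketch of an "embarrassingly parallel" workload spread across CPUs,
# using only the Python standard library. The work function is illustrative.
from multiprocessing import Pool

def simulate(seed: int) -> float:
    """Stand-in for one independent unit of work (e.g. a single simulation run)."""
    total = 0.0
    for i in range(1, 100_000):
        total += ((seed * i) % 7) / i
    return total

if __name__ == "__main__":
    seeds = range(8)                      # independent tasks, no shared state
    with Pool() as pool:                  # one worker per available CPU by default
        results = pool.map(simulate, seeds)
    print(results)
```

Because each task touches no shared data, the speedup scales roughly with the number of CPUs, which is exactly the property that makes such workloads attractive for multiprocessor machines and supercomputers.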
Software Software is the part of a computer system that consists of the encoded information that determines the computer's operation, such as data or instructions on how to process the data. In contrast to the physical hardware from which the system is built, software is immaterial. Software includes computer programs, libraries and related non-executable data, such as online documentation or digital media. It is often divided into system software and application software. Computer hardware and software require each other and neither is useful on its own. When software is stored in hardware that cannot easily be modified, such as with BIOS ROM in an IBM PC compatible computer, it is sometimes called "firmware". The defining feature of modern computers which distinguishes them from all other machines is that they can be programmed. That is to say that some type of instructions (the program) can be given to the computer, and it will process them. Modern computers based on the von Neumann architecture often have machine code in the form of an imperative programming language. In practical terms, a computer program may be just a few instructions or extend to many millions of instructions, as do the programs for word processors and web browsers for example. A typical modern computer can execute billions of instructions per second (gigaflops) and rarely makes a mistake over many years of operation. Large computer programs consisting of several million instructions may take teams of programmers years to write, and due to the complexity of the task almost certainly contain errors. This section applies to most common RAM machine–based computers. In most cases, computer instructions are simple: add one number to another, move some data from one location to another, send a message to some external device, etc. These instructions are read from the computer's memory and are generally carried out (executed) in the order they were given. However, there are usually specialized instructions to tell the computer to jump ahead or backwards to some other place in the program and to carry on executing from there. These are called "jump" instructions (or branches). Furthermore, jump instructions may be made to happen conditionally so that different sequences of instructions may be used depending on the result of some previous calculation or some external event. Many computers directly support subroutines by providing a type of jump that "remembers" the location it jumped from and another instruction to return to the instruction following that jump instruction. Program execution might be likened to reading a book. While a person will normally read each word and line in sequence, they may at times jump back to an earlier place in the text or skip sections that are not of interest. Similarly, a computer may sometimes go back and repeat the instructions in some section of the program over and over again until some internal condition is met. This is called the flow of control within the program and it is what allows the computer to perform tasks repeatedly without human intervention. Comparatively, a person using a pocket calculator can perform a basic arithmetic operation such as adding two numbers with just a few button presses. But to add together all of the numbers from 1 to 1,000 would take thousands of button presses and a lot of time, with a near certainty of making a mistake. On the other hand, a computer may be programmed to do this with just a few simple instructions. 
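The MIPS assembly listing that the next sentence introduces did not survive extraction; the sketch below is a reconstruction of a loop that sums the integers from 1 to 1,000, with register allocation and label names chosen for illustration rather than taken from the original article.

```
        addi $t0, $zero, 0      # running sum = 0
        addi $t1, $zero, 1      # counter = 1
loop:   slti $t2, $t1, 1001     # $t2 = 1 while counter <= 1000
        beq  $t2, $zero, done   # leave the loop once counter exceeds 1000
        add  $t0, $t0, $t1      # sum = sum + counter
        addi $t1, $t1, 1        # counter = counter + 1
        j    loop               # repeat the summing process
done:   add  $v0, $t0, $zero    # copy the result (500500) into $v0
```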
The example above is written in the MIPS assembly language. Once told to run this program, the computer will perform the repetitive addition task without further human intervention. It will almost never make a mistake, and a modern PC can complete the task in a fraction of a second. In most computers, individual instructions are stored as machine code with each instruction being given a unique number (its operation code or opcode for short). The command to add two numbers together would have one opcode; the command to multiply them would have a different opcode, and so on. The simplest computers are able to perform any of a handful of different instructions; the more complex computers have several hundred to choose from, each with a unique numerical code. Since the computer's memory is able to store numbers, it can also store the instruction codes. This leads to the important fact that entire programs (which are just lists of these instructions) can be represented as lists of numbers and can themselves be manipulated inside the computer in the same way as numeric data. The fundamental concept of storing programs in the computer's memory alongside the data they operate on is the crux of the von Neumann, or stored program, architecture. In some cases, a computer might store some or all of its program in memory that is kept separate from the data it operates on. This is called the Harvard architecture after the Harvard Mark I computer. Modern von Neumann computers display some traits of the Harvard architecture in their designs, such as in CPU caches. While it is possible to write computer programs as long lists of numbers (machine language) and while this technique was used with many early computers,[i] it is extremely tedious and potentially error-prone to do so in practice, especially for complicated programs. Instead, each basic instruction can be given a short name that is indicative of its function and easy to remember – a mnemonic such as ADD, SUB, MULT or JUMP. These mnemonics are collectively known as a computer's assembly language. Converting programs written in assembly language into something the computer can actually understand (machine language) is usually done by a computer program called an assembler. A programming language is a notation system for writing the source code from which a computer program is produced. Programming languages provide various ways of specifying programs for computers to run. Unlike natural languages, programming languages are designed to permit no ambiguity and to be concise. They are purely written languages and are often difficult to read aloud. They are generally either translated into machine code by a compiler or an assembler before being run, or translated directly at run time by an interpreter. Sometimes programs are executed by a hybrid method of the two techniques. There are thousands of programming languages—some intended for general purpose programming, others useful for only highly specialized applications. Machine languages and the assembly languages that represent them (collectively termed low-level programming languages) are generally unique to the particular architecture of a computer's central processing unit (CPU).
For instance, an ARM architecture CPU (such as may be found in a smartphone or a hand-held videogame) cannot understand the machine language of an x86 CPU that might be in a PC.[j] Historically, a significant number of other CPU architectures were created and saw extensive use, notably including the MOS Technology 6502 and 6510 in addition to the Zilog Z80. Although considerably easier than in machine language, writing long programs in assembly language is often difficult and is also error prone. Therefore, most practical programs are written in more abstract high-level programming languages that are able to express the needs of the programmer more conveniently (and thereby help reduce programmer error). High level languages are usually "compiled" into machine language (or sometimes into assembly language and then into machine language) using another computer program called a compiler.[k] High level languages are less related to the workings of the target computer than assembly language, and more related to the language and structure of the problem(s) to be solved by the final program. It is therefore often possible to use different compilers to translate the same high level language program into the machine language of many different types of computer. This is part of the means by which software like video games may be made available for different computer architectures such as personal computers and various video game consoles. Program design of small programs is relatively simple and involves the analysis of the problem, collection of inputs, using the programming constructs within languages, devising or using established procedures and algorithms, providing data for output devices and solutions to the problem as applicable. As problems become larger and more complex, features such as subprograms, modules, formal documentation, and new paradigms such as object-oriented programming are encountered. Large programs involving thousands of lines of code and more require formal software methodologies. The task of developing large software systems presents a significant intellectual challenge. Producing software with an acceptably high reliability within a predictable schedule and budget has historically been difficult; the academic and professional discipline of software engineering concentrates specifically on this challenge. Errors in computer programs are called "bugs". They may be benign and not affect the usefulness of the program, or have only subtle effects. However, in some cases they may cause the program or the entire system to "hang", becoming unresponsive to input such as mouse clicks or keystrokes, to completely fail, or to crash. Otherwise benign bugs may sometimes be harnessed for malicious intent by an unscrupulous user writing an exploit, code designed to take advantage of a bug and disrupt a computer's proper execution. Bugs are usually not the fault of the computer. Since computers merely execute the instructions they are given, bugs are nearly always the result of programmer error or an oversight made in the program's design.[l] Admiral Grace Hopper, an American computer scientist and developer of the first compiler, is credited with having first used the term "bugs" in computing after a dead moth was found shorting a relay in the Harvard Mark II computer in September 1947. Networking and the Internet Computers have been used to coordinate information between multiple physical locations since the 1950s. The U.S.
military's SAGE system was the first large-scale example of such a system, which led to a number of special-purpose commercial systems such as Sabre. In the 1970s, computer engineers at research institutions throughout the United States began to link their computers together using telecommunications technology. The effort was funded by ARPA (now DARPA), and the computer network that resulted was called the ARPANET. Logic gates are a common abstraction which can apply to most digital or analog computing paradigms. The ability to store and execute lists of instructions called programs makes computers extremely versatile, distinguishing them from calculators. The Church–Turing thesis is a mathematical statement of this versatility: any computer with a minimum capability (being Turing-complete) is, in principle, capable of performing the same tasks that any other computer can perform. Therefore, any type of computer (netbook, supercomputer, cellular automaton, etc.) is able to perform the same computational tasks, given enough time and storage capacity. In the 20th century, artificial intelligence systems were predominantly symbolic: they executed code that was explicitly programmed by software developers. Machine learning models, however, have a set of parameters that are adjusted throughout training, so that the model learns to accomplish a task based on the provided data. The efficiency of machine learning (and in particular of neural networks) has rapidly improved with progress in hardware for parallel computing, mainly graphics processing units (GPUs). Some large language models are able to control computers or robots. AI progress may lead to the creation of artificial general intelligence (AGI), a type of AI that could accomplish virtually any intellectual task at least as well as humans. Professions and organizations As the use of computers has spread throughout society, there are an increasing number of careers involving computers. The need for computers to work well together and to be able to exchange information has spawned the need for many standards organizations, clubs and societies of both a formal and informal nature. See also Notes References Sources External links |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/XAI_(company)#cite_note-3] | [TOKENS: 1856] |
Contents xAI (company) X.AI Corp., doing business as xAI, is an American company working in the areas of artificial intelligence (AI), social media and technology; it is a wholly owned subsidiary of American aerospace company SpaceX. The company was founded by Elon Musk in 2023; its flagship products are the generative AI chatbot named Grok and the social media platform X (formerly Twitter), the latter of which it acquired in March 2025. History xAI was founded on March 9, 2023, by Musk. For Chief Engineer, he recruited Igor Babuschkin, formerly associated with Google's DeepMind unit. Musk officially announced the formation of xAI on July 12, 2023. As of July 2023, xAI was headquartered in the San Francisco Bay Area. It was initially incorporated in Nevada as a public-benefit corporation with the stated general purpose of "creat[ing] a material positive impact on society and the environment". By May 2024, it had dropped the public-benefit status. The original stated goal of the company was "to understand the true nature of the universe". In November 2023, Musk stated that "X Corp investors will own 25% of xAI". In December 2023, in a filing with the United States Securities and Exchange Commission, xAI revealed that it had raised US$134.7 million in outside funding out of a total of up to $1 billion. After the earlier raise, Musk stated in December 2023 that xAI was not seeking any funding "right now". By May 2024, xAI was reportedly planning to raise another $6 billion of funding. Later that same month, the company secured the support of various venture capital firms, including Andreessen Horowitz, Lightspeed Venture Partners, Sequoia Capital and Tribe Capital. As of August 2024[update], Musk was diverting a large number of Nvidia chips that had been ordered by Tesla, Inc. to X and xAI. On December 23, 2024, xAI raised an additional $6 billion in a private funding round supported by Fidelity, BlackRock, and Sequoia Capital, among others, making its total funding to date over $12 billion. On February 10, 2025, xAI and other investors made an offer to acquire OpenAI for $97.4 billion. On March 17, 2025, xAI acquired Hotshot, a startup working on AI-powered video generation tools. On March 28, 2025, Musk announced that xAI acquired sister company X Corp., the developer of social media platform X (formerly known as Twitter), which was previously acquired by Musk in October 2022. The deal, an all-stock transaction, valued X at $33 billion, with a full valuation of $45 billion when factoring in $12 billion in debt. Meanwhile, xAI itself was valued at $80 billion. Both companies were combined into a single entity called X.AI Holdings Corp. On July 1, 2025, Morgan Stanley announced that it had raised $5 billion in debt for xAI and that xAI had separately raised $5 billion in equity. The debt consists of secured notes and term loans. Morgan Stanley took no stake in the debt. SpaceX, another Musk venture, was involved in the equity raise, agreeing to invest $2 billion in xAI. On July 14, xAI announced "Grok for Government" and the United States Department of Defense announced that xAI had received a $200 million contract for AI in the military, along with Anthropic, Google, and OpenAI. On September 12, xAI laid off 500 data annotation workers. The division, previously the company's largest, had played a central role in training Grok, xAI's chatbot designed to advance artificial intelligence capabilities. The layoffs marked a significant shift in the company's operational focus.
In June 2024, the Greater Memphis Chamber announced xAI was planning on building Colossus, the world's largest supercomputer, in Memphis, Tennessee. After 122 days of construction, the supercomputer went fully operational in December 2024. Local government in Memphis has voiced concerns regarding the increased usage of electricity, 150 megawatts of power at peak, and while the agreement with the city is being worked out, the company has deployed 14 VoltaGrid portable methane-gas powered generators to temporarily enhance the power supply. Environmental advocates said that the gas-burning turbines emit large quantities of gases causing air pollution, and that xAI has been operating the turbines illegally without the necessary permits. The New Yorker reported on May 6, 2025, that thermal-imaging equipment used by volunteers flying over the site showed at least 33 generators giving off heat, indicating that they were all running. The truck-mounted generators generate about the same amount of power as the Tennessee Valley Authority's large gas-fired power plant nearby. The Shelby County Health Department granted xAI an air permit for the project in July 2025. The Southern Environmental Law Center has stated that the current gas turbines produce about 2,000 tons of nitrogen oxide emissions annually. On November 26, 2025, Elon Musk announced plans to build a solar farm near Colossus with an estimated output of 30 megawatts of electricity, about 10% of the data center's estimated power use. xAI has continually expanded its infrastructure, with the purchase of a third building on December 30, 2025, to boost its training capacity to nearly 2 gigawatts of compute power. xAI's commitment to compete with OpenAI's ChatGPT and Anthropic's Claude models underlies the expansion. Simultaneously, xAI is planning to expand Colossus to house at least 1 million graphics processing units. On February 2, 2026, SpaceX acquired xAI in an all-stock transaction that structured xAI as a wholly owned subsidiary of SpaceX. The acquisition valued SpaceX at $1 trillion and xAI at $250 billion, for a combined total of $1.25 trillion. On February 11, 2026, xAI was restructured following the SpaceX acquisition, leading to some layoffs. The restructuring organized xAI into four primary development teams: one for the Grok app and others for features such as Grok Imagine, while Grokipedia, X and API features would fall under smaller teams. Products In July 2023, Musk said that a politically correct AI would be "incredibly dangerous" and misleading, citing as an example the fictional HAL 9000 from the 1968 film 2001: A Space Odyssey. Musk instead said that xAI would be "maximally truth-seeking". Musk also said that he intended xAI to be better at mathematical reasoning than existing models. On November 4, 2023, xAI unveiled Grok, an AI chatbot that is integrated with X. xAI stated that when the bot is out of beta, it will only be available to X's Premium+ subscribers. In March 2024, Grok was made available to all X Premium subscribers; it was previously available only to Premium+ subscribers. On March 17, 2024, xAI released Grok-1 as open source. On March 29, 2024, Grok-1.5 was announced, with "improved reasoning capabilities" and a context length of 128,000 tokens. On April 12, 2024, Grok-1.5 Vision (Grok-1.5V) was announced.[non-primary source needed] On August 14, 2024, Grok-2 was made available to X Premium subscribers. It is the first Grok model with image generation capabilities.
On October 21, 2024, xAI released an application programming interface (API). On December 9, 2024, xAI released a text-to-image model named Aurora. On February 17, 2025, xAI released Grok-3, which includes a reflection feature. xAI also introduced a web search function called DeepSearch. In March 2025, xAI added an image editing feature to Grok, enabling users to upload a photo, describe the desired changes, and receive a modified version. Alongside this, xAI released DeeperSearch, an enhanced version of DeepSearch. On July 9, 2025, xAI unveiled Grok-4. A high-performance version of the model, called Grok Heavy, was also unveiled, with access at the time costing $300 per month. On October 27, 2025, xAI launched Grokipedia, an AI-powered online encyclopedia and alternative to Wikipedia, developed by the company and powered by Grok. Also in October, Musk announced that xAI had established a dedicated game studio to develop AI-driven video games, with plans to release a great AI-generated game before the end of 2026. Valuation See also Notes References External links |
======================================== |
[SOURCE: https://techcrunch.com/2026/02/20/why-investors-are-going-gaga-over-solid-state-transformers/] | [TOKENS: 1828] |
Why investors are going gaga over solid-state transformers It’s no secret that the electrical grid is aging, but one part stands out from the rest. Transformers haven’t changed much since Thomas Edison made his first light bulb. Now, a string of startups are working to modernize the transformer, replacing it with modern power electronics that promise to give grid operators more control over how and where electricity flows. “It becomes a very powerful device, equivalent to your internet router,” Subhashish Bhattacharya, co-founder and CTO of DG Matrix, told TechCrunch. Three startups recently raised sizable rounds to scale up production of their solid-state transformer technologies. This week, DG Matrix raised a $60 million Series A and Heron Power raised $140 million in a Series B round. In November, Amperesand raised $80 million to chase after the burgeoning data center market. Existing transformers are reliable and efficient, but that’s about it. They’re relatively crude instruments, made largely of copper and iron. They react passively to changes on the grid and are capable of tackling only one task per device. “An old-school steel, copper, and oil transformer doesn’t have any monitoring, doesn’t have any control,” Drew Baglino, founder and CEO of Heron Power, told TechCrunch. In instances where electricity surges or a power plant trips offline, that can be a liability. Solid-state transformers, by contrast, can incorporate power from a range of different sources — including traditional power plants, renewables, and batteries — and transform that electricity into either alternating current (AC) or direct current (DC) at a number of different voltages, allowing them to replace several devices. For data centers, solid-state transformers offer an appealing alternative, allowing them to shrink the footprint of their power systems while giving them finer control over where and how electricity is directed. Solid-state transformers are poised to arrive at a time when existing transformers are aging and demand for new ones is surging — a classic tech supercycle. Most transformers on the grid today are several decades old, according to the National Laboratory of the Rockies (NLR; formerly the National Renewable Energy Laboratory). As demand from data centers, EV chargers, and other parts of the grid rises, the NLR expects the amount of power flowing through transformers to double by 2050. While data centers are the first market those companies are chasing, they also have their sights set on the electrical grid, which in the U.S. alone hosts as many as 80 million transformers. “All of the distribution transformers are ultimately going to need to be replaced. Over 50% of them are 35 years old. There’s a big need for an upgrade,” Baglino said. Because solid-state transformers are made from silicon-based materials, they’re flexible, controllable, and software-updatable. They’re also immune from price fluctuations that rock the copper market.
“Power semiconductors keep getting cheaper. Steel, copper, and oil, unfortunately, is not in that situation,” Baglino said. “Commodity prices can move all over the place, and they generally move up.” In an old-style transformer, power flows into the transformer through copper wires wound around one side of an O-shaped iron core. As the electricity flows, it induces a magnetic field in the core. On the other side of the core, the magnetic field induces electricity in another set of copper windings. If the wires wrap around the core more times on the input side than the output side, the voltage decreases on the output side. If the ratio inverts, the output voltage increases. Solid-state transformers eschew the copper windings in favor of semiconductors, using materials like silicon carbide or gallium nitride to handle frequency conversion. They can come in a range of configurations, with the most comprehensive setup consisting of three basic parts: a rectifier that converts alternating current to direct current, a converter that changes the voltage of the direct current, and an inverter that changes the direct current back into alternating current. Unlike iron-core transformers, solid-state transformers can handle power that flows in both directions, making them useful in places that need backup power, like data centers. In a data center, a solid-state transformer can replace several different pieces of equipment, not just the transformer that steps voltage down from the grid. Every data center uses backup power, which requires a string of devices to bring power into the facility. Solid-state transformers can handle all of those duties in one box. The technology also allows data centers to more easily integrate so-called behind-the-meter power, where generating capacity is connected directly to the data center, not the grid. Those typically require another set of transformers. And when coupled with grid-scale batteries, solid-state transformers can eliminate uninterruptible power supplies (UPS), too, freeing up space inside the data center for more racks. “If you add up the cost of everything we’ve taken out, we’re 60% to 70% of that cost,” Haroon Inam, co-founder and CEO of DG Matrix, told TechCrunch. DG Matrix has been focusing on its Interport technology, which can route power from multiple sources to multiple loads of differing voltages, a setup the company holds multiple patents on. Heron Power, meanwhile, is working on transforming medium-voltage power in data centers, solar farms, and grid-scale battery installations. In a data center, its Heron Link transformers can provide racks with 30 seconds of power while backup sources come online. Altogether, Heron Link units occupy 70% less space than existing parts. At a solar farm, Heron Power’s transformers can perform the duties of an inverter and a transformer for the same price. In a head-to-head comparison, solid-state transformers still command a cost premium over iron-core transformers. For that reason, they’re unlikely to replace the giant humming boxes at grid substations in the very near future. But in data centers and at EV charging hubs, where solid-state transformers take the place of several pieces of equipment, they’ll start making inroads. When they finally hit the grid in bigger numbers, they have the potential to cut down on transmission and distribution costs, one of the biggest contributors to utility bill inflation.
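The turns-ratio relationship described above can be checked with a few lines of Python; the voltages and winding counts are illustrative assumptions, not figures from any of the companies mentioned.

```python
# Ideal iron-core transformer: output voltage scales with the turns ratio.
# V_out / V_in = N_secondary / N_primary. Numbers below are illustrative only.

def ideal_output_voltage(v_in: float, n_primary: int, n_secondary: int) -> float:
    """Voltage induced on the secondary winding of an ideal transformer."""
    return v_in * n_secondary / n_primary

# More turns on the input (primary) side than the output side steps voltage down...
print(ideal_output_voltage(13_800, 2875, 100))   # 13.8 kV in -> 480.0 V out
# ...and inverting the ratio steps it up.
print(ideal_output_voltage(480, 100, 2875))      # 480 V in -> 13,800.0 V out

# A solid-state transformer reaches the same endpoints through power electronics:
# AC in -> rectifier (AC to DC) -> DC-DC converter (changes voltage) -> inverter (DC to AC out),
# with each stage under software control rather than fixed by winding counts.
```

Because the conversion chain is set in software rather than in copper windings, the same hardware can change its voltage ratio, direction of flow, and source mix on demand, which is the flexibility the startups above are selling.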
Because today’s transformers are passive, unable to react to fluctuations, distribution networks have been built with a significant amount of spare capacity, Baglino said. Solid-state transformers, though, can respond to changing conditions, allowing grid operators to send more power through the same lines. “You can actually make the infrastructure more affordable because you’re putting more kilowatt-hours through the same poles and wires,” he said. “That’s where intelligence, in place of passive mechanical objects that were designed 100 years ago, can make a big difference.”
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Apparent_magnitude] | [TOKENS: 3626] |
Contents Apparent magnitude Apparent magnitude (m) is a measure of the brightness of a star, astronomical object, or other celestial objects such as artificial satellites. Its value depends on its intrinsic luminosity, its distance, and any extinction of the object's light caused by interstellar dust or atmosphere along the line of sight to the observer. Unless stated otherwise, the word magnitude in astronomy usually refers to a celestial object's apparent magnitude. The magnitude scale likely dates to before the ancient Roman astronomer Claudius Ptolemy, whose star catalog popularized the system by listing stars from 1st magnitude (brightest) to 6th magnitude (dimmest). The modern scale was mathematically defined to closely match this historical system by Norman Pogson in 1856. The scale is reverse logarithmic: the brighter an object is, the lower its magnitude number. A difference of 1.0 in magnitude corresponds to a brightness ratio of $\sqrt[5]{100}$ (the fifth root of 100), or about 2.512. For example, a magnitude 2.0 star is 2.512 times as bright as a magnitude 3.0 star, 6.31 times as bright as a magnitude 4.0 star, and 100 times as bright as a magnitude 7.0 star. The brightest astronomical objects have negative apparent magnitudes: for example, Venus at −4.2 or Sirius at −1.46. The faintest stars visible with the naked eye on the darkest night have apparent magnitudes of about +6.5, though this varies depending on a person's eyesight and with altitude and atmospheric conditions. The apparent magnitudes of known objects range from −26.832 (the Sun) to objects in deep Hubble Space Telescope images of magnitude +31.5. The measurement of apparent magnitude is called photometry. Photometric measurements are made in the ultraviolet, visible, or infrared wavelength bands using standard passband filters belonging to photometric systems such as the UBV system or the Strömgren uvbyβ system. Measurement in the V-band may be referred to as the apparent visual magnitude. Absolute magnitude is a related quantity which measures the luminosity that a celestial object emits, rather than its apparent brightness when observed, and is expressed on the same reverse logarithmic scale. Absolute magnitude is defined as the apparent magnitude that a star or object would have if it were observed from a distance of 10 parsecs (33 light-years; 3.1×10¹⁴ kilometres; 1.9×10¹⁴ miles). Therefore, it is of greater use in stellar astrophysics since it refers to a property of a star regardless of how close it is to Earth. But in observational astronomy and popular stargazing, references to "magnitude" are understood to mean apparent magnitude. Amateur astronomers commonly express the darkness of the sky in terms of limiting magnitude, i.e. the apparent magnitude of the faintest star they can see with the naked eye. This can be useful as a way of monitoring the spread of light pollution. Apparent magnitude is technically a measure of illuminance, which can also be measured in photometric units such as lux. History The scale used to indicate magnitude originates in the Hellenistic practice of dividing stars visible to the naked eye into six magnitudes. The brightest stars in the night sky were said to be of first magnitude (m = 1), whereas the faintest were of sixth magnitude (m = 6), which is the limit of human visual perception (without the aid of a telescope).
Each grade of magnitude was considered twice the brightness of the following grade (a logarithmic scale), although that ratio was subjective as no photodetectors existed. This rather crude scale for the brightness of stars was popularized by Ptolemy in his Almagest and is generally believed to have originated with Hipparchus. This cannot be proved or disproved because Hipparchus's original star catalogue is lost. The only preserved text by Hipparchus himself (a commentary to Aratus) clearly documents that he did not have a system to describe brightness with numbers: He always uses terms like "big" or "small", "bright" or "faint" or even descriptions such as "visible at full moon". In 1856, Norman Robert Pogson formalized the system by defining a first magnitude star as a star that is 100 times as bright as a sixth-magnitude star, thereby establishing the logarithmic scale still in use today. This implies that a star of magnitude m is about 2.512 times as bright as a star of magnitude m + 1. This figure, the fifth root of 100, became known as Pogson's Ratio. The 1884 Harvard Photometry and 1886 Potsdamer Durchmusterung star catalogs popularized Pogson's ratio, and eventually it became a de facto standard in modern astronomy to describe differences in brightness. Defining and calibrating what magnitude 0.0 means is difficult, and different types of measurements which detect different kinds of light (possibly by using filters) have different zero points. Pogson's original 1856 paper defined magnitude 6.0 to be the faintest star the unaided eye can see, but the true limit for the faintest possible visible star varies depending on the atmosphere and how high a star is in the sky. The Harvard Photometry used an average of 100 stars close to Polaris to define magnitude 5.0. Later, the Johnson UBV photometric system defined multiple types of photometric measurements with different filters, where magnitude 0.0 for each filter is defined to be the average of six stars with the same spectral type as Vega. This was done so the color index of these stars would be 0. Although this system is often called "Vega normalized", Vega is slightly dimmer than the six-star average used to define magnitude 0.0, meaning Vega's magnitude is normalized to 0.03 by definition. With the modern magnitude systems, brightness is described using Pogson's ratio. In practice, magnitude numbers rarely go above 30 before stars become too faint to detect. While Vega is close to magnitude 0, there are four brighter stars in the night sky at visible wavelengths (and more at infrared wavelengths) as well as the bright planets Venus, Mars, and Jupiter, and since brighter means smaller magnitude, these must be described by negative magnitudes. For example, Sirius, the brightest star of the celestial sphere, has a magnitude of −1.4 in the visible. Negative magnitudes for other very bright astronomical objects can be found in the table below. Astronomers have developed other photometric zero point systems as alternatives to Vega normalized systems. The most widely used is the AB magnitude system, in which photometric zero points are based on a hypothetical reference spectrum having constant flux per unit frequency interval, rather than using a stellar spectrum or blackbody curve as the reference. The AB magnitude zero point is defined such that an object's AB and Vega-based magnitudes will be approximately equal in the V filter band.
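Pogson's ratio lends itself to a quick numerical illustration. The short Python sketch below is written for this summary (it is not taken from any cited source) and simply converts a magnitude difference into a brightness ratio using the fifth root of 100 described above.

```python
# Pogson's ratio: a difference of 5 magnitudes corresponds to a brightness factor of exactly 100.
POGSON_RATIO = 100 ** (1 / 5)  # ~2.5118864...

def brightness_ratio(delta_m: float) -> float:
    """Return how many times brighter an object is than one that is delta_m magnitudes fainter."""
    return POGSON_RATIO ** delta_m

print(round(POGSON_RATIO, 3))         # 2.512 (1 magnitude difference)
print(round(brightness_ratio(2), 2))  # 6.31  (2 magnitudes, cf. the example above)
print(round(brightness_ratio(5), 1))  # 100.0 (5 magnitudes, by definition)
```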
However, the AB magnitude system is defined assuming an idealized detector measuring only one wavelength of light, while real detectors accept energy from a range of wavelengths. Measurement Precision measurement of magnitude (photometry) requires calibration of the photographic or (usually) electronic detection apparatus. This generally involves contemporaneous observation, under identical conditions, of standard stars whose magnitude using that spectral filter is accurately known. Moreover, as the amount of light actually received by a telescope is reduced due to transmission through the Earth's atmosphere, the airmasses of the target and calibration stars must be taken into account. Typically one would observe a few different stars of known magnitude which are sufficiently similar. Calibrator stars close in the sky to the target are favoured (to avoid large differences in the atmospheric paths). If those stars have somewhat different zenith angles (altitudes) then a correction factor as a function of airmass can be derived and applied to the airmass at the target's position. Such calibration obtains the brightness as would be observed from above the atmosphere, where apparent magnitude is defined.[citation needed] The apparent magnitude scale in astronomy reflects the received power of stars and not their amplitude. Newcomers should consider using the relative brightness measure in astrophotography to adjust exposure times between stars. Apparent magnitude also integrates over the entire object, regardless of its focus, and this needs to be taken into account when scaling exposure times for objects with significant apparent size, like the Sun, Moon and planets. For example, directly scaling the exposure time from the Moon to the Sun works because they are approximately the same size in the sky. However, scaling the exposure from the Moon to Saturn would result in an overexposure if the image of Saturn takes up a smaller area on your sensor than the Moon did (at the same magnification, or more generally, f/#). Calculations The dimmer an object appears, the higher the numerical value given to its magnitude, with a difference of 5 magnitudes corresponding to a brightness factor of exactly 100. Therefore, the magnitude m, in the spectral band x, would be given by $m_x = -5 \log_{100}\left(\frac{F_x}{F_{x,0}}\right)$, which is more commonly expressed in terms of common (base-10) logarithms as $m_x = -2.5 \log_{10}\left(\frac{F_x}{F_{x,0}}\right)$, where $F_x$ is the observed irradiance using spectral filter x, and $F_{x,0}$ is the reference flux (zero-point) for that photometric filter. Since an increase of 5 magnitudes corresponds to a decrease in brightness by a factor of exactly 100, each magnitude increase implies a decrease in brightness by the factor $\sqrt[5]{100} \approx 2.512$ (Pogson's ratio). Inverting the above formula, a magnitude difference $m_1 - m_2 = \Delta m$ implies a brightness factor of $\frac{F_2}{F_1} = 100^{\Delta m / 5} = 10^{0.4\,\Delta m} \approx 2.512^{\Delta m}$. What is the ratio in brightness between the Sun and the full Moon? The apparent magnitude of the Sun is −26.832 (brighter), and the mean magnitude of the full moon is −12.74 (dimmer). Difference in magnitude: $x = m_1 - m_2 = (-12.74) - (-26.832) = 14.09$.
Brightness factor: $v_b = 10^{0.4x} = 10^{0.4 \times 14.09} \approx 432\,513$. The Sun appears to be approximately 400,000 times as bright as the full Moon. Sometimes one might wish to add brightness. For example, photometry on closely separated double stars may only be able to produce a measurement of their combined light output. The combined magnitude of that double star, knowing only the magnitudes of the individual components, can be found by adding the brightnesses (in linear units) corresponding to each magnitude: $10^{-0.4 m_f} = 10^{-0.4 m_1} + 10^{-0.4 m_2}$. Solving for $m_f$ yields $m_f = -2.5 \log_{10}\left(10^{-0.4 m_1} + 10^{-0.4 m_2}\right)$, where $m_f$ is the resulting magnitude after adding the brightnesses referred to by $m_1$ and $m_2$. While magnitude generally refers to a measurement in a particular filter band corresponding to some range of wavelengths, the apparent or absolute bolometric magnitude (mbol) is a measure of an object's apparent or absolute brightness integrated over all wavelengths of the electromagnetic spectrum (also known as the object's irradiance or power, respectively). The zero point of the apparent bolometric magnitude scale is based on the definition that an apparent bolometric magnitude of 0 mag is equivalent to a received irradiance of 2.518×10^−8 watts per square metre (W·m^−2). While apparent magnitude is a measure of the brightness of an object as seen by a particular observer, absolute magnitude is a measure of the intrinsic brightness of an object. Flux decreases with distance according to an inverse-square law, so the apparent magnitude of a star depends on both its absolute brightness and its distance (and any extinction). For example, a star at one distance will have the same apparent magnitude as a star four times as bright at twice that distance. In contrast, the intrinsic brightness of an astronomical object does not depend on the distance of the observer or any extinction. The absolute magnitude M of a star or astronomical object is defined as the apparent magnitude it would have as seen from a distance of 10 parsecs (33 ly). The absolute magnitude of the Sun is 4.83 in the V band (visual), 4.68 in the Gaia satellite's G band (green) and 5.48 in the B band (blue). In the case of a planet or asteroid, the absolute magnitude H rather means the apparent magnitude it would have if it were 1 astronomical unit (150,000,000 km) from both the observer and the Sun, and fully illuminated at maximum opposition (a configuration that is only theoretically achievable, with the observer situated on the surface of the Sun). Standard reference values The magnitude scale is a reverse logarithmic scale. A common misconception is that the logarithmic nature of the scale is because the human eye itself has a logarithmic response. In Pogson's time this was thought to be true (see Weber–Fechner law), but it is now believed that the response is a power law (see Stevens' power law). Magnitude is complicated by the fact that light is not monochromatic. The sensitivity of a light detector varies according to the wavelength of the light, and the way it varies depends on the type of light detector.
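These worked examples translate directly into a few lines of code. The Python sketch below is an illustrative implementation written for this summary (not taken from any cited source); it reproduces the Sun-versus-full-Moon ratio and the combined magnitude of an unresolved double star using the formulas above.

```python
import math

def magnitude(flux: float, flux_zero_point: float) -> float:
    """m_x = -2.5 * log10(F_x / F_x0), the defining relation given above."""
    return -2.5 * math.log10(flux / flux_zero_point)

def brightness_factor(m_dim: float, m_bright: float) -> float:
    """Flux ratio implied by a magnitude difference (dimmer minus brighter)."""
    return 10 ** (0.4 * (m_dim - m_bright))

def combined_magnitude(m1: float, m2: float) -> float:
    """Magnitude of two unresolved sources, adding their fluxes in linear units."""
    return -2.5 * math.log10(10 ** (-0.4 * m1) + 10 ** (-0.4 * m2))

# A source 100 times fainter than the zero-point flux sits at magnitude 5.0.
print(magnitude(1.0, 100.0))                            # 5.0

# Sun (-26.832) versus full Moon (-12.74): roughly 400,000 times brighter.
print(f"{brightness_factor(-12.74, -26.832):,.0f}")     # ~433,000

# Two equal magnitude-3.0 components appear about 0.75 mag brighter together.
print(round(combined_magnitude(3.0, 3.0), 2))           # 2.25
```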
For this reason, it is necessary to specify how the magnitude is measured for the value to be meaningful. For this purpose the UBV system is widely used, in which the magnitude is measured in three different wavelength bands: U (centred at about 350 nm, in the near ultraviolet), B (about 435 nm, in the blue region) and V (about 555 nm, in the middle of the human visual range in daylight). The V band was chosen for spectral purposes and gives magnitudes closely corresponding to those seen by the human eye. When an apparent magnitude is discussed without further qualification, the V magnitude is generally understood. Because cooler stars, such as red giants and red dwarfs, emit little energy in the blue and UV regions of the spectrum, their power is often under-represented by the UBV scale. Indeed, some L and T class stars have an estimated magnitude of well over 100, because they emit extremely little visible light, but are strongest in infrared. Measures of magnitude need cautious treatment and it is extremely important to measure like with like. On early 20th century and older orthochromatic (blue-sensitive) photographic film, the relative brightnesses of the blue supergiant Rigel and the red supergiant irregular variable star Betelgeuse (at maximum) are reversed compared to what human eyes perceive, because this archaic film is more sensitive to blue light than it is to red light. Magnitudes obtained from this method are known as photographic magnitudes, and are now considered obsolete. For objects within the Milky Way with a given absolute magnitude, 5 is added to the apparent magnitude for every tenfold increase in the distance to the object. For objects at very great distances (far beyond the Milky Way), this relationship must be adjusted for redshifts and for non-Euclidean distance measures due to general relativity. For planets and other Solar System bodies, the apparent magnitude is derived from its phase curve and the distances to the Sun and observer. List of apparent magnitudes Some of the listed magnitudes are approximate. Telescope sensitivity depends on observing time, optical bandpass, and interfering light from scattering and airglow.
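The rule of thumb quoted above (add 5 magnitudes for every tenfold increase in distance) is the distance modulus relating apparent and absolute magnitude. A minimal Python sketch, assuming negligible extinction and Euclidean distances (i.e. objects well inside the Milky Way):

```python
import math

def apparent_magnitude(absolute_magnitude: float, distance_pc: float) -> float:
    """m = M + 5 * log10(d / 10 pc), ignoring extinction and relativistic effects."""
    return absolute_magnitude + 5 * math.log10(distance_pc / 10)

# The Sun (M_V = 4.83, quoted earlier) placed at 10, 100 and 1000 parsecs:
for d in (10, 100, 1000):
    print(d, round(apparent_magnitude(4.83, d), 2))
# 10 -> 4.83, 100 -> 9.83, 1000 -> 14.83: five magnitudes per factor of ten in distance.
```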
======================================== |
[SOURCE: https://he.wikipedia.org/wiki/פלייסטיישן_3] | [TOKENS: 4473] |
Contents PlayStation 3 The PlayStation 3 (PS3) is the third home video game console from Sony Interactive Entertainment. The console belongs to the successful PlayStation family of consoles. Sony first unveiled the PlayStation 3 at E3, the international electronic entertainment expo, held in May 2005 in Los Angeles. Sony repeatedly postponed the console's launch because of manufacturing and development difficulties; sales finally began on 17 November 2006. In May 2017 Sony announced the end of PlayStation 3 production; about 87 million units were manufactured over the console's lifetime. Technical details The PlayStation 3's processor, called the "Cell", was developed jointly by IBM, Sony and Toshiba. It consists of one main processing core, called the PPE, and eight secondary processing units, called SPEs, of which only seven are active - one reserved for the operating system and six for games. Each of them runs at 3.2 GHz. The PPE is a 64-bit, dual-threaded RISC processor based on IBM's PowerPC architecture and intended for general-purpose processing, while the SPEs are 128-bit SIMD processors intended for parallel processing. The PlayStation 3's graphics processor was designed by NVIDIA, one of the world's two main graphics processor manufacturers (the other, ATI, has been a division of AMD since 2006), together with Sony. The processor is called the "RSX" and runs at a clock rate of 550 MHz. The RSX can display 3D graphics at the highest high-definition resolution - 1080 lines by 1920 columns (a higher resolution than the Xbox 360). Owners of LCD screens capable of displaying HD content can enjoy the best of the processor's graphical quality. The RSX can process three-dimensional objects at a complexity of 300 million polygons per second. By comparison, the PlayStation 2's processor could handle only 66 million polygons per second, while the competing Xbox 360 from Microsoft can process about 290 million polygons per second. The RSX has 256 MB of GDDR3 memory with a clock rate of 650 MHz. The PlayStation 3 itself has an additional 256 MB of XDR DRAM memory, which is clocked at 400 MHz but transfers 8 bits in parallel, i.e. a data rate of 3.2 gigabits per second. Together, the RSX and the main processor have 512 MB of memory, divided into two pools intended for different purposes. The various software updates added new and improved features. One example is firmware version 1.82, which introduced the ability to copy the disc in the drive onto the console's built-in hard drive. In 2011 the RPCS3 emulator was written, which is compatible with about 2,000 games (as of October 2021). Feature comparison Ceramic white, Gigabit Ethernet, compatibility with the original PlayStation via emulation software, and a glossy finish. The Slim model On 18 August 2009, Sony unveiled the slim version of the PlayStation 3, the PS3 Slim, at the Gamescom games fair in Cologne, Germany, in a move similar to that of the PlayStation 2. The device takes up 32% less space (volume) and requires 34% less energy, but there are no differences between the applications of the slim and the original model. The technical specifications remained the same, and the main advantage is the packing of the components into a smaller, more convenient case. Another difference is that Sony removed the option of installing another operating system, since few users made use of it. The Super Slim model On 12 September 2012, Sony introduced a thinner version of the PlayStation 3, the "Super Slim". This version introduced a noticeable change in the device's design, including, among other things, a significant reduction in the device's size and a Blu-ray drive that is opened manually, similar to that of the PlayStation 2. The slim version was marketed in three main bundles around the world. This version of the PlayStation 3 was also marketed in special bundles featuring the colours blue, red and white. With the release of the most recent game in the "God of War" series, "God of War: Ascension", a special bundle went on sale that included the game, one specially designed controller, and a console styled after the game's protagonist. Competition The PlayStation 3's two big competitors are Microsoft's Xbox 360 and Nintendo's Wii. The Wii's processing capabilities are lower than those of the other consoles and its price is much lower; broadly speaking, it targets younger customers who do not demand impressive graphics performance.
The PlayStation 3 and the Xbox 360 compete for the wallets of similar customers, and the competition between them is fierce. As of February 2013, about 75 million PlayStation 3 units had been sold. The pace of sales grew significantly with the launch of exclusive games, and a further rise in sales was recorded with the launch of the Move, Sony's motion controller. Launch After many delays, Sony released the console to the market on 17 November 2006. The delays arose from severe problems in the console's manufacturing process, caused by Sony's attempts to make the process cheaper. This caused a long hold-up, and the launch date was pushed back from spring 2006 to the final date. The PlayStation 3 reached the market in the United States about a year after the competing Xbox 360 from Microsoft and at the same time as the Wii from Nintendo. In its first days the product was a success, but compared with the two competing consoles, which for months sold in very large quantities, the PlayStation 3 suffered from more limited sales, though its sales were still fairly successful. According to NPD data, up to January 2008 the console's sales in the United States were lower than those of its two competitors mentioned above. The sales led to altercations between customers because of shortages of the product; in the United States several shooting incidents were recorded at the lines of people waiting. Running Linux on the PlayStation 3 On the PlayStation 3 (the original, "fat" model) it was possible to run the Linux operating system, in the Yellow Dog 5.0 release, but in software update 3.21 Sony blocked this option on all models, because the operating system could be used to break the device's security system, as a German hacker discovered.
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Birthday_party] | [TOKENS: 3511] |
Contents Party A party is a gathering of people who have been invited by a host for the purposes of socializing, conversation, recreation, or as part of a festival or other commemoration or celebration of a special occasion. A party will often feature food and beverages, and generally conversation, music, dancing, or other forms of entertainment. Some parties are held in honor of a specific person, day, or event, such as a birthday party, a Super Bowl party, or a St. Patrick's Day party. Parties of this kind are often called celebrations. A party is not necessarily a private occasion. Public parties are sometimes held in parks, restaurants, pubs, beer gardens, nightclubs, or bars, and people attending such parties may be charged an admission fee by the host. Large parties in public streets may celebrate events such as Mardi Gras or the signing of a peace treaty ending a long war. Types A birthday party is a celebration of the anniversary of the birth of the person who is being honored. While there is historical precedent for birthday parties for the rich and powerful throughout history, the tradition extended to middle-class Americans around the nineteenth century and took on more modern norms and traditions in the twentieth century. Birthday parties are now a feature of many cultures. In Western cultures, birthday parties include a number of common rituals. The guests may be asked to bring a gift for the honored person. Party locations are often decorated with colorful decorations, such as balloons and streamers. A birthday cake is usually served with lit candles that are to be blown out after a "birthday wish" has been made. The person being honored will be given the first piece of cake. While the birthday cake is being brought to the table, the song "Happy Birthday to You" or some other birthday song is sung by the guests. At parties for children, time is often taken for the "gift opening" wherein the individual whose birthday is celebrated opens each of the gifts brought. It is also common at children's parties for the host to give parting gifts to the attendees in the form of "goodie bags". Children and even adults sometimes wear colorful cone-shaped party hats. Birthday parties are often larger and more extravagant if they celebrate someone who has reached what is regarded in the culture as a milestone age, such as the transition from childhood to adulthood. Examples of traditional coming of age celebrations include the North American sweet sixteen party and the Latin American quinceañera. Since medieval times, children have dressed specially for birthday parties; there is evidence to suggest historical birthday parties existed in Germany as kinderfeste. A children's party or kids' party is a party for children such as a birthday party or tea party. Since medieval times, children have dressed specially for such occasions. A surprise party is a party that is not made known beforehand to the person in whose honor it is being held. Birthday surprise parties are the most common kind of surprise party. At most such parties, the guests will arrive an hour or so before the honoree arrives. Often, a friend in on the surprise will lead the honoree to the location of the party without letting on anything. The guests might even conceal themselves from view, and then when the honoree enters the room, they leap from hiding and all shout, "Surprise!". For some surprise birthday parties, it is considered to be a good tactic to shock the honoree. 
Streamers, silly string, and balloons may be used for this purpose. Evidence of a party, such as decorations and balloons, is not made visible from the exterior of the home or party venue, so that the honoree will suspect nothing. A dinner party is a social gathering at which people eat dinner together, usually in the host's home. At the most formal dinner parties, the dinner is served on a dining table with place settings. Dinner parties are often preceded by a cocktail hour in a living room or bar, where guests drink cocktails while mingling and conversing. Wine is usually served throughout the meal, often with a different wine accompanying each course. At less formal dinner parties, a buffet is provided. Guests choose food from the buffet and eat while standing up and conversing. Women guests may wear cocktail dresses; men may wear blazers. At some informal dinner parties, the host may ask guests to bring food or beverages (a main dish, a side dish, a dessert, or appetizers). A party of this type is called a potluck or potluck dinner. In the United States, potlucks are very often held in churches and community centers. A garden party is a party in a park or a garden. An event described as a garden party is usually more formal than other outdoor gatherings, which may be called simply parties, picnics, barbecues, etc. A garden party can be a prestigious event. For example, invitations by the British Sovereign to garden parties at Buckingham Palace or at the Palace of Holyroodhouse (in Scotland) are considered an honor. The President of France holds a garden party at the Palais de l'Elysée in Paris on Bastille Day. A cocktail party is a party at which cocktails are served. It is sometimes called a "cocktail reception". Women who attend a cocktail party may wear a cocktail dress. A cocktail hat is sometimes worn as a fashion statement. In Anglo-American culture, a tea party is a formal gathering for afternoon tea. These parties were traditionally attended only by women, but men may also be invited. Tea parties are often characterized by the use of prestigious tableware, such as bone china and silver. The table, whatever its size or cost, is made to look its prettiest, with cloth napkins and matching cups and plates. In addition to tea, larger parties may serve punch or, in cold weather, hot chocolate. The tea is accompanied by a variety of easily managed foods. Thin sandwiches such as cucumber or tomato, bananas, cake slices, buns, and cookies are all common choices. Formal receptions are parties that are designed to receive a large number of guests, often at prestigious venues such as Buckingham Palace, the White House, or Government Houses of the British Empire and Commonwealth. The hosts and any guests of honor form a receiving line in order of precedence near the entrance. Each guest is announced to the host, who greets each one in turn as he or she arrives. Each guest properly speaks little more than his name (if necessary) and a conventional greeting or congratulation to each person in the receiving line. In this way, the line of guests progresses steadily without unnecessary delay. After formally receiving each guest in this fashion, the hosts may mingle with the guests. Somewhat less formal receptions are common in academic settings, sometimes to honor a guest lecturer, or to celebrate a special occasion such as retirement of a respected member of staff.
Receptions are also common in symposium or academic conference settings, as an environment for attendees to mingle and interact informally. These gatherings may be accompanied by a sit-down dinner, or more commonly, a stand-up informal buffet meal. Receptions are also held to celebrate exhibition openings at art galleries or museums. The featured artist or artists are often present, as well as the curators who organized the exhibition. In addition or instead, a celebratory reception may be held partway through or at the end of an exhibition run. This alternative scheduling allows guests more time to see the exhibition in depth at their own pace, before meeting the featured guests. Some food is often served, as in academic gatherings. Refreshments at a reception may be as minimal as coffee or lemonade, or as elaborate as those at a state dinner. In the 18th century, in France and England, it became fashionable for wealthy, well-married ladies who had a residence "in town" to invite accomplished guests to visit their home in the evening, to partake of refreshments and cultural conversation. Soirées often included refined musical entertainment, and the term is still sometimes used to define a certain sophisticated type of evening party. Society hostesses included actresses or other women with an influential reputation. The character of the hostess determined the character of the soirée and the choice of guests. Famous soirée hostesses include Hester Thrale and Madame de Staël. A dance is a social gathering at which the guests dance. It may be a casual, informal affair, or a structured event, such as a school dance or a charity ball. Dances usually take place during the evening. An afternoon dance is formally known as a tea dance. Some dances feature specific kinds of dancing, such as square dancing. A ball is a large formal party that features ballroom dancing. Women guests wear ball gowns; men wear evening dress. A block party is a public party that is attended by the residents of a specific city block or neighborhood. These parties are typically held in a city street that has been closed to traffic to accommodate the party. At some block parties, attendees are free to pass from house to house, socializing, and often drinking alcoholic beverages. At a masquerade ball, guests wear masks to conceal their identities. Guests at a costume party or a fancy dress party wear costumes. These parties are sometimes associated with holiday events, such as Halloween and Mardi Gras. In English and American culture during the Christmas season, it is traditional to have a Christmas caroling party. People go from door to door in a neighborhood and sing Christmas carols. Some popular Christmas carols are "We Wish You a Merry Christmas", "Deck the Halls", "The Twelve Days of Christmas", "Frosty the Snowman", "Jingle Bells", "Silver Bells", "Santa Claus Is Comin' to Town", and "O Holy Night". In Spain, this type of party is called El Aguinaldo. It is much the same as in England and the United States, the only difference being that the children who sing the carols are given tips. Christmas songs are called villancicos in Spain; they are mainly sung by children at small parties. Dance parties are gatherings in bars or community centers where the guests dance to house music, techno music, or disco. The music for dance parties is usually selected and played by a disc jockey. A spin-off of dance parties, the rave involves dancing to loud house music, techno music, or industrial music.
Rave parties may be attended by as few as a score of people in a basement or, more likely, by a few hundred people in a club, to as many as thousands in a large warehouse, field, or even tens of thousands in a sporting arena, amusement park, or other large space. Raves are associated with illegal drugs such as ecstasy and psychedelic drugs. A house party is a party where a large group of people get together at a private home to socialize. House parties that involve the drinking of beer pumped from a keg are called keg parties or "keggers." These parties are popular in North America, the United Kingdom, and Australia and are often attended by people under the legal drinking age. Sometimes, even older party-goers run afoul of the law for having provided alcoholic beverages to minors. Arrests may also be made for violating a noise ordinance, for disorderly conduct, and even for operating a "blind pig", an establishment that illegally sells alcoholic beverages. On college campuses, parties are often hosted by fraternities. Outdoor parties include bush parties and beach parties. Bush parties (also called "field parties") are held in a secluded area of a forest ("bush"), where friends gather to drink and talk. These parties are often held around a bonfire. Beach parties are held on a sandy shoreline of a lake, river, or sea, and also often feature a bonfire. School-related parties for teenagers and young adults include proms and graduation parties, which are held in honor of someone who has recently graduated from a school or university. A pool party is a party in which the guests swim in a swimming pool. A singles dance party and mixer is a party which is organized for people who are not married and who want to find a partner for friendship, dating, or sex. Usually a "mixer game" is played, to make it easy for people to meet each other. For example, each guest may be given a card with an inspiring quotation on it. The game is to find a potential partner who has the same quotation. Couples who have matching cards may be given a small prize. These parties are sponsored by various organizations, both non-profit and for-profit. A fundraising party, or fundraiser, is a party that is held for the purpose of collecting money that will be given to some person or to some institution, such as a school, charity, business, or political campaign. These parties are usually formal and consist of a dinner followed by speeches or by a presentation extolling whatever the money is being raised for. It is very common to charge an admission fee for parties of this kind. This fee may be as high as several thousand dollars, especially if money is being raised for a political campaign. In some places, parties to celebrate graduation from school, college, or university are popular. A graduation party may be held on campus or at an external venue, and transportation may be provided when the location is far away. A shower is a party whose primary purpose is to give gifts to the guest of honor. Traditionally, a bridal shower is a way for an engaged woman to be "showered" with gifts for her upcoming married life (see hope chest). Guests are expected to bring a small gift related to the upcoming life event. Themed games are a frequent sight at this sort of party. A new twist on the baby shower for a pregnant woman is the gender reveal party, made possible by modern ultrasound technology. A housewarming party may be held when a family, couple, or person moves into a new house or apartment.
It is an occasion for the hosts to show their new home to their friends. Housewarming parties are typically informal and do not include any planned activities other than a tour of the new house or apartment. Invited family members and friends may bring gifts for the new home. A welcome party is held for the purpose of welcoming a newcomer, such as a new club member, a new employee, or a family's new baby. In many cultures, it is customary to throw a farewell party in honor of someone who is moving away or departing on a long trip (often called a "going away party" and sometimes called a bon voyage party). Retirement parties for departing co-workers fall into this category. Several are described in Japan in Shusaku Endo's 1974 novel When I Whistle. A cast party is a celebration following the final performance of a theatrical event, such as a play, a musical, or an opera. A party of this kind may also be held following the end of shooting for a motion picture (called a "wrap party") or after the season's final episode of a television series. Cast parties are traditionally held for most theater performances, both professional and amateur. Invited guests are usually restricted to performers, crew members, and a few others who did not participate in the performance, such as sponsors and donors who have helped fund the production. A pre-party is a party that is held immediately before a school dance, a wedding, a birthday party, or a bar mitzvah. These parties are usually of short duration and sometimes involve getting ready for the event (e.g., the guests may put on makeup or costumes). Guests usually leave at the same time and arrive at the event together. Often people engage in pregaming or drinking before an event or a night out, especially if the event lacks access to alcohol. An after-party is a party that is held after a play, wedding, school dance, or other more formal event. Legal obstacles to having mixed-sex parties In 2023, the Iranian government arrested 300 people in Semnan for going to a party that was not sex-segregated. In Yazd, a citizen was jailed for five years for hosting a party that was not sex-segregated. In Karaj, 11 people were arrested for organizing a party that was not sex-segregated. Authorities also closed down hundreds of cafes, and renamed "Yalda night" and Chaharshanbe Suri night the day of respecting the host and the day of respecting neighbors. In 2021, an Iranian ambassador was dismissed for hosting a party. The government detains and whips people who attend birthday parties, Yalda night celebrations, or private parties, and closes down businesses; celebrating Tirgan is treated as a crime as well. In December 2023, the government arrested several people for organizing parties over the internet. Throughout Mashhad, public partying and celebration of Yalda night were outlawed. In 2024, 250 people, including three Europeans, were arrested at a rave. Parties on special days International, Australia, Canada, France, Germany, India (many other regional festivals, mostly different Hindu festivals in every state), Iran and other Persified societies, Ireland, Israel, Mexico, New Zealand, Pakistan, Scotland, Sweden, United Kingdom, United States, Uruguay. Parties associated with religious events Christian, Islamic, Jewish.
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Wikipedia:Manual_of_Style/Embedded_lists] | [TOKENS: 5866] |
Contents Wikipedia:Manual of Style/Lists Lists are commonly used in Wikipedia to organize information. Lists may be found within the body of a prose article, in appendices such as a "Publications" or "Works" section, or as a stand-alone article. This guideline explains when and how to use lists appropriately. Types of lists Wikipedia differentiates between articles that consist primarily of lists (generally called "lists" or "stand-alone lists") and articles that consist primarily of prose (called "articles"). Articles are intended to consist primarily of prose, though they may contain some lists. List articles are encyclopedia pages consisting of introductory material in the lead section followed by a list, possibly arranged in sub-sections. The items on these lists might include links to specific articles or other information, and must be supported with references like any article. The titles of stand-alone lists typically begin with the type of list (List of, Index of, etc.), followed by the article's subject, e.g., List of vegetable oils. They can be organised alphabetically, by subject classification or by topics in a flat or hierarchical structure. The title and bullet style, or vertical style, is common for stand-alone lists. These Wikipedia articles follow the Wikipedia:Stand-alone lists style guideline. Embedded lists are lists used within articles that supplement the article's prose content. They are included in the text itself or appended, and may be in table format. Wikipedia uses several standard appendices, usually in list format, as well as navigational templates. Embedded lists should be used only when appropriate; sometimes the information in a list is better presented as prose. Presenting too much statistical data in list format may contravene policy. It can be appropriate to use a list style when the items in a list are "children" of the paragraphs that precede them. Such "children" logically qualify for indentation beneath their parent description. In this case, indenting the paragraphs in list form may make them easier to read, especially if the paragraphs are very short. The following example works both with and without the bullets: The city's skyline has been composed of numerous and varied skyscrapers, many of which are icons of 20th-century architecture. The Flatiron Building, standing 285 ft (87 meters) high, was one of the tallest buildings in the city upon its completion in 1902, made possible by its steel skeleton. It was one of the first buildings designed with a steel framework, and to achieve this height with other construction methods of that time would have been very difficult. The Woolworth Building, a neo-Gothic "Cathedral of Commerce" overlooking City Hall, was designed by Cass Gilbert. At 792 feet (241 meters), it became the world's tallest building upon its completion in 1913, an honor it retained until 1930, when it was overtaken by 40 Wall Street. That same year, the Chrysler Building took the lead as the tallest building in the world, scraping the sky at 1,046 feet (319 m). More impressive than its height is the building's design, by William Van Alen. An art deco masterpiece with an exterior crafted of brick, the Chrysler Building continues to be a favorite of New Yorkers to this day. 
Lists of works of individuals or groups, such as bibliographies, discographies, filmographies, album personnel and track listings are typically presented in simple list format, though it is expected that the information will be supported elsewhere in the article by prose analysis of the main points, and that if the lists become unwieldy, they are split off into stand-alone lists per WP:Summary style. Timelines and chronologies can be useful supplements to prose descriptions of real-world histories. The content of a list is governed by the same content policies as prose, including principles of due weight and avoiding original research. Ensure that list items have the same importance to the subject as would be required for the item to be included in the text of the article, according to Wikipedia policies and guidelines (including WP:Trivia sections). Consider whether prose is more appropriate. Specific advice regarding timelines is given in Wikipedia:Timeline standards. "See also" lists and "Related topics" lists are valuable navigational tools that assist users in finding related Wikipedia articles. When deciding what articles and lists of articles to append to any given entry, it is useful to try to put yourself inside the mind of readers: Ask yourself where would a reader likely want to go after reading the article. Typically this will include three types of links: There is some controversy over how many links to articles and links to lists that should be put in any article. Some people separate the "links to articles" (put in the "See also" section) from the "links to lists" (put in the "Related topics" section), but this is not necessary unless there are too many links for one section alone. Some feel the optimum number of links to lists that should be included at the end of any given article is zero, one, or two. Others feel that a more comprehensive set of lists would be useful. In general, when deciding what list to include, the same criteria used to decide what articles to include in the "See also" section should be used. Editors should try to put themselves in the readers' frame of mind and ask "Where will I likely want to go after reading this article?". As a general rule, the "See also" section should not repeat links that appear in the article's body. Reference lists show information sources outside of Wikipedia. The two most common types are: Wikipedia is not a link collection and articles with only external links are actively discouraged, but it is appropriate to reference more detailed material from the Internet. This is particularly the case when you have used a web site as an important source of information. Most lists on Wikipedia are item lists, but not all. Specialized types of lists include: Purposes of lists Lists have three main purposes: The list may be a valuable information source. This is particularly the case for a structured list. Examples would include lists organized chronologically, grouped by theme, or annotated lists. Lists which contain internally linked terms (i.e., wikilinks) serve, in aggregate, as natural tables of contents and indexes of Wikipedia. If users have some general idea of what they are looking for but do not know the specific terminology, they could browse the lists of basic topics and more comprehensive lists of topics, which in turn lead to most if not all of Wikipedia's lists, which in turn lead to related articles. 
Lists are also provided in portals to assist in navigating their subjects, and lists are often placed in articles via the use of series boxes and other navigation templates. Some lists are useful for Wikipedia development purposes. The lists of related topics give an indication of the state of Wikipedia, the articles that have been written, and the articles that have yet to be written. However, as Wikipedia is optimized for readers over editors, any lists which exist primarily for development or maintenance purposes (such as a list that consists entirely of red links and does not serve an informational purpose; especially a list of missing topics) should be in either the project or user space, not the main space. Redundancy of lists and categories is beneficial because the two formats work together; the principle is covered in the guideline Wikipedia:Categories, lists, and navigation templates. Like categories, lists can be used for keeping track of changes in the listed pages, using the Related Changes feature. Unlike a category, a list also allows keeping a history of its contents; lists also permit a large number of entries to appear on a single page. List naming For a stand-alone list, the list's title is the page name. For an embedded list, the list's title is usually a section title (for instance, Latin Emperor § Latin emperors of Constantinople, 1204–1261), but it can be shorter. The list title should not be misleading and should normally not include abbreviations. Additionally, an overly precise list title can be less useful and can make the list difficult to find; the precise inclusion criteria for the list should be spelled out in the lead section (see below), not the title. For instance, words like complete and notable are normally excluded from list titles. Instead, the lead makes clear whether the list is complete or whether it is limited to widely-known or notable members (i.e., those that merit articles). Note that the word "famous" is considered an unnecessary "peacock" embellishment and should not be used. Sorting a list Lists may be sorted alphabetically (e.g. for people: by surname, given name, initials), chronologically (by date, usually oldest first), or occasionally by other criteria. To suggest that a list in an article or section should be sorted, use {{Unsorted list}}. List layout Prefer prose where a passage is understood easily as regular text that appears in its ordinary form, without metrical structure or line breaks. Prose is preferred in articles because it allows the presentation of detail and clarification of context in a way that a simple list may not. It is best suited to articles because their purpose is to explain. {{prose}} can be used to indicate a list which may be better-written as prose. Many stub articles can be improved by converting unnecessary lists into encyclopedic prose. See also: WP:Manual of Style/Trivia sections. Use proper markup: Employ careful wiki markup- or template-based list code (see Help:List for many pointers). Especially do not leave blank lines between items in a list, since this causes the MediaWiki software to misinterpret each item as beginning a new list. (There are HTML techniques to insert linebreaks or additional paragraphs into a list item.) Avoid misuse of list markup in articles for visual styling of non-list material. To float pictures to the right of the list, one should put the image markup before the first item in most cases, see the example "A". 
Inserting the image markup as a separate line within the list (as in example "B") once again will split it into two half-lists. Should the length of the list items or the topical relevance of said image discourage display at the top corner, consider placing it after the asterisk of the first list-item it illustrates (as in example "C") to avoid breaking continuity of the unordered list (<ul>) element. Note: When floating images to the left of a list, use the {{flowlist}} template to prevent disrupting the indentation of the bullet-points. Use a bulleted (unordered) list by default, especially for long lists. Use a numbered (ordered) list only if there is a need to refer to items by number, the sequence of items is important, or the numbering exists in the real world (e.g., tracks on an album). List items should be formatted consistently in a list. Unless there is a good reason to use different list types in the same page, consistency throughout an article is also desirable. Use sentence case by default for list items, whether they are complete sentences or not. Sentence case is used for around 99% of lists on Wikipedia. Title case (as used for book titles) is not used for list entries. Lowercase is best reserved for: Use the same grammatical form for all items in a list – avoid mixing sentences and sentence fragments as items. A list item should not end with a full stop unless it consists of a complete sentence or is the end of a list that forms one. When elements contain (or are) titles of works or other proper names, these retain their original capitalization, regardless how the rest of the list is formatted. A list title in a section heading provides a direct edit point, if one enables section editing. It also enables the automatic table of contents to detect the list. It is not required, however, and should not be used for a list that is not the focus of a section, or for lists in an article that uses a lot of short lists and which is better arranged by more topical headings that group related lists. Lists should have introductory material; for stand-alone lists, this should be the lead section. This introductory material should make clear the scope of the list. It should also provide explanation for non-obvious characteristics of the list, such as the list's structure. Stand-alone lists may place non-obvious characteristics in a separate introductory section (e.g. List of compositions by Johann Sebastian Bach § Listing Bach's compositions). Lists and their supporting material must comply with standard Wikipedia content policies and guidelines, including Wikipedia:Neutral point of view and should not create content forks. Exercise caution when self-referencing Wikipedia, to ensure any self-reference is acceptable. For example, notability is often a criterion used for stand-alone lists (and sometimes embedded ones), but many other self-references create problems. To include a self-reference, format it with {{Self-reference link}}. Some information, such as "Notable people" or "Alumni", which may be read for context or scanned for content, may be formatted with a section lead and a descriptive, bulleted list, or as prose, depending on size. If the list is long, is unable to be summarised, but is not appropriate for splitting out, then a section lead, with a descriptive, bulleted list may be more appropriate than a long prose section. Although lists may be organized in different ways, they must always be organized. 
The most basic form of organization is alphabetical or numerical (such as List of Star Wars starfighters), though if items have specific dates a chronological format is sometimes preferable (List of Belarusian Prime Ministers). When using a more complex form of organization, (by origin, by use, by type, etc.), the criteria for categorization must be clear and consistent. Just as a reader or editor could easily assume that the headings A, B, C would be followed by D (rather than 1903), more complex systems should be just as explicit. If a list of Australians in international prisons contains the headings Argentina and Cambodia (organization by country), it would be inappropriate for an editor to add the heading Drug trafficking (organization by offense). If a list entry logically belongs in two or more categories (e.g., an Australian in an Argentine prison for drug trafficking), this suggests that the list categorization might be flawed, and should be re-examined. Lists should never contain "Unsorted" or "Miscellaneous" headings, as all items worthy of inclusion in the list can be sorted by some criteria, although it is entirely possible that the formatting of the list would need to be revamped to include all appropriate items. Not-yet-sorted items may be included on the list's talk page while their categorization is determined. Keep lists and tables as short as feasible for their purpose and scope: material within a list should relate to the article topic without going into unnecessary detail; and statistical data kept to a minimum per policy. Some material may not be appropriate for reducing or summarizing using the summary style method. An embedded list may need to be split off entirely into a list article, leaving a {{See}} template which produces: In some cases, a list style may be preferable to a long sequence within a sentence, compare: Lists, whether they are stand-alone lists (also called list articles) or embedded lists, are encyclopedic content just as paragraph-only articles or sections are. Therefore, all individual items on the list must follow Wikipedia's content policies: the core content policies of Verifiability (through good sources in the item's one or more references), No original research, and Neutral point of view, plus the other content policies as well. Content should be sourced where it appears with inline citations if the content contains any of the four kinds of material absolutely required to have citations. Although the format of a list might require less detail per topic, Wikipedia policies and procedures apply equally to both a list of similar things as well as to any related article to which an individual thing on the list might be linked. It is important to be bold in adding or editing items on a list, but also to balance boldness with being thoughtful, a balance which all content policies are aimed at helping editors achieve. Edits of uncertain quality can be first discussed on the talk page for feedback from other editors. Besides being useful for such feedback, a talk page discussion is also a good review process for reaching consensus before adding an item that is difficult or contentious, especially those items for which the definition of the topic itself is disputed. Note that, as with other policies and processes mentioned in this section, this process can be used for any type of difficult or contentious encyclopedic content on Wikipedia. 
Reaching consensus on the talk page before editing the list itself not only saves time in the long run, but also helps make sure that each item on the list is well referenced and that the list as a whole represents a neutral point of view. When an item meets the requirements of the Verifiability policy, readers of the list can check an item's reference to see that the information comes from a reliable source. For information to be verifiable, it also means that Wikipedia does not publish original research: its content is determined by information previously published in a good source, rather than the beliefs or experiences of its editors, or even the editor's interpretation beyond what the source actually says. Even if you're sure that an item is relevant to the list's topic, you must find a good source that verifies this knowledge before you add it to the list (although you can suggest it on the talk page), and add that source in a reference next to the item. In lists that involve living persons, the Biographies of living persons policy applies. When reliable sources disagree, the policy of keeping a neutral point of view requires that competing views be described without endorsing any in particular. Editors should simply present what the various sources say, giving each side its due weight through coverage balanced according to the prominence of each viewpoint in the published reliable sources. When adding to a stand-alone list with links to other articles, follow the established format when adding your item, and then see if you can link that item to an article focusing on that item's topic. If so, then consider if the list's format allows room for all the details of competing views in the list item or if those details should only be covered in the linked, main article on the topic. Either way, make sure to add them to the main article if they are not already there. You can add one or more suitable subcategories of Category:Lists at the bottom of the page containing a list that may be of independent encyclopedic interest. If there is a redirect for the list (e.g., from "List of Presidents of Elbonia" to "President of Elbonia#List of Elbonian Presidents") put list categories on the "List"-named redirect instead. Use a sort key to sort alphabetically by topic. List styles There are several ways of presenting lists on Wikipedia. This is the most common list type on Wikipedia. Bullets are used to discern, at a glance, the individual items in a list, usually when each item in the list is a simple word, phrase or single line of text, for which numeric ordering is not appropriate, or lists that are extremely brief, where discerning the items at a glance is not an issue. They are not appropriate for large paragraphs. Simple bulleted lists are created by starting a line with * and adding the text of a list item, one item per * line. List items should be formatted consistently. Summary: Do not insert blank lines between vertical list items. Instead, use the {{pb}} template or <p> HTML markup. Blank lines effectively break the list into several smaller lists for users of screen readers and interfere with machine-parseability of the content for reuse. Moreover, in most browsers, the extra white-space between a list and the next can have a visually jarring effect. HTML formatting can be used to create rich lists, including items with internal paragraph breaks. Using images with lists requires some care. 
In infoboxes, a bulleted list can be converted to unbulleted or horizontal style with simple templates, to suppress both the large bullets and the indentation. For lists without bullets, use a {{Plainlist}} or {{Unbulleted list}} template. Typical uses are in infobox fields, and to replace pseudo-lists of lines separated with <br />. The templates emit the correct HTML markup, and hide the bullets with CSS (see Template:Plainlist § Technical details). A benefit of {{Plainlist}} is that it can be wrapped around an already-existing bullet list. A feature of {{Unbulleted list}} is that, for a short list, it can be put on a single line: {{Unbulleted list|Example 1|Example 2|Example 3}}.

Use a numbered (ordered) list only if any of the following apply: Use a # symbol at the start of a line to generate a numbered list item (except as detailed in this section, this works the same as * for bulleted lists, above). List items should be formatted consistently. Summary: HTML formatting can be used to create rich lists, including items with internal paragraph breaks; some basics are illustrated below. Using images with lists also requires some care. Editors can use raw HTML to achieve more complex results, such as ordered lists using indexes other than numbers, and ordered lists not starting from 1. Valid values for the list type are: The start value can be negative, but only if the list uses numbers as indexes. Otherwise, bizarre results are achieved.

A description list contains groups of "... terms and definitions, metadata topics and values, questions and answers, or any other groups of name-value data". On Wikipedia, the most common use of a description list is for a glossary, where it is preferable to other styles. Wikipedia has special markup for description lists: The source can also be laid out with the descriptive value on the next line after the term, like so: This still keeps the names and values within a single description list, and the alternation of typically short names and longer values makes the separate components easy to spot while editing. The resulting layout and HTML are identical to that generated by the single-line syntax. Both of these wikitext markup styles are functionality-limited and easily broken. A major weakness of both variants is that they are easily broken by later editors attempting to create multi-line values. These issues are most prominent in lengthy description lists. As a solution, there are templates for producing description lists such as glossaries, in ways that provide for richer, more complex content, including multiple paragraphs, block quotations, sub-lists, etc. (For full details on the problems with colon-delimited list markup, see WP:Manual of Style/Glossaries/DD bug test cases.) The basic format of a template-structured description list is:

{{glossary}}
{{term|name 1}}
{{defn|value 1}}
{{term |name 2}}
{{defn |value 2}}
{{term |name 3}}
{{defn |value 3}}
{{glossary end}}

Use either wikitext or templates as above for description lists instead of other, made-up formats, as other formats may be unexpected for reader and editor alike, hamper reusability of Wikipedia content, make automated processing more difficult, and introduce usability and accessibility problems. (Other formats may take less vertical space, but will be more difficult for the reader to scan.)
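As a sketch of the ordered-list and description-list wikitext discussed above (all item text is invented; the #, ; and : markers and the raw-HTML option are the ones named in the excerpt):

# First step
# Second step
# Third step

; Term A
: Short definition of Term A.
; Term B
: Short definition of Term B.

For the raw-HTML cases mentioned (indexes other than numbers, or a list not starting from 1), the standard HTML attributes can be used, e.g. <ol type="a" start="3"><li>Third lettered item</li></ol>, though for glossaries the {{glossary}}/{{term}}/{{defn}} templates shown above are the more robust choice.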
That said, a list of items whose descriptions contain more than one paragraph may present better as sections in a stand-alone list article, while tables are better suited to associating content than description lists, especially when there are multiple values for each item. As with unordered (bulleted) and ordered (numbered) lists, items in description lists should not have blank lines between them, as it causes each entry to be its own bogus "list" in the output, obviating the point of putting the entries in list markup to begin with. When wiki markup colons are used just for visual indentation, they too are rendered in HTML as description lists, but without ;-delimited terms to which the :-indented material applies, nor with the list start and end tags, which produces broken markup (see WP:Manual of Style/Accessibility § Indentation for details). More accessible indentation templates can be used, e.g., {{in5}} or one of its variants for one line, and {{block indent}} for more than one line (even if misuse of description list markup on talk pages is too ingrained to change at this point).

Many of the considerations at WP:Manual of Style § Section headings also apply to description list terms; even though description list terms are not headings, they act like headings in some ways. In at least one regard, however, they are not: description list term wikitext (;) should not be used to subdivide large sections or otherwise to obtain a boldfacing effect. Use a subheading instead (e.g. === ... === markup) where one is appropriate, or normal boldface ('''...''') otherwise. (For more detail and examples, see WP:Manual of Style/Accessibility § Pseudo-headings.) For example, a short glossary presented as a description list:

Disease
A disease is any abnormal condition that impairs normal function, especially infectious diseases, which are clinically evident diseases that result from the presence of pathogenic microbial agents.
Illness
Illness or sickness are usually synonyms for disease, except when used to refer specifically to the patient's personal experience of their disease.
Medical condition
Medical condition is a broad term that includes all diseases and disorders, but can also include injuries and normal health situations, such as pregnancy, that might affect a person's health, benefit from medical assistance, or have implications for medical treatments.

Tables are a way of presenting links, data, or information in rows and columns. They are a complex form of list and are especially useful when more than two pieces of information are of interest for each list item. Tables require more complex notation, and should be scrutinized for their accessibility. Consideration may be given to collapsing tables which consolidate information covered in the prose. Tables might be used for presenting mathematical data such as multiplication tables, comparative figures, or sporting results. They might also be used for presenting equivalent words in two or more languages, for awards by type and year, and complex discographies (a minimal table-markup sketch appears below).

In situations such as infoboxes, horizontal lists may be useful. Examples: Note the capitalization of only the first word in this list ("Entry 1 ..."), regardless of coding style. Words that are normally capitalized, like proper names, would of course still be capitalized. A benefit of {{Flatlist}} is that it can be wrapped around an already-existing bullet list. A feature of {{Hlist}} is that, for a short list, it can be put on a single line.
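To make the table option concrete, here is a minimal sketch in standard MediaWiki table markup (the rows are invented placeholders of the "awards by type and year" kind; the wikitable syntax itself is ordinary MediaWiki markup, not something prescribed by this excerpt):

{| class="wikitable"
! Year !! Award !! Work
|-
| 2001 || Best Example Award || Placeholder Film A
|-
| 2003 || Best Example Award || Placeholder Film B
|}

As the text notes, any such table should still be checked for accessibility, for instance by marking header cells as headers rather than styling ordinary cells to look like them.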
For lists of dated events, or timelines, use one instance of {{Timeline-event}} per event, thus: to render as: (note optional df=y (date first) parameter – date formatting should be consistent within individual articles). Chronological lists, such as timelines, should be in earliest-to-latest chronological order. See Wikipedia:Stand-alone lists § Chronological ordering.

A "pseudo-list" of lines separated only by line breaks rather than list markup (for example: cake, cheese, chocolate each on its own line) is deprecated, as it does not meet Web standards and can cause accessibility problems. Instead, use one of the formatted list styles defined above.

Boilerplate text: Directly before an incomplete list, insert {{incomplete list}}, which will transclude the following onto the page: Several topic-specific variations of this template are also available within Category:Incomplete list maintenance templates. Only one of {{incomplete list}} or its variations should be added, unless the topic is significantly related to more than one of the subcategories. Do not add both {{incomplete list}} AND a variation to any list.

Pro and con lists: These are lists of arguments for and against a particular contention or position. They include lists of Advantages and disadvantages of a technology or proposal (such as Wi-Fi) and lists of Criticisms and defenses of a political position or other view, such as libertarianism or evolution. Pro and con lists can encapsulate or bracket neutrality problems in an article by creating separate spaces in which different points of view can be expressed. An alternative method is to thread different points of view into running prose. Either method needs careful judgment as to whether and how it should be used. In particular, pro and con lists can fragment the presentation of facts, create a binary structure where a more nuanced treatment of the spectrum of facts is preferable, encourage oversimplification, and require readers to jump back and forth between the two sides of the list.
======================================== |
[SOURCE: https://techcrunch.com/author/tim-de-chant/] | [TOKENS: 294] |
Tim De Chant, Senior Reporter, Climate, TechCrunch. Tim De Chant is a senior climate reporter at TechCrunch. He has written for a wide range of publications, including Wired magazine, the Chicago Tribune, Ars Technica, The Wire China, and NOVA Next, where he was founding editor. De Chant is also a lecturer in MIT's Graduate Program in Science Writing, and he was awarded a Knight Science Journalism Fellowship at MIT in 2018, during which time he studied climate technologies and explored new business models for journalism. He received his PhD in environmental science, policy, and management from the University of California, Berkeley, and his BA degree in environmental studies, English, and biology from St. Olaf College. You can contact or verify outreach from Tim by emailing tim.dechant@techcrunch.com.
======================================== |
[SOURCE: https://www.fast.ai/posts/2016-10-08-curriculum.html] | [TOKENS: 566] |
What We Will Cover in the First Deep Learning Certificate. Jeremy Howard & Rachel Thomas, October 8, 2016. For those of you considering joining our deep learning certificate, I'm sure you'd like to hear more about what we will be covering. This first course is part 1 of a two-part series with the following high-level goals: Here's what we're planning to cover in part 1 of the course: We'll be covering these topics in a very different way to what you'll be used to if you've taken any university-level math or CS courses in the past. We'll be telling you all about our teaching philosophy in our next post. Our approach will be code-heavy and math-light, so we do ask that participants already have at least a year or two of solid coding experience. We'll be using Python (via the wonderful Jupyter Notebook) for our examples, so if you're not already familiar with Python, we'd strongly suggest going through a quick introduction to Python and to Jupyter (formerly known as IPython). Here's some more detail on what topics we will be covering. For convolutional neural networks (CNNs), primarily used for image classification, we will teach: To learn more, you may be interested in this great visual explanation of image kernels. For recurrent neural networks (RNNs), used for natural language processing (NLP) and time series data, we will cover: To find out more now, you can read this excellent post by Andrej Karpathy. One of our primary goals for this course is to teach you practical techniques for training better models, such as: Check out this helpful advice on babysitting your learning process and Chris Olah's illuminating visualizations of language representations. There is a dangerous myth that you need huge data sets to effectively use deep learning. This is false, and we will teach you to deal with data shortages, such as through: To participate, you should either have some familiarity with matrix multiplication, basic differentiation, and the chain rule, or be willing to study them before the course starts. If you need a refresher on these concepts, we recommend the Khan Academy videos on matrix multiplication and the chain rule. We will make significant use of list comprehensions in Python - here is a useful introduction. It would also be very helpful to know your way around the basic Python data science tools: numpy, scipy, scikit-learn, pandas, jupyter notebook, and matplotlib. The best guide I know of to these tools is Python For Data Analysis. For those with no Python experience, you may want to prepare by reading Learn Python The Hard Way. Read the official USF Data Institute description of our upcoming deep learning course on Monday evenings and send your resume to [email protected] by Oct 12 to apply.
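Since the post assumes comfort with Python list comprehensions and the basic numpy toolchain, here is a minimal, self-contained sketch of those two prerequisites (illustrative only, not taken from the course materials):

import numpy as np

# A list comprehension: squares of the even numbers below 10.
squares_of_evens = [n ** 2 for n in range(10) if n % 2 == 0]
print(squares_of_evens)  # -> [0, 4, 16, 36, 64]

# Matrix multiplication with numpy, the operation the Khan Academy
# refresher mentioned above covers.
a = np.array([[1.0, 2.0], [3.0, 4.0]])
b = np.array([[5.0, 6.0], [7.0, 8.0]])
print(a @ b)  # -> [[19. 22.] [43. 50.]]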
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/File:Python-logo-notext.svg] | [TOKENS: 354] |
File:Python-logo-notext.svg. Summary and licensing: This work is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 2 of the License, or any later version. This work is distributed in the hope that it will be useful, but without any warranty; without even the implied warranty of merchantability or fitness for a particular purpose. See version 2 and version 3 of the GNU General Public License for more details: http://www.gnu.org/licenses/gpl.html. Notes from Python Software Foundation: The image on Wikimedia uses lower precision in its SVG coordinates than the original image at https://www.python.org/community/logos/. The rendering of this image to raster PNGs may distort elements of the logo shape in ways that do not permit general use. For a reproducible version, please use the original SVG and raster renderings available from the python.org link given; please contact the Python Software Foundation Trademarks Working Group at psf-trademarks@python.org for questions about permitted uses of the Python logo.
======================================== |
[SOURCE: https://techcrunch.com/2026/02/20/meta-metaverse-leaves-vr-horizon-worlds-mobile/] | [TOKENS: 870] |
Meta's metaverse leaves virtual reality. Meta announced a major update for its immersive virtual world, Horizon Worlds, on Thursday that will see it leave the metaverse behind. The tech giant said it's shifting focus for Horizon Worlds to be "almost exclusively mobile" and that it's "explicitly separating" its Quest VR platform from the virtual world. Meta's Reality Labs division for VR and smart glasses development has lost nearly $80 billion since 2020. The update to Horizon Worlds, and other recent moves, signals that Meta is significantly rethinking its VR ambitions. Last month, the company reportedly laid off roughly 1,500 employees from its Reality Labs division — about 10% of the unit's staff — and shut down several VR game studios. Additionally, it was reported that the VR fitness app Supernatural, which Meta acquired in 2023, will no longer produce new content and will move into "maintenance mode." Horizon Worlds originally launched in 2021 as a VR platform and later rolled out to the web and mobile. Meta said Thursday that to "truly change the game and tap into a much larger market, we're going all-in on mobile." By going mobile-first, Horizon Worlds is positioning itself to compete with popular platforms like Roblox and Fortnite. "We're in a strong position to deliver synchronous social games at scale, thanks to our unique ability to connect those games with billions of people on the world's biggest social networks," Samantha Ryan, Reality Labs' VP of content, said in the blog post. "You saw this strategy start to unfold in 2025, and now, it's our main focus." Ryan went on to note that Meta is still focused on VR hardware. "We have a robust roadmap of future VR headsets that will be tailored to different audience segments as the market grows and matures," Ryan wrote. Meta's metaverse ambitions have effectively been abandoned in favor of AI. After shifting its Reality Labs investments away from the metaverse, Meta is now focused on developing AI wearables and advancing its own AI models. During Meta's latest earnings call last month, Meta CEO Mark Zuckerberg said, "It's hard to imagine a world in several years where most glasses that people wear aren't AI glasses." The exec also stated that sales of Meta's glasses tripled within the last year, calling them "some of the fastest-growing consumer electronics in history." Aisha is a consumer news reporter at TechCrunch. Prior to joining the publication in 2021, she was a telecom reporter at MobileSyrup. Aisha holds an honours bachelor's degree from University of Toronto and a master's degree in journalism from Western University. You can contact or verify outreach from Aisha by emailing aisha@techcrunch.com or via encrypted message at aisha_malik.01 on Signal.
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/List_of_programming_languages#D] | [TOKENS: 115] |
List of programming languages. This is an index to notable programming languages, in current or historical use. Dialects of BASIC (which have their own page), esoteric programming languages, and markup languages are not included. A programming language does not need to be imperative or Turing-complete, but must be executable and so does not include markup languages such as HTML or XML, but does include domain-specific languages such as SQL and its dialects.
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Pun] | [TOKENS: 5008] |
Contents Pun A pun, also known as a paronomasia in the context of linguistics, is a form of word play that exploits multiple meanings of a term, or of similar-sounding words, for an intended humorous or rhetorical effect. These ambiguities can arise from the intentional use of homophonic, homographic, metonymic, or figurative language. A pun differs from a malapropism in that a malapropism is an incorrect variation on a correct expression, while a pun involves expressions with multiple (correct or fairly reasonable) interpretations. Puns may be regarded as in-jokes or idiomatic constructions, especially as their usage and meaning are usually specific to a particular language or its culture. Puns have a long history in writing. For example, the Roman playwright Plautus was famous for his puns and word games. Types of puns A homophonic pun is one that uses word pairs which sound alike (homophones) but are not synonymous. Walter Redfern summarized this type with his statement "To pun is to treat homonyms as synonyms." For example, in George Carlin's phrase "atheism is a non-prophet institution", the word prophet is put in place of its homophone profit, altering the common phrase "non-profit institution". Similarly, the Cold War era joke "Question: Why do we still have troops in Germany? Answer: To keep the Russians in Czech" relies on the aural ambiguity of the homophones check and Czech. Often, puns are not strictly homophonic, but play on words of similar, not identical, sound as in the example from the Pinky and the Brain cartoon film series: "I think so, Brain, but if we give peas a chance, won't the lima beans feel left out?" which plays with the similar—but not identical—sound of peas and peace in the anti-war slogan "Give Peace a Chance". A homographic pun exploits words that are spelled the same (homographs) but possess different meanings and sounds. Because of their origin, they rely on sight more than hearing, contrary to homophonic puns. They are also known as heteronymic puns. Examples in which the punned words typically exist in two different parts of speech often rely on unusual sentence construction, as in the anecdote: "When asked to explain his large number of children, the pig answered simply: 'The wild oats of my sow gave us many piglets.'" An example that combines homophonic and homographic punning is Douglas Adams's line "You can tune a guitar, but you can't tuna fish. Unless of course, you play bass." The phrase uses the homophonic qualities of tune a and tuna, as well as the homographic pun on bass, in which ambiguity is reached through the identical spellings of /beɪs/ (a string instrument), and /bæs/ (a kind of fish). Homographic puns do not necessarily need to follow grammatical rules and often do not make sense when interpreted outside the context of the pun. Homonymic puns, another common type, arise from the exploitation of words that are both homographs and homophones. The statement "Being in politics is just like playing golf: you are trapped in one bad lie after another" puns on the two meanings of the word lie as "a deliberate untruth" and as "the position in which something rests". An adaptation of a joke repeated by Isaac Asimov gives us "Did you hear about the little moron who strained himself while running into the screen door?" playing on strained as "to give much effort" and "to filter". A homonymic pun may also be polysemic, in which the words must be homonymic and also possess related meanings, a condition that is often subjective. 
However, lexicographers define polysemes as listed under a single dictionary lemma (a unique numbered meaning) while homonyms are treated in separate lemmata. A compound pun is a statement that contains two or more puns. In this case, the wordplay cannot go into effect by utilizing the separate words or phrases of the puns that make up the entire statement. For example, a complex statement by Richard Whately includes four puns: "Why can a man never starve in the Great Desert? Because he can eat the sand which is there. But what brought the sandwiches there? Why, Noah sent Ham, and his descendants mustered and bred." This pun uses sand which is there/sandwiches there, Ham/ham, mustered/mustard, and bred/bread. Similarly, the phrase "piano is not my forte" links two meanings of the words forte and piano, one for the dynamic markings in music and the second for the literal meaning of the sentence, as well as alluding to "pianoforte", the older name of the instrument. Compound puns may also combine two phrases that share a word. For example, "Where do mathematicians go on weekends? To a Möbius strip club!" puns on the terms Möbius strip and strip club. A recursive pun is one in which the second aspect of a pun relies on the understanding of an element in the first. For example, the statement "π is only half a pie" (π radians is 180 degrees, or half a circle, and a pie is a complete circle). Another example is "Infinity is not in finity", which means infinity is not in finite range. Another example is "a Freudian slip is when you say one thing but mean your mother". The recursive pun "Immanuel doesn't pun, he Kant" is attributed to Oscar Wilde. Visual puns are sometimes used in logos, emblems, insignia, and other graphic symbols, in which one or more of the pun aspects is replaced by a picture. In European heraldry, this technique is called canting arms. Visual and other puns and word games are also common in Dutch gable stones as well as in some cartoons, such as Lost Consonants and The Far Side. Another type of visual pun exists in languages that use non-phonetic writing. For example, in Chinese, a pun may be based on a similarity in shape of the written character, despite a complete lack of phonetic similarity in the words punned upon. Mark Elvin describes how this "peculiarly Chinese form of visual punning involved comparing written characters to objects". Visual puns on the bearer's name are used extensively as forms of heraldic expression, they are called canting arms. They have been used for centuries across Europe and have even been used recently by members of the British royal family, such as on the arms of Queen Elizabeth The Queen Mother and of Princess Beatrice of York. The arms of U.S. Presidents Theodore Roosevelt and Dwight D. Eisenhower are also canting.[citation needed] In the context of non-phonetic texts, 4 Pics 1 Word, is an example of visual paronomasia where the players are supposed to identify the word in common from the set of four images. Paronomasia is the formal term for punning, playing with words to create humorous or rhetorical effect. Paronomastic puns often manipulate well-known idioms, proverbs, or phrases to deliver a punned twist. The classic structure of a joke, with a setup leading to a punchline, is a common format for paronomastic puns, where the punchline alters the expected phrase in a way that plays on multiple meanings of a word. 
For instance, in the sentence "I used to be a baker, but I couldn't make enough dough", the word "dough" is used paronomastically to refer both to the substance used to make bread and to slang for money. This type of pun is frequently used in advertisements, comedy, and literature to provide a clever and memorable message. One notable example comes from an advertising slogan for a moving company: "We don't charge an arm and a leg. We want your tows." Here, the familiar phrase "an arm and a leg" is paronomastically punned upon with "tows", playing on the phonetic similarity to "toes" while referring to the company's service of towing belongings. Metonymic puns exploit the metonymic relationship between words – where a word or phrase is used to represent something it's closely associated with. In such puns, one term is substituted for another term with which it's closely linked by a concept or idea. The humor or wit of the pun often comes from the unexpected yet apt connection made between the two concepts. For instance, consider a hypothetical news headline: "The White House loses its balance." In this case, "The White House" is used metonymically to represent the U.S. government, and "balance" could be interpreted both as physical stability (as if the building itself is tipping over) or fiscal balance (as in the budget), thereby creating a pun. While metonymic puns may not be as widely recognized as a specific category of pun, they represent a sophisticated linguistic tool that can bring an additional layer of nuance to wordplay. Syllepsis, or heteronymy, is a form of punning where a single word simultaneously affects the rest of the sentence, while it changes the meaning of the idiom it is used in. This form of punning uses the word in its literal and metaphorical senses at once, creating a surprising and often humorous effect. An example of a sylleptic pun is in the sentence "She lowered her standards by raising her glass, her courage, her eyes and his hopes." In this case, "raising" applies in different ways to each of the items listed, creating a series of linked puns. This type of punning can often be seen in literature, particularly in works that play extensively with language. (She razed his self-esteem in how she raised the children.) Notable practitioners of the sylleptic pun include authors such as P. G. Wodehouse, who once wrote: "If not actually disgruntled, he was far from being gruntled", playing on the dichotomy of "disgruntled" and "gruntled", where the latter is not typically used. Antanaclasis is a type of pun where a single word or phrase is repeated, but the meaning changes each time. The humor or wit derives from the surprising shift in meaning of a familiar word or phrase. This form of punning often relies on homophones, homonyms, or simply the contextual flexibility of a word or phrase. A classic example is Benjamin Franklin's statement "We must, indeed, all hang together or, most assuredly, we shall all hang separately." In this quote, the word "hang" is first used to mean "stay" or "work together", but then, it is repeated with the meaning "be executed". This punning style is prevalent in both humorous and serious contexts, adding layers of complexity to the language by highlighting the multifaceted nature of words. Such puns are frequently used in literature, speeches, and advertising to deliver memorable and impactful lines. Richard J. 
Alexander notes two additional forms that puns may take: graphological (sometimes called visual) puns, such as concrete poetry; and morphological puns, such as portmanteaux. Morphological puns may make use of rebracketing, where for instance distressed is parsed as dis-tressed (having hair cut off), or in the self-referential pun "I entered ten puns in a pun competition hoping one would win, but no pun in ten did" (parsed as "no pun intended"). Use Puns are a common source of humour in jokes and comedy shows. They are often used in the punch line of a joke, where they typically give a humorous meaning to a rather perplexing story. These are also known as feghoots. The following example comes from the movie Master and Commander: The Far Side of the World, though the punchline stems from far older Vaudeville roots. The final line puns on the stock phrase "the lesser of two evils". After Aubrey offers his pun (to the enjoyment of many), Dr. Maturin shows a disdain for the craft with his reply "One who would pun would pick-a-pocket." Captain Aubrey: "Do you see those two weevils, Doctor?... Which would you choose?" Dr. Maturin: "Neither. There's not a scrap of difference between them. They're the same species of Curculio." Captain Aubrey: "If you had to choose. If you were forced to make a choice. If there were no other option." Dr. Maturin: "Well, then, if you're going to push me. I would choose the right-hand weevil. It has significant advantage in both length and breadth." Captain Aubrey: "There, I have you!...Do you not know that in the Service, one must always choose the lesser of the two weevils." Not infrequently, puns are used in the titles of comedic parodies[citation needed]. A parody of a popular song, movie, etc., may be given a title that hints at the title of the work being parodied, replacing some of the words with ones that sound or look similar. For example, collegiate a cappella groups are often named after musical puns to attract fans through attempts at humor. Such a title can immediately communicate both that what follows is a parody and also that work is about to be parodied, making any further "setup" (introductory explanation) unnecessary. Sometimes called "books never written" or "world's greatest books", these are jokes that consist of fictitious book titles with authors' names that contain a pun relating to the title. Perhaps the best-known example is: "Tragedy on the Cliff by Eileen Dover", which according to one source was devised by humourist Peter De Vries. It is common for these puns to refer to taboo subject matter, such as "What Boys Love by E. Norma Stitts". Pun competitions 2014 saw the inaugural UK Pun Championships, at the Leicester Comedy Festival, hosted by Lee Nelson. The winner was Darren Walsh. Walsh went on to take part in the O. Henry Pun-Off World Championships in Austin, Texas. In 2015 the UK Pun Champion was Leo Kearse. Other pun competitions include Minnesota’s Pundamonium, Orlando Punslingers, the Almost Annual Pun-Off in Eureka, and Brooklyn’s Punderdome, led by Jo Firestone and her father, Fred Firestone. In Away with Words: An Irreverent Tour Through the World of Pun Competitions, Joe Berkowitz deems Austin's O. Henry Pun-Off the "Olympics" of pun competitions, and Brooklyn's Punderdome the "X Games". GQ described the crowd at Brooklyn's Punderdome as "passionate, to a level that feels dangerous". Non-humorous puns were and are a standard poetic device in English literature. 
Puns and other forms of wordplay have been used by many famous writers, such as Alexander Pope, James Joyce, Vladimir Nabokov, Robert Bloch, Lewis Carroll, John Donne, and William Shakespeare. In the poem "A Hymn to God the Father", John Donne, whose wife's name was Anne More, puns repeatedly: "Son/sun" in the second quoted line, and two compound puns on "Done/done" and "More/more". All three are homophonic, with the puns on "more" being both homographic and capitonymic. The ambiguities introduce several possible meanings into the verses. When Thou hast done, Thou hast not done / For I have more. that at my death Thy Son / Shall shine as he shines now, and heretofore And having done that, Thou hast done; / I fear no more. Alfred Hitchcock stated: "Puns are the highest form of literature." Shakespeare is estimated to have used over 3,000 puns in his plays. Even though many of the puns were bawdy, Elizabethan literature considered puns and wordplay to be a "sign of literary refinement" more so than humor. This is evidenced by the deployment of puns in serious or "seemingly inappropriate" scenes, like when a dying Mercutio quips "Ask for me tomorrow, and you shall find me a grave man" in Romeo and Juliet. Shakespeare was also noted for his frequent play with less serious puns, the "quibbles" of the sort that made Samuel Johnson complain: "A quibble is to Shakespeare what luminous vapours are to the traveller! He follows it to all adventures; it is sure to lead him out of his way, sure to engulf him in the mire. It has some malignant power over his mind, and its fascinations are irresistible." Elsewhere, Johnson disparagingly referred to punning as the lowest form of humour. Puns can function as a rhetorical device, where the pun serves as a persuasive instrument for an author or speaker. Although puns are sometimes perceived as trite or silly, if used responsibly a pun "can be an effective communication tool in a variety of situations and forms". A major difficulty in using puns in this manner is that the meaning of a pun can be interpreted very differently according to the audience's background with the possibility of detracting from the intended message. Like other forms of wordplay, paronomasia is occasionally used for its attention-getting or mnemonic qualities, making it common in titles and the names of places, characters, and organizations, and in advertising and slogans. Many restaurant and shop names use puns: Cane & Able mobility healthcare, Sam & Ella's Chicken Palace, Tiecoon tie shop, Planet of the Grapes wine and spirits, Curl Up and Dye hair salon, as do books such as Pies and Prejudice, webcomics like (YU+ME: dream) and feature films such as (Good Will Hunting). The Japanese anime Speed Racer's original Japanese title, Mach GoGoGo! refers to the English word itself, the Japanese word for five (the Mach Five's car number), and the name of the show's main character, Go Mifune. This is also an example of a multilingual pun, full understanding of which requires knowledge of more than one language on the part of the listener. Names of fictional characters also often carry puns, such as Ash Ketchum, the protagonist of the anime series Pokémon, and Goku ("Kakarrot"), the protagonist of the manga series Dragon Ball. Both franchises are known for including second meanings in the names of characters. A recurring motif in the Austin Powers films repeatedly puns on names that suggest male genitalia. 
In the science fiction television series Star Trek, "B-4" is used as the name of one of four androids models constructed "before" the android Data, a main character. A librarian in another Star Trek episode was named "Mr. Atoz" (A to Z). The parallel sequel The Lion King 1½ advertised with the phrase "You haven't seen the 1/2 of it!". Wyborowa Vodka employed the slogan "Enjoyed for centuries straight", while Northern Telecom used "Technology the world calls on". On 1 June 2015 the BBC Radio 4 You and Yours included a feature on "Puntastic Shop Titles". Entries included a Chinese Takeaway in Ayr town centre called "Ayr's Wok", a kebab shop in Ireland called "Abra Kebabra" and a tree-surgeon in Dudley called "Special Branch". The winning entry, selected by Lee Nelson, was a dry cleaner's in Fulham and Chelsea called "Starchy and Starchy", a pun on Saatchi & Saatchi. In the media Paronomasia has found a strong foothold in the media. William Safire of The New York Times suggests that "the root of this pace-growing [use of paronomasia] is often a headline-writer's need for quick catchiness, and has resulted in a new tolerance for a long-despised form of humor". It can be argued that paronomasia is common in media headlines, to draw the reader's interest. The rhetoric is important because it connects people with the topic. A notable example is the New York Post headline "Headless Body in Topless Bar". New York Post headlines for sex scandal articles have included "Cloak and Shag Her" (General Petraeus), "Obama Beats Weiner" (Congressman Weiner), and "Bezos Exposes Pecker". Paronomasia is prevalent orally as well. Salvatore Attardo believes that puns are verbal humor. He talks about Pepicello and Weisberg's linguistic theory of humor and believes the only form of linguistic humor is limited to puns. This is because a pun is a play on the word itself. Attardo believes that only puns are able to maintain humor and this humor has significance. It is able to help soften a situation and make it less serious, it can help make something more memorable, and using a pun can make the speaker seem witty. Paronomasia is strong in print media and oral conversation so it can be assumed that paronomasia is strong in broadcast media as well. Examples of paronomasia in media are sound bites. They could be memorable because of the humor and rhetoric associated with paronomasia, thus making the significance of the soundbite stronger. Confusion and alternative uses There exist subtle differences between paronomasia and other literary techniques, such as the double entendre. While puns are often simple wordplay for comedic or rhetorical effect, a double entendre alludes to a second meaning that is not contained within the statement or phrase itself, often one that purposefully disguises the second meaning. As both exploit the use of intentional double meanings, puns can sometimes be double entendres, and vice versa. Puns also bear similarities with paraprosdokian, syllepsis, and eggcorns. In addition, homographic puns are sometimes compared to the stylistic device antanaclasis, and homophonic puns to polyptoton. Puns can be used as a type of mnemonic device to enhance comprehension in an educational setting. Used discreetly, puns can effectively reinforce content and aid in the retention of material. Some linguists have encouraged the creation of neologisms to decrease the instances of confusion caused by puns. 
History and global usage. Puns were found in ancient Egypt, where they were heavily used in the development of myths and interpretation of dreams. In China, Shen Dao (ca. 300 BC) used "shi", meaning "power", and "shi", meaning "position", to say that a king has power because of his position as king. In ancient Mesopotamia around 2500 BC, punning was used by scribes to represent words in cuneiform. The Hebrew-language version of the Old Testament contains well over one hundred puns on proper names. For example, Genesis 9:27 says "May God expand Japheth", which is a pun on the Hebrew words 'yapt' (to expand) and 'yepet' (Japheth). The Greek-language version of the New Testament also includes several puns on proper names: Christ as "Anointed", Legion as "Many", Jerusalem as "City of Peace", Barnabas as "Son of Exhortation", Peter as "Rock", Onesimus as "Useful", and so forth. The Maya are known for having used puns in their hieroglyphic writing, and for using them in their modern languages. In Japan, "graphomania" was one type of pun. More commonly, wordplay in modern Japan is known as dajare. In Tamil, "Sledai" is the word used for a pun in which a single word carries two different meanings. This is also classified as a poetry style in ancient Tamil literature. Similarly, in Telugu, "Slesha" is the equivalent word and is one of several poetry styles in Telugu literature.
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Word_play] | [TOKENS: 734] |
Contents Wordplay Wordplay (also: play-on-words) is a literary technique and a form of wit in which words used become the main subject of the work, primarily for the purpose of intended effect or amusement. Examples of wordplay include puns, phonetic mix-ups such as spoonerisms, obscure words and meanings, clever rhetorical excursions, oddly formed sentences, double entendres, and telling character names (such as in the play The Importance of Being Earnest, Ernest being a given name that sounds exactly like the adjective earnest). Wordplay is quite common in oral cultures as a method of reinforcing meaning. Examples of text-based (orthographic) wordplay are found in languages with or without alphabet-based scripts, such as homophonic puns in Mandarin Chinese. Techniques Examples Most writers engage in wordplay to some extent, but certain writers are particularly committed to, or adept at, wordplay as a major feature of their work. Shakespeare's "quibbles" have made him a noted punster. Similarly, P.G. Wodehouse was hailed by The Times as a "comic genius recognized in his lifetime as a classic and an old master of farce" for his own acclaimed wordplay. James Joyce, author of Ulysses, is another noted word-player. For example, in his Finnegans Wake Joyce's phrase "they were yung and easily freudened" clearly implies the more conventional "they were young and easily frightened"; however, the former also makes an apt pun on the names of two famous psychoanalysts, Jung and Freud. An epitaph, probably unassigned to any grave, demonstrates use in rhyme. Crossword puzzles often employ wordplay to challenge solvers. Cryptic crosswords especially are based on elaborate systems of wordplay. An example of modern wordplay can be found on line 103 of Childish Gambino's "III. Life: The Biggest Troll". H2O plus my D, that's my hood, I'm living in it Rapper Milo uses a play on words in his verse on "True Nen". A farmer says, "I got soaked for nothing, stood out there in the rain bang in the middle of my land, a complete waste of time. I'll like to kill the swine who said you can win the Nobel Prize for being out standing in your field!" The Mario Party series is known for its mini-game titles that usually are puns and various plays on words; for example: "Shock, Drop, and Roll", "Gimme a Brake", and "Right Oar Left". These mini-game titles are also different depending on regional differences and take into account that specific region's culture. Many of the books the character Gromit in the Wallace & Gromit series reads or the music Gromit listens to are plays on words, such as "Pup Fiction" (Pulp Fiction), "Where Beagles Dare" (Where Eagles Dare), "Red Hot Chili Puppies" (Red Hot Chili Peppers) and "The Hound of Music" (The Sound of Music). Related phenomena Wordplay can enter common usage as neologisms. Wordplay is closely related to word games; that is, games in which the point is manipulating words. See also language game for a linguist's variation. Wordplay can cause problems for translators: e.g., in the book Winnie-the-Pooh a character mistakes the word "issue" for the noise of a sneeze, a resemblance which disappears when the word "issue" is translated into another language. See also References External links |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Computer#Professions_and_organizations] | [TOKENS: 10628] |
Contents Computer A computer is a machine that can be programmed to automatically carry out sequences of arithmetic or logical operations (computation). Modern digital electronic computers can perform generic sets of operations known as programs, which enable computers to perform a wide range of tasks. The term computer system may refer to a nominally complete computer that includes the hardware, operating system, software, and peripheral equipment needed and used for full operation, or to a group of computers that are linked and function together, such as a computer network or computer cluster. A broad range of industrial and consumer products use computers as control systems, including simple special-purpose devices like microwave ovens and remote controls, and factory devices like industrial robots. Computers are at the core of general-purpose devices such as personal computers and mobile devices such as smartphones. Computers power the Internet, which links billions of computers and users. Early computers were meant to be used only for calculations. Simple manual instruments like the abacus have aided people in doing calculations since ancient times. Early in the Industrial Revolution, some mechanical devices were built to automate long, tedious tasks, such as guiding patterns for looms. More sophisticated electrical machines did specialized analog calculations in the early 20th century. The first digital electronic calculating machines were developed during World War II, both electromechanical and using thermionic valves. The first semiconductor transistors in the late 1940s were followed by the silicon-based MOSFET (MOS transistor) and monolithic integrated circuit chip technologies in the late 1950s, leading to the microprocessor and the microcomputer revolution in the 1970s. The speed, power, and versatility of computers have been increasing dramatically ever since then, with transistor counts increasing at a rapid pace (Moore's law noted that counts doubled every two years), leading to the Digital Revolution during the late 20th and early 21st centuries. Conventionally, a modern computer consists of at least one processing element, typically a central processing unit (CPU) in the form of a microprocessor, together with some type of computer memory, typically semiconductor memory chips. The processing element carries out arithmetic and logical operations, and a sequencing and control unit can change the order of operations in response to stored information. Peripheral devices include input devices (keyboards, mice, joysticks, etc.), output devices (monitors, printers, etc.), and input/output devices that perform both functions (e.g. touchscreens). Peripheral devices allow information to be retrieved from an external source, and they enable the results of operations to be saved and retrieved. Etymology It was not until the mid-20th century that the word acquired its modern definition; according to the Oxford English Dictionary, the first known use of the word computer was in a different sense, in a 1613 book called The Yong Mans Gleanings by the English writer Richard Brathwait: "I haue [sic] read the truest computer of Times, and the best Arithmetician that euer [sic] breathed, and he reduceth thy dayes into a short number." This usage of the term referred to a human computer, a person who carried out calculations or computations. The word continued to have the same meaning until the middle of the 20th century. 
During the latter part of this period, women were often hired as computers because they could be paid less than their male counterparts. By 1943, most human computers were women. The Online Etymology Dictionary gives the first attested use of computer in the 1640s, meaning 'one who calculates'; this is an "agent noun from compute (v.)". The Online Etymology Dictionary states that the use of the term to mean "'calculating machine' (of any type) is from 1897." The Online Etymology Dictionary indicates that the "modern use" of the term, to mean 'programmable digital electronic computer' dates from "1945 under this name; [in a] theoretical [sense] from 1937, as Turing machine". The name has remained, although modern computers are capable of many higher-level functions. History Devices have been used to aid computation for thousands of years, mostly using one-to-one correspondence with fingers. The earliest counting device was most likely a form of tally stick. Later record keeping aids throughout the Fertile Crescent included calculi (clay spheres, cones, etc.) which represented counts of items, likely livestock or grains, sealed in hollow unbaked clay containers.[a] The use of counting rods is one example. The abacus was initially used for arithmetic tasks. The Roman abacus was developed from devices used in Babylonia as early as 2400 BCE. Since then, many other forms of reckoning boards or tables have been invented. In a medieval European counting house, a checkered cloth would be placed on a table, and markers moved around on it according to certain rules, as an aid to calculating sums of money. The Antikythera mechanism is believed to be the earliest known mechanical analog computer, according to Derek J. de Solla Price. It was designed to calculate astronomical positions. It was discovered in 1901 in the Antikythera wreck off the Greek island of Antikythera, between Kythera and Crete, and has been dated to approximately c. 100 BCE. Devices of comparable complexity to the Antikythera mechanism would not reappear until the fourteenth century. Many mechanical aids to calculation and measurement were constructed for astronomical and navigation use. The planisphere was a star chart invented by Abū Rayhān al-Bīrūnī in the early 11th century. The astrolabe was invented in the Hellenistic world in either the 1st or 2nd centuries BCE and is often attributed to Hipparchus. A combination of the planisphere and dioptra, the astrolabe was effectively an analog computer capable of working out several different kinds of problems in spherical astronomy. An astrolabe incorporating a mechanical calendar computer and gear-wheels was invented by Abi Bakr of Isfahan, Persia in 1235. Abū Rayhān al-Bīrūnī invented the first mechanical geared lunisolar calendar astrolabe, an early fixed-wired knowledge processing machine with a gear train and gear-wheels, c. 1000 AD. The sector, a calculating instrument used for solving problems in proportion, trigonometry, multiplication and division, and for various functions, such as squares and cube roots, was developed in the late 16th century and found application in gunnery, surveying and navigation. The planimeter was a manual instrument to calculate the area of a closed figure by tracing over it with a mechanical linkage. The slide rule was invented around 1620–1630, by the English clergyman William Oughtred, shortly after the publication of the concept of the logarithm. It is a hand-operated analog computer for doing multiplication and division. 
As slide rule development progressed, added scales provided reciprocals, squares and square roots, cubes and cube roots, as well as transcendental functions such as logarithms and exponentials, circular and hyperbolic trigonometry and other functions. Slide rules with special scales are still used for quick performance of routine calculations, such as the E6B circular slide rule used for time and distance calculations on light aircraft. In the 1770s, Pierre Jaquet-Droz, a Swiss watchmaker, built a mechanical doll (automaton) that could write holding a quill pen. By switching the number and order of its internal wheels, different letters, and hence different messages, could be produced. In effect, it could be mechanically "programmed" to read instructions. Along with two other complex machines, the doll is at the Musée d'Art et d'Histoire of Neuchâtel, Switzerland, and still operates. In 1831–1835, mathematician and engineer Giovanni Plana devised a Perpetual Calendar machine, which through a system of pulleys and cylinders could predict the perpetual calendar for every year from 0 CE (that is, 1 BCE) to 4000 CE, keeping track of leap years and varying day length. The tide-predicting machine invented by the Scottish scientist Sir William Thomson in 1872 was of great utility to navigation in shallow waters. It used a system of pulleys and wires to automatically calculate predicted tide levels for a set period at a particular location. The differential analyser, a mechanical analog computer designed to solve differential equations by integration, used wheel-and-disc mechanisms to perform the integration. In 1876, Sir William Thomson had already discussed the possible construction of such calculators, but he had been stymied by the limited output torque of the ball-and-disk integrators. In a differential analyzer, the output of one integrator drove the input of the next integrator, or a graphing output. The torque amplifier was the advance that allowed these machines to work. Starting in the 1920s, Vannevar Bush and others developed mechanical differential analyzers. In the 1890s, the Spanish engineer Leonardo Torres Quevedo began to develop a series of advanced analog machines that could solve real and complex roots of polynomials, which were published in 1901 by the Paris Academy of Sciences. Charles Babbage, an English mechanical engineer and polymath, originated the concept of a programmable computer. Considered the "father of the computer", he conceptualized and invented the first mechanical computer in the early 19th century. After working on his difference engine he announced his invention in 1822, in a paper to the Royal Astronomical Society, titled "Note on the application of machinery to the computation of astronomical and mathematical tables". The difference engine was also designed to aid in navigational calculations; in 1833, he realized that a much more general design, an analytical engine, was possible. The input of programs and data was to be provided to the machine via punched cards, a method being used at the time to direct mechanical looms such as the Jacquard loom. For output, the machine would have a printer, a curve plotter and a bell. The machine would also be able to punch numbers onto cards to be read in later. The engine would incorporate an arithmetic logic unit, control flow in the form of conditional branching and loops, and integrated memory, making it the first design for a general-purpose computer that could be described in modern terms as Turing-complete.
The machine was about a century ahead of its time. All the parts for his machine had to be made by hand – this was a major problem for a device with thousands of parts. Eventually, the project was dissolved with the decision of the British Government to cease funding. Babbage's failure to complete the analytical engine can be chiefly attributed to political and financial difficulties as well as his desire to develop an increasingly sophisticated computer and to move ahead faster than anyone else could follow. Nevertheless, his son, Henry Babbage, completed a simplified version of the analytical engine's computing unit (the mill) in 1888. He gave a successful demonstration of its use in computing tables in 1906. In his work Essays on Automatics published in 1914, Leonardo Torres Quevedo wrote a brief history of Babbage's efforts at constructing a mechanical Difference Engine and Analytical Engine. The paper contains a design of a machine capable of calculating formulas like a^x(y − z)^2 for a sequence of sets of values. The whole machine was to be controlled by a read-only program, which was complete with provisions for conditional branching. He also introduced the idea of floating-point arithmetic. In 1920, to celebrate the 100th anniversary of the invention of the arithmometer, Torres presented in Paris the Electromechanical Arithmometer, which allowed a user to input arithmetic problems through a keyboard, and computed and printed the results, demonstrating the feasibility of an electromechanical analytical engine. During the first half of the 20th century, many scientific computing needs were met by increasingly sophisticated analog computers, which used a direct mechanical or electrical model of the problem as a basis for computation. However, these were not programmable and generally lacked the versatility and accuracy of modern digital computers. The first modern analog computer was a tide-predicting machine, invented by Sir William Thomson (later to become Lord Kelvin) in 1872. The differential analyser, a mechanical analog computer designed to solve differential equations by integration using wheel-and-disc mechanisms, was conceptualized in 1876 by James Thomson, the elder brother of the more famous Sir William Thomson. The art of mechanical analog computing reached its zenith with the differential analyzer, completed in 1931 by Vannevar Bush at MIT. By the 1950s, the success of digital electronic computers had spelled the end for most analog computing machines, but analog computers remained in use during the 1950s in some specialized applications such as education (slide rule) and aircraft (control systems).[citation needed] Claude Shannon's 1937 master's thesis laid the foundations of digital computing, with his insight of applying Boolean algebra to the analysis and synthesis of switching circuits being the basic concept which underlies all electronic digital computers. By 1938, the United States Navy had developed the Torpedo Data Computer, an electromechanical analog computer for submarines that used trigonometry to solve the problem of firing a torpedo at a moving target. During World War II, similar devices were developed in other countries. Early digital computers were electromechanical; electric switches drove mechanical relays to perform the calculation. These devices had a low operating speed and were eventually superseded by much faster all-electric computers, originally using vacuum tubes.
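As a purely modern illustration of what evaluating a formula like a^x(y − z)^2 "for a sequence of sets of values" amounts to (this Python sketch is an editorial aid, not anything from Torres Quevedo's 1914 paper), the same tabulated computation can be written as:

# Evaluate a^x * (y - z)^2 over a sequence of (a, x, y, z) value sets.
value_sets = [(2, 3, 7, 4), (5, 2, 10, 1), (3, 4, 6, 6)]

for a, x, y, z in value_sets:
    result = a ** x * (y - z) ** 2
    print(f"a={a}, x={x}, y={y}, z={z} -> {result}")
# 2**3 * (7-4)**2 = 72;  5**2 * (10-1)**2 = 2025;  3**4 * (6-6)**2 = 0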
The Z2, created by German engineer Konrad Zuse in 1939 in Berlin, was one of the earliest examples of an electromechanical relay computer. In 1941, Zuse followed his earlier machine up with the Z3, the world's first working electromechanical programmable, fully automatic digital computer. The Z3 was built with 2000 relays, implementing a 22-bit word length that operated at a clock frequency of about 5–10 Hz. Program code was supplied on punched film while data could be stored in 64 words of memory or supplied from the keyboard. It was quite similar to modern machines in some respects, pioneering numerous advances such as floating-point numbers. Rather than the harder-to-implement decimal system (used in Charles Babbage's earlier design), using a binary system meant that Zuse's machines were easier to build and potentially more reliable, given the technologies available at that time. The Z3 was not itself a universal computer but could be extended to be Turing complete. Zuse's next computer, the Z4, became the world's first commercial computer; after initial delay due to the Second World War, it was completed in 1950 and delivered to the ETH Zurich. The computer was manufactured by Zuse's own company, Zuse KG, which was founded in 1941 as the first company with the sole purpose of developing computers in Berlin. The Z4 served as the inspiration for the construction of the ERMETH, the first Swiss computer and one of the first in Europe. Purely electronic circuit elements soon replaced their mechanical and electromechanical equivalents, at the same time that digital calculation replaced analog. The engineer Tommy Flowers, working at the Post Office Research Station in London in the 1930s, began to explore the possible use of electronics for the telephone exchange. Experimental equipment that he built in 1934 went into operation five years later, converting a portion of the telephone exchange network into an electronic data processing system, using thousands of vacuum tubes. In the US, John Vincent Atanasoff and Clifford E. Berry of Iowa State University developed and tested the Atanasoff–Berry Computer (ABC) in 1942, the first "automatic electronic digital computer". This design was also all-electronic and used about 300 vacuum tubes, with capacitors fixed in a mechanically rotating drum for memory. During World War II, the British code-breakers at Bletchley Park achieved a number of successes at breaking encrypted German military communications. The German encryption machine, Enigma, was first attacked with the help of the electro-mechanical bombes which were often run by women. To crack the more sophisticated German Lorenz SZ 40/42 machine, used for high-level Army communications, Max Newman and his colleagues commissioned Flowers to build the Colossus. He spent eleven months from early February 1943 designing and building the first Colossus. After a functional test in December 1943, Colossus was shipped to Bletchley Park, where it was delivered on 18 January 1944 and attacked its first message on 5 February. Colossus was the world's first electronic digital programmable computer. It used a large number of valves (vacuum tubes). It had paper-tape input and was capable of being configured to perform a variety of boolean logical operations on its data, but it was not Turing-complete. Nine Mk II Colossi were built (The Mk I was converted to a Mk II making ten machines in total). 
Colossus Mark I contained 1,500 thermionic valves (tubes), but Mark II, with 2,400 valves, was both five times faster and simpler to operate than Mark I, greatly speeding the decoding process. The ENIAC (Electronic Numerical Integrator and Computer) was the first electronic programmable computer built in the U.S. Although the ENIAC was similar to the Colossus, it was much faster, more flexible, and it was Turing-complete. Like the Colossus, a "program" on the ENIAC was defined by the states of its patch cables and switches, a far cry from the stored program electronic machines that came later. Once a program was written, it had to be mechanically set into the machine with manual resetting of plugs and switches. The programmers of the ENIAC were six women, often known collectively as the "ENIAC girls". It combined the high speed of electronics with the ability to be programmed for many complex problems. It could add or subtract 5000 times a second, a thousand times faster than any other machine. It also had modules to multiply, divide, and square root. High speed memory was limited to 20 words (about 80 bytes). Built under the direction of John Mauchly and J. Presper Eckert at the University of Pennsylvania, ENIAC's development and construction lasted from 1943 to full operation at the end of 1945. The machine was huge, weighing 30 tons, using 200 kilowatts of electric power and containing over 18,000 vacuum tubes, 1,500 relays, and hundreds of thousands of resistors, capacitors, and inductors. The principle of the modern computer was proposed by Alan Turing in his seminal 1936 paper, On Computable Numbers. Turing proposed a simple device that he called a "Universal Computing machine" and that is now known as a universal Turing machine. He proved that such a machine is capable of computing anything that is computable by executing instructions (a program) stored on tape, allowing the machine to be programmable. The fundamental concept of Turing's design is the stored program, where all the instructions for computing are stored in memory. Von Neumann acknowledged that the central concept of the modern computer was due to this paper. Turing machines are to this day a central object of study in theory of computation. Except for the limitations imposed by their finite memory stores, modern computers are said to be Turing-complete, which is to say, they have algorithm execution capability equivalent to a universal Turing machine. Early computing machines had fixed programs. Changing their function required re-wiring and re-structuring of the machine. With the proposal of the stored-program computer this changed. A stored-program computer includes by design an instruction set and can store in memory a set of instructions (a program) that details the computation. The theoretical basis for the stored-program computer was laid out by Alan Turing in his 1936 paper. In 1945, Turing joined the National Physical Laboratory and began work on developing an electronic stored-program digital computer. His 1945 report "Proposed Electronic Calculator" was the first specification for such a device. John von Neumann at the University of Pennsylvania also circulated his First Draft of a Report on the EDVAC in 1945. The Manchester Baby was the world's first stored-program computer. It was built at the University of Manchester in England by Frederic C. Williams, Tom Kilburn and Geoff Tootill, and ran its first program on 21 June 1948.
It was designed as a testbed for the Williams tube, the first random-access digital storage device. Although the computer was described as "small and primitive" by a 1998 retrospective, it was the first working machine to contain all of the elements essential to a modern electronic computer. As soon as the Baby had demonstrated the feasibility of its design, a project began at the university to develop it into a practically useful computer, the Manchester Mark 1. The Mark 1 in turn quickly became the prototype for the Ferranti Mark 1, the world's first commercially available general-purpose computer. Built by Ferranti, it was delivered to the University of Manchester in February 1951. At least seven of these later machines were delivered between 1953 and 1957, one of them to Shell labs in Amsterdam. In October 1947, the directors of British catering company J. Lyons & Company decided to take an active role in promoting the commercial development of computers. Lyons's LEO I computer, modelled closely on the Cambridge EDSAC of 1949, became operational in April 1951 and ran the world's first routine office computer job. The concept of a field-effect transistor was proposed by Julius Edgar Lilienfeld in 1925. John Bardeen and Walter Brattain, while working under William Shockley at Bell Labs, built the first working transistor, the point-contact transistor, in 1947, which was followed by Shockley's bipolar junction transistor in 1948. From 1955 onwards, transistors replaced vacuum tubes in computer designs, giving rise to the "second generation" of computers. Compared to vacuum tubes, transistors have many advantages: they are smaller and require less power, so they give off less heat. Junction transistors were much more reliable than vacuum tubes and had a longer, effectively indefinite, service life. Transistorized computers could contain tens of thousands of binary logic circuits in a relatively compact space. However, early junction transistors were relatively bulky devices that were difficult to manufacture on a mass-production basis, which limited them to a number of specialized applications. At the University of Manchester, a team under the leadership of Tom Kilburn designed and built a machine using the newly developed transistors instead of valves. Their first transistorized computer, and the first in the world, was operational by 1953, and a second version was completed there in April 1955. However, the machine did make use of valves to generate its 125 kHz clock waveforms and in the circuitry to read and write on its magnetic drum memory, so it was not the first completely transistorized computer. That distinction goes to the Harwell CADET of 1955, built by the electronics division of the Atomic Energy Research Establishment at Harwell. The metal–oxide–silicon field-effect transistor (MOSFET), also known as the MOS transistor, was invented at Bell Labs between 1955 and 1960 and was the first truly compact transistor that could be miniaturized and mass-produced for a wide range of uses. With its high scalability, and much lower power consumption and higher density than bipolar junction transistors, the MOSFET made it possible to build high-density integrated circuits. In addition to data processing, it also enabled the practical use of MOS transistors as memory cell storage elements, leading to the development of MOS semiconductor memory, which replaced earlier magnetic-core memory in computers.
The MOSFET led to the microcomputer revolution, and became the driving force behind the computer revolution. The MOSFET is the most widely used transistor in computers, and is the fundamental building block of digital electronics. The next great advance in computing power came with the advent of the integrated circuit (IC). The idea of the integrated circuit was first conceived by a radar scientist working for the Royal Radar Establishment of the Ministry of Defence, Geoffrey W.A. Dummer. Dummer presented the first public description of an integrated circuit at the Symposium on Progress in Quality Electronic Components in Washington, D.C., on 7 May 1952. The first working ICs were invented by Jack Kilby at Texas Instruments and Robert Noyce at Fairchild Semiconductor. Kilby recorded his initial ideas concerning the integrated circuit in July 1958, successfully demonstrating the first working integrated example on 12 September 1958. In his patent application of 6 February 1959, Kilby described his new device as "a body of semiconductor material ... wherein all the components of the electronic circuit are completely integrated". However, Kilby's invention was a hybrid integrated circuit (hybrid IC), rather than a monolithic integrated circuit (IC) chip. Kilby's IC had external wire connections, which made it difficult to mass-produce. Noyce also came up with his own idea of an integrated circuit half a year later than Kilby. Noyce's invention was the first true monolithic IC chip. His chip solved many practical problems that Kilby's had not. Produced at Fairchild Semiconductor, it was made of silicon, whereas Kilby's chip was made of germanium. Noyce's monolithic IC was fabricated using the planar process, developed by his colleague Jean Hoerni in early 1959. In turn, the planar process was based on Carl Frosch and Lincoln Derick's work on semiconductor surface passivation by silicon dioxide. Modern monolithic ICs are predominantly MOS (metal–oxide–semiconductor) integrated circuits, built from MOSFETs (MOS transistors). The earliest experimental MOS IC to be fabricated was a 16-transistor chip built by Fred Heiman and Steven Hofstein at RCA in 1962. General Microelectronics later introduced the first commercial MOS IC in 1964, developed by Robert Norman. Following the development of the self-aligned gate (silicon-gate) MOS transistor by Robert Kerwin, Donald Klein and John Sarace at Bell Labs in 1967, the first silicon-gate MOS IC with self-aligned gates was developed by Federico Faggin at Fairchild Semiconductor in 1968. The MOSFET has since become the most critical device component in modern ICs. The development of the MOS integrated circuit led to the invention of the microprocessor, and heralded an explosion in the commercial and personal use of computers. While the subject of exactly which device was the first microprocessor is contentious, partly due to lack of agreement on the exact definition of the term "microprocessor", it is largely undisputed that the first single-chip microprocessor was the Intel 4004, designed and realized by Federico Faggin with his silicon-gate MOS IC technology, along with Ted Hoff, Masatoshi Shima and Stanley Mazor at Intel.[b] In the early 1970s, MOS IC technology enabled the integration of more than 10,000 transistors on a single chip. Systems on a chip (SoCs) are complete computers on a microchip (or chip) the size of a coin. They may or may not have integrated RAM and flash memory.
If not integrated, the RAM is usually placed directly above (known as Package on package) or below (on the opposite side of the circuit board) the SoC, and the flash memory is usually placed right next to the SoC. This is done to improve data transfer speeds, as the data signals do not have to travel long distances. Since ENIAC in 1945, computers have advanced enormously, with modern SoCs (such as the Snapdragon 865) being the size of a coin while also being hundreds of thousands of times more powerful than ENIAC, integrating billions of transistors, and consuming only a few watts of power. The first mobile computers were heavy and ran from mains power. The 50 lb (23 kg) IBM 5100 was an early example. Later portables such as the Osborne 1 and Compaq Portable were considerably lighter but still needed to be plugged in. The first laptops, such as the Grid Compass, removed this requirement by incorporating batteries – and with the continued miniaturization of computing resources and advancements in portable battery life, portable computers grew in popularity in the 2000s. The same developments allowed manufacturers to integrate computing resources into cellular mobile phones by the early 2000s. These smartphones and tablets run on a variety of operating systems and recently became the dominant computing devices on the market. These are powered by systems on a chip (SoCs), which are complete computers on a microchip the size of a coin. Types Computers can be classified in a number of different ways, for example by architecture or by size and purpose. A computer does not need to be electronic, nor even have a processor, nor RAM, nor even a hard disk. While popular usage of the word "computer" is synonymous with a personal electronic computer,[c] a typical modern definition of a computer is: "A device that computes, especially a programmable [usually] electronic machine that performs high-speed mathematical or logical operations or that assembles, stores, correlates, or otherwise processes information." According to this definition, any device that processes information qualifies as a computer. Hardware The term hardware covers all of those parts of a computer that are tangible physical objects. Circuits, computer chips, graphics cards, sound cards, memory (RAM), motherboard, displays, power supplies, cables, keyboards, printers and "mice" input devices are all hardware. A general-purpose computer has four main components: the arithmetic logic unit (ALU), the control unit, the memory, and the input and output devices (collectively termed I/O). These parts are interconnected by buses, often made of groups of wires. Inside each of these parts are thousands to trillions of small electrical circuits which can be turned off or on by means of an electronic switch. Each circuit represents a bit (binary digit) of information so that when the circuit is on it represents a "1", and when off it represents a "0" (in positive logic representation). The circuits are arranged in logic gates so that one or more of the circuits may control the state of one or more of the other circuits. Input devices are the means by which the operations of a computer are controlled and it is provided with data. Examples include keyboards, mice, touchscreens, scanners, and microphones. Output devices are the means by which a computer provides the results of its calculations in a human-accessible form.
Examples include monitors, printers, and speakers. The control unit (often called a control system or central controller) manages the computer's various components; it reads and interprets (decodes) the program instructions, transforming them into control signals that activate other parts of the computer.[e] Control systems in advanced computers may change the order of execution of some instructions to improve performance. A key component common to all CPUs is the program counter, a special memory cell (a register) that keeps track of which location in memory the next instruction is to be read from.[f] The control system's function is as follows (this is a simplified description, and some of these steps may be performed concurrently or in a different order depending on the type of CPU): read the instruction from the memory location indicated by the program counter; decode it into commands or signals for the other parts of the computer; increment the program counter; fetch whatever data the instruction requires from memory or registers; have the ALU or other hardware carry out the requested operation; write the result back to a register, to memory, or to an output device; and then return to the first step (a toy sketch of this cycle is given below). Since the program counter is (conceptually) just another set of memory cells, it can be changed by calculations done in the ALU. Adding 100 to the program counter would cause the next instruction to be read from a place 100 locations further down the program. Instructions that modify the program counter are often known as "jumps" and allow for loops (instructions that are repeated by the computer) and often conditional instruction execution (both examples of control flow). The sequence of operations that the control unit goes through to process an instruction is in itself like a short computer program, and indeed, in some more complex CPU designs, there is another yet smaller computer called a microsequencer, which runs a microcode program that causes all of these events to happen. The control unit, ALU, and registers are collectively known as a central processing unit (CPU). Early CPUs were composed of many separate components. Since the 1970s, CPUs have typically been constructed on a single MOS integrated circuit chip called a microprocessor. The ALU is capable of performing two classes of operations: arithmetic and logic. The set of arithmetic operations that a particular ALU supports may be limited to addition and subtraction, or might include multiplication, division, trigonometry functions such as sine, cosine, etc., and square roots. Some can operate only on whole numbers (integers) while others use floating point to represent real numbers, albeit with limited precision. However, any computer that is capable of performing just the simplest operations can be programmed to break down the more complex operations into simple steps that it can perform. Therefore, any computer can be programmed to perform any arithmetic operation—although it will take more time to do so if its ALU does not directly support the operation. An ALU may also compare numbers and return Boolean truth values (true or false) depending on whether one is equal to, greater than or less than the other ("is 64 greater than 65?"). Logic operations involve Boolean logic: AND, OR, XOR, and NOT. These can be useful for creating complicated conditional statements and processing Boolean logic. Superscalar computers may contain multiple ALUs, allowing them to process several instructions simultaneously. Graphics processors and computers with SIMD and MIMD features often contain ALUs that can perform arithmetic on vectors and matrices. A computer's memory can be viewed as a list of cells into which numbers can be placed or read. Each cell has a numbered "address" and can store a single number.
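The cycle just described, together with the picture of memory as numbered cells, can be illustrated with a small sketch. The toy machine below is purely illustrative: its three-number instruction format, its opcode values, and its two registers are invented rather than taken from any real CPU.

    # Toy illustration of a control unit's fetch-decode-execute cycle.
    # Memory is a list of numbered cells; each instruction occupies three
    # cells: an opcode followed by two operands. Opcodes are invented.

    LOAD, ADD, JNZ, PRINT, HALT = 1, 2, 3, 4, 5

    memory = [
        LOAD, 0, 7,      # cell 0:  put 7 into register 0
        LOAD, 1, 5,      # cell 3:  put 5 into register 1
        ADD, 0, 1,       # cell 6:  register 0 <- register 0 + register 1
        JNZ, 0, 15,      # cell 9:  if register 0 is not zero, jump to cell 15
        LOAD, 0, 0,      # cell 12: skipped because of the jump above
        PRINT, 0, 0,     # cell 15: output register 0
        HALT, 0, 0,      # cell 18: stop
    ]
    registers = [0, 0]
    pc = 0                                   # program counter

    while True:
        opcode, a, b = memory[pc:pc + 3]     # fetch the instruction at the PC
        pc += 3                              # increment the program counter
        if opcode == LOAD:                   # decode and execute
            registers[a] = b
        elif opcode == ADD:
            registers[a] += registers[b]
        elif opcode == JNZ:                  # a conditional "jump" overwrites the PC
            if registers[a] != 0:
                pc = b
        elif opcode == PRINT:
            print(registers[a])              # prints 12
        elif opcode == HALT:
            break

A real control unit does the same job with control signals and logic gates rather than an if/elif chain, but the repeated fetch, decode, execute, and program-counter update is the same idea, and a jump is simply an instruction that changes the program counter.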
The computer can be instructed to "put the number 123 into the cell numbered 1357" or to "add the number that is in cell 1357 to the number that is in cell 2468 and put the answer into cell 1595." The information stored in memory may represent practically anything. Letters, numbers, even computer instructions can be placed into memory with equal ease. Since the CPU does not differentiate between different types of information, it is the software's responsibility to give significance to what the memory sees as nothing but a series of numbers. In almost all modern computers, each memory cell is set up to store binary numbers in groups of eight bits (called a byte). Each byte is able to represent 256 different numbers (2^8 = 256); either from 0 to 255 or −128 to +127. To store larger numbers, several consecutive bytes may be used (typically, two, four or eight). When negative numbers are required, they are usually stored in two's complement notation. Other arrangements are possible, but are usually not seen outside of specialized applications or historical contexts. A computer can store any kind of information in memory if it can be represented numerically. Modern computers have billions or even trillions of bytes of memory. The CPU contains a special set of memory cells called registers that can be read and written to much more rapidly than the main memory area. There are typically between two and one hundred registers depending on the type of CPU. Registers are used for the most frequently needed data items to avoid having to access main memory every time data is needed. As data is constantly being worked on, reducing the need to access main memory (which is often slow compared to the ALU and control units) greatly increases the computer's speed. Computer main memory comes in two principal varieties: random-access memory (RAM) and read-only memory (ROM). RAM can be read and written to anytime the CPU commands it, but ROM is preloaded with data and software that never changes, so the CPU can only read from it. ROM is typically used to store the computer's initial start-up instructions. In general, the contents of RAM are erased when the power to the computer is turned off, but ROM retains its data indefinitely. In a PC, the ROM contains a specialized program called the BIOS that orchestrates loading the computer's operating system from the hard disk drive into RAM whenever the computer is turned on or reset. In embedded computers, which frequently do not have disk drives, all of the required software may be stored in ROM. Software stored in ROM is often called firmware, because it is notionally more like hardware than software. Flash memory blurs the distinction between ROM and RAM, as it retains its data when turned off but is also rewritable. It is typically much slower than conventional ROM and RAM, however, so its use is restricted to applications where high speed is unnecessary.[g] In more sophisticated computers there may be one or more RAM cache memories, which are slower than registers but faster than main memory. Generally computers with this sort of cache are designed to move frequently needed data into the cache automatically, often without the need for any intervention on the programmer's part. I/O is the means by which a computer exchanges information with the outside world. Devices that provide input or output to the computer are called peripherals. On a typical personal computer, peripherals include input devices like the keyboard and mouse, and output devices such as the display and printer.
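The byte and two's-complement representations described above can be seen directly with Python's built-in integer-to-bytes conversions; the particular values below are chosen arbitrarily for illustration.

    # One byte holds 2^8 = 256 distinct values; the same bit pattern can be
    # read as 0..255 (unsigned) or as -128..+127 (two's complement signed).
    raw = (200).to_bytes(1, "little")
    print(int.from_bytes(raw, "little", signed=False))    # 200
    print(int.from_bytes(raw, "little", signed=True))     # -56

    # Larger numbers occupy several consecutive bytes ("cells"); negative
    # values are stored in two's complement notation.
    cells = (-123456).to_bytes(4, "little", signed=True)
    print(list(cells))                                     # the four stored byte values
    print(int.from_bytes(cells, "little", signed=True))    # -123456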
Hard disk drives, floppy disk drives and optical disc drives serve as both input and output devices. Computer networking is another form of I/O. I/O devices are often complex computers in their own right, with their own CPU and memory. A graphics processing unit might contain fifty or more tiny computers that perform the calculations necessary to display 3D graphics. Modern desktop computers contain many smaller computers that assist the main CPU in performing I/O. A 2016-era flat screen display contains its own computer circuitry. While a computer may be viewed as running one gigantic program stored in its main memory, in some systems it is necessary to give the appearance of running several programs simultaneously. This is achieved by multitasking, i.e. having the computer switch rapidly between running each program in turn. One means by which this is done is with a special signal called an interrupt, which can periodically cause the computer to stop executing instructions where it was and do something else instead. By remembering where it was executing prior to the interrupt, the computer can return to that task later. If several programs are running "at the same time", then the interrupt generator might be causing several hundred interrupts per second, causing a program switch each time. Since modern computers typically execute instructions several orders of magnitude faster than human perception, it may appear that many programs are running at the same time, even though only one is ever executing in any given instant. This method of multitasking is sometimes termed "time-sharing" since each program is allocated a "slice" of time in turn. Before the era of inexpensive computers, the principal use for multitasking was to allow many people to share the same computer. Seemingly, multitasking would cause a computer that is switching between several programs to run more slowly, in direct proportion to the number of programs it is running, but most programs spend much of their time waiting for slow input/output devices to complete their tasks. If a program is waiting for the user to click on the mouse or press a key on the keyboard, then it will not take a "time slice" until the event it is waiting for has occurred. This frees up time for other programs to execute so that many programs may be run simultaneously without unacceptable speed loss. Some computers are designed to distribute their work across several CPUs in a multiprocessing configuration, a technique once employed in only large and powerful machines such as supercomputers, mainframe computers and servers. Multiprocessor and multi-core (multiple CPUs on a single integrated circuit) personal and laptop computers are now widely available, and are being increasingly used in lower-end markets as a result. Supercomputers in particular often have highly distinctive architectures that differ significantly from the basic stored-program architecture and from general-purpose computers.[h] They often feature thousands of CPUs, customized high-speed interconnects, and specialized computing hardware. Such designs tend to be useful for only specialized tasks due to the large scale of program organization required to use most of the available resources at once. Supercomputers usually see usage in large-scale simulation, graphics rendering, and cryptography applications, as well as with other so-called "embarrassingly parallel" tasks.
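The time-sharing idea described above can be modelled in a few lines. In the sketch below, each "program" is a Python generator that yields at the end of its slice, and a round-robin scheduler hands out slices in turn; this is only a cooperative model of the concept, since real operating systems preempt programs with hardware interrupts rather than relying on them to yield.

    from collections import deque

    def program(name, steps):
        """A toy 'program' that runs for a given number of time slices."""
        for i in range(steps):
            print(f"{name}: slice {i}")
            yield                        # give the CPU back to the scheduler

    ready = deque([program("A", 3), program("B", 2), program("C", 3)])

    while ready:                         # round-robin "time-sharing"
        task = ready.popleft()
        try:
            next(task)                   # run one time slice
            ready.append(task)           # still runnable: back of the queue
        except StopIteration:
            pass                         # the program has finished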
Software Software is the part of a computer system that consists of the encoded information that determines the computer's operation, such as data or instructions on how to process the data. In contrast to the physical hardware from which the system is built, software is immaterial. Software includes computer programs, libraries and related non-executable data, such as online documentation or digital media. It is often divided into system software and application software. Computer hardware and software require each other and neither is useful on its own. When software is stored in hardware that cannot easily be modified, such as with BIOS ROM in an IBM PC compatible computer, it is sometimes called "firmware". The defining feature of modern computers which distinguishes them from all other machines is that they can be programmed. That is to say that some type of instructions (the program) can be given to the computer, and it will process them. Modern computers based on the von Neumann architecture often have machine code in the form of an imperative programming language. In practical terms, a computer program may be just a few instructions or extend to many millions of instructions, as do the programs for word processors and web browsers for example. A typical modern computer can execute billions of instructions per second (gigaflops) and rarely makes a mistake over many years of operation. Large computer programs consisting of several million instructions may take teams of programmers years to write, and due to the complexity of the task almost certainly contain errors. This section applies to most common RAM machine–based computers. In most cases, computer instructions are simple: add one number to another, move some data from one location to another, send a message to some external device, etc. These instructions are read from the computer's memory and are generally carried out (executed) in the order they were given. However, there are usually specialized instructions to tell the computer to jump ahead or backwards to some other place in the program and to carry on executing from there. These are called "jump" instructions (or branches). Furthermore, jump instructions may be made to happen conditionally so that different sequences of instructions may be used depending on the result of some previous calculation or some external event. Many computers directly support subroutines by providing a type of jump that "remembers" the location it jumped from and another instruction to return to the instruction following that jump instruction. Program execution might be likened to reading a book. While a person will normally read each word and line in sequence, they may at times jump back to an earlier place in the text or skip sections that are not of interest. Similarly, a computer may sometimes go back and repeat the instructions in some section of the program over and over again until some internal condition is met. This is called the flow of control within the program and it is what allows the computer to perform tasks repeatedly without human intervention. Comparatively, a person using a pocket calculator can perform a basic arithmetic operation such as adding two numbers with just a few button presses. But to add together all of the numbers from 1 to 1,000 would take thousands of button presses and a lot of time, with a near certainty of making a mistake. On the other hand, a computer may be programmed to do this with just a few simple instructions. 
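Such a program needs only a handful of instructions. The listing below is a sketch of one way to write the 1-to-1,000 summing loop; the particular registers and labels are illustrative choices, not a definitive encoding.

            addi $t0, $zero, 0        # sum = 0
            addi $t1, $zero, 1        # n = 1
    loop:   slti $t2, $t1, 1001       # is n still less than 1001 (i.e. n <= 1000)?
            beq  $t2, $zero, done     # if not, leave the loop
            add  $t0, $t0, $t1        # sum = sum + n
            addi $t1, $t1, 1          # n = n + 1
            j    loop                 # jump back and repeat
    done:   add  $v0, $t0, $zero      # copy the sum (500500) into the result register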
The example above is written in the MIPS assembly language. Once told to run this program, the computer will perform the repetitive addition task without further human intervention. It will almost never make a mistake, and a modern PC can complete the task in a fraction of a second. In most computers, individual instructions are stored as machine code with each instruction being given a unique number (its operation code or opcode for short). The command to add two numbers together would have one opcode; the command to multiply them would have a different opcode, and so on. The simplest computers are able to perform any of a handful of different instructions; the more complex computers have several hundred to choose from, each with a unique numerical code. Since the computer's memory is able to store numbers, it can also store the instruction codes. This leads to the important fact that entire programs (which are just lists of these instructions) can be represented as lists of numbers and can themselves be manipulated inside the computer in the same way as numeric data. The fundamental concept of storing programs in the computer's memory alongside the data they operate on is the crux of the von Neumann, or stored program, architecture. In some cases, a computer might store some or all of its program in memory that is kept separate from the data it operates on. This is called the Harvard architecture after the Harvard Mark I computer. Modern von Neumann computers display some traits of the Harvard architecture in their designs, such as in CPU caches. While it is possible to write computer programs as long lists of numbers (machine language) and while this technique was used with many early computers,[i] it is extremely tedious and potentially error-prone to do so in practice, especially for complicated programs. Instead, each basic instruction can be given a short name that is indicative of its function and easy to remember – a mnemonic such as ADD, SUB, MULT or JUMP. These mnemonics are collectively known as a computer's assembly language. Converting programs written in assembly language into something the computer can actually understand (machine language) is usually done by a computer program called an assembler; a toy sketch of this translation step is given below. A programming language is a notation system for writing the source code from which a computer program is produced. Programming languages provide various ways of specifying programs for computers to run. Unlike natural languages, programming languages are designed to permit no ambiguity and to be concise. They are purely written languages and are often difficult to read aloud. They are generally either translated into machine code by a compiler or an assembler before being run, or translated directly at run time by an interpreter. Sometimes programs are executed by a hybrid method of the two techniques. There are thousands of programming languages—some intended for general purpose programming, others useful for only highly specialized applications. Machine languages and the assembly languages that represent them (collectively termed low-level programming languages) are generally unique to the particular architecture of a computer's central processing unit (CPU).
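The sketch below shows the kind of translation an assembler performs, turning mnemonics into numeric opcodes; the mnemonics and opcode numbers here are invented for illustration.

    # Toy "assembler": translate mnemonics into numeric opcodes (machine code).
    # The instruction set and opcode numbers are invented for illustration.
    OPCODES = {"LOAD": 1, "ADD": 2, "SUB": 3, "JUMP": 4, "HALT": 5}

    def assemble(source):
        """Turn lines like 'ADD 5' into a flat list of numbers."""
        machine_code = []
        for line in source.strip().splitlines():
            mnemonic, *operands = line.split()
            machine_code.append(OPCODES[mnemonic])          # the opcode
            machine_code.extend(int(x) for x in operands)   # then its operands
        return machine_code

    print(assemble("LOAD 7\nADD 5\nHALT"))   # [1, 7, 2, 5, 5]

Each processor family defines its own opcodes and instruction formats, which is why machine code produced for one architecture cannot run directly on another.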
For instance, an ARM architecture CPU (such as may be found in a smartphone or a hand-held videogame) cannot understand the machine language of an x86 CPU that might be in a PC.[j] Historically, a significant number of other CPU architectures were created and saw extensive use, notably including the MOS Technology 6502 and 6510 in addition to the Zilog Z80. Although considerably easier than in machine language, writing long programs in assembly language is often difficult and is also error prone. Therefore, most practical programs are written in more abstract high-level programming languages that are able to express the needs of the programmer more conveniently (and thereby help reduce programmer error). High level languages are usually "compiled" into machine language (or sometimes into assembly language and then into machine language) using another computer program called a compiler.[k] High level languages are less related to the workings of the target computer than assembly language, and more related to the language and structure of the problem(s) to be solved by the final program. It is therefore often possible to use different compilers to translate the same high level language program into the machine language of many different types of computer. This is part of the means by which software like video games may be made available for different computer architectures such as personal computers and various video game consoles. Program design of small programs is relatively simple and involves the analysis of the problem, collection of inputs, using the programming constructs within languages, devising or using established procedures and algorithms, providing data for output devices and solutions to the problem as applicable. As problems become larger and more complex, features such as subprograms, modules, formal documentation, and new paradigms such as object-oriented programming are encountered. Large programs involving thousands of lines of code and more require formal software methodologies. The task of developing large software systems presents a significant intellectual challenge. Producing software with an acceptably high reliability within a predictable schedule and budget has historically been difficult; the academic and professional discipline of software engineering concentrates specifically on this challenge. Errors in computer programs are called "bugs". They may be benign and not affect the usefulness of the program, or have only subtle effects. However, in some cases they may cause the program or the entire system to "hang", becoming unresponsive to input such as mouse clicks or keystrokes, to completely fail, or to crash. Otherwise benign bugs may sometimes be harnessed for malicious intent by an unscrupulous user writing an exploit, code designed to take advantage of a bug and disrupt a computer's proper execution. Bugs are usually not the fault of the computer. Since computers merely execute the instructions they are given, bugs are nearly always the result of programmer error or an oversight made in the program's design.[l] Admiral Grace Hopper, an American computer scientist and developer of the first compiler, is credited with having first used the term "bugs" in computing after a dead moth was found shorting a relay in the Harvard Mark II computer in September 1947. Networking and the Internet Computers have been used to coordinate information between multiple physical locations since the 1950s. The U.S.
military's SAGE system was the first large-scale example of such a system, which led to a number of special-purpose commercial systems such as Sabre. In the 1970s, computer engineers at research institutions throughout the United States began to link their computers together using telecommunications technology. The effort was funded by ARPA (now DARPA), and the computer network that resulted was called the ARPANET. Logic gates are a common abstraction which can apply to most of the above digital or analog paradigms. The ability to store and execute lists of instructions called programs makes computers extremely versatile, distinguishing them from calculators. The Church–Turing thesis is a mathematical statement of this versatility: any computer with a minimum capability (being Turing-complete) is, in principle, capable of performing the same tasks that any other computer can perform. Therefore, any type of computer (netbook, supercomputer, cellular automaton, etc.) is able to perform the same computational tasks, given enough time and storage capacity. In the 20th century, artificial intelligence systems were predominantly symbolic: they executed code that was explicitly programmed by software developers. Machine learning models, however, have a set of parameters that are adjusted throughout training, so that the model learns to accomplish a task based on the provided data. The efficiency of machine learning (and in particular of neural networks) has rapidly improved with progress in hardware for parallel computing, mainly graphics processing units (GPUs). Some large language models are able to control computers or robots. AI progress may lead to the creation of artificial general intelligence (AGI), a type of AI that could accomplish virtually any intellectual task at least as well as humans. Professions and organizations As the use of computers has spread throughout society, there are an increasing number of careers involving computers. The need for computers to work well together and to be able to exchange information has spawned the need for many standards organizations, clubs and societies of both a formal and informal nature. See also Notes References Sources External links |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Computer#See_also] | [TOKENS: 10628] |
Contents Computer A computer is a machine that can be programmed to automatically carry out sequences of arithmetic or logical operations (computation). Modern digital electronic computers can perform generic sets of operations known as programs, which enable computers to perform a wide range of tasks. The term computer system may refer to a nominally complete computer that includes the hardware, operating system, software, and peripheral equipment needed and used for full operation, or to a group of computers that are linked and function together, such as a computer network or computer cluster. A broad range of industrial and consumer products use computers as control systems, including simple special-purpose devices like microwave ovens and remote controls, and factory devices like industrial robots. Computers are at the core of general-purpose devices such as personal computers and mobile devices such as smartphones. Computers power the Internet, which links billions of computers and users. Early computers were meant to be used only for calculations. Simple manual instruments like the abacus have aided people in doing calculations since ancient times. Early in the Industrial Revolution, some mechanical devices were built to automate long, tedious tasks, such as guiding patterns for looms. More sophisticated electrical machines did specialized analog calculations in the early 20th century. The first digital electronic calculating machines were developed during World War II, both electromechanical and using thermionic valves. The first semiconductor transistors in the late 1940s were followed by the silicon-based MOSFET (MOS transistor) and monolithic integrated circuit chip technologies in the late 1950s, leading to the microprocessor and the microcomputer revolution in the 1970s. The speed, power, and versatility of computers have been increasing dramatically ever since then, with transistor counts increasing at a rapid pace (Moore's law noted that counts doubled every two years), leading to the Digital Revolution during the late 20th and early 21st centuries. Conventionally, a modern computer consists of at least one processing element, typically a central processing unit (CPU) in the form of a microprocessor, together with some type of computer memory, typically semiconductor memory chips. The processing element carries out arithmetic and logical operations, and a sequencing and control unit can change the order of operations in response to stored information. Peripheral devices include input devices (keyboards, mice, joysticks, etc.), output devices (monitors, printers, etc.), and input/output devices that perform both functions (e.g. touchscreens). Peripheral devices allow information to be retrieved from an external source, and they enable the results of operations to be saved and retrieved. Etymology It was not until the mid-20th century that the word acquired its modern definition; according to the Oxford English Dictionary, the first known use of the word computer was in a different sense, in a 1613 book called The Yong Mans Gleanings by the English writer Richard Brathwait: "I haue [sic] read the truest computer of Times, and the best Arithmetician that euer [sic] breathed, and he reduceth thy dayes into a short number." This usage of the term referred to a human computer, a person who carried out calculations or computations. The word continued to have the same meaning until the middle of the 20th century. 
During the latter part of this period, women were often hired as computers because they could be paid less than their male counterparts. By 1943, most human computers were women. The Online Etymology Dictionary gives the first attested use of computer in the 1640s, meaning 'one who calculates'; this is an "agent noun from compute (v.)". The Online Etymology Dictionary states that the use of the term to mean "'calculating machine' (of any type) is from 1897." The Online Etymology Dictionary indicates that the "modern use" of the term, to mean 'programmable digital electronic computer' dates from "1945 under this name; [in a] theoretical [sense] from 1937, as Turing machine". The name has remained, although modern computers are capable of many higher-level functions. History Devices have been used to aid computation for thousands of years, mostly using one-to-one correspondence with fingers. The earliest counting device was most likely a form of tally stick. Later record keeping aids throughout the Fertile Crescent included calculi (clay spheres, cones, etc.) which represented counts of items, likely livestock or grains, sealed in hollow unbaked clay containers.[a] The use of counting rods is one example. The abacus was initially used for arithmetic tasks. The Roman abacus was developed from devices used in Babylonia as early as 2400 BCE. Since then, many other forms of reckoning boards or tables have been invented. In a medieval European counting house, a checkered cloth would be placed on a table, and markers moved around on it according to certain rules, as an aid to calculating sums of money. The Antikythera mechanism is believed to be the earliest known mechanical analog computer, according to Derek J. de Solla Price. It was designed to calculate astronomical positions. It was discovered in 1901 in the Antikythera wreck off the Greek island of Antikythera, between Kythera and Crete, and has been dated to approximately c. 100 BCE. Devices of comparable complexity to the Antikythera mechanism would not reappear until the fourteenth century. Many mechanical aids to calculation and measurement were constructed for astronomical and navigation use. The planisphere was a star chart invented by Abū Rayhān al-Bīrūnī in the early 11th century. The astrolabe was invented in the Hellenistic world in either the 1st or 2nd centuries BCE and is often attributed to Hipparchus. A combination of the planisphere and dioptra, the astrolabe was effectively an analog computer capable of working out several different kinds of problems in spherical astronomy. An astrolabe incorporating a mechanical calendar computer and gear-wheels was invented by Abi Bakr of Isfahan, Persia in 1235. Abū Rayhān al-Bīrūnī invented the first mechanical geared lunisolar calendar astrolabe, an early fixed-wired knowledge processing machine with a gear train and gear-wheels, c. 1000 AD. The sector, a calculating instrument used for solving problems in proportion, trigonometry, multiplication and division, and for various functions, such as squares and cube roots, was developed in the late 16th century and found application in gunnery, surveying and navigation. The planimeter was a manual instrument to calculate the area of a closed figure by tracing over it with a mechanical linkage. The slide rule was invented around 1620–1630, by the English clergyman William Oughtred, shortly after the publication of the concept of the logarithm. It is a hand-operated analog computer for doing multiplication and division. 
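The principle the slide rule exploits is that multiplication turns into addition on a logarithmic scale, since log(ab) = log a + log b; a minimal numeric sketch of the idea (the values are arbitrary):

    import math

    # A slide rule multiplies by adding lengths proportional to logarithms:
    # log(a * b) = log(a) + log(b), so adding the two log-lengths and
    # converting back gives the product (to the rule's limited precision).
    a, b = 2.0, 3.0
    product = math.exp(math.log(a) + math.log(b))
    print(product)          # approximately 6.0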
As slide rule development progressed, added scales provided reciprocals, squares and square roots, cubes and cube roots, as well as transcendental functions such as logarithms and exponentials, circular and hyperbolic trigonometry and other functions. Slide rules with special scales are still used for quick performance of routine calculations, such as the E6B circular slide rule used for time and distance calculations on light aircraft. In the 1770s, Pierre Jaquet-Droz, a Swiss watchmaker, built a mechanical doll (automaton) that could write holding a quill pen. By switching the number and order of its internal wheels different letters, and hence different messages, could be produced. In effect, it could be mechanically "programmed" to read instructions. Along with two other complex machines, the doll is at the Musée d'Art et d'Histoire of Neuchâtel, Switzerland, and still operates. In 1831–1835, mathematician and engineer Giovanni Plana devised a Perpetual Calendar machine, which through a system of pulleys and cylinders could predict the perpetual calendar for every year from 0 CE (that is, 1 BCE) to 4000 CE, keeping track of leap years and varying day length. The tide-predicting machine invented by the Scottish scientist Sir William Thomson in 1872 was of great utility to navigation in shallow waters. It used a system of pulleys and wires to automatically calculate predicted tide levels for a set period at a particular location. The differential analyser, a mechanical analog computer designed to solve differential equations by integration, used wheel-and-disc mechanisms to perform the integration. In 1876, Sir William Thomson had already discussed the possible construction of such calculators, but he had been stymied by the limited output torque of the ball-and-disk integrators. In a differential analyzer, the output of one integrator drove the input of the next integrator, or a graphing output. The torque amplifier was the advance that allowed these machines to work. Starting in the 1920s, Vannevar Bush and others developed mechanical differential analyzers. In the 1890s, the Spanish engineer Leonardo Torres Quevedo began to develop a series of advanced analog machines that could solve real and complex roots of polynomials, which were published in 1901 by the Paris Academy of Sciences. Charles Babbage, an English mechanical engineer and polymath, originated the concept of a programmable computer. Considered the "father of the computer", he conceptualized and invented the first mechanical computer in the early 19th century. After working on his difference engine he announced his invention in 1822, in a paper to the Royal Astronomical Society, titled "Note on the application of machinery to the computation of astronomical and mathematical tables". He also designed to aid in navigational calculations, in 1833 he realized that a much more general design, an analytical engine, was possible. The input of programs and data was to be provided to the machine via punched cards, a method being used at the time to direct mechanical looms such as the Jacquard loom. For output, the machine would have a printer, a curve plotter and a bell. The machine would also be able to punch numbers onto cards to be read in later. The engine would incorporate an arithmetic logic unit, control flow in the form of conditional branching and loops, and integrated memory, making it the first design for a general-purpose computer that could be described in modern terms as Turing-complete. 
The machine was about a century ahead of its time. All the parts for his machine had to be made by hand – this was a major problem for a device with thousands of parts. Eventually, the project was dissolved with the decision of the British Government to cease funding. Babbage's failure to complete the analytical engine can be chiefly attributed to political and financial difficulties as well as his desire to develop an increasingly sophisticated computer and to move ahead faster than anyone else could follow. Nevertheless, his son, Henry Babbage, completed a simplified version of the analytical engine's computing unit (the mill) in 1888. He gave a successful demonstration of its use in computing tables in 1906. In his work Essays on Automatics published in 1914, Leonardo Torres Quevedo wrote a brief history of Babbage's efforts at constructing a mechanical Difference Engine and Analytical Engine. The paper contains a design of a machine capable to calculate formulas like a x ( y − z ) 2 {\displaystyle a^{x}(y-z)^{2}} , for a sequence of sets of values. The whole machine was to be controlled by a read-only program, which was complete with provisions for conditional branching. He also introduced the idea of floating-point arithmetic. In 1920, to celebrate the 100th anniversary of the invention of the arithmometer, Torres presented in Paris the Electromechanical Arithmometer, which allowed a user to input arithmetic problems through a keyboard, and computed and printed the results, demonstrating the feasibility of an electromechanical analytical engine. During the first half of the 20th century, many scientific computing needs were met by increasingly sophisticated analog computers, which used a direct mechanical or electrical model of the problem as a basis for computation. However, these were not programmable and generally lacked the versatility and accuracy of modern digital computers. The first modern analog computer was a tide-predicting machine, invented by Sir William Thomson (later to become Lord Kelvin) in 1872. The differential analyser, a mechanical analog computer designed to solve differential equations by integration using wheel-and-disc mechanisms, was conceptualized in 1876 by James Thomson, the elder brother of the more famous Sir William Thomson. The art of mechanical analog computing reached its zenith with the differential analyzer, completed in 1931 by Vannevar Bush at MIT. By the 1950s, the success of digital electronic computers had spelled the end for most analog computing machines, but analog computers remained in use during the 1950s in some specialized applications such as education (slide rule) and aircraft (control systems).[citation needed] Claude Shannon's 1937 master's thesis laid the foundations of digital computing, with his insight of applying Boolean algebra to the analysis and synthesis of switching circuits being the basic concept which underlies all electronic digital computers. By 1938, the United States Navy had developed the Torpedo Data Computer, an electromechanical analog computer for submarines that used trigonometry to solve the problem of firing a torpedo at a moving target. During World War II, similar devices were developed in other countries. Early digital computers were electromechanical; electric switches drove mechanical relays to perform the calculation. These devices had a low operating speed and were eventually superseded by much faster all-electric computers, originally using vacuum tubes. 
The Z2, created by German engineer Konrad Zuse in 1939 in Berlin, was one of the earliest examples of an electromechanical relay computer. In 1941, Zuse followed his earlier machine up with the Z3, the world's first working electromechanical programmable, fully automatic digital computer. The Z3 was built with 2000 relays, implementing a 22-bit word length that operated at a clock frequency of about 5–10 Hz. Program code was supplied on punched film while data could be stored in 64 words of memory or supplied from the keyboard. It was quite similar to modern machines in some respects, pioneering numerous advances such as floating-point numbers. Rather than the harder-to-implement decimal system (used in Charles Babbage's earlier design), using a binary system meant that Zuse's machines were easier to build and potentially more reliable, given the technologies available at that time. The Z3 was not itself a universal computer but could be extended to be Turing complete. Zuse's next computer, the Z4, became the world's first commercial computer; after initial delay due to the Second World War, it was completed in 1950 and delivered to the ETH Zurich. The computer was manufactured by Zuse's own company, Zuse KG, which was founded in 1941 as the first company with the sole purpose of developing computers in Berlin. The Z4 served as the inspiration for the construction of the ERMETH, the first Swiss computer and one of the first in Europe. Purely electronic circuit elements soon replaced their mechanical and electromechanical equivalents, at the same time that digital calculation replaced analog. The engineer Tommy Flowers, working at the Post Office Research Station in London in the 1930s, began to explore the possible use of electronics for the telephone exchange. Experimental equipment that he built in 1934 went into operation five years later, converting a portion of the telephone exchange network into an electronic data processing system, using thousands of vacuum tubes. In the US, John Vincent Atanasoff and Clifford E. Berry of Iowa State University developed and tested the Atanasoff–Berry Computer (ABC) in 1942, the first "automatic electronic digital computer". This design was also all-electronic and used about 300 vacuum tubes, with capacitors fixed in a mechanically rotating drum for memory. During World War II, the British code-breakers at Bletchley Park achieved a number of successes at breaking encrypted German military communications. The German encryption machine, Enigma, was first attacked with the help of the electro-mechanical bombes which were often run by women. To crack the more sophisticated German Lorenz SZ 40/42 machine, used for high-level Army communications, Max Newman and his colleagues commissioned Flowers to build the Colossus. He spent eleven months from early February 1943 designing and building the first Colossus. After a functional test in December 1943, Colossus was shipped to Bletchley Park, where it was delivered on 18 January 1944 and attacked its first message on 5 February. Colossus was the world's first electronic digital programmable computer. It used a large number of valves (vacuum tubes). It had paper-tape input and was capable of being configured to perform a variety of boolean logical operations on its data, but it was not Turing-complete. Nine Mk II Colossi were built (The Mk I was converted to a Mk II making ten machines in total). 
Colossus Mark I contained 1,500 thermionic valves (tubes), but Mark II, with 2,400 valves, was both five times faster and simpler to operate than Mark I, greatly speeding the decoding process. The ENIAC (Electronic Numerical Integrator and Computer) was the first electronic programmable computer built in the U.S. Although the ENIAC was similar to the Colossus, it was much faster, more flexible, and it was Turing-complete. Like the Colossus, a "program" on the ENIAC was defined by the states of its patch cables and switches, a far cry from the stored program electronic machines that came later. Once a program was written, it had to be mechanically set into the machine with manual resetting of plugs and switches. The programmers of the ENIAC were six women, often known collectively as the "ENIAC girls". It combined the high speed of electronics with the ability to be programmed for many complex problems. It could add or subtract 5,000 times a second, a thousand times faster than any other machine. It also had modules to multiply, divide, and square root. High speed memory was limited to 20 words (about 80 bytes). Built under the direction of John Mauchly and J. Presper Eckert at the University of Pennsylvania, ENIAC's development and construction lasted from 1943 to full operation at the end of 1945. The machine was huge, weighing 30 tons, using 200 kilowatts of electric power and containing over 18,000 vacuum tubes, 1,500 relays, and hundreds of thousands of resistors, capacitors, and inductors. The principle of the modern computer was proposed by Alan Turing in his seminal 1936 paper, On Computable Numbers. Turing proposed a simple device that he called the "Universal Computing machine" and that is now known as a universal Turing machine. He proved that such a machine is capable of computing anything that is computable by executing instructions (a program) stored on tape, allowing the machine to be programmable. The fundamental concept of Turing's design is the stored program, where all the instructions for computing are stored in memory. Von Neumann acknowledged that the central concept of the modern computer was due to this paper. Turing machines are to this day a central object of study in the theory of computation. Except for the limitations imposed by their finite memory stores, modern computers are said to be Turing-complete, which is to say, they have algorithm execution capability equivalent to a universal Turing machine. Early computing machines had fixed programs. Changing the function of such a machine required re-wiring and re-structuring it. With the proposal of the stored-program computer this changed. A stored-program computer includes by design an instruction set and can store in memory a set of instructions (a program) that details the computation. The theoretical basis for the stored-program computer was laid out by Alan Turing in his 1936 paper. In 1945, Turing joined the National Physical Laboratory and began work on developing an electronic stored-program digital computer. His 1945 report "Proposed Electronic Calculator" was the first specification for such a device. John von Neumann at the University of Pennsylvania also circulated his First Draft of a Report on the EDVAC in 1945. The Manchester Baby was the world's first stored-program computer. It was built at the University of Manchester in England by Frederic C. Williams, Tom Kilburn and Geoff Tootill, and ran its first program on 21 June 1948.
It was designed as a testbed for the Williams tube, the first random-access digital storage device. Although the computer was described as "small and primitive" by a 1998 retrospective, it was the first working machine to contain all of the elements essential to a modern electronic computer. As soon as the Baby had demonstrated the feasibility of its design, a project began at the university to develop it into a practically useful computer, the Manchester Mark 1. The Mark 1 in turn quickly became the prototype for the Ferranti Mark 1, the world's first commercially available general-purpose computer. Built by Ferranti, it was delivered to the University of Manchester in February 1951. At least seven of these later machines were delivered between 1953 and 1957, one of them to Shell labs in Amsterdam. In October 1947 the directors of British catering company J. Lyons & Company decided to take an active role in promoting the commercial development of computers. Lyons's LEO I computer, modelled closely on the Cambridge EDSAC of 1949, became operational in April 1951 and ran the world's first routine office computer job. The concept of a field-effect transistor was proposed by Julius Edgar Lilienfeld in 1925. John Bardeen and Walter Brattain, while working under William Shockley at Bell Labs, built the first working transistor, the point-contact transistor, in 1947, which was followed by Shockley's bipolar junction transistor in 1948. From 1955 onwards, transistors replaced vacuum tubes in computer designs, giving rise to the "second generation" of computers. Compared to vacuum tubes, transistors have many advantages: they are smaller and require less power, so they give off less heat. Junction transistors were much more reliable than vacuum tubes and had a longer, effectively indefinite, service life. Transistorized computers could contain tens of thousands of binary logic circuits in a relatively compact space. However, early junction transistors were relatively bulky devices that were difficult to manufacture on a mass-production basis, which limited them to a number of specialized applications. At the University of Manchester, a team under the leadership of Tom Kilburn designed and built a machine using the newly developed transistors instead of valves. Their first transistorized computer, and the first in the world, was operational by 1953, and a second version was completed there in April 1955. However, the machine did make use of valves to generate its 125 kHz clock waveforms and in the circuitry to read and write on its magnetic drum memory, so it was not the first completely transistorized computer. That distinction goes to the Harwell CADET of 1955, built by the electronics division of the Atomic Energy Research Establishment at Harwell. The metal–oxide–silicon field-effect transistor (MOSFET), also known as the MOS transistor, was invented at Bell Labs between 1955 and 1960 and was the first truly compact transistor that could be miniaturized and mass-produced for a wide range of uses. With its high scalability, much lower power consumption and higher density than bipolar junction transistors, the MOSFET made it possible to build high-density integrated circuits. In addition to data processing, it also enabled the practical use of MOS transistors as memory cell storage elements, leading to the development of MOS semiconductor memory, which replaced earlier magnetic-core memory in computers.
The MOSFET led to the microcomputer revolution, and became the driving force behind the computer revolution. The MOSFET is the most widely used transistor in computers, and is the fundamental building block of digital electronics. The next great advance in computing power came with the advent of the integrated circuit (IC). The idea of the integrated circuit was first conceived by a radar scientist working for the Royal Radar Establishment of the Ministry of Defence, Geoffrey W.A. Dummer. Dummer presented the first public description of an integrated circuit at the Symposium on Progress in Quality Electronic Components in Washington, D.C., on 7 May 1952. The first working ICs were invented by Jack Kilby at Texas Instruments and Robert Noyce at Fairchild Semiconductor. Kilby recorded his initial ideas concerning the integrated circuit in July 1958, successfully demonstrating the first working integrated example on 12 September 1958. In his patent application of 6 February 1959, Kilby described his new device as "a body of semiconductor material ... wherein all the components of the electronic circuit are completely integrated". However, Kilby's invention was a hybrid integrated circuit (hybrid IC), rather than a monolithic integrated circuit (IC) chip. Kilby's IC had external wire connections, which made it difficult to mass-produce. Noyce also came up with his own idea of an integrated circuit half a year later than Kilby. Noyce's invention was the first true monolithic IC chip. His chip solved many practical problems that Kilby's had not. Produced at Fairchild Semiconductor, it was made of silicon, whereas Kilby's chip was made of germanium. Noyce's monolithic IC was fabricated using the planar process, developed by his colleague Jean Hoerni in early 1959. In turn, the planar process was based on Carl Frosch and Lincoln Derick's work on semiconductor surface passivation by silicon dioxide. Modern monolithic ICs are predominantly MOS (metal–oxide–semiconductor) integrated circuits, built from MOSFETs (MOS transistors). The earliest experimental MOS IC to be fabricated was a 16-transistor chip built by Fred Heiman and Steven Hofstein at RCA in 1962. General Microelectronics later introduced the first commercial MOS IC in 1964, developed by Robert Norman. Following the development of the self-aligned gate (silicon-gate) MOS transistor by Robert Kerwin, Donald Klein and John Sarace at Bell Labs in 1967, the first silicon-gate MOS IC with self-aligned gates was developed by Federico Faggin at Fairchild Semiconductor in 1968. The MOSFET has since become the most critical device component in modern ICs. The development of the MOS integrated circuit led to the invention of the microprocessor, and heralded an explosion in the commercial and personal use of computers. While the subject of exactly which device was the first microprocessor is contentious, partly due to lack of agreement on the exact definition of the term "microprocessor", it is largely undisputed that the first single-chip microprocessor was the Intel 4004, designed and realized by Federico Faggin with his silicon-gate MOS IC technology, along with Ted Hoff, Masatoshi Shima and Stanley Mazor at Intel.[b] In the early 1970s, MOS IC technology enabled the integration of more than 10,000 transistors on a single chip. Systems on a chip (SoCs) are complete computers on a microchip (or chip) the size of a coin. They may or may not have integrated RAM and flash memory.
If not integrated, the RAM is usually placed directly above (known as Package on package) or below (on the opposite side of the circuit board) the SoC, and the flash memory is usually placed right next to the SoC. This is done to improve data transfer speeds, as the data signals do not have to travel long distances. Since ENIAC in 1945, computers have advanced enormously, with modern SoCs (such as the Snapdragon 865) being the size of a coin while also being hundreds of thousands of times more powerful than ENIAC, integrating billions of transistors, and consuming only a few watts of power. The first mobile computers were heavy and ran from mains power. The 50 lb (23 kg) IBM 5100 was an early example. Later portables such as the Osborne 1 and Compaq Portable were considerably lighter but still needed to be plugged in. The first laptops, such as the Grid Compass, removed this requirement by incorporating batteries, and with the continued miniaturization of computing resources and advancements in portable battery life, portable computers grew in popularity in the 2000s. The same developments allowed manufacturers to integrate computing resources into cellular mobile phones by the early 2000s. These smartphones and tablets run on a variety of operating systems and recently became the dominant computing devices on the market. They are powered by systems on a chip (SoCs). Types Computers can be classified in a number of different ways. A computer does not need to be electronic, nor even have a processor, nor RAM, nor even a hard disk. While popular usage of the word "computer" is synonymous with a personal electronic computer,[c] a typical modern definition of a computer is: "A device that computes, especially a programmable [usually] electronic machine that performs high-speed mathematical or logical operations or that assembles, stores, correlates, or otherwise processes information." According to this definition, any device that processes information qualifies as a computer. Hardware The term hardware covers all of those parts of a computer that are tangible physical objects. Circuits, computer chips, graphic cards, sound cards, memory (RAM), motherboards, displays, power supplies, cables, keyboards, printers and "mice" input devices are all hardware. A general-purpose computer has four main components: the arithmetic logic unit (ALU), the control unit, the memory, and the input and output devices (collectively termed I/O). These parts are interconnected by buses, often made of groups of wires. Inside each of these parts are thousands to trillions of small electrical circuits which can be turned off or on by means of an electronic switch. Each circuit represents a bit (binary digit) of information, so that when the circuit is on it represents a "1", and when off it represents a "0" (in positive logic representation). The circuits are arranged in logic gates so that one or more of the circuits may control the state of one or more of the other circuits; a short sketch after this passage shows how these gate-level operations surface as bitwise machine instructions. Input devices are the means by which the operations of a computer are controlled and by which it is provided with data. Examples include keyboards and mice. Output devices are the means by which a computer provides the results of its calculations in a human-accessible form.
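Although individual logic gates sit below the level that software normally sees, their effect shows up directly in a processor's bitwise instructions. The following MIPS assembly fragment is a minimal sketch (the register numbers and sample bit patterns are illustrative assumptions, not taken from the article) in which AND, OR and XOR are applied, bit for bit, to two small values:

        addi $t0, $zero, 0x0C    # $t0 holds the bit pattern 1100
        addi $t1, $zero, 0x0A    # $t1 holds the bit pattern 1010
        and  $t2, $t0, $t1       # 1000: each result bit is the AND of the corresponding input bits
        or   $t3, $t0, $t1       # 1110: OR of the corresponding input bits
        xor  $t4, $t0, $t1       # 0110: XOR of the corresponding input bits
        nor  $t5, $zero, $t0     # NOT is obtained by NOR-ing a value with zero

Each of these instructions is carried out by an array of the corresponding gates operating on all bits of the two registers at once.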
Examples include displays and printers. The control unit (often called a control system or central controller) manages the computer's various components; it reads and interprets (decodes) the program instructions, transforming them into control signals that activate other parts of the computer.[e] Control systems in advanced computers may change the order of execution of some instructions to improve performance. A key component common to all CPUs is the program counter, a special memory cell (a register) that keeps track of which location in memory the next instruction is to be read from.[f] In simplified terms, the control system repeatedly fetches the instruction at the location held in the program counter, decodes it into control signals, executes it, and advances the program counter to the next instruction; some of these steps may be performed concurrently or in a different order depending on the type of CPU. Since the program counter is (conceptually) just another set of memory cells, it can be changed by calculations done in the ALU. Adding 100 to the program counter would cause the next instruction to be read from a place 100 locations further down the program. Instructions that modify the program counter are often known as "jumps" and allow for loops (instructions that are repeated by the computer) and often conditional instruction execution (both examples of control flow); a short sketch after this passage shows a comparison followed by such a jump. The sequence of operations that the control unit goes through to process an instruction is in itself like a short computer program, and indeed, in some more complex CPU designs, there is another yet smaller computer called a microsequencer, which runs a microcode program that causes all of these events to happen. The control unit, ALU, and registers are collectively known as a central processing unit (CPU). Early CPUs were composed of many separate components. Since the 1970s, CPUs have typically been constructed on a single MOS integrated circuit chip called a microprocessor. The ALU is capable of performing two classes of operations: arithmetic and logic. The set of arithmetic operations that a particular ALU supports may be limited to addition and subtraction, or might include multiplication, division, trigonometric functions such as sine and cosine, and square roots. Some can operate only on whole numbers (integers) while others use floating point to represent real numbers, albeit with limited precision. However, any computer that is capable of performing just the simplest operations can be programmed to break down the more complex operations into simple steps that it can perform. Therefore, any computer can be programmed to perform any arithmetic operation, although it will take more time to do so if its ALU does not directly support the operation. An ALU may also compare numbers and return Boolean truth values (true or false) depending on whether one is equal to, greater than or less than the other ("is 64 greater than 65?"). Logic operations involve Boolean logic: AND, OR, XOR, and NOT. These can be useful for creating complicated conditional statements and processing Boolean logic. Superscalar computers may contain multiple ALUs, allowing them to process several instructions simultaneously. Graphics processors and computers with SIMD and MIMD features often contain ALUs that can perform arithmetic on vectors and matrices. A computer's memory can be viewed as a list of cells into which numbers can be placed or read. Each cell has a numbered "address" and can store a single number.
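The comparison-and-jump mechanism described above can be made concrete with a short MIPS assembly sketch (the register numbers, labels, and the values being compared are illustrative assumptions): the ALU first produces a truth value, and a conditional branch then either rewrites the program counter or lets it advance normally.

        slt  $t2, $t0, $t1           # ALU comparison: $t2 = 1 if $t0 < $t1, otherwise 0
        beq  $t2, $zero, else_case   # conditional jump: if the flag is 0, load the program counter with the address of "else_case"
        addi $t3, $zero, 1           # executed only when $t0 < $t1
        j    done                    # unconditional jump over the other branch
else_case:
        addi $t3, $zero, 0           # executed only when $t0 >= $t1
done:   nop                          # execution continues here in either case

Replacing the forward jump with a backward one (a branch whose target lies earlier in the program) is all that is needed to turn the same mechanism into a loop.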
The computer can be instructed to "put the number 123 into the cell numbered 1357" or to "add the number that is in cell 1357 to the number that is in cell 2468 and put the answer into cell 1595." The information stored in memory may represent practically anything. Letters, numbers, even computer instructions can be placed into memory with equal ease. Since the CPU does not differentiate between different types of information, it is the software's responsibility to give significance to what the memory sees as nothing but a series of numbers. In almost all modern computers, each memory cell is set up to store binary numbers in groups of eight bits (called a byte). Each byte is able to represent 256 different numbers (2⁸ = 256); either from 0 to 255 or −128 to +127. To store larger numbers, several consecutive bytes may be used (typically, two, four or eight). When negative numbers are required, they are usually stored in two's complement notation; a short sketch after this passage shows the same byte read as either a signed or an unsigned value. Other arrangements are possible, but are usually not seen outside of specialized applications or historical contexts. A computer can store any kind of information in memory if it can be represented numerically. Modern computers have billions or even trillions of bytes of memory. The CPU contains a special set of memory cells called registers that can be read and written to much more rapidly than the main memory area. There are typically between two and one hundred registers depending on the type of CPU. Registers are used for the most frequently needed data items to avoid having to access main memory every time data is needed. As data is constantly being worked on, reducing the need to access main memory (which is often slow compared to the ALU and control units) greatly increases the computer's speed. Computer main memory comes in two principal varieties: random-access memory (RAM) and read-only memory (ROM). RAM can be read and written to anytime the CPU commands it, but ROM is preloaded with data and software that never changes, so the CPU can only read from it. ROM is typically used to store the computer's initial start-up instructions. In general, the contents of RAM are erased when the power to the computer is turned off, but ROM retains its data indefinitely. In a PC, the ROM contains a specialized program called the BIOS that orchestrates loading the computer's operating system from the hard disk drive into RAM whenever the computer is turned on or reset. In embedded computers, which frequently do not have disk drives, all of the required software may be stored in ROM. Software stored in ROM is often called firmware, because it is notionally more like hardware than software. Flash memory blurs the distinction between ROM and RAM, as it retains its data when turned off but is also rewritable. It is typically much slower than conventional ROM and RAM however, so its use is restricted to applications where high speed is unnecessary.[g] In more sophisticated computers there may be one or more RAM cache memories, which are slower than registers but faster than main memory. Generally computers with this sort of cache are designed to move frequently needed data into the cache automatically, often without the need for any intervention on the programmer's part. I/O is the means by which a computer exchanges information with the outside world. Devices that provide input or output to the computer are called peripherals. On a typical personal computer, peripherals include input devices like the keyboard and mouse, and output devices such as the display and printer.
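How two's complement plays out in practice can be sketched in a few MIPS instructions (the use of the stack pointer as a scratch memory cell and the register numbers are illustrative assumptions): the same eight-bit pattern 0xFF is read back either as −1 or as 255, depending on whether the load instruction sign-extends it.

        addi $t0, $zero, -1      # $t0 = 0xFFFFFFFF, the two's complement representation of -1
        sb   $t0, 0($sp)         # store only the low byte (0xFF) into a memory cell
        lb   $t1, 0($sp)         # signed byte load, with sign extension: $t1 = -1
        lbu  $t2, 0($sp)         # unsigned byte load, with zero extension: $t2 = 255

The stored byte itself is just the pattern 11111111; the signed or unsigned meaning is supplied entirely by the instruction that reads it.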
Hard disk drives, floppy disk drives and optical disc drives serve as both input and output devices. Computer networking is another form of I/O. I/O devices are often complex computers in their own right, with their own CPU and memory. A graphics processing unit might contain fifty or more tiny computers that perform the calculations necessary to display 3D graphics.[citation needed] Modern desktop computers contain many smaller computers that assist the main CPU in performing I/O. A 2016-era flat screen display contains its own computer circuitry. While a computer may be viewed as running one gigantic program stored in its main memory, in some systems it is necessary to give the appearance of running several programs simultaneously. This is achieved by multitasking, i.e. having the computer switch rapidly between running each program in turn. One means by which this is done is with a special signal called an interrupt, which can periodically cause the computer to stop executing instructions where it was and do something else instead. By remembering where it was executing prior to the interrupt, the computer can return to that task later. If several programs are running "at the same time", then the interrupt generator might be causing several hundred interrupts per second, causing a program switch each time. Since modern computers typically execute instructions several orders of magnitude faster than human perception, it may appear that many programs are running at the same time, even though only one is ever executing in any given instant. This method of multitasking is sometimes termed "time-sharing" since each program is allocated a "slice" of time in turn. Before the era of inexpensive computers, the principal use for multitasking was to allow many people to share the same computer. Seemingly, multitasking would cause a computer that is switching between several programs to run more slowly, in direct proportion to the number of programs it is running, but most programs spend much of their time waiting for slow input/output devices to complete their tasks. If a program is waiting for the user to click on the mouse or press a key on the keyboard, then it will not take a "time slice" until the event it is waiting for has occurred. This frees up time for other programs to execute, so that many programs may be run simultaneously without unacceptable speed loss. Some computers are designed to distribute their work across several CPUs in a multiprocessing configuration, a technique once employed in only large and powerful machines such as supercomputers, mainframe computers and servers. Multiprocessor and multi-core (multiple CPUs on a single integrated circuit) personal and laptop computers are now widely available, and are being increasingly used in lower-end markets as a result. Supercomputers in particular often have highly distinctive architectures that differ significantly from the basic stored-program architecture and from general-purpose computers.[h] They often feature thousands of CPUs, customized high-speed interconnects, and specialized computing hardware. Such designs tend to be useful for only specialized tasks due to the large scale of program organization required to use most of the available resources at once. Supercomputers usually see usage in large-scale simulation, graphics rendering, and cryptography applications, as well as with other so-called "embarrassingly parallel" tasks.
Software Software is the part of a computer system that consists of the encoded information that determines the computer's operation, such as data or instructions on how to process the data. In contrast to the physical hardware from which the system is built, software is immaterial. Software includes computer programs, libraries and related non-executable data, such as online documentation or digital media. It is often divided into system software and application software. Computer hardware and software require each other and neither is useful on its own. When software is stored in hardware that cannot easily be modified, such as with BIOS ROM in an IBM PC compatible computer, it is sometimes called "firmware". The defining feature of modern computers which distinguishes them from all other machines is that they can be programmed. That is to say that some type of instructions (the program) can be given to the computer, and it will process them. Modern computers based on the von Neumann architecture often have machine code in the form of an imperative programming language. In practical terms, a computer program may be just a few instructions or extend to many millions of instructions, as do the programs for word processors and web browsers for example. A typical modern computer can execute billions of instructions per second (gigaflops) and rarely makes a mistake over many years of operation. Large computer programs consisting of several million instructions may take teams of programmers years to write, and due to the complexity of the task almost certainly contain errors. This section applies to most common RAM machine–based computers. In most cases, computer instructions are simple: add one number to another, move some data from one location to another, send a message to some external device, etc. These instructions are read from the computer's memory and are generally carried out (executed) in the order they were given. However, there are usually specialized instructions to tell the computer to jump ahead or backwards to some other place in the program and to carry on executing from there. These are called "jump" instructions (or branches). Furthermore, jump instructions may be made to happen conditionally so that different sequences of instructions may be used depending on the result of some previous calculation or some external event. Many computers directly support subroutines by providing a type of jump that "remembers" the location it jumped from and another instruction to return to the instruction following that jump instruction. Program execution might be likened to reading a book. While a person will normally read each word and line in sequence, they may at times jump back to an earlier place in the text or skip sections that are not of interest. Similarly, a computer may sometimes go back and repeat the instructions in some section of the program over and over again until some internal condition is met. This is called the flow of control within the program and it is what allows the computer to perform tasks repeatedly without human intervention. Comparatively, a person using a pocket calculator can perform a basic arithmetic operation such as adding two numbers with just a few button presses. But to add together all of the numbers from 1 to 1,000 would take thousands of button presses and a lot of time, with a near certainty of making a mistake. On the other hand, a computer may be programmed to do this with just a few simple instructions. 
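A minimal sketch of such a program, written in the MIPS assembly language used by the article's example (the register numbers and labels here are illustrative choices, not the article's own listing), adds the numbers from 1 to 1,000 with a simple loop:

        addi $8, $0, 0           # initialize the running sum to 0
        addi $9, $0, 1           # set the first number to add to 1
loop:   slti $10, $9, 1001       # is the current number still less than or equal to 1,000?
        beq  $10, $0, finish     # if not, leave the loop
        add  $8, $8, $9          # add the current number to the running sum
        addi $9, $9, 1           # move on to the next number
        j    loop                # jump back and repeat
finish: add  $2, $8, $0          # copy the result (500,500) into register $2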
The example sketched above is written in the MIPS assembly language. Once told to run this program, the computer will perform the repetitive addition task without further human intervention. It will almost never make a mistake and a modern PC can complete the task in a fraction of a second. In most computers, individual instructions are stored as machine code with each instruction being given a unique number (its operation code or opcode for short); a sketch after this passage shows one such instruction and its numeric form. The command to add two numbers together would have one opcode; the command to multiply them would have a different opcode, and so on. The simplest computers are able to perform any of a handful of different instructions; the more complex computers have several hundred to choose from, each with a unique numerical code. Since the computer's memory is able to store numbers, it can also store the instruction codes. This leads to the important fact that entire programs (which are just lists of these instructions) can be represented as lists of numbers and can themselves be manipulated inside the computer in the same way as numeric data. The fundamental concept of storing programs in the computer's memory alongside the data they operate on is the crux of the von Neumann, or stored program, architecture. In some cases, a computer might store some or all of its program in memory that is kept separate from the data it operates on. This is called the Harvard architecture after the Harvard Mark I computer. Modern von Neumann computers display some traits of the Harvard architecture in their designs, such as in CPU caches. While it is possible to write computer programs as long lists of numbers (machine language) and while this technique was used with many early computers,[i] it is extremely tedious and potentially error-prone to do so in practice, especially for complicated programs. Instead, each basic instruction can be given a short name that is indicative of its function and easy to remember: a mnemonic such as ADD, SUB, MULT or JUMP. These mnemonics are collectively known as a computer's assembly language. Converting programs written in assembly language into something the computer can actually understand (machine language) is usually done by a computer program called an assembler. A programming language is a notation system for writing the source code from which a computer program is produced. Programming languages provide various ways of specifying programs for computers to run. Unlike natural languages, programming languages are designed to permit no ambiguity and to be concise. They are purely written languages and are often difficult to read aloud. They are generally either translated into machine code by a compiler or an assembler before being run, or translated directly at run time by an interpreter. Sometimes programs are executed by a hybrid method of the two techniques. There are thousands of programming languages: some are intended for general-purpose programming, others are useful only for highly specialized applications. Machine languages and the assembly languages that represent them (collectively termed low-level programming languages) are generally unique to the particular architecture of a computer's central processing unit (CPU).
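To make the point that programs are ultimately lists of numbers concrete, the sketch below shows one MIPS instruction and the 32-bit number an assembler produces for it (the particular instruction is chosen purely for illustration; the field layout is the standard MIPS R-type format):

        add $t0, $t1, $t2        # mnemonic form, as a programmer writes it
#       The assembler packs it into one 32-bit machine-code word:
#         opcode  rs     rt     rd     shamt  funct
#         000000  01001  01010  01000  00000  100000   =  0x012A4020
#       Stored in memory, this instruction is simply the number 0x012A4020.

Because this numeric layout is specific to the MIPS architecture, the same 32-bit value would mean something entirely different, or nothing at all, to a CPU from another family.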
For instance, an ARM architecture CPU (such as may be found in a smartphone or a hand-held videogame) cannot understand the machine language of an x86 CPU that might be in a PC.[j] Historically, a significant number of other CPU architectures were created and saw extensive use, notably including the MOS Technology 6502 and 6510 in addition to the Zilog Z80. Although considerably easier than in machine language, writing long programs in assembly language is often difficult and is also error prone. Therefore, most practical programs are written in more abstract high-level programming languages that are able to express the needs of the programmer more conveniently (and thereby help reduce programmer error). High level languages are usually "compiled" into machine language (or sometimes into assembly language and then into machine language) using another computer program called a compiler.[k] High level languages are less related to the workings of the target computer than assembly language, and more related to the language and structure of the problem(s) to be solved by the final program. It is therefore often possible to use different compilers to translate the same high level language program into the machine language of many different types of computer. This is part of the means by which software like video games may be made available for different computer architectures such as personal computers and various video game consoles. The design of small programs is relatively simple and involves the analysis of the problem, collection of inputs, use of the programming constructs within languages, devising or using established procedures and algorithms, and providing data for output devices and solutions to the problem as applicable. As problems become larger and more complex, features such as subprograms, modules, formal documentation, and new paradigms such as object-oriented programming are encountered. Large programs involving thousands of lines of code and more require formal software methodologies. The task of developing large software systems presents a significant intellectual challenge. Producing software with an acceptably high reliability within a predictable schedule and budget has historically been difficult; the academic and professional discipline of software engineering concentrates specifically on this challenge. Errors in computer programs are called "bugs". They may be benign and not affect the usefulness of the program, or have only subtle effects. However, in some cases they may cause the program or the entire system to "hang", becoming unresponsive to input such as mouse clicks or keystrokes, to completely fail, or to crash. Otherwise benign bugs may sometimes be harnessed for malicious intent by an unscrupulous user writing an exploit, code designed to take advantage of a bug and disrupt a computer's proper execution. Bugs are usually not the fault of the computer. Since computers merely execute the instructions they are given, bugs are nearly always the result of programmer error or an oversight made in the program's design.[l] Admiral Grace Hopper, an American computer scientist and developer of the first compiler, is credited with having first used the term "bugs" in computing after a dead moth was found shorting a relay in the Harvard Mark II computer in September 1947. Networking and the Internet Computers have been used to coordinate information between multiple physical locations since the 1950s. The U.S.
military's SAGE system was the first large-scale example of such a system, which led to a number of special-purpose commercial systems such as Sabre. In the 1970s, computer engineers at research institutions throughout the United States began to link their computers together using telecommunications technology. The effort was funded by ARPA (now DARPA), and the computer network that resulted was called the ARPANET. Logic gates are a common abstraction which can apply to most digital and analog computing paradigms. The ability to store and execute lists of instructions called programs makes computers extremely versatile, distinguishing them from calculators. The Church–Turing thesis is a mathematical statement of this versatility: any computer with a minimum capability (being Turing-complete) is, in principle, capable of performing the same tasks that any other computer can perform. Therefore, any type of computer (netbook, supercomputer, cellular automaton, etc.) is able to perform the same computational tasks, given enough time and storage capacity. In the 20th century, artificial intelligence systems were predominantly symbolic: they executed code that was explicitly programmed by software developers. Machine learning models, however, have a set of parameters that are adjusted throughout training, so that the model learns to accomplish a task based on the provided data. The efficiency of machine learning (and in particular of neural networks) has rapidly improved with progress in hardware for parallel computing, mainly graphics processing units (GPUs). Some large language models are able to control computers or robots. AI progress may lead to the creation of artificial general intelligence (AGI), a type of AI that could accomplish virtually any intellectual task at least as well as humans. Professions and organizations As the use of computers has spread throughout society, there are an increasing number of careers involving computers. The need for computers to work well together and to be able to exchange information has spawned the need for many standards organizations, clubs and societies of both a formal and informal nature.
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Internet#Sources] | [TOKENS: 9291] |
Contents Internet The Internet (or internet)[a] is the global system of interconnected computer networks that uses the Internet protocol suite (TCP/IP)[b] to communicate between networks and devices. It is a network of networks that comprises private, public, academic, business, and government networks of local to global scope, linked by electronic, wireless, and optical networking technologies. The Internet carries a vast range of information services and resources, such as the interlinked hypertext documents and applications of the World Wide Web (WWW), electronic mail, discussion groups, internet telephony, streaming media and file sharing. Most traditional communication media, including telephone, radio, television, paper mail, newspapers, and print publishing, have been transformed by the Internet, giving rise to new media such as email, online music, digital newspapers, news aggregators, and audio and video streaming websites. The Internet has enabled and accelerated new forms of personal interaction through instant messaging, Internet forums, and social networking services. Online shopping has also grown to occupy a significant market across industries, enabling firms to extend brick and mortar presences to serve larger markets. Business-to-business and financial services on the Internet affect supply chains across entire industries. The origins of the Internet date back to research that enabled the time-sharing of computer resources, the development of packet switching, and the design of computer networks for data communication. The set of communication protocols to enable internetworking on the Internet arose from research and development commissioned in the 1970s by the Defense Advanced Research Projects Agency (DARPA) of the United States Department of Defense in collaboration with universities and researchers across the United States and in the United Kingdom and France. The Internet has no single centralized governance in either technological implementation or policies for access and usage. Each constituent network sets its own policies. The overarching definitions of the two principal name spaces on the Internet, the Internet Protocol address (IP address) space and the Domain Name System (DNS), are directed by a maintainer organization, the Internet Corporation for Assigned Names and Numbers (ICANN). The technical underpinning and standardization of the core protocols is an activity of the non-profit Internet Engineering Task Force (IETF). Terminology The word internetted was used as early as 1849, meaning interconnected or interwoven. The word Internet was used in 1945 by the United States War Department in a radio operator's manual, and in 1974 as the shorthand form of Internetwork. Today, the term Internet most commonly refers to the global system of interconnected computer networks, though it may also refer to any group of smaller networks. The word Internet may be capitalized as a proper noun, although this is becoming less common. This reflects the tendency in English to capitalize new terms and move them to lowercase as they become familiar. The word is sometimes still capitalized to distinguish the global internet from smaller networks, though many publications, including the AP Stylebook since 2016, recommend the lowercase form in every case. In 2016, the Oxford English Dictionary found that, based on a study of around 2.5 billion printed and online sources, "Internet" was capitalized in 54% of cases. 
The terms Internet and World Wide Web are often used interchangeably; it is common to speak of "going on the Internet" when using a web browser to view web pages. However, the World Wide Web, or the Web, is only one of a large number of Internet services. It is the global collection of web pages, documents and other web resources linked by hyperlinks and URLs. History In the 1960s, computer scientists began developing systems for time-sharing of computer resources. J. C. R. Licklider proposed the idea of a universal network while working at Bolt Beranek & Newman and, later, leading the Information Processing Techniques Office at the Advanced Research Projects Agency (ARPA) of the United States Department of Defense. Research into packet switching,[c] one of the fundamental Internet technologies, started in the work of Paul Baran at RAND in the early 1960s and, independently, Donald Davies at the United Kingdom's National Physical Laboratory in 1965. After the Symposium on Operating Systems Principles in 1967, packet switching from the proposed NPL network was incorporated into the design of the ARPANET, an experimental resource sharing network proposed by ARPA. ARPANET development began with two network nodes which were interconnected between the University of California, Los Angeles and the Stanford Research Institute on 29 October 1969. The third site was at the University of California, Santa Barbara, followed by the University of Utah. By the end of 1971, 15 sites were connected to the young ARPANET. Thereafter, the ARPANET gradually developed into a decentralized communications network, connecting remote centers and military bases in the United States. Other user networks and research networks, such as the Merit Network and CYCLADES, were developed in the late 1960s and early 1970s. Early international collaborations for the ARPANET were rare. Connections were made in 1973 to Norway (NORSAR and, later, NDRE) and to Peter Kirstein's research group at University College London, which provided a gateway to British academic networks, the first internetwork for resource sharing. ARPA projects, the International Network Working Group and commercial initiatives led to the development of various protocols and standards by which multiple separate networks could become a single network, or a network of networks. In 1974, Vint Cerf at Stanford University and Bob Kahn at DARPA published a proposal for "A Protocol for Packet Network Intercommunication". Cerf and his graduate students used the term internet as a shorthand for internetwork in RFC 675. The Internet Experiment Notes and later RFCs repeated this use. The work of Louis Pouzin and Robert Metcalfe had important influences on the resulting TCP/IP design. National PTTs and commercial providers developed the X.25 standard and deployed it on public data networks. The ARPANET initially served as a backbone for the interconnection of regional academic and military networks in the United States to enable resource sharing. Access to the ARPANET was expanded in 1981 when the National Science Foundation (NSF) funded the Computer Science Network (CSNET). In 1982, the Internet Protocol Suite (TCP/IP) was standardized, which facilitated worldwide proliferation of interconnected networks. TCP/IP network access expanded again in 1986 when the National Science Foundation Network (NSFNet) provided access to supercomputer sites in the United States for researchers, first at speeds of 56 kbit/s and later at 1.5 Mbit/s and 45 Mbit/s. 
The NSFNet expanded into academic and research organizations in Europe, Australia, New Zealand and Japan in 1988–89. Although other network protocols such as UUCP and PTT public data networks had global reach well before this time, this marked the beginning of the Internet as an intercontinental network. Commercial Internet service providers emerged in 1989 in the United States and Australia. The ARPANET was decommissioned in 1990. The linking of commercial networks and enterprises by the early 1990s, as well as the advent of the World Wide Web, marked the beginning of the transition to the modern Internet. Steady advances in semiconductor technology and optical networking created new economic opportunities for commercial involvement in the expansion of the network in its core and for delivering services to the public. In mid-1989, MCI Mail and Compuserve established connections to the Internet, delivering email and public access products to the half million users of the Internet. Just months later, on 1 January 1990, PSInet launched an alternate Internet backbone for commercial use; it was one of the networks that added to the core of the commercial Internet of later years. In March 1990, the first high-speed T1 (1.5 Mbit/s) link between the NSFNET and Europe was installed between Cornell University and CERN, allowing much more robust communications than were possible with satellites. Later in 1990, Tim Berners-Lee began writing WorldWideWeb, the first web browser, after two years of lobbying CERN management. By Christmas 1990, Berners-Lee had built all the tools necessary for a working Web: the HyperText Transfer Protocol (HTTP) 0.9, the HyperText Markup Language (HTML), the first Web browser (which was also an HTML editor and could access Usenet newsgroups and FTP files), the first HTTP server software (later known as CERN httpd), the first web server, and the first Web pages that described the project itself. In 1991 the Commercial Internet eXchange was founded, allowing PSInet to communicate with the other commercial networks CERFnet and Alternet. Stanford Federal Credit Union was the first financial institution to offer online Internet banking services to all of its members in October 1994. In 1996, OP Financial Group, also a cooperative bank, became the second online bank in the world and the first in Europe. By 1995, the Internet was fully commercialized in the U.S. when the NSFNet was decommissioned, removing the last restrictions on use of the Internet to carry commercial traffic. As technology advanced and commercial opportunities fueled reciprocal growth, the volume of Internet traffic started to show growth characteristics similar to those of the scaling of MOS transistors, exemplified by Moore's law, doubling every 18 months. This growth, formalized as Edholm's law, was catalyzed by advances in MOS technology, laser light wave systems, and noise performance. Since 1995, the Internet has tremendously impacted culture and commerce, including the rise of near-instant communication by email, instant messaging, telephony (Voice over Internet Protocol or VoIP), two-way interactive video calls, and the World Wide Web. Increasing amounts of data are transmitted at higher and higher speeds over fiber optic networks operating at 1 Gbit/s, 10 Gbit/s, or more. The Internet continues to grow, driven by ever-greater amounts of online information and knowledge, commerce, entertainment and social networking services.
During the late 1990s, it was estimated that traffic on the public Internet grew by 100 percent per year, while the mean annual growth in the number of Internet users was thought to be between 20% and 50%. This growth is often attributed to the lack of central administration, which allows organic growth of the network, as well as the non-proprietary nature of the Internet protocols, which encourages vendor interoperability and prevents any one company from exerting too much control over the network. In November 2006, the Internet was included on USA Today's list of the New Seven Wonders. As of 31 March 2011[update], the estimated total number of Internet users was 2.095 billion (30% of world population). It is estimated that in 1993 the Internet carried only 1% of the information flowing through two-way telecommunication. By 2000 this figure had grown to 51%, and by 2007 more than 97% of all telecommunicated information was carried over the Internet. Modern smartphones can access the Internet through cellular carrier networks, and internet usage by mobile and tablet devices exceeded desktop worldwide for the first time in October 2016. As of 2018[update], 80% of the world's population were covered by a 4G network. The International Telecommunication Union (ITU) estimated that, by the end of 2017, 48% of individual users regularly connect to the Internet, up from 34% in 2012. Mobile Internet connectivity has played an important role in expanding access in recent years, especially in Asia and the Pacific and in Africa. The number of unique mobile cellular subscriptions increased from 3.9 billion in 2012 to 4.8 billion in 2016, two-thirds of the world's population, with more than half of subscriptions located in Asia and the Pacific. The limits that users face on accessing information via mobile applications coincide with a broader process of fragmentation of the Internet. Fragmentation restricts access to media content and tends to affect the poorest users the most. One solution, zero-rating, is the practice of Internet service providers allowing users free connectivity to access specific content or applications without cost. Social impact The Internet has enabled new forms of social interaction, activities, and social associations, giving rise to the scholarly study of the sociology of the Internet. Between 2000 and 2009, the number of Internet users globally rose from 390 million to 1.9 billion. By 2010, 22% of the world's population had access to computers with 1 billion Google searches every day, 300 million Internet users reading blogs, and 2 billion videos viewed daily on YouTube. In 2014 the world's Internet users surpassed 3 billion or 44 percent of world population, but two-thirds came from the richest countries, with 78 percent of Europeans using the Internet, followed by 57 percent of the Americas. However, by 2018, Asia alone accounted for 51% of all Internet users, with 2.2 billion out of the 4.3 billion Internet users in the world. China's Internet users surpassed a major milestone in 2018, when the country's Internet regulatory authority, China Internet Network Information Centre, announced that China had 802 million users. China was followed by India, with some 700 million users, with the United States third with 275 million users. However, in terms of penetration, in 2022, China had a 70% penetration rate compared to India's 60% and the United States's 90%. 
In 2022, 54% of the world's Internet users were based in Asia, 14% in Europe, 7% in North America, 10% in Latin America and the Caribbean, 11% in Africa, 4% in the Middle East and 1% in Oceania. In 2019, Kuwait, Qatar, the Falkland Islands, Bermuda and Iceland had the highest Internet penetration by the number of users, with 93% or more of the population with access. As of 2022, it was estimated that 5.4 billion people use the Internet, more than two-thirds of the world's population. Early computer systems were limited to the characters in the American Standard Code for Information Interchange (ASCII), a subset of the Latin alphabet. After English (27%), the most requested languages on the World Wide Web are Chinese (25%), Spanish (8%), Japanese (5%), Portuguese and German (4% each), Arabic, French and Russian (3% each), and Korean (2%). Modern character encoding standards, such as Unicode, allow for development and communication in the world's widely used languages. However, some glitches such as mojibake (incorrect display of some languages' characters) still remain. Several neologisms exist that refer to Internet users: Netizen (as in "citizen of the net") refers to those actively involved in improving online communities, the Internet in general or surrounding political affairs and rights such as free speech; Internaut refers to operators or technically highly capable users of the Internet; and digital citizen refers to a person using the Internet in order to engage in society, politics, and government participation. The Internet allows greater flexibility in working hours and location, especially with the spread of unmetered high-speed connections. The Internet can be accessed almost anywhere by numerous means, including through mobile Internet devices. Mobile phones, datacards, handheld game consoles and cellular routers allow users to connect to the Internet wirelessly.[citation needed] Educational material at all levels from pre-school (e.g. CBeebies) to post-doctoral (e.g. scholarly literature through Google Scholar) is available on websites. The internet has facilitated the development of virtual universities and distance education, enabling both formal and informal education. The Internet allows researchers to conduct research remotely via virtual laboratories, with profound changes in reach and generalizability of findings as well as in communication between scientists and in the publication of results. By the late 2010s the Internet had been described as "the main source of scientific information for the majority of the global North population". Wikis have also been used in the academic community for sharing and dissemination of information across institutional and international boundaries. In those settings, they have been found useful for collaboration on grant writing, strategic planning, departmental documentation, and committee work. The United States Patent and Trademark Office uses a wiki to allow the public to collaborate on finding prior art relevant to examination of pending patent applications. Queens, New York has used a wiki to allow citizens to collaborate on the design and planning of a local park. The English Wikipedia has the largest user base among wikis on the World Wide Web and ranks in the top 10 among all sites in terms of traffic.
The Internet has been a major outlet for leisure activity since its inception, with entertaining social experiments such as MUDs and MOOs being conducted on university servers, and humor-related Usenet groups receiving much traffic. Many Internet forums have sections devoted to games and funny videos. Another area of leisure activity on the Internet is multiplayer gaming. This form of recreation creates communities, where people of all ages and origins enjoy the fast-paced world of multiplayer games. These range from MMORPG to first-person shooters, from role-playing video games to online gambling. While online gaming has been around since the 1970s, modern modes of online gaming began with subscription services such as GameSpy and MPlayer. Streaming media is the real-time delivery of digital media for immediate consumption or enjoyment by end users. Streaming companies (such as Netflix, Disney+, Amazon's Prime Video, Mubi, Hulu, and Apple TV+) now dominate the entertainment industry, eclipsing traditional broadcasters. Audio streamers such as Spotify and Apple Music also have significant market share in the audio entertainment market. Video sharing websites are also a major factor in the entertainment ecosystem. YouTube was founded on 15 February 2005 and is now the leading website for free streaming video with more than two billion users. It uses a web player to stream and show video files. YouTube users watch hundreds of millions, and upload hundreds of thousands, of videos daily. Other video sharing websites include Vimeo, Instagram and TikTok.[citation needed] Although many governments have attempted to restrict both Internet pornography and online gambling, this has generally failed to stop their widespread popularity. A number of advertising-funded ostensible video sharing websites known as "tube sites" have been created to host shared pornographic video content. Due to laws requiring the documentation of the origin of pornography, these websites now largely operate in conjunction with pornographic movie studios and their own independent creator networks, acting as de facto video streaming services. Major players in this field include the market leader Aylo, the operator of PornHub and numerous other branded sites, as well as other independent operators such as xHamster and Xvideos. As of 2023,[update] Internet traffic to pornographic video sites rivalled that of mainstream video streaming and sharing services. Remote work is facilitated by tools such as groupware, virtual private networks, conference calling, videotelephony, and VoIP so that work may be performed from any location, such as the worker's home.[citation needed] The spread of low-cost Internet access in developing countries has opened up new possibilities for peer-to-peer charities, which allow individuals to contribute small amounts to charitable projects for other individuals. Websites, such as DonorsChoose and GlobalGiving, allow small-scale donors to direct funds to individual projects of their choice. A popular twist on Internet-based philanthropy is the use of peer-to-peer lending for charitable purposes. Kiva pioneered this concept in 2005, offering the first web-based service to publish individual loan profiles for funding. The low cost and nearly instantaneous sharing of ideas, knowledge, and skills have made collaborative work dramatically easier, with the help of collaborative software, which allows groups to form easily, communicate cheaply, and share ideas.
A prominent example of such collaboration is the free software movement, which has produced, among other things, Linux, Mozilla Firefox, and OpenOffice.org (later forked into LibreOffice).[citation needed] Content management systems allow collaborating teams to work on shared sets of documents simultaneously without accidentally destroying each other's work.[citation needed] The Internet also allows for cloud computing, virtual private networks, remote desktops, and remote work.[citation needed] The online disinhibition effect describes the tendency of many individuals to behave more stridently or offensively online than they would in person. A significant number of feminist women have been the target of various forms of harassment, ranging from insults and hate speech to, in extreme cases, rape and death threats, in response to posts they have made on social media. Social media companies have been criticized in the past for not doing enough to aid victims of online abuse. Children also face dangers online such as cyberbullying and approaches by sexual predators, who sometimes pose as children themselves. Due to naivety, they may also post personal information about themselves online, which could put them or their families at risk unless warned not to do so. Many parents choose to enable Internet filtering or supervise their children's online activities in an attempt to protect their children from pornography or violent content on the Internet. The most popular social networking services commonly forbid users under the age of 13. However, these policies can be circumvented by registering an account with a false birth date, and a significant number of children aged under 13 join such sites.[citation needed] Social networking services for younger children, which claim to provide better levels of protection for children, also exist. Internet usage has been correlated with users' loneliness. Lonely people tend to use the Internet as an outlet for their feelings and to share their stories with others, such as in the "I am lonely will anyone speak to me" thread.[citation needed] Cyberslacking can become a drain on corporate resources; employees spend a significant amount of time surfing the Web while at work. Internet addiction disorder is excessive computer use that interferes with daily life. Nicholas G. Carr believes that Internet use has other effects on individuals, for instance improving skills of scan-reading and interfering with the deep thinking that leads to true creativity. Electronic business encompasses business processes spanning the entire value chain: purchasing, supply chain management, marketing, sales, customer service, and business relationships. E-commerce seeks to add revenue streams using the Internet to build and enhance relationships with clients and partners. According to International Data Corporation, the size of worldwide e-commerce, when global business-to-business and -consumer transactions are combined, equated to $16 trillion in 2013. A report by Oxford Economics added those two together to estimate the total size of the digital economy at $20.4 trillion, equivalent to roughly 13.8% of global sales. While much has been written of the economic advantages of Internet-enabled commerce, there is also evidence that some aspects of the Internet such as maps and location-aware services may serve to reinforce economic inequality and the digital divide.
Electronic commerce may be responsible for consolidation and the decline of mom-and-pop, brick-and-mortar businesses, resulting in increases in income inequality. A 2013 Institute for Local Self-Reliance report states that brick-and-mortar retailers employ 47 people for every $10 million in sales, while Amazon employs only 14. Similarly, the 700-employee room rental start-up Airbnb was valued at $10 billion in 2014, about half as much as Hilton Worldwide, which employs 152,000 people. At that time, Uber employed 1,000 full-time employees and was valued at $18.2 billion, about the same valuation as Avis Rent a Car and The Hertz Corporation combined, which together employed almost 60,000 people. Advertising on popular web pages can be lucrative, and e-commerce, the sale of products and services directly via the Web, continues to grow. Online advertising is a form of marketing and advertising which uses the Internet to deliver promotional marketing messages to consumers. It includes email marketing, search engine marketing (SEM), social media marketing, many types of display advertising (including web banner advertising), and mobile advertising. In 2011, Internet advertising revenues in the United States surpassed those of cable television and nearly exceeded those of broadcast television. Many common online advertising practices are controversial and increasingly subject to regulation. The Internet has achieved new relevance as a political tool. The presidential campaign of Howard Dean in 2004 in the United States was notable for its success in soliciting donations via the Internet. Many political groups use the Internet to achieve a new method of organizing for carrying out their mission, giving rise to Internet activism. Social media websites, such as Facebook and Twitter, helped people organize the Arab Spring, by helping activists organize protests, communicate grievances, and disseminate information. Many have understood the Internet as an extension of the Habermasian notion of the public sphere, observing how network communication technologies provide something like a global civic forum. However, incidents of politically motivated Internet censorship have now been recorded in many countries, including western democracies. E-government is the use of technological communications devices, such as the Internet, to provide public services to citizens and other persons in a country or region. E-government offers opportunities for more direct and convenient citizen access to government and for government provision of services directly to citizens. Cybersectarianism is a new organizational form that involves highly dispersed small groups of practitioners that may remain largely anonymous within the larger social context and operate in relative secrecy, while still linked remotely to a larger network of believers who share a set of practices and texts, and often a common devotion to a particular leader. Overseas supporters provide funding and support; domestic practitioners distribute tracts, participate in acts of resistance, and share information on the internal situation with outsiders. Collectively, members and practitioners of such sects construct viable virtual communities of faith, exchanging personal testimonies and engaging in collective study via email, online chat rooms, and web-based message boards.
In particular, the British government has raised concerns about the prospect of young British Muslims being indoctrinated into Islamic extremism by material on the Internet, being persuaded to join terrorist groups such as the so-called "Islamic State", and then potentially committing acts of terrorism on returning to Britain after fighting in Syria or Iraq.[citation needed] Applications and services The Internet carries many applications and services, most prominently the World Wide Web, including social media, electronic mail, mobile applications, multiplayer online games, Internet telephony, file sharing, and streaming media services. The World Wide Web is a global collection of documents, images, multimedia, applications, and other resources, logically interrelated by hyperlinks and referenced with Uniform Resource Identifiers (URIs), which provide a global system of named references. URIs symbolically identify services, web servers, databases, and the documents and resources that they can provide. HyperText Transfer Protocol (HTTP) is the main access protocol of the World Wide Web. Web services also use HTTP for communication between software systems, for information transfer, and for sharing and exchanging business data and logistics; it is one of many protocols that can be used for communication on the Internet. World Wide Web browser software, such as Microsoft Edge, Mozilla Firefox, Opera, Apple's Safari, and Google Chrome, enables users to navigate from one web page to another via the hyperlinks embedded in the documents. These documents may also contain computer data, including graphics, sounds, text, video, multimedia and interactive content. Client-side scripts can include animations, games, office applications and scientific demonstrations. Email is an important communications service available via the Internet. The concept of sending electronic text messages between parties, analogous to mailing letters or memos, predates the creation of the Internet. Internet telephony is a common communications service realized with the Internet. The name of the principal internetworking protocol, the Internet Protocol, lends its name to voice over Internet Protocol (VoIP).[citation needed] VoIP systems now dominate many markets, being as easy and convenient as a traditional telephone while offering substantial cost savings, especially over long distances. File sharing is the practice of transferring large amounts of data in the form of computer files across the Internet, for example via file servers. The load of bulk downloads to many users can be eased by the use of "mirror" servers or peer-to-peer networks. Access to the file may be controlled by user authentication, the transit of the file over the Internet may be obscured by encryption, and money may change hands for access to the file. The price can be paid by the remote charging of funds from, for example, a credit card whose details are also passed—usually fully encrypted—across the Internet. The origin and authenticity of the file received may be checked by a digital signature. Governance The Internet is a global network that comprises many voluntarily interconnected autonomous networks. It operates without a central governing body. The technical underpinning and standardization of the core protocols (IPv4 and IPv6) is an activity of the Internet Engineering Task Force (IETF), a non-profit organization of loosely affiliated international participants that anyone may associate with by contributing technical expertise.
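To make the URI and HTTP mechanics described earlier in this passage more concrete, here is a minimal Python sketch, added as an illustration and not part of the original article, that retrieves a resource identified by a URI over HTTP using only the standard library; example.org is a placeholder host.

    # Minimal sketch: fetching a resource identified by a URI over HTTP.
    from urllib.request import urlopen

    with urlopen("https://example.org/") as response:        # placeholder URI
        status = response.status                              # e.g. 200
        content_type = response.headers.get("Content-Type")   # media type of the document
        body = response.read()                                 # the document's bytes
    print(status, content_type, len(body))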
While the hardware components in the Internet infrastructure can often be used to support other software systems, it is the design and the standardization process of the software that characterizes the Internet and provides the foundation for its scalability and success. The responsibility for the architectural design of the Internet software systems has been assumed by the IETF. The IETF conducts standard-setting work groups, open to any individual, about the various aspects of Internet architecture. The resulting contributions and standards are published as Request for Comments (RFC) documents on the IETF web site. The principal methods of networking that enable the Internet are contained in specially designated RFCs that constitute the Internet Standards. Other less rigorous documents are simply informative, experimental, or historical, or document the best current practices when implementing Internet technologies. To maintain interoperability, the principal name spaces of the Internet are administered by the Internet Corporation for Assigned Names and Numbers (ICANN). ICANN is governed by an international board of directors drawn from across the Internet technical, business, academic, and other non-commercial communities. The organization coordinates the assignment of unique identifiers for use on the Internet, including domain names, IP addresses, application port numbers in the transport protocols, and many other parameters. Globally unified name spaces are essential for maintaining the global reach of the Internet. This role of ICANN distinguishes it as perhaps the only central coordinating body for the global Internet. The National Telecommunications and Information Administration, an agency of the United States Department of Commerce, had final approval over changes to the DNS root zone until the IANA stewardship transition on 1 October 2016. Regional Internet registries (RIRs) were established for five regions of the world to assign IP address blocks and other Internet parameters to local registries, such as Internet service providers, from a designated pool of addresses set aside for each region:[citation needed] The Internet Society (ISOC) was founded in 1992 with a mission to "assure the open development, evolution and use of the Internet for the benefit of all people throughout the world". Its members include individuals as well as corporations, organizations, governments, and universities. Among other activities ISOC provides an administrative home for a number of less formally organized groups that are involved in developing and managing the Internet, including: the Internet Engineering Task Force (IETF), Internet Architecture Board (IAB), Internet Engineering Steering Group (IESG), Internet Research Task Force (IRTF), and Internet Research Steering Group (IRSG). On 16 November 2005, the United Nations-sponsored World Summit on the Information Society in Tunis established the Internet Governance Forum (IGF) to discuss Internet-related issues.[citation needed] Infrastructure The communications infrastructure of the Internet consists of its hardware components and a system of software layers that control various aspects of the architecture. As with any computer network, the Internet physically consists of routers, media (such as cabling and radio links), repeaters, and modems. However, as an example of internetworking, many of the network nodes are not necessarily Internet equipment per se. 
Internet packets are carried by other full-fledged networking protocols, with the Internet acting as a homogeneous networking standard, running across heterogeneous hardware, with the packets guided to their destinations by IP routers.[citation needed] Internet service providers (ISPs) establish worldwide connectivity between individual networks at various levels of scope. At the top of the routing hierarchy are the Tier 1 networks, large telecommunication companies that exchange traffic directly with each other via very high speed fiber-optic cables and are governed by peering agreements. Tier 2 and lower-level networks buy Internet transit from other providers to reach at least some parties on the global Internet, though they may also engage in peering. End-users who only access the Internet when needed to perform a function or obtain information represent the bottom of the routing hierarchy.[citation needed] An ISP may use a single upstream provider for connectivity, or implement multihoming to achieve redundancy and load balancing. Internet exchange points are major traffic exchanges with physical connections to multiple ISPs. Large organizations, such as academic institutions, large enterprises, and governments, may perform the same function as ISPs, engaging in peering and purchasing transit on behalf of their internal networks. Research networks tend to interconnect with large subnetworks such as GEANT, GLORIAD, Internet2, and the UK's national research and education network, JANET.[citation needed] Common methods of Internet access by users include broadband over coaxial cable, fiber optics or copper wires, Wi-Fi, satellite, and cellular telephone technology.[citation needed] Grassroots efforts have led to wireless community networks. Commercial Wi-Fi services that cover large areas are available in many cities, such as New York, London, Vienna, Toronto, San Francisco, Philadelphia, Chicago and Pittsburgh. Most servers that provide Internet services are today hosted in data centers, and content is often accessed through high-performance content delivery networks. Colocation centers often host private peering connections between their customers, Internet transit providers, cloud providers, meet-me rooms for connecting customers together, Internet exchange points, and landing points and terminal equipment for fiber-optic submarine communication cables that connect the Internet. Internet Protocol Suite The Internet standards describe a framework known as the Internet protocol suite (also called TCP/IP, after its first two components). This is a suite of protocols that are ordered into a set of four conceptual layers by the scope of their operation, originally documented in RFC 1122 and RFC 1123.[citation needed] The most prominent component of the Internet model is the Internet Protocol. IP enables internetworking, essentially establishing the Internet itself. Two versions of the Internet Protocol exist, IPv4 and IPv6.[citation needed] Aside from the complex array of physical connections that make up its infrastructure, the Internet is facilitated by bi- or multi-lateral commercial contracts (e.g., peering agreements), and by technical specifications or protocols that describe the exchange of data over the network.[citation needed] For locating individual computers on the network, the Internet provides IP addresses. IP addresses are used by the Internet infrastructure to direct Internet packets to their destinations.
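The protocol suite described above can be seen in miniature in the following Python sketch, added here as an illustration rather than taken from the source text: an application hands bytes to TCP (the transport layer) over the loopback interface, while IP (the internet layer) handles delivery underneath.

    # Minimal sketch: an application using TCP (transport layer) over IP (internet layer).
    import socket
    import threading

    def echo_once(server_sock):
        conn, _addr = server_sock.accept()
        with conn:
            conn.sendall(conn.recv(1024))          # echo a single message back

    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.bind(("127.0.0.1", 0))                  # loopback address, ephemeral port
    server.listen(1)
    threading.Thread(target=echo_once, args=(server,), daemon=True).start()

    client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    client.connect(server.getsockname())
    client.sendall(b"hello, internet")
    print(client.recv(1024))                       # b'hello, internet'
    client.close()
    server.close()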
IP addresses consist of fixed-length numbers, which are found within the packet. IP addresses are generally assigned to equipment either automatically via the Dynamic Host Configuration Protocol, or are configured manually.[citation needed] The Domain Name System (DNS) converts user-entered domain names (e.g. "en.wikipedia.org") into IP addresses.[citation needed] Internet Protocol version 4 (IPv4) defines an IP address as a 32-bit number. IPv4 is the initial version used on the first generation of the Internet and is still in dominant use. It was designed in 1981 to address up to ≈4.3 billion (10⁹) hosts. However, the explosive growth of the Internet has led to IPv4 address exhaustion, which entered its final stage in 2011, when the global IPv4 address allocation pool was exhausted. Because of the growth of the Internet and the depletion of available IPv4 addresses, a new version of IP, IPv6, was developed in the mid-1990s, which provides vastly larger addressing capabilities and more efficient routing of Internet traffic. IPv6 uses 128 bits for the IP address and was standardized in 1998. IPv6 deployment has been ongoing since the mid-2000s and is currently in growing deployment around the world, since Internet address registries began to urge all resource managers to plan rapid adoption and conversion. By design, IPv6 is not directly interoperable with IPv4. Instead, it establishes a parallel version of the Internet not directly accessible with IPv4 software. Thus, translation facilities exist for internetworking, and some nodes have duplicate networking software for both networks. Essentially all modern computer operating systems support both versions of the Internet Protocol.[citation needed] Network infrastructure, however, has been lagging in this development.[citation needed] A subnet or subnetwork is a logical subdivision of an IP network. Computers that belong to a subnet are addressed with an identical most-significant bit-group in their IP addresses. This results in the logical division of an IP address into two fields, the network number or routing prefix and the rest field or host identifier. The rest field is an identifier for a specific host or network interface.[citation needed] The routing prefix may be expressed in Classless Inter-Domain Routing (CIDR) notation, written as the first address of a network, followed by a slash character (/), and ending with the bit-length of the prefix. For example, 198.51.100.0/24 is the prefix of the Internet Protocol version 4 network starting at the given address, having 24 bits allocated for the network prefix, and the remaining 8 bits reserved for host addressing. Addresses in the range 198.51.100.0 to 198.51.100.255 belong to this network. The IPv6 address specification 2001:db8::/32 is a large address block with 2⁹⁶ addresses, having a 32-bit routing prefix.[citation needed] For IPv4, a network may also be characterized by its subnet mask or netmask, which is the bitmask that, when applied by a bitwise AND operation to any IP address in the network, yields the routing prefix. Subnet masks are also expressed in dot-decimal notation like an address. For example, 255.255.255.0 is the subnet mask for the prefix 198.51.100.0/24.[citation needed] Computers and routers use routing tables in their operating system to forward IP packets to reach a node on a different subnetwork. Routing tables are maintained by manual configuration or automatically by routing protocols.
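The CIDR and routing-table mechanics described above can be illustrated with Python's standard ipaddress module. The sketch below, an added illustration rather than part of the source text, first reproduces the 198.51.100.0/24 and 2001:db8::/32 examples cited in the text, then performs a simplified longest-prefix-match lookup over an invented routing table whose 0.0.0.0/0 entry plays the role of the default route described in the passage that follows.

    # Reproducing the article's CIDR examples with the standard ipaddress module.
    import ipaddress

    net = ipaddress.ip_network("198.51.100.0/24")
    print(net.netmask)            # 255.255.255.0 - the subnet mask for a /24 prefix
    print(net.num_addresses)      # 256 - addresses 198.51.100.0 through 198.51.100.255
    print(ipaddress.ip_address("198.51.100.17") in net)                     # True
    print(ipaddress.ip_network("2001:db8::/32").num_addresses == 2 ** 96)   # True

    # Simplified longest-prefix-match forwarding over an invented routing table.
    routing_table = [
        ("198.51.100.0/24", "192.0.2.1"),
        ("203.0.113.0/25", "192.0.2.2"),
        ("0.0.0.0/0", "192.0.2.254"),      # default route toward an upstream provider
    ]

    def next_hop(destination):
        dest = ipaddress.ip_address(destination)
        matches = [(ipaddress.ip_network(prefix), hop) for prefix, hop in routing_table
                   if dest in ipaddress.ip_network(prefix)]
        return max(matches, key=lambda m: m[0].prefixlen)[1]   # most specific match wins

    print(next_hop("198.51.100.42"))   # 192.0.2.1
    print(next_hop("8.8.8.8"))         # 192.0.2.254, via the default route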
End-nodes typically use a default route that points toward an ISP providing transit, while ISP routers use the Border Gateway Protocol to establish the most efficient routing across the complex connections of the global Internet.[citation needed] The default gateway is the node that serves as the forwarding host (router) to other networks when no other route specification matches the destination IP address of a packet. Security Internet resources, hardware, and software components are the target of criminal or malicious attempts to gain unauthorized control to cause interruptions, commit fraud, engage in blackmail or access private information. Malware is malicious software used and distributed via the Internet. It includes computer viruses which are copied with the help of humans, computer worms which copy themselves automatically, software for denial of service attacks, ransomware, botnets, and spyware that reports on the activity and typing of users.[citation needed] Usually, these activities constitute cybercrime. Defense theorists have also speculated about the possibility of hackers waging cyber warfare with similar methods on a large scale. Malware poses serious problems to individuals and businesses on the Internet. According to Symantec's 2018 Internet Security Threat Report (ISTR), the number of malware variants rose to 669,947,865 in 2017, twice as many as in 2016. Cybercrime, which includes malware attacks as well as other crimes committed by computer, was predicted to cost the world economy US$6 trillion in 2021, and is increasing at a rate of 15% per year. Since 2021, malware has been designed to target computer systems that run critical infrastructure such as the electricity distribution network. Malware can be designed to evade antivirus software detection algorithms. The vast majority of computer surveillance involves the monitoring of data and traffic on the Internet. In the United States, for example, under the Communications Assistance For Law Enforcement Act, all phone calls and broadband Internet traffic (emails, web traffic, instant messaging, etc.) are required to be available for unimpeded real-time monitoring by federal law enforcement agencies. Under the Act, all U.S. telecommunications providers are required to install packet sniffing technology to allow federal law enforcement and intelligence agencies to intercept all of their customers' broadband Internet and VoIP traffic.[d] The large amount of data gathered from packet capture requires surveillance software that filters and reports relevant information, such as the use of certain words or phrases, the access to certain types of web sites, or communicating via email or chat with certain parties. Agencies such as the Information Awareness Office, NSA, GCHQ and the FBI spend billions of dollars per year to develop, purchase, implement, and operate systems for interception and analysis of data. Similar systems are operated by the Iranian secret police to identify and suppress dissidents. The required hardware and software were allegedly installed by German Siemens AG and Finnish Nokia. Some governments, such as those of Myanmar, Iran, North Korea, Mainland China, Saudi Arabia and the United Arab Emirates, restrict access to content on the Internet within their territories, especially to political and religious content, with domain name and keyword filters.
In Norway, Denmark, Finland, and Sweden, major Internet service providers have voluntarily agreed to restrict access to sites listed by authorities. While this list of forbidden resources is supposed to contain only known child pornography sites, the content of the list is secret. Many countries, including the United States, have enacted laws against the possession or distribution of certain material, such as child pornography, via the Internet but do not mandate filter software. Many free or commercially available software programs, called content-control software, are available to users to block offensive content on individual computers or networks in order to limit access by children to pornographic material or depictions of violence.[citation needed] Performance As the Internet is a heterogeneous network, its physical characteristics, including, for example, the data transfer rates of connections, vary widely. It exhibits emergent phenomena that depend on its large-scale organization. (Chart: global Internet traffic volume in petabytes per month, 1990–2015.) The volume of Internet traffic is difficult to measure because no single point of measurement exists in the multi-tiered, non-hierarchical topology. Traffic data may be estimated from the aggregate volume through the peering points of the Tier 1 network providers, but traffic that stays local in large provider networks may not be accounted for.[citation needed] An Internet blackout or outage can be caused by local signaling interruptions. Disruptions of submarine communications cables may cause blackouts or slowdowns to large areas, such as in the 2008 submarine cable disruption. Less-developed countries are more vulnerable due to the small number of high-capacity links. Land cables are also vulnerable, as in 2011 when a woman digging for scrap metal severed most connectivity for the nation of Armenia. Internet blackouts affecting almost entire countries can be achieved by governments as a form of Internet censorship, as in the blockage of the Internet in Egypt, whereby approximately 93% of networks were without access in 2011 in an attempt to stop mobilization for anti-government protests. Estimates of the Internet's electricity usage have been the subject of controversy, according to a 2014 peer-reviewed research paper that found claims differing by a factor of 20,000 published in the literature during the preceding decade, ranging from 0.0064 kilowatt hours per gigabyte transferred (kWh/GB) to 136 kWh/GB. The researchers attributed these discrepancies mainly to the year of reference (i.e. whether efficiency gains over time had been taken into account) and to whether "end devices such as personal computers and servers are included" in the analysis. In 2011, academic researchers estimated the overall energy used by the Internet to be between 170 and 307 GW, less than two percent of the energy used by humanity. This estimate included the energy needed to build, operate, and periodically replace the estimated 750 million laptops, a billion smart phones and 100 million servers worldwide as well as the energy that routers, cell towers, optical switches, Wi-Fi transmitters and cloud storage devices use when transmitting Internet traffic. According to a non-peer-reviewed study published in 2018 by The Shift Project (a French think tank funded by corporate sponsors), nearly 4% of global CO2 emissions could be attributed to global data transfer and the necessary infrastructure.
The study also said that online video streaming alone accounted for 60% of this data transfer and therefore contributed to over 300 million tons of CO2 emissions per year, and argued for new "digital sobriety" regulations restricting the use and size of video files. |
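A small worked calculation makes the spread in the per-gigabyte electricity estimates cited earlier in this section tangible; in the Python sketch below (an added illustration), the 50 GB transfer size is an arbitrary figure, while the two bounds are the ones reported in the cited survey.

    # How much the published kWh/GB estimates diverge for the same hypothetical transfer.
    low, high = 0.0064, 136.0     # kWh per GB, the bounds reported in the 2014 survey
    gigabytes = 50                # arbitrary example transfer (roughly a month of video)
    print(low * gigabytes, "kWh vs", high * gigabytes, "kWh")   # 0.32 kWh vs 6800.0 kWh
    print(high / low)             # 21250.0 - the roughly 20,000x spread in estimates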
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Messier_object] | [TOKENS: 999] |
Contents Messier object The Messier objects are a set of 110 astronomical objects catalogued by the French astronomer Charles Messier in his Catalogue des Nébuleuses et des Amas d'Étoiles [fr] (Catalogue of Nebulae and Star Clusters). Because Messier was interested only in finding comets, he created a list of those non-comet objects that frustrated his hunt for them. This list, which Messier created in collaboration with his assistant Pierre Méchain, is now known as the Messier catalogue. The Messier catalogue is one of the most famous lists of astronomical objects, and many objects on the list are still referenced by their Messier numbers. The catalogue includes most of the astronomical deep-sky objects that can be easily observed from Earth's Northern Hemisphere; many Messier objects are popular targets for amateur astronomers. A preliminary version of the catalogue first appeared in 1774 in the Memoirs of the French Academy of Sciences for the year 1771. The first version of Messier's catalogue contained 45 objects, which were not numbered. Eighteen of the objects were discovered by Messier; the rest had been previously observed by other astronomers. By 1780 the catalogue had increased to 70 objects. The final version of the catalogue containing 103 objects was published in 1781 in the Connaissance des Temps for the year 1784. However, due to what was thought for a long time to be the incorrect addition of Messier 102, the total number remained 102. Other astronomers, using side notes in Messier's texts, eventually expanded the list to 110 objects. The catalogue consists of a diverse range of astronomical objects, from star clusters and nebulae to galaxies. For example, Messier 1 is a supernova remnant, known as the Crab Nebula, and the great spiral Andromeda Galaxy is M31. Further inclusions followed. Lists and editions The first edition of 1774 covered 45 objects (M1 to M45). The total list published by Messier in 1781 contained 103 objects, but the list was expanded through successive additions by other astronomers, motivated by notes in Messier's and Méchain's texts indicating that at least one of them knew of the additional objects. The first such addition came from Nicolas Camille Flammarion in 1921, who added Messier 104 after finding a note Messier made in a copy of the 1781 edition of the catalogue. M105 to M107 were added by Helen Sawyer Hogg in 1947, M108 and M109 by Owen Gingerich in 1960, and M110 by Kenneth Glyn Jones in 1967. M102 was observed by Méchain, who communicated his notes to Messier. Méchain later concluded that this object was simply a re-observation of M101, though some sources suggest that the object Méchain observed was the galaxy NGC 5866 and identify that as M102. Messier's final catalogue was included in the Connaissance des Temps pour l'Année 1784 [Knowledge of the Times for the Year 1784], the French official yearly publication of astronomical ephemerides. Messier lived and conducted his astronomical work at the Hôtel de Cluny (now the Musée national du Moyen Âge), in Paris, France. The list he compiled contains only objects found in the sky area he could observe from the north celestial pole to a celestial latitude of about −35.7°. He did not observe or list objects visible only from farther south, such as the Large and Small Magellanic Clouds. 
Observations The Messier catalogue comprises nearly all of the most spectacular examples of the five types of deep-sky object – diffuse nebulae, planetary nebulae, open clusters, globular clusters, and galaxies – visible from European latitudes. Furthermore, almost all of the Messier objects are among the closest to Earth in their respective classes, which makes them heavily studied with professional class instruments that today can resolve small and visually significant details in them. A summary of the astrophysics of each Messier object can be found in the Concise Catalog of Deep-sky Objects. Since these objects could be observed visually with the relatively small-aperture refracting telescope (approximately 100 mm ≈ 4 inches) used by Messier to study the sky from downtown Paris, they are among the brightest and thus most attractive astronomical objects (of the class popularly called deep-sky objects) observable from Earth, and are popular targets for visual study and astrophotography available to modern amateur astronomers using larger aperture equipment. In early spring, astronomers sometimes gather for "Messier marathons", when all of the objects can be viewed over a single night. |
======================================== |
[SOURCE: https://techcrunch.com/author/aisha-malik/] | [TOKENS: 225] |
Aisha Malik Consumer News Reporter, TechCrunch Aisha is a consumer news reporter at TechCrunch. Prior to joining the publication in 2021, she was a telecom reporter at MobileSyrup. Aisha holds an honours bachelor’s degree from University of Toronto and a master’s degree in journalism from Western University. You can contact or verify outreach from Aisha by emailing aisha@techcrunch.com or via encrypted message at aisha_malik.01 on Signal. |
======================================== |
[SOURCE: https://techcrunch.com/2026/02/20/indias-sarvam-launches-indus-ai-chat-app-as-competition-heats-up/] | [TOKENS: 837] |
India’s Sarvam launches Indus AI chat app as competition heats up Sarvam, an Indian AI startup focused on building models for local languages and users, on Friday launched its Indus chat app for web and mobile users, entering a fast-growing market dominated by global players including OpenAI, Anthropic, and Google. The launch comes as India has become a key battleground for generative AI adoption. Recently, OpenAI CEO Sam Altman said ChatGPT has more than 100 million weekly active users in India, while Anthropic said India accounts for 5.8% of total Claude usage, second only to the U.S. Indus serves as a chat interface for its newly announced Sarvam 105B model, the company’s 105-billion-parameter large language model. The app’s launch comes two days after Bengaluru-based Sarvam unveiled its 105B and 30B models at the India AI Impact Summit in New Delhi earlier this week. At the summit, the startup also outlined enterprise initiatives and hardware plans and announced partnerships with companies including HMD to bring AI to Nokia feature phones and Bosch for AI-enabled automotive applications. Currently available in beta on iOS, Android, and the web, the Indus app allows users to type or speak queries and receive responses in text and audio. Users can sign in using their phone number, Google or Microsoft account, or Apple ID, though the service appears to be limited to India for now. The app currently comes with some limitations. Users cannot delete their chat history without deleting their account, and there is no option to turn off the app’s reasoning feature, which can sometimes slow response times. Sarvam has also warned that access may be restricted as it gradually expands its compute capacity. “We’re gradually rolling out Indus on a limited compute capacity, so you may hit a waitlist at first. We will expand access over time,” Sarvam co-founder Pratyush Kumar wrote on X, adding that the company is seeking feedback from users. Founded in 2023, Sarvam has raised $41 million to date from investors, including Lightspeed Venture Partners, Peak XV Partners, and Khosla Ventures as it builds large language models tailored for India. Sarvam is one of a small but growing group of Indian startups attempting to build domestic alternatives to global artificial intelligence platforms as India seeks greater control over its AI infrastructure. Jagmeet covers startups, tech policy-related updates, and all other major tech-centric developments from India for TechCrunch. He previously worked as a principal correspondent at NDTV. You can contact or verify outreach from Jagmeet by emailing mail@journalistjagmeet.com.
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Middle_East#cite_note-Akrotiri_and_Dhekelia-2] | [TOKENS: 6152] |
Contents Middle East The Middle East[b] is a geopolitical region encompassing the Arabian Peninsula, Egypt, Iran, Iraq, the Levant, and Turkey. The term came into widespread usage by Western European nations in the early 20th century as a replacement of the term Near East (both were in contrast to the Far East). The term "Middle East" has led to some confusion over its changing definitions. Since the late 20th century, it has been criticized as being too Eurocentric. The region includes the vast majority of the territories included in the closely associated definition of West Asia, but without the South Caucasus. It also includes all of Egypt (not just the Sinai region) and all of Turkey (including East Thrace). Most Middle Eastern countries (13 out of 18) are part of the Arab world. The three most populous countries in the region are Egypt, Iran, and Turkey, while Saudi Arabia is the largest Middle Eastern country by area. The history of the Middle East dates back to ancient times, and it was long considered the "cradle of civilization". The geopolitical importance of the region has been recognized and competed for during millennia. The Abrahamic religions (Judaism, Christianity, and Islam) have their origins in the Middle East. Arabs constitute the main ethnic group in the region, followed by Turks, Persians, Kurds, Jews, and Assyrians. The Middle East generally has a hot, arid climate, especially in the Arabian and Egyptian regions. Several major rivers provide irrigation to support agriculture in limited areas here, such as the Nile Delta in Egypt, the Tigris and Euphrates watersheds of Mesopotamia, and the basin of the Jordan River that spans most of the Levant. These regions are collectively known as the Fertile Crescent, and comprise the core of what historians had long referred to as the cradle of civilization; multiple regions of the world have since been classified as also having developed independent, original civilizations. Conversely, the Levantine coast and most of Turkey have relatively temperate climates typical of the Mediterranean, with dry summers and cool, wet winters. Most of the countries that border the Persian Gulf have vast reserves of petroleum. Monarchs of the Arabian Peninsula in particular have benefitted economically from petroleum exports. Because of the arid climate and dependence on the fossil fuel industry, the Middle East is both a major contributor to climate change and a region that is expected to be severely adversely affected by it. Other concepts of the region exist, including the broader Middle East and North Africa (MENA), which includes states of the Maghreb and the Sudan. The term the "Greater Middle East" also includes Afghanistan, Mauritania, Pakistan, as well as parts of East Africa, and sometimes Central Asia and the South Caucasus. Terminology The term "Middle East" may have originated in the 1850s in the British India Office. However, it became more widely known when United States naval strategist Alfred Thayer Mahan used the term in 1902 to "designate the area between Arabia and India". During this time the British and Russian empires were vying for influence in Central Asia, a rivalry that would become known as the Great Game. Mahan realized not only the strategic importance of the region, but also of its center, the Persian Gulf. He labeled the area surrounding the Persian Gulf as the Middle East. 
He said that, beyond Egypt's Suez Canal, the Gulf was the most important passage for Britain to control in order to keep the Russians from advancing towards British India. Mahan first used the term in his article "The Persian Gulf and International Relations", published in September 1902 in the National Review, a British journal. The Middle East, if I may adopt a term which I have not seen, will some day need its Malta, as well as its Gibraltar; it does not follow that either will be in the Persian Gulf. Naval force has the quality of mobility which carries with it the privilege of temporary absences; but it needs to find on every scene of operation established bases of refit, of supply, and in case of disaster, of security. The British Navy should have the facility to concentrate in force if occasion arise, about Aden, India, and the Persian Gulf. Mahan's article was reprinted in The Times and followed in October by a 20-article series entitled "The Middle Eastern Question", written by Sir Ignatius Valentine Chirol. During this series, Sir Ignatius expanded the definition of Middle East to include "those regions of Asia which extend to the borders of India or command the approaches to India." After the series ended in 1903, The Times removed quotation marks from subsequent uses of the term. Until World War II, it was customary to refer to areas centered on Turkey and the eastern shore of the Mediterranean as the "Near East", while the "Far East" centered on China, India and Japan. The Middle East was then defined as the area from Mesopotamia to Burma; namely, the area between the Near East and the Far East. This area broadly corresponds to South Asia. In the late 1930s, the British established the Middle East Command, which was based in Cairo, for its military forces in the region. After that time, the term "Middle East" gained broader usage in Europe and the United States. Following World War II, for example, the Middle East Institute was founded in Washington, D.C. in 1946. The corresponding adjective is Middle Eastern and the derived noun is Middle Easterner. While non-Eurocentric terms such as "Southwest Asia" or "Swasia" have been sparsely used, the classification of the African country, Egypt, among those counted in the Middle East challenges the usefulness of using such terms. The description Middle has also led to some confusion over changing definitions. Before the First World War, "Near East" was used in English to refer to the Balkans and the Ottoman Empire, while "Middle East" referred to the Caucasus, Persia, and Arabian lands, and sometimes Afghanistan, India and others. In contrast, "Far East" referred to the countries of East Asia (e.g. China, Japan, and Korea). With the collapse of the Ottoman Empire in 1918, "Near East" largely fell out of common use in English, while "Middle East" came to be applied to the emerging independent countries of the Islamic world. However, the usage "Near East" was retained by a variety of academic disciplines, including archaeology and ancient history. In their usage, the term describes an area identical to the term Middle East, which is not used by these disciplines (see ancient Near East).[citation needed] The first official use of the term "Middle East" by the United States government was in the 1957 Eisenhower Doctrine, which pertained to the Suez Crisis. 
Secretary of State John Foster Dulles defined the Middle East as "the area lying between and including Libya on the west and Pakistan on the east, Syria and Iraq on the North and the Arabian peninsula to the south, plus the Sudan and Ethiopia." In 1958, the State Department explained that the terms "Near East" and "Middle East" were interchangeable, and defined the region as including only Egypt, Syria, Israel, Lebanon, Jordan, Iraq, Saudi Arabia, Kuwait, Bahrain, and Qatar. Since the late 20th century, scholars and journalists from the region, such as journalist Louay Khraish and historian Hassan Hanafi have criticized the use of "Middle East" as a Eurocentric and colonialist term. The Associated Press Stylebook of 2004 says that Near East formerly referred to the farther west countries while Middle East referred to the eastern ones, but that now they are synonymous. It instructs: Use Middle East unless Near East is used by a source in a story. Mideast is also acceptable, but Middle East is preferred. European languages have adopted terms similar to Near East and Middle East. Since these are based on a relative description, the meanings depend on the country and are generally different from the English terms. In German the term Naher Osten (Near East) is still in common use (nowadays the term Mittlerer Osten is more and more common in press texts translated from English sources, albeit having a distinct meaning). In the four Slavic languages, Russian Ближний Восток or Blizhniy Vostok, Bulgarian Близкия Изток, Polish Bliski Wschód or Croatian Bliski istok (terms meaning Near East are the only appropriate ones for the region). However, some European languages do have "Middle East" equivalents, such as French Moyen-Orient, Swedish Mellanöstern, Spanish Oriente Medio or Medio Oriente, Greek is Μέση Ανατολή (Mesi Anatoli), and Italian Medio Oriente.[c] Perhaps because of the political influence of the United States and Europe, and the prominence of Western press, the Arabic equivalent of Middle East (Arabic: الشرق الأوسط ash-Sharq al-Awsaṭ) has become standard usage in the mainstream Arabic press. It comprises the same meaning as the term "Middle East" in North American and Western European usage. The designation, Mashriq, also from the Arabic root for East, also denotes a variously defined region around the Levant, the eastern part of the Arabic-speaking world (as opposed to the Maghreb, the western part). Even though the term originated in the West, countries of the Middle East that use languages other than Arabic also use that term in translation. For instance, the Persian equivalent for Middle East is خاورمیانه (Khāvar-e miyāneh), the Hebrew is המזרח התיכון (hamizrach hatikhon), and the Turkish is Orta Doğu. Countries and territory Traditionally included within the Middle East are Arabia, Asia Minor, East Thrace, Egypt, Iran, the Levant, Mesopotamia, and the Socotra Archipelago. The region includes 17 UN-recognized countries and one British Overseas Territory. Various concepts are often paralleled to the Middle East, most notably the Near East, Fertile Crescent, and Levant. These are geographical concepts, which refer to large sections of the modern-day Middle East, with the Near East being the closest to the Middle East in its geographical meaning. Due to it primarily being Arabic speaking, the Maghreb region of North Africa is sometimes included. 
"Greater Middle East" is a political term coined by the second Bush administration in the first decade of the 21st century to denote various countries, pertaining to the Muslim world, specifically Afghanistan, Iran, Pakistan, and Turkey. Various Central Asian countries are sometimes also included. History The Middle East lies at the juncture of Africa and Eurasia and of the Indian Ocean and the Mediterranean Sea (see also: Indo-Mediterranean). It is the birthplace and spiritual center of religions such as Christianity, Islam, Judaism, Manichaeism, Yezidi, Druze, Yarsan, and Mandeanism, and in Iran, Mithraism, Zoroastrianism, Manicheanism, and the Baháʼí Faith. Throughout its history the Middle East has been a major center of world affairs; a strategically, economically, politically, culturally, and religiously sensitive area. The region is one of the regions where agriculture was independently discovered, and from the Middle East it was spread, during the Neolithic, to different regions of the world such as Europe, the Indus Valley and Eastern Africa. Prior to the formation of civilizations, advanced cultures formed all over the Middle East during the Stone Age. The search for agricultural lands by agriculturalists, and pastoral lands by herdsmen meant different migrations took place within the region and shaped its ethnic and demographic makeup. The Middle East is widely and most famously known as the cradle of civilization. The world's earliest civilizations, Mesopotamia (Sumer, Akkad, Assyria and Babylonia), ancient Egypt and Kish in the Levant, all originated in the Fertile Crescent and Nile Valley regions of the ancient Near East. These were followed by the Hittite, Greek, Hurrian and Urartian civilisations of Asia Minor; Elam, Persia and Median civilizations in Iran, as well as the civilizations of the Levant (such as Ebla, Mari, Nagar, Ugarit, Canaan, Aramea, Mitanni, Phoenicia and Israel) and the Arabian Peninsula (Magan, Sheba, Ubar). The Near East was first largely unified under the Neo Assyrian Empire, then the Achaemenid Empire followed later by the Macedonian Empire and after this to some degree by the Iranian empires (namely the Parthian and Sassanid Empires), the Roman Empire and Byzantine Empire. The region served as the intellectual and economic center of the Roman Empire and played an exceptionally important role due to its periphery on the Sassanid Empire. Thus, the Romans stationed up to five or six of their legions in the region for the sole purpose of defending it from Sassanid and Bedouin raids and invasions. From the 4th century CE onwards, the Middle East became the center of the two main powers at the time, the Byzantine Empire and the Sassanid Empire. However, it would be the later Islamic Caliphates of the Middle Ages, or Islamic Golden Age which began with the Islamic conquest of the region in the 7th century AD, that would first unify the entire Middle East as a distinct region and create the dominant Islamic Arab ethnic identity that largely (but not exclusively) persists today. The 4 caliphates that dominated the Middle East for more than 600 years were the Rashidun Caliphate, the Umayyad caliphate, the Abbasid caliphate and the Fatimid caliphate. Additionally, the Mongols would come to dominate the region, the Kingdom of Armenia would incorporate parts of the region to their domain, the Seljuks would rule the region and spread Turko-Persian culture, and the Franks would found the Crusader states that would stand for roughly two centuries. 
Josiah Russell estimates the population of what he calls "Islamic territory" as roughly 12.5 million in 1000 – Anatolia 8 million, Syria 2 million, and Egypt 1.5 million. From the 16th century onward, the Middle East came to be dominated, once again, by two main powers: the Ottoman Empire and the Safavid dynasty. The modern Middle East began after World War I, when the Ottoman Empire, which was allied with the Central Powers, was defeated by the Allies and partitioned into a number of separate nations, initially under British and French Mandates. Other defining events in this transformation included the establishment of Israel in 1948 and the eventual departure of European powers, notably Britain and France by the end of the 1960s. They were supplanted in some part by the rising influence of the United States from the 1970s onwards. In the 20th century, the region's significant stocks of crude oil gave it new strategic and economic importance. Mass production of oil began around 1945, with Saudi Arabia, Iran, Kuwait, Iraq, and the United Arab Emirates having large quantities of oil. Estimated oil reserves, especially in Saudi Arabia and Iran, are some of the highest in the world, and the international oil cartel OPEC is dominated by Middle Eastern countries. During the Cold War, the Middle East was a theater of ideological struggle between the two superpowers and their allies: NATO and the United States on one side, and the Soviet Union and Warsaw Pact on the other, as they competed to influence regional allies. Besides the political reasons there was also the "ideological conflict" between the two systems. Moreover, as Louise Fawcett argues, among many important areas of contention, or perhaps more accurately of anxiety, were, first, the desires of the superpowers to gain strategic advantage in the region, second, the fact that the region contained some two-thirds of the world's oil reserves in a context where oil was becoming increasingly vital to the economy of the Western world [...] Within this contextual framework, the United States sought to divert the Arab world from Soviet influence. Throughout the 20th and 21st centuries, the region has experienced both periods of relative peace and tolerance and periods of conflict particularly between Sunnis and Shiites. Geography In 2018, the MENA region emitted 3.2 billion tonnes of carbon dioxide and produced 8.7% of global greenhouse gas emissions (GHG) despite making up only 6% of the global population. These emissions are mostly from the energy sector, an integral component of many Middle Eastern and North African economies due to the extensive oil and natural gas reserves that are found within the region. The Middle East region is one of the most vulnerable to climate change. The impacts include increase in drought conditions, aridity, heatwaves and sea level rise. Sharp global temperature and sea level changes, shifting precipitation patterns and increased frequency of extreme weather events are some of the main impacts of climate change as identified by the Intergovernmental Panel on Climate Change (IPCC). The MENA region is especially vulnerable to such impacts due to its arid and semi-arid environment, facing climatic challenges such as low rainfall, high temperatures and dry soil. The climatic conditions that foster such challenges for MENA are projected by the IPCC to worsen throughout the 21st century. 
If greenhouse gas emissions are not significantly reduced, part of the MENA region risks becoming uninhabitable before the year 2100. Climate change is expected to put significant strain on already scarce water and agricultural resources within the MENA region, threatening the national security and political stability of all included countries. Over 60 percent of the region's population lives in high and very high water-stressed areas compared to the global average of 35 percent. This has prompted some MENA countries to engage with the issue of climate change on an international level through environmental accords such as the Paris Agreement. Law and policy are also being established on a national level amongst MENA countries, with a focus on the development of renewable energies. Economy Middle Eastern economies range from being very poor (such as Gaza and Yemen) to extremely wealthy nations (such as Qatar and UAE). According to the International Monetary Fund, the three largest Middle Eastern economies in nominal GDP in 2023 were Saudi Arabia ($1.06 trillion), Turkey ($1.03 trillion), and Israel ($0.54 trillion). For nominal GDP per person, the highest ranking countries are Qatar ($83,891), Israel ($55,535), the United Arab Emirates ($49,451) and Cyprus ($33,807). Turkey ($3.6 trillion), Saudi Arabia ($2.3 trillion), and Iran ($1.7 trillion) had the largest economies in terms of GDP PPP. For GDP PPP per person, the highest-ranking countries are Qatar ($124,834), the United Arab Emirates ($88,221), Saudi Arabia ($64,836), Bahrain ($60,596) and Israel ($54,997). The lowest-ranking country in the Middle East, in terms of GDP nominal per capita, is Yemen ($573). The economic structure of Middle Eastern nations are different because while some are heavily dependent on export of only oil and oil-related products (Saudi Arabia, the UAE and Kuwait), others have a highly diverse economic base (such as Cyprus, Israel, Turkey and Egypt). Industries of the Middle Eastern region include oil and oil-related products, agriculture, cotton, cattle, dairy, textiles, leather products, surgical instruments, defence equipment (guns, ammunition, tanks, submarines, fighter jets, UAVs, and missiles). Banking is an important sector, especially for UAE and Bahrain. With the exception of Cyprus, Turkey, Egypt, Lebanon and Israel, tourism has been a relatively undeveloped area of the economy, in part because of the socially conservative nature of the region as well as political turmoil in certain regions. Since the end of the COVID pandemic however, countries such as the UAE, Bahrain, and Jordan have begun attracting greater numbers of tourists because of improving tourist facilities and the relaxing of tourism-related restrictive policies. Unemployment is high in the Middle East and North Africa region, particularly among people aged 15–29, a demographic representing 30% of the region's population. The total regional unemployment rate in 2025 is 10.8%, and among youth is as high as 28%. Demographics Arabs constitute the largest ethnic group in the Middle East, followed by various Iranian peoples and then by Turkic peoples (Turkish, Azeris, Syrian Turkmen, and Iraqi Turkmen). Native ethnic groups of the region include, in addition to Arabs, Arameans, Assyrians, Baloch, Berbers, Copts, Druze, Greek Cypriots, Jews, Kurds, Lurs, Mandaeans, Persians, Samaritans, Shabaks, Tats, and Zazas. 
European ethnic groups that form a diaspora in the region include Albanians, Bosniaks, Circassians (including Kabardians), Crimean Tatars, Greeks, Franco-Levantines, Italo-Levantines, and Iraqi Turkmens. Among other migrant populations are Chinese, Filipinos, Indians, Indonesians, Pakistanis, Pashtuns, Romani, and Afro-Arabs. "Migration has always provided an important vent for labor market pressures in the Middle East. For the period between the 1970s and 1990s, the Arab states of the Persian Gulf in particular provided a rich source of employment for workers from Egypt, Yemen and the countries of the Levant, while Europe had attracted young workers from North African countries due both to proximity and the legacy of colonial ties between France and the majority of North African states." According to the International Organization for Migration, there are 13 million first-generation migrants from Arab nations in the world, of which 5.8 million reside in other Arab countries. Expatriates from Arab countries contribute to the circulation of financial and human capital in the region and thus significantly promote regional development. In 2009, Arab countries received a total of US$35.1 billion in remittance inflows, and remittances sent to Jordan, Egypt and Lebanon from other Arab countries were 40 to 190 per cent higher than trade revenues between these and other Arab countries. In Somalia, the Somali Civil War has greatly increased the size of the Somali diaspora, as many of the best-educated Somalis left for Middle Eastern countries as well as Europe and North America. Non-Arab Middle Eastern countries such as Turkey, Israel and Iran are also subject to important migration dynamics. A fair proportion of those migrating from Arab nations are from ethnic and religious minorities facing persecution and are not necessarily ethnic Arabs, Iranians or Turks.[citation needed] Large numbers of Kurds, Jews, Assyrians, Greeks and Armenians, as well as many Mandaeans, have left nations such as Iraq, Iran, Syria and Turkey for these reasons during the last century. In Iran, many religious minorities such as Christians, Baháʼís, Jews and Zoroastrians have left since the Islamic Revolution of 1979. The Middle East is very diverse when it comes to religions, many of which originated there. Islam is the largest religion in the Middle East, but other faiths that originated there, such as Judaism and Christianity, are also well represented. Christian communities have played a vital role in the Middle East, and they represent 78% of the population of Cyprus and 40.5% of that of Lebanon, where the Lebanese president, half of the cabinet, and half of the parliament follow one of the various Lebanese Christian rites. There are also important minority religions such as the Baháʼí Faith, Yarsanism, Yazidism, Zoroastrianism, Mandaeism, Druze, and Shabakism, and in ancient times the region was home to Mesopotamian religions, Canaanite religions, Manichaeism, Mithraism and various monotheist gnostic sects. The top six languages, in terms of numbers of speakers, are Arabic, Persian, Turkish, Kurdish, Modern Hebrew and Greek. About 20 minority languages are also spoken in the Middle East. Arabic, with all its dialects, is the most widely spoken language in the Middle East, with Literary Arabic being official in all North African and most West Asian countries. Arabic dialects are also spoken in some adjacent areas in neighbouring Middle Eastern non-Arab countries. It is a member of the Semitic branch of the Afro-Asiatic languages.
Several Modern South Arabian languages such as Mehri and Soqotri are also spoken in Yemen and Oman. Another Semitic language, Aramaic, and its dialects are spoken mainly by Assyrians and Mandaeans, with Western Aramaic still spoken in two villages near Damascus, Syria. There is also an Oasis Berber-speaking community in Egypt, where the language is also known as Siwa; it is a non-Semitic Afro-Asiatic sister language. Persian is the second most spoken language. While it is primarily spoken in Iran and some border areas of neighbouring countries, Iran is one of the region's largest and most populous countries. Persian belongs to the Indo-Iranian branch of the family of Indo-European languages. Other Western Iranian languages spoken in the region include Achomi, Daylami, Kurdish dialects, Semnani and Lurish, among many others. The close third-most widely spoken language, Turkish, is largely confined to Turkey, which is also one of the region's largest and most populous countries, but it is present in areas of neighbouring countries. It is a member of the Turkic languages, which have their origins in East Asia. Another Turkic language, Azerbaijani, is spoken by Azerbaijanis in Iran. The fourth-most widely spoken language, Kurdish, is spoken in Iran, Iraq, Syria and Turkey; Sorani Kurdish has been the second official language of Iraq, after Arabic, since the 2005 constitution. Hebrew is the official language of Israel; Arabic, an official language until 2018, was given a special status by the 2018 Basic Law. Hebrew is spoken and used by over 80% of Israel's population, with the other 20% using Arabic. Modern Hebrew only began being spoken in the 20th century, after being revived in the late 19th century by Eliezer Ben-Yehuda (Eliezer Perlman) and European Jewish settlers, with the first native Hebrew speaker being born in 1882. Greek is one of the two official languages of Cyprus, and the country's main language. Small communities of Greek speakers exist all around the Middle East; until the 20th century it was also widely spoken in Asia Minor (being the second most spoken language there, after Turkish) and Egypt. During antiquity, Ancient Greek was the lingua franca for many areas of the western Middle East, and until the Muslim expansion it was widely spoken there as well. Until the late 11th century, it was also the main spoken language in Asia Minor; after that it was gradually replaced by Turkish as the Anatolian Turks expanded and the local Greeks were assimilated, especially in the interior. English is one of the official languages of Akrotiri and Dhekelia. It is also commonly taught and used as a second language in countries such as Egypt, Jordan, Iran, Iraq, Qatar, Bahrain, the United Arab Emirates and Kuwait, and it is a main language in some emirates of the United Arab Emirates. It is also spoken as a native language by Jewish immigrants from Anglophone countries (the UK, the US and Australia) in Israel and is widely understood as a second language there. French is taught and used in many government facilities and media in Lebanon, and is taught in some primary and secondary schools in Egypt and Syria; due to widespread immigration of French Jews to Israel, it is also the native language of approximately 200,000 Jews in Israel. Maltese, a Semitic language mainly spoken in Europe, is used by the Franco-Maltese diaspora in Egypt. Armenian speakers are also found in the region, and Georgian is spoken by the Georgian diaspora.
Russian is spoken by a large portion of the Israeli population because of emigration in the late 1990s. Russian today is a popular unofficial language in Israel; news outlets, radio broadcasts and signboards in Russian can be found around the country, after Hebrew and Arabic. Circassian is also spoken by the diaspora in the region and by almost all Circassians in Israel, who speak Hebrew and English as well. The largest Romanian-speaking community in the Middle East is found in Israel, where as of 1995[update] Romanian is spoken by 5% of the population.[d] Bengali, Hindi and Urdu are widely spoken by migrant communities in many Middle Eastern countries, such as Saudi Arabia (where 20–25% of the population is South Asian), the United Arab Emirates (where 50–55% of the population is South Asian), and Qatar, which have large numbers of Pakistani, Bangladeshi and Indian immigrants. Culture The Middle East has recently become more prominent in hosting global sporting events due to its wealth and its desire to diversify its economy. The South Asian diaspora is a major backer of cricket in the region.